Securosis Research

Firestarter: Multicloud Deployment Structures and Blast Radius

In this, our second Firestarter on multicloud deployments, we start digging into the technological differences between the cloud providers. We begin with how to organize your account(s). Each provider uses different terminology, but all support similar hierarchies. From the overlay of AWS Organizations to the org-chart-from-the-start of an Azure tenant, we dig into the details and make specific recommendations. We also discuss the inherent security barriers and cover a wee bit of IAM. Watch or listen:
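
For readers who want to see what the AWS side of that hierarchy looks like in code, here is a minimal, hypothetical boto3 sketch (not taken from the episode) that creates an organizational unit under the AWS Organizations root and attaches a service control policy to it. The OU name and the policy ID are placeholders, and it assumes an organization already exists with all features enabled and that your credentials have Organizations admin rights.

```python
# A minimal sketch of carving out an organizational unit under the organization
# root and attaching a service control policy to it. The OU name and policy ID
# ("p-EXAMPLE") are illustrative placeholders.
import boto3

org = boto3.client("organizations")

# The organization root is the top of the account hierarchy.
root_id = org.list_roots()["Roots"][0]["Id"]

# Create an OU to hold, for example, production workload accounts.
ou = org.create_organizational_unit(ParentId=root_id, Name="Workloads-Prod")
ou_id = ou["OrganizationalUnit"]["Id"]

# Attach a service control policy so guardrails apply to every account in the OU.
org.attach_policy(PolicyId="p-EXAMPLE", TargetId=ou_id)

print(f"Created OU {ou_id} under root {root_id}")
```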


DisruptOps: Breaking Attacker Kill Chains in AWS: IAM Roles

Over the past year I’ve seen a huge uptick in interest in concrete advice on handling security incidents inside the cloud with cloud-native techniques. As organizations move their production workloads to the cloud, it doesn’t take long for security professionals to realize that the fundamentals, while conceptually similar, are quite different in practice. One of those core concepts is the kill chain, a term first coined by Lockheed Martin to describe the attacker’s process. Break any link and you break the attack, so this maps well to combining defense in depth with the active components of incident response. Read the full post at DisruptOps.
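
The full write-up is at DisruptOps, but to make the idea concrete: one cloud-native way to break a link in the chain after a role’s temporary credentials are stolen is to invalidate every session issued before a cutoff time. Below is a minimal sketch of that pattern using boto3; the role and policy names are placeholders, and the deny-older-sessions policy mirrors what AWS attaches when you revoke a role’s active sessions from the console.

```python
# Minimal sketch: cut off an attacker holding stolen role credentials by denying
# all actions to any session token issued before "now". Legitimate workloads
# simply re-assume the role and pick up fresh credentials.
# The role name and policy name are illustrative placeholders.
import json
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")

cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

revoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"DateLessThan": {"aws:TokenIssueTime": cutoff}},
        }
    ],
}

# Attaching this inline policy immediately invalidates existing sessions for the role.
iam.put_role_policy(
    RoleName="compromised-app-role",
    PolicyName="RevokeOlderSessions",
    PolicyDocument=json.dumps(revoke_policy),
)
```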


Firestarter: So you want to multicloud?

This is the first in a series of Firestarters covering multicloud. Using more than one IaaS cloud service provider is, well, a bit of a nightmare. Although this is widely recognized by anyone with hands-on cloud experience, that doesn’t mean reality always matches our desires. From executives worried about lock-in to M&A activity, we are finding that most organizations are being pulled into multicloud deployments. In this first episode we lay out the top-level problems and recommend some strategies for approaching them. Watch or listen:


What We Know about the Capital One Data Breach

I’m not a fan of dissecting complex data breaches when we don’t have any information. In this case we know more than usual, thanks to the details in the complaint filed by the FBI. I want to be very clear that this post isn’t to blame anyone, and we have only the most basic information on what happened. The only person we know is worthy of blame here is the attacker.

As many people know, Capital One makes heavy use of Amazon Web Services. We know AWS was involved in the attack because the federal complaint specifically mentions S3. But this wasn’t a public S3 bucket. Again, all of this comes from the filed complaint:

  • The attacker discovered a server (likely an instance – it had an IAM role) with a misconfigured firewall. It presumably had a software vulnerability or was vulnerable due to a credential exposure.
  • The attacker compromised the server and extracted its IAM role credentials. These ephemeral credentials allow AWS API calls. Role credentials are rotated automatically by AWS and are much more secure than static credentials, but with persistent access to the server an attacker can obviously pull updated credentials as needed.
  • Those credentials (an IAM role with ‘WAF’ in the title) allowed listing S3 buckets and read access to at least some of them. This is how the attacker exfiltrated the files.
  • Some buckets (maybe even all) were apparently encrypted, and a lot of the data within those files (which included credit card applications) was encrypted or tokenized. But the impact was still severe.
  • The attacker exfiltrated the data and then discussed it in Slack and on social media. Someone in contact with the attacker saw that information, including attack details posted on GitHub, and reported it to Capital One through their reporting program.
  • Capital One immediately involved the FBI and very quickly closed the misconfigurations. They also began their own investigation, and were able to determine exactly what happened very quickly, likely through CloudTrail logs. Those contained the commands issued by that IAM role from that server (which are very easy to find), and from there they could trace back the associated IP addresses. There are many other details on how they found the attacker in the complaint, and it looks like Capital One did quite a bit of the investigation themselves.

So: misconfigured firewall (Security Group?) > compromised instance > IAM role credential extraction > bucket enumeration > data exfiltration. Followed by a rapid response and public notification. As a side note, it looks like the attacker may have been a former AWS employee, but nothing indicates that was a factor in the breach.

People will say the cloud failed here, but we saw breaches like this long before the cloud was a thing. Containment and investigation seem to have actually run far faster than would have been possible on traditional infrastructure. For example, Capital One didn’t need to worry about the attacker turning off local logging – CloudTrail captures everything that touches AWS APIs. Normally we hear about these incidents months or years later, but in this case we went from breach to arrest and disclosure in around two weeks.

I hope that someday Capital One will be able to talk about the details publicly so the rest of us can learn. No matter how good you are, mistakes happen. The hardest problem in security is solving simple problems at scale, because simple doesn’t scale, and what we do is damn hard to get right every single time.
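
We don’t know exactly how Capital One’s investigators worked, but as an illustration of why CloudTrail makes this kind of tracing fast, here is a small hypothetical boto3 sketch that pulls recent S3 management events from CloudTrail event history and keeps only those made by sessions of a particular role. The role name is a placeholder, and object-level reads only appear if S3 data event logging is enabled.

```python
# Illustrative sketch (not Capital One's actual process): use CloudTrail event
# history to list recent S3 API calls, then keep only those made by sessions of
# a particular IAM role. Event history covers roughly the last 90 days of
# management events; the suspect role name is a placeholder.
import json
from datetime import datetime, timedelta, timezone

import boto3

SUSPECT_ROLE = "example-WAF-role"  # placeholder role name

cloudtrail = boto3.client("cloudtrail")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "s3.amazonaws.com"}
    ],
    StartTime=start,
    EndTime=end,
)

for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        arn = detail.get("userIdentity", {}).get("arn", "")
        if SUSPECT_ROLE in arn:
            # Each hit shows what the role did, when, and from which IP address.
            print(event["EventTime"], event["EventName"], detail.get("sourceIPAddress"))
```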


DisruptOps: Build Your Own Multi-Cloud Security Monitoring in 30 Minutes or Less with StreamAlert

One of the most difficult problems in cloud security is building comprehensive multi-account/multi-cloud security monitoring and alerting. I’d say maybe 1 out of 10 organizations I assess or work with have something effective in place when I first show up. That’s why I added a major monitoring lab based on Airbnb’s StreamAlert project to the Securosis Advanced Cloud Security and Applied DevSecOps training class (we still have some spots available for our Black Hat 2019 class). Read the full post at DisruptOps.
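
To give a flavor of what the lab builds toward: StreamAlert rules are small Python predicates over parsed log records. The sketch below follows the documented rule pattern, but the import path (assumed here to match StreamAlert v3’s layout), the 'cloudtrail:events' log schema, and the 'slack:security-alerts' output are assumptions that depend on your deployment’s configuration and StreamAlert version.

```python
# Sketch of a StreamAlert-style rule: a Python predicate that fires an alert when
# a CloudTrail record shows root account API activity.
# ASSUMPTIONS: the import path reflects StreamAlert v3's layout, and the
# 'cloudtrail:events' schema and 'slack:security-alerts' output are examples that
# must exist in your own StreamAlert configuration.
from streamalert.shared.rule import rule


@rule(logs=["cloudtrail:events"], outputs=["slack:security-alerts"])
def root_account_usage(record):
    """Alert on any API call made with root credentials."""
    identity = record.get("userIdentity", {})
    return identity.get("type") == "Root"
```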


Apple Flexes Its Privacy Muscles

Apple events follow a very consistent pattern, which rarely changes beyond the details of the content. This consistency has gradually become its own language. Attend enough events and you start to pick up the deliberate undertones Apple wants to communicate but not express directly. They are the facial and body expressions beneath the words of the slides, demos, and videos.

Five years ago I walked out of the WWDC keynote with a feeling that those undertones were screaming a momentous shift in Apple’s direction: that privacy was emerging as a foundational principle for the company. I wrote up my thoughts at Macworld, laying out my interpretation of Apple’s privacy principles. Privacy had been growing in importance at Apple for years before that, but that WWDC keynote was the first time they so clearly articulated that privacy not only mattered, but was being built into foundational technologies.

This year I sat in the WWDC keynote, reading the undertones, and realized that Apple is upping their privacy game to levels never before seen from a major technology company. Beyond improving privacy in their own products, the company is starting to use its market strength to push privacy throughout the tendrils that touch the Apple ecosystem.

Regardless of motivation – whether altruism, the personal principles of Apple executives, or simply shrewd business strategy – Apple’s stance on privacy is historic and unique in the annals of consumer technology. The real question now isn’t whether they can succeed at a technical level, but whether Apple’s privacy push can withstand the upcoming onslaught from governments, regulators, the courts, and competitors. Apple has clearly explained that they consider privacy a fundamental human right. Yet history is strewn with the remains of well-intentioned champions of such rights.

How privacy at Apple changed at WWDC19

When discussing these shifts in strategy, at Apple or any other technology firm, it’s important to keep in mind that the changes typically start years before outsiders can see them, and are more gradual than we perceive. Apple’s privacy extension efforts started at least a couple of years before WWDC14, when Apple first began requiring privacy protections to participate in HomeKit and HealthKit.

The most important privacy push from WWDC19 is Sign In with Apple, which offers benefits to both consumers and developers. In WWDC sessions it became clear that Apple is using a carrot-and-stick approach with developers: the stick is that App Review will require support for Apple’s new service in apps which leverage competing offerings from Google and Facebook, but in exchange developers gain Apple’s high security and fraud prevention. Apple IDs are vetted by Apple and secured with two-factor authentication, and Apple provides developers with the digital equivalent of a thumbs-up or thumbs-down on whether a request is coming from a real human being. Apple uses the same mechanisms to secure iCloud, iTunes, and App Store purchases, so this seems to be a strong indicator.

Apple also emphasized that they extend this privacy to developers themselves: it isn’t Apple’s business to know how developers engage with users inside their apps. Apple serves as an authentication provider and collects no telemetry on user activity. This isn’t to say that Google and Facebook abuse their authentication services; Google denies the accusation and offers features to detect suspicious activity. Facebook, on the other hand, famously abused phone numbers supplied for two-factor authentication, as well as a wide variety of other user data.

The difference between Sign In with Apple and previous privacy requirements within the iOS and Mac ecosystems is that the new feature extends Apple’s privacy reach beyond its own walled garden. Previous requirements, from HomeKit to data usage limitations on apps in the App Store, really only applied to apps on Apple devices. This is technically true for Sign In with Apple as well, but practically speaking the implications extend much further. When developers add Apple as an authentication provider on iOS, they also need to add it on other platforms if they expect customers to ever use anything other than Apple devices – either that, or support a horrible user experience (which, I hate to say, we will likely see plenty of). Once you create your account with an Apple ID, there are considerable technical complexities to supporting non-Apple login credentials for that account. So providers will likely support Sign In with Apple across their platforms, extending Apple’s privacy reach beyond its own platforms.

Beyond sign-in

Privacy permeated WWDC19, in both presentations and new features, but two more features stand out as examples of Apple extending its privacy reach: a major update to Intelligent Tracking Prevention for web advertising, and HomeKit Secure Video. Privacy-preserving ad click attribution is a surprisingly ambitious effort to drive privacy into the ugly user and advertising tracking market, and HomeKit Secure Video offers a new privacy-respecting foundation for video security firms which want to be feature-competitive without the mess of building (and securing) their own back-end cloud services.

Intelligent Tracking Prevention is a Safari feature which reduces the ability of services to track users across websites. The idea is that you can, and should be able to, enable cookies for one trusted site without having additional trackers monitor you as you browse to other sites. Cross-site tracking is endemic to the web, with typical sites embedding dozens of trackers, largely to support advertising and answer a key marketing question: did an ad lead you to visit a target site and buy something? Effective tracking prevention is an existential risk to online advertising and the sites which rely on it for income, but this is almost completely the fault of overly intrusive companies.

Intelligent Tracking Prevention (combined with other browser privacy and security features) is the stick, and privacy-preserving ad click attribution is the corresponding carrot: it promises to let advertisers track conversion rates without violating user privacy. It is an upcoming feature of Safari and a proposed web standard: Apple promises that browsers will remember ad clicks for seven days. If


DisruptOps: The Security Pro’s Quick Comparison: AWS vs. Azure vs. GCP

I’ve seen a huge increase in the number of questions about cloud providers beyond AWS over the past year, especially in recent months, so I decided to write up an overview comparison over at DisruptOps. This will be part of a slow-roll series going into the differences across the major security program domains – including monitoring, perimeter security, and security management. Here’s an excerpt: The problem for security professionals is that security models and controls vary widely across providers, are often poorly documented, and are completely incompatible. Anyone who tells you they can pick up on these nuances in a few weeks or months with a couple of training classes is either lying or ignorant. It takes years of hands-on experience to really understand the security ins and outs of a cloud provider. … AWS is the oldest and most mature major cloud provider. This is both good and bad, because some of their enterprise-level options were basically kludged together from underlying services that weren’t architected for the scope of modern cloud deployments. But don’t worry – their competitors are often kludged together at lower levels, creating entirely different sets of issues. … Azure is the provider I run into the most when running projects and assessments. Azure can be maddening at times due to lack of consistency and poor documentation, and many services default to less secure configurations. For example, if you create a new virtual network and a new virtual machine on it, all ports and protocols are open. AWS and GCP always start with default deny, but Azure starts with default allow. … Like Azure, GCP is better centralized, because many capabilities were planned out from the start, compared to AWS features which were only added a few years ago. Within your account, Projects are isolated from each other except where you connect services. Overall GCP isn’t as mature as AWS, but some services – notably container management and AI – are class leaders.
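
To make the default-deny point concrete on the AWS side, here is a small illustrative boto3 sketch (the VPC ID, group name, and CIDR are placeholders): a freshly created security group permits no inbound traffic at all until you explicitly authorize a rule.

```python
# Sketch illustrating AWS's default-deny posture for inbound traffic: a new
# security group starts with no ingress rules, and traffic is only allowed once
# you explicitly authorize it. The VPC ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="example-web-sg",
    Description="Example group - starts with no inbound access",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)
sg_id = sg["GroupId"]

# Nothing can reach instances in this group until a rule like this is added.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range (example)"}],
        }
    ],
)
```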


Firestarter: 2019: Insert Winter is Coming Meme Here

In this year-end/start Firestarter the gang jumps into our expectations for the coming year. Spoiler alert: the odds are that some consolidation and contraction in the security markets are impending… and not just because the Chinese are buying fewer iPhones. Watch or listen:


Firestarter: re:Invent Security Review

It’s that time of year again. The time when Amazon takes over our lives. No, not the holiday shopping season, but the annual re:Invent conference, where Amazon Web Services takes over Las Vegas (really, all of it) and dumps a firehose of updates on the world. Listen in to hear our take on new services like Transit Gateway, Security Hub, and Control Tower. Watch or listen:


DisruptOps: Something You Probably Should Include When Building Your Next Threat Models

We are working on our threat modeling here at DisruptOps, and I decided to refresh my knowledge of the different approaches. One thing that quickly stood out is that almost none of the threat modeling documentation or tools I’ve seen cover the CI/CD pipeline. Read the full post at DisruptOps.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.