And then a not-a-miracle occurs…

It’s a perfect fall Sunday morning here in Phoenix. After a brutally hot summer the air is cool, the sky is clear, and the fresh air is drifting into the hotel ballroom while I wait for my daughter to take the stage in the Irish dance regionals competition. The schedule is running a little behind, so I’m sitting here on my iPad catching up on the security newsletters and posts that pile up during the week when I’m more focused on my own deliverables. During the week I do a decent job of keeping up with the latest cloud security feature releases, but I tend to fall behind on the breach and vulnerability reports. There are… a lot of breach and vulnerability reports.

One of the wild things about having started early in cloud security is that I’ve witnessed the progression from a nascent technology that neither attackers nor defenders really had a good handle on, to our current high-stakes eternal cat and mouse game. As a defender I consider it absolutely essential to read every teardown of every breach I can get my hands on (usually published by incident response service companies), while also keeping up on the latest vulnerability research.

Which brings me back to the ballroom. As I close the last browser tab, the accordionist and keyboardist playing an Irish reel (or maybe a jig; I really am bad at music), I realize I have been instinctively sorting these reports into three buckets:

  • Yada yada “exposed credentials” yada yada a complex series of steps yada yada.
  • Yada yada “excessive privileges” yada yada a complex series of steps yada yada.
  • Yada yada “public facing with a known vulnerability” yada yada a complex series of steps yada yada.

Of the dozen reports I’ve sorted through today, a mix of breach walkthroughs and novel attack patterns from vulnerability researchers, every single one fit into these three buckets. Like they do every week.

Look, learning about more advanced attack patterns is important. Knowing how to trace and contain an incident once it progresses past initial access is an increasingly rare and valuable skill set. And finding and fixing all the drops of misconfigurations and exposures in my buckets, at scale, ain’t easy. But it’s all too easy to get lost in the complexity, especially when you haven’t been doing this cloud security stuff for long. Just remember, this isn’t rocket science. Stamp out static credentials, turn on MFA, and stop putting vulnerable crap on the Internet. That should keep you busy for a while.

Maybe someday Chris and I will need to re-prioritize the Universal Cloud Threat Model. But today I’ll watch a little dance, enjoy a little sun, and maybe catch up on some comic books during the breaks.


Enterprise Governance Is Failing Cloud Security

We have a major problem. It isn’t really getting better, and soon a critical window of opportunity will close that we can’t afford to lose. I don’t say this lightly, and I think anyone who has read my prior work knows I am not prone to FUD.

No one can possibly know the actual percentage of enterprise workloads and applications that have moved to cloud, but every statistic I could find estimates that, at most, it is somewhere in the range of 25% (here’s one Gartner take). I think under 25% is likely accurate, but I estimate that well over 90% of organizations have some production workloads in cloud, including SaaS and PaaS/IaaS. The lake is wide but only deep for a relatively small number of enterprises. This is natural and expected; it takes decades to transition existing workloads, especially when they are running happily in datacenters and there’s no major driver to move them out.

This is our window. Most organizations are in the shallow end of the pool, staring wistfully at the adventurous kids jumping off the high dive and frolicking around in the deep end. We have a choice — wait, learn to swim, or strap on some floaties and hope for the best. Oh, and there’s no lifeguard, and there are most definitely some sharks. With lasers.

If organizations don’t improve their cloud governance, they have no chance of meaningfully improving their cloud security. That’s bad enough with today’s relatively limited cloud adoption, but as we gradually move more and more workloads to the cloud, without effective governance the problem will increase exponentially. Nearly every cloud security issue and breach is the direct result of a governance failure, not a technology failure.

Cloud Governance Anti-Patterns

I started working hands-on in cloud security in 2010. In any given year I probably talk with hundreds of organizations, if you include training classes and webinars. As an IANS faculty member I take, on average, 3-5 advisory calls a week, mostly with large enterprises. Each year I run multiple cloud security assessments and advisory and consulting engagements (most with larger organizations, including some of the largest in the world). I also provide advisory calls through the Cloud Security Alliance. This post is based on consistent trends I see throughout these calls, projects, and other relationships.

In many calls the customer starts to describe a narrow problem which I quickly recognize is a larger governance issue. I often stop them and describe the anti-pattern, and it’s almost like I just magically described their entire childhood. It’s like my work as a paramedic: using key symptoms to identify the larger problem. This is a big dataset, and some of these issues directly contradict each other as different organizations make different mistakes on opposite sides of the spectrum:

  • “We can’t slow down developers.” Security may be allowed to put in some basic requirements, but is often not allowed to install any significant preventative controls. They are often forced to rely on a CSPM/CNAPP and, at best, get to escalate only critical and high issues. IAM is a disaster, with teams making major use of static credentials like AWS IAM Users (which cause 66% of all AWS customer security incidents, according to AWS).
  • “We don’t trust cloud and have to comply with our existing security policies and processes.” Security does get to slow things down, but typically lacks a sufficient technical understanding of cloud and tries to shoehorn it into existing processes. The organization tries to rely on existing security tools, and focuses too much on the network and too little on IAM. Cloud usage is so constrained that teams may just give up and keep deploying into the datacenter.
  • “Cloud is just another datacenter.” There is little acceptance that cloud computing is a fundamentally different technology which requires a different skillset. Neither infrastructure, development, nor security teams are effectively trained and tooled; instead they are expected to learn as they go. Many projects are just rehosted into the cloud, which reduces reliability and security while increasing costs. There are two subtypes of this pattern:
    • “We must migrate n% of workloads by x date.” Usually driven by datacenter contract renewal dates.
    • “We have $n credits from our vendor (usually Microsoft or Oracle), so we need to use those.”
  • “We are going multicloud.” These organizations usually haven’t finished establishing an effective security program for one cloud, but they are going into other clouds. This is often tied to “we can’t slow down developers.” Multicloud isn’t inherently wrong, but it’s horrifically wrong without proper governance and investment in tools and people. The security teams in these organizations almost entirely rely on CSPM tools for blocking and tackling, and there is almost never investment in having at least one security subject matter expert for each cloud. There are four subtypes I often see:
    • “We are going to be cloud agnostic and run everything in containers.” The expectation is that everything will work in containers wherever you want to deploy it, because the enterprise either thinks it can save money by dynamically moving workloads wherever they are cheaper, or because developers want to use their favorite toys. For the record, I am as likely to see a living unicorn as a truly cloud-agnostic workload.
    • “We need to back up our workloads in case our cloud provider has an outage.” If you want to completely rebuild your entire application stack on multiple platforms which don’t share any fundamental technology characteristics, be prepared to pay up.
    • “We got some credits from $provider we need to use.” So either you lose credits, or you pay to up-skill your teams, or you… do neither, and have a poorly supported workload running on a platform on which nobody is expert.
    • “We need to go multicloud in case $provider has an outage.” Have you tracked outages? Do you architect within your existing provider to handle outages?
  • Executive leadership is disengaged and doesn’t set the ground rules. This one isn’t in quotes because that’s never how the


On TidBITS: My Take on Apple Intelligence and Private Cloud Compute

I just published a piece on Apple Intelligence at TidBITS that I’m pretty excited to release. I wrote it (literally sitting poolside on vacation) to try to explain why this matters to someone even if they don’t know anything about AI or security. For those of us in cloud security, some really interesting things are going on:

  • This is confidential computing, but designed for a very specific purpose, which gives Apple more latitude in how they design controls. Think AWS Nitro (because this is deeper than SGX) with some metrics/monitoring to detect tampering. Apple can model and measure their workloads, and even architected a system to publicly share results, so individual devices can validate that the code is running as expected.
  • Apple designed the system with the assumption that an advanced adversary will gain physical access to servers. That’s one hell of a threat model, and… exactly the kind of adversary Apple faces (hello, governments).
  • The non-targeting defenses are excellent. I really appreciate two aspects:
    • Apple can’t track requests back to individual users. They added a third-party intermediary so they never see the traffic source.
    • If an attacker compromises a server/node, they can’t steer a user to it.
  • This is trust but verify on steroids. Apple built a system for continuous external validation. I don’t think I’ve seen anything like it before — certainly not at scale.
  • Lotta crypto. Like, down to the chips and digital certificates in the Secure Enclave on your phones.

On the AI side, there’s some cool stuff around how they are optimizing for devices, different models, and using transformers. I also suspect they may be using RAG to interface with the on-device semantic index, but I could be wrong there.

Anyway, it was a ton of fun to write. Sorry it’s so long. Read it here!


Old Dog, New Tricks [Final Incite: June 24, 2024]

TL;DR: Back in December, I took a job as head of strategy and technology for a candy-importing company called Dorval Trading. To explain the move I dusted off the confessor structure, and also performed a POPE evaluation of the opportunity below. I’ll be teaching at Black Hat this summer, so I hope to see many of you there. Otherwise you can always reach me at my Securosis email, at least until Rich cancels my account.

It’s another sunny day in the spring. Mike walks into the building. It’s so familiar, yet different. It’s been over 4 years since he’s been here, and it seems lighter. Airier. But the old bones are there. He takes a look around and feels nostalgic. Mike knows this is probably the last time he’ll be here. It’s a very strange feeling. He steps into the booth, as he has done so many times before. He came here to talk through pretty much every major transition since 2006, as a way to document what was going on, and to consider the decisions that needed to be made and why.

Confessor: Hi Mike. 4 years is a long time. What have you been up to?

Mike: It’s nice to be back. I’ve kept myself occupied, that’s for sure. As we recovered from COVID, Rich and I were faced with some big decisions. DisruptOps was acquired, and Rich decided to join Firemon and lead the Cloud Defense product. I was initially going to keep on the Securosis path, but I got an opportunity to join Techstrong and jumped at it.

Confessor: So you and Rich went your separate ways. How did that work out?

Mike: Yes and no. Although we don’t work together full-time anymore, we still collaborate quite a bit. We’re in the process of updating our cloud security training curriculum, and will launch CCSKv5 this summer. So I still see plenty of Rich…

(Mike gets quiet and looks off into space.)

Confessor: What’s on your mind? It seems heavy, but not in a bad way. Kind of like you are seeing ghosts.

Mike: I guess I am. This is probably the last time I’ll be here. You see, I’ve taken a real turn in my career. It’s so exciting but bittersweet. Security is what I’ve done for over 30 years. It’s been my professional persona. It’s how I’ve defined my career and who I am, to a degree. But security is no longer my primary occupation.

Confessor: Do tell. It must be a pretty special opportunity to get you to step out of security.

Mike: Would you believe I’ve joined a candy-importing company? I’m running strategy and technology for a business I’ve known for over 40 years. It was very unexpected, but makes perfect sense.

Confessor: How did you stumble into this?

Mike: Stumble is exactly right. You see, Dorval Trading is a family business started by my stepmother’s parents in 1965. She’s been running it since 1992, and as she was looking to her future, she realized she could use some help. So I did some consulting last year after I left Techstrong, and it was a pretty good fit. The company has been around for almost 60 years, and a lot of the systems and processes need to be modernized. We don’t do any direct e-commerce, and since COVID haven’t really introduced a lot of new products. So there is a lot of work to do. Even better, my brother has joined the company as well. After over 20 years in financial services doing procurement operations, he’ll be focused on optimizing our data and compliance initiatives. So I get to see my family every day, and thankfully that’s a great thing for me.

Confessor: Candy?!?! No kidding. What kind of candy? I’m asking for a friend.

Mike (chuckling): Our primary product is Sour Power, the original sour candy, which we’ve imported from the Netherlands since 1985. We also have a line of taffy products, and import specialty candies from Europe. If you grew up in the Northeast US, you may be familiar with Sour Power. And now we sell throughout the country.

Confessor: So, no more security? Really?

Mike: Not exactly. I have been in the business 30 years, and still have lots of friends and contacts. I’m happy to help them out if and when I can. I’ll still teach a few cloud security classes a year, and may show up on IANS calls or an event from time to time. I joined the advisory board of Query.ai, which is a cool federated security search company, and I’m certainly open to additional advisory posts if I can be of help. Learning a new business takes time, but I’m not starting from scratch. In the short time I’ve been with Dorval, I’ve confirmed that business is business. You have to sell more than you spend. You need to have great products and work to build customer loyalty. But there are nuances to working with a perishable, imported product. I also leverage my experience in the security business. I learned a lot about launching products, dealing with distribution channels, and even incident response. In the candy business you need to be prepared for a product recall. So we did a tabletop exercise working through a simulated recall scenario. The key to the exercise was having a strong playbook and making sure everyone knew their job. The recall simulation seemed so familiar, but different at the same time. Which is a good way to sum up everything about my new gig. It turns out the biggest candy conference of the year was the week after RSA, so I couldn’t make it to SF for the conference this year. I did miss seeing everyone, especially at the Disaster Recovery Breakfast. I will be at Black Hat this year, where I’m teaching the maiden voyage of CCSKv5. I look forward to seeing many old friends there.

Confessor: So this is it, I guess?

Mike: It is. But that’s


The Cloud Shared Irresponsibilities Model

The next phase of cloud security won’t be about shiny new products or services, although we’ll have those. It won’t be about stopping the next world-ending cloud 0-day, but we’ll continue trying to prevent them. It won’t be about AI, but we’ll still have to do something with AI to appease our machine overlords. It will be about making cloud deployments more inherently secure through better, smarter defaults, and better, smarter, and yes, cheaper, built-in capabilities.

Here’s why: When I first started researching and working with public cloud about 15 years ago, I realized that cloud providers have massive economic incentives to be better at security than your organization. A major breach of a cloud provider that affects all (or most) tenants would be an existential event which would destroy trust in that provider and crater their business. We’ve arguably had moderate multi-tenant events, and we’re watching events unfold in real time (Microsoft’s recent incidents and the CISA CSRB report) that leave me wondering whether my theory will hold, or whether a major CSP will suffer a direct breach.

This was the origin of the shared responsibilities model. There’s a waterline in the technology: below it the cloud provider is responsible for ensuring the services you consume are inherently secure. Above it you are responsible for how you secure and configure what you use. Security is transitive. When I build on a service, I am only as secure as the underlying service.

It turns out this plays both ways. It’s a two-way door. Security impacts are also transitive. If a customer on a cloud platform suffers a major security breach, every headline includes the name of the cloud provider. Sure, you can blame the customer for misconfiguring your service, but that doesn’t mean everyone won’t still think you’re responsible.

Thus I present the Cloud Shared Irresponsibilities Model: cloud providers will be considered partially responsible for any customer breach involving their services, even if the breach was due to customer misconfiguration.

This really hit home this week with the Ticketmaster and Santander debacle. An intel firm called Hudson Rock claimed that Snowflake was the source of the breach, and that other companies were affected. Snowflake followed up (backed by Mandiant and CrowdStrike) that the attacks targeted the breached companies and took advantage of clients with single-factor authentication. While investigations are ongoing, this is negative for both the breached companies and Snowflake. (And I really wouldn’t want to be Hudson Rock right now, unless I had damn good evidence.) Snowflake didn’t do anything wrong. But it kinda doesn’t matter at this point — heck, even if in a fictitious world it turns out the Ticketmaster data wasn’t even in Snowflake, no one will read the follow-up headlines.

Or let’s go way back to the Capital One breach. To this day some people still think it was an insider attack by an AWS employee, or a former employee using special knowledge. Nope, it was a former employee using well-known techniques, which I had even been talking about in a training class for the prior year (we had a lab for the credential abuse part!).

Here’s the messy part. AWS was partially responsible for the breach. They didn’t do certain things that could have significantly reduced the risk that Capital One would make those mistakes, or eliminated them completely. How do I know? In the years since, we’ve gotten IMDSv2 (and can now enforce it as a default), Block Public Access for S3 (and better tools to determine whether a bucket is potentially public), new regions that are opt-in only, and various other enhancements. Microsoft tried to play the customer blame game, but they were hammered in that CSRB report for charging more for the security tools needed to reduce the risk of the attacks.

The Shared Responsibilities Model forced providers to create secure base services, but pushed blame for misconfigurations onto customers. The Shared Irresponsibilities Model pushes negative impact back on the cloud provider for these mistakes. It’s about restoring balance to the Force.

If I’m right, what will we see?

  • Cloud providers will improve their defaults. For example, there are providers today which do not allow you to have certain accounts without MFA enabled by default (e.g., AWS is adding this requirement for root user accounts).
  • Some security capabilities that customers pay for today will either become ‘free’ or much cheaper (e.g., Microsoft is reducing/eliminating costs for some logs, and/or extending the free retention period).
  • More successful cloud providers will make security simpler and easier.

Okay, that last one might be a stretch. It’s there to amuse my fellow cloud security professionals. I’m sure Chris is snorting some sort of not-beer out his nose right now. I really don’t think the cloud providers (other than Microsoft — seriously, read that CSRB report) have done anything wrong. It’s very hard to anticipate failure states, and to insert security which will add friction and slow down the primary buyers and users of cloud services. But now that government regulators have shifted their collective gaze, media companies prefer headlines with ‘Amazon’, ‘Google’, and ‘Microsoft’ in them, and cloud platforms are becoming the default for new projects, it’s hard not to see this shift towards more secure cloud substrates accelerating.

And I’ll finish with a simple KPI we can use to measure maturity across all platforms: the time or data volume before a cloud provider requires MFA on all administrative accounts.
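And if you want a rough read on where one of your own AWS accounts stands on a few of those defaults, a quick spot check with the CLI is enough. This is a minimal sketch, assuming the AWS CLI with read-only credentials; it’s a conversation starter, not a maturity assessment:

#!/bin/bash
# Quick spot check of a few "secure default" signals in a single AWS account.
# Assumes the AWS CLI is configured with read-only credentials for that account.

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo "Checking account $ACCOUNT_ID"

# 1. Is MFA enabled on the root user? (1 = yes, 0 = no)
aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled' --output text

# 2. Is S3 Block Public Access configured at the account level?
#    (An error here usually means it was never configured, which is itself an answer.)
aws s3control get-public-access-block --account-id "$ACCOUNT_ID" --output json

# 3. Which instances in the current region still allow IMDSv1 (HttpTokens not "required")?
aws ec2 describe-instances \
  --query 'Reservations[].Instances[?MetadataOptions.HttpTokens!=`required`].InstanceId' \
  --output text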


AWS Cloud Incident Analysis Query Cheatsheet

I’ve been teaching cloud incident response with Will Bengtson at Black Hat for a few years now, and one of the cool side effects of running training classes is that we are forced to document our best practices and make them simple enough to explain. (BTW — you should definitely sign up for the 2024 version of our class before the price goes up!) One of the more amusing moments was the first year we taught the class, when I realized I was trying to hand-write all the required CloudTrail log queries in front of the students, because I had only prepared a subset of what we needed.

As I wrote in my RECIPE PICKS post, you really only need a handful of queries to find 90% of what you need for most cloud security incidents. Today I want to tie together the RECIPE PICKS mnemonic with the sample queries we use in training. I will break this into two posts — today I’ll load up the queries, and in the next post I’ll show a sample analysis using them. A few caveats:

  • These queries are for AWS Athena running on top of CloudTrail. This is the least common denominator — anyone with an AWS account can run Athena. You will need to adapt them if you use another tool or a SIEM, but those should just be syntactical changes. Obviously you’ll need to do more work for other cloud providers, but this information is available on every major platform.
  • These are only the queries we run on the CloudTrail logs. RECIPE PICKS includes other information these queries don’t cover, or that doesn’t cleanly match a single query. I’ll write other posts over time, showing more examples of how to gather that data, but none of it takes very long.
  • In class we spend a lot of time adjusting the queries for different needs. For example, when searching for entries on a resource you might need to look in responseElements or requestParameters. I’ll try to knock out a post on how that all works soon, but the TL;DR is: sometimes the ID you query on is used in the API call (request); other times you don’t have it yet, and AWS returns it in the response.
  • RECIPE PICKS is not meant to be done in order. It’s similar to a lot of mnemonics I use in paramedic work. It’s to make sure you don’t miss anything, not an order of operations.

With that out of the way, here’s a review of RECIPE PICKS (Canvas FTW). Now let’s go through the queries. Remember, I’ll have follow-on posts with more detail — this is just the main reference post to get started. A few things to help you understand the queries:

  • For each of these queries, you need to replace anything between <> with your proper identifiers (and strip out the <>).
  • A “%” is a wildcard in SQL, so just think “*” in your head and you’ll be just fine.
  • You’ll see me pulling different information in different examples (e.g., event name). In real life you might want to pull all the table fields, or different fields. In class we play around with different collection and sorting options for the specific incident, but that is too much for a single blog post.

Resource

If I have a triggering event associated with a resource, I like to know its current configuration. This is largely to figure out whether I need to stop the bleed and take immediate action (e.g., if something was made public or shared to an unknown account). There is no single query because this data isn’t in CloudTrail. You can review the resource in the console, or run a describe/get/list API call.

Events

Gather every API call involving a resource.
This example is for a snapshot, based on the snapshot ID:

SELECT useridentity.arn, eventname, sourceipaddress, eventtime, resources
FROM <your table name>
WHERE requestparameters like '%<snapshot id>%'
   OR responseelements like '%<snapshot id>%'
ORDER BY eventtime

Changes

Changes is a combination of the before and after state of the resource, and the API call which triggered the change associated with the incident. This is another one you can’t simply query from CloudTrail, and you won’t have a change history without the right security controls in place. This is either:

  • AWS Config
  • A CSPM/CNAPP with historical inventory
  • A cloud inventory tool (if it has a history)

Many CSPM/CNAPP tools include a history of changes. This is the entire reason for the existence of AWS Config (well, based on the pricing there may be additional motivations). My tool (FireMon Cloud Defense) auto-correlates API calls with a change history, but if you don’t have that support in your tool you may need to do a little manual correlation. If you don’t have a change history this becomes much harder. Worst case: you read between the lines. If an API call didn’t error, you can assume the requested change went through, and then figure out the state.

Identity

Who or what made the API call? CloudTrail stores all this in the useridentity element, which is structured as:

useridentity STRUCT<
  type: STRING,
  principalid: STRING,
  arn: STRING,
  accountid: STRING,
  invokedby: STRING,
  accesskeyid: STRING,
  userName: STRING,
  sessioncontext: STRUCT<
    attributes: STRUCT<
      mfaauthenticated: STRING,
      creationdate: STRING>,
    sessionissuer: STRUCT<
      type: STRING,
      principalId: STRING,
      arn: STRING,
      accountId: STRING,
      userName: STRING>,
    ec2RoleDelivery: string,
    webIdFederationData: map<string, string>>>

The data you’ll see will vary based on the API call and how the entity authenticated. Me? I keep it simple at this point, and just query useridentity.arn as shown in the query above. This provides the Amazon Resource Name we are working with (there’s a related pivot sketch at the end of this post).

Permissions

What are the permissions of the calling identity? This defines the first part of the IAM blast radius, which is the damage it can do. The API calls are different between user and role, and here’s a quick CLI script that can pull IAM policies. But if you have console access that may be easier:

#!/bin/bash
# Function to get policies attached to a user
get_user_policies() {
  local user_arn=$1
  local user_name=$(aws iam get-user --user-name $(echo $user_arn | awk -F/ '{print $NF}') --query 'User.UserName' --output text)
  echo "User Policies for $user_name:"
  aws iam list-attached-user-policies --user-name $user_name --query 'AttachedPolicies[*].PolicyArn' --output text | while read policy_arn; do
    aws iam get-policy --policy-arn $policy_arn --query 'Policy.DefaultVersionId' --output text | while read
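While Permissions covers what an identity can do, you’ll usually also want to see what it actually did. This one isn’t part of the class set, just a minimal pivot sketch against the same Athena/CloudTrail table; swap in your table name, the ARN from the Identity step, and an incident time window (CloudTrail eventtime values are ISO 8601 strings):

SELECT eventtime, eventsource, eventname, sourceipaddress, errorcode
FROM <your table name>
WHERE useridentity.arn = '<arn from the Identity step>'
  AND eventtime BETWEEN '<start timestamp>' AND '<end timestamp>'
ORDER BY eventtime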


Let Your Devs and Admins See the Vulns

A year or so ago I was on an application security program assessment project at one of those very large enterprises. We were working with the security team, and they had all the scanners, from SAST/SCA to DAST to vulnerability assessment, but their process was really struggling. It took a long time for bugs to get fixed, things were slow to get approved and deployed, and remediating in-production vulnerabilities could also be slow and inefficient. At one point I asked how vulnerabilities (anything discovered after deployment) were being communicated back to the developers/admins.

“Oh, that data is classified as security sensitive so they aren’t allowed access.”

Uhh… okay. So you are not letting the people responsible for creating and fixing the problem know about the problem? How’s that going for you?

This came up in a conversation today about providing cloud deployment administrators access to the CSPM/CNAPP. In my book this is often an even worse gap, since a large percentage of organizations I work with do not allow the security team change access to cloud deployments, yet issues there are often immediately exploitable over the Internet (or you have a public data exposure… just read the Universal Cloud Threat Model, okay?). Here are my recommendations:

  • Give preference to security tools that have an RBAC model which allows you to provide devs/admins access to only their stuff. If you can’t have that granularity, this doesn’t work well.
  • Communicate discovered security issues in a manner compatible with the team’s tools and workflows. This is often ChatOps (Slack/Teams) or ticketing/tracking systems (JIRA/ServiceNow) they already use (see the sketch at the end of this post).
  • Groom findings before communicating!!! Don’t overload with low-severity or false positives. Focus on critical/high if you are inserting yourself into their workflow, and spend the time to weed out both true false positives (the tool made a mistake) and irrelevant positives (you have a compensating control, or it doesn’t matter). On the cloud side this is easier than with code, but it still matters and takes a little time.
  • Allow the admins/leads (at least) direct access to the scanner/assessment tool, but only for things they own. The tools will have more context than an autogenerated alert or ticket. This also allows the team to see the wider range of lower-severity issues they still might want to know about.

And one final point: email and spreadsheets are not your friends! When the machines finally come for the humans, the first wave of their attack will probably be email and spreadsheets.

One nuance arises when you are dealing with less-trusted devs/admins, which often means “outsourcing”. Look, the ideal is that you trust the people building the things your business runs on, but I know that isn’t how the world always runs. In those cases you will want to invest more in grooming and communications, and probably not give them any direct access to your tooling.

I’ve written a lot on appsec and cloudsec over the years, and worked on a ton of projects. This issue has always seemed obvious to me, but I still encounter it a fair bit. I think it’s a holdover from the days when security was focused on controlling all the defenses. But time has proven we can only do so much from the outside, and security really does require everyone to do their part. That’s impossible if you can’t see the bigger security picture. Most of you know this, but if this post helps just one org break through this barrier, then it was worth the 15 minutes it took to write.
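To make the ChatOps bullet concrete, here’s a rough sketch of the kind of glue involved. The findings.json file and its severity/title/resource fields are made up for illustration (adjust to whatever your scanner actually exports), and the webhook URL is whatever your Slack workspace issues; the point is the grooming filter, not the specific tool:

#!/bin/bash
# Hypothetical glue: push only groomed, high-severity findings into a team's Slack channel.
# findings.json and its fields (severity, title, resource) are placeholders; adjust the jq
# paths to your scanner's actual export format.

WEBHOOK_URL="https://hooks.slack.com/services/<your-webhook-path>"

jq -c '.[] | select(.severity == "CRITICAL" or .severity == "HIGH")' findings.json |
while read -r finding; do
  severity=$(echo "$finding" | jq -r '.severity')
  title=$(echo "$finding" | jq -r '.title')
  resource=$(echo "$finding" | jq -r '.resource')

  # Build the Slack message safely with jq, then post it to the incoming webhook.
  payload=$(jq -n --arg text "[$severity] $title on $resource" '{text: $text}')
  curl -s -X POST -H 'Content-Type: application/json' --data "$payload" "$WEBHOOK_URL"
done

The same filter-then-forward pattern works for JIRA or ServiceNow; only the last call changes.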


New Accidental Research Release: The Universal Cloud Threat Model (UCTM)

The conversation went something like this:

Me: “Hey Chris, want to co-present at RSA? I have this idea around how we fix things when we get dropped into a new org and they have a cloud security mess.”
Chris: “Sure, you want to write up the description and submit it?”
Me: “Yep, on it!”

[A couple months later]

Chris: “So what’s this Universal Cloud Threat Model you put in the description?”
Me: “Oh, I just thought we’d make fun of all the edgy cloud security attack research, since nearly every attack is just the same 3 things over and over.”
Chris: “Yeah, sounds about right. Want to hop on a quick call to map out the slides?”

[A two-hour spontaneous Zoom call later]

Chris: “Crap, I think we need to write a paper.”
Me: “Really?”
Chris: “Yeah, this is good stuff.”
Me: “Fine. But only if we can put my cat in as a threat actor. He just broke a bowl and is making a move on my bourbon.”
Chris: “Sure, what’s his name?”
Me: “Goose.”
Chris: “Well, what did you expect?”

You can download the UCTM here. And read Chris’ absolutely epic announcement post in the voice of Winston Churchill!


Sisense: Learning Lessons Before the Body Hits the Ground

Look, we don’t yet know what really happened at Sisense. Thanks to Brian Krebs and CISA, combined with the note sent out by the CISO (bottom of this post), it’s pretty obvious the attackers got a massive trove of secrets. Just look at that list of what you have to rotate. It’s every cred you ever had, every cred you ever thought of, and the creds of your unborn children and/or grandchildren.

Brian’s article has basically one sentence that describes the breach:

Sisense declined to comment when asked about the veracity of information shared by two trusted sources with close knowledge of the breach investigation. Those sources said the breach appears to have started when the attackers somehow gained access to the company’s code repository at Gitlab, and that in that repository was a token or credential that gave the bad guys access to Sisense’s Amazon S3 buckets in the cloud.

And as to where the data ended up? Sure sounds like the dark web to me:

On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”

So if (and that’s a very big if, this early) the first quote is correct, then here’s what probably happened:

  • Someone at Sisense stored IAM user keys in GitLab. Probably in code, but that’s an open question. They could have been in the pipeline config.
  • Sisense also stored a vast trove of user credentials that either were not encrypted before being put in S3 (I’ll discuss that in a moment), or were encrypted with the decryption key in the code.
  • Bad guys got into GitLab.
  • Bad guys (or girls, I’m not sexist) tested keys and discovered they could access S3. Maybe this was obvious because the keys were in the code that accessed S3, or maybe it was non-obvious and they tried good old ListBuckets.
  • Attackers (is that better, people?) downloaded everything from S3 and put it on a private server or somewhere in the depths of Tor.

We don’t know the chain of events that led to the key/credential being in GitLab. There’s also the chance it was more advanced, and the attacker stole session credentials from a runner or something, and a static key wasn’t a factor. I am not going to victim blame yet — let’s see what facts emerge. The odds are decent this is more complex than what’s emerged so far.

HOWEVER.

Some of you will go to work tomorrow, and your CEO will ask, “How can we make sure this never happens to us?” None of this is necessarily easy at scale, but here’s where to start:

  • Scan all your repositories for cloud credentials. This might be as simple as turning on a feature of your platform, or you might have to use a static analysis tool or credential-specific scanner. If you find any, get rid of them. If you can’t immediately get rid of them, hop on over to the associated IAM policy and restrict it to the necessary source IP address.
  • Don’t think S3 encryption will save you. First, it’s always encrypted. The trick is to encrypt with a key that isn’t accessible from the same identity that has access to the data. So… that’s not something most people do. Encryption is almost always implemented in a way that stops no threat other than a stolen hard drive.
  • If you manage customer credentials, that sh** needs to be locked down. Secrets managers, dedicated IAM platforms, whatever. It’s possible in this case all the credentials weren’t in an S3 bucket and there were other phases of the attack. But if they were in an S3 bucket, that is… not a good look.
  • Sensitive S3 buckets should have an IP-based resource policy on them (see the sketch below). All the IAM identities (users/roles) with access should also have IP restrictions. Read up on the AWS Data Perimeter.

Get rid of access keys. Scan your code and repositories for credentials. Lock down where data can be accessed from. And repeat after me: every cloud security failure is an IAM failure, and every IAM failure is a governance failure.

I’m really sorry for any of you using Sisense. That list isn’t going to be easy to get through.
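For that last S3 bullet, here’s a minimal sketch of an IP-based bucket policy applied with the CLI. The bucket name and CIDR are placeholders, and a blanket Deny like this can just as easily lock out your own tooling (and traffic through VPC endpoints won’t match aws:SourceIp), so treat it as a starting point for the Data Perimeter reading rather than something to paste straight into production:

#!/bin/bash
# Minimal sketch: restrict a sensitive bucket to a known IP range with a Deny policy.
# Bucket name and CIDR are placeholders; test carefully, since this denies everything
# (including your own pipelines and scanners) coming from outside the listed range.

BUCKET="my-sensitive-bucket"
ALLOWED_CIDR="203.0.113.0/24"

aws s3api put-bucket-policy --bucket "$BUCKET" --policy "$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessFromOutsideAllowedRange",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::${BUCKET}",
        "arn:aws:s3:::${BUCKET}/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "${ALLOWED_CIDR}" }
      }
    }
  ]
}
EOF
)"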


You are infected with Epstein-Barr. You are also infected with the next XZ.

Nearly everyone in the United States (and probably elsewhere) is infected with the Epstein-Barr virus at some point in their life. Most people will never develop symptoms, although a few end up with mono. Even without symptoms you carry this invasive genetic material for life. There’s no cure, and EBV causes some people to develop cancers and possibly Multiple Sclerosis, Chronic Fatigue Syndrome, and other problems. Those later diseases are likely caused by some other precipitating event or infection that “triggers” a reaction with EBV. Look, I have most of a molecular biology degree and I’m a paramedic, and I won’t pretend to fully understand it all. The TL;DR is that EBV is genetic material floating around your body for life, and at some point it activates or interacts with something else and causes badness. (Me write good! Use words!)

As I’ve been reading about the XZ Initiative (I’m using “initiative” deliberately, due to the planning and premeditation) the same week the CISA CSRB released their scathing report on Microsoft, it’s damn clear that our software supply chain issues are as deep as the emptiness of my cat’s soul. (I mean, I love him, and I’m excited he’s coming back from the hospital this afternoon, but I couldn’t come up with a more amusing analogy.) If you aren’t up to date on all things XZ I suggest reading Matt Johansen’s rollup in his Vulnerable U newsletter.

Here’s how EBV and XZ relate, at least in my twisted mind. XZ was clearly premeditated, well planned, sophisticated, and designed to slowly spread itself under the radar for many years before being triggered. There is absolutely no chance this approach hasn’t already been used by multiple threat actors. As much as I hate FUD and hyperbole, I am 100% confident that there is code in tools and services I use that has been similarly compromised. We didn’t miraculously catch the first-ever attempt just because a Microsoft dev is anal-retentive about performance; XZ is simply the first such exploit which got caught. If I were a cybercriminal or government operative, I would already have several of these long-term attacks underway. You are welcome to believe our record is 1 for 1. I think it’s 1 catch of N attacks, and N scares me.

I also do not believe we can eliminate this threat vector. I don’t think the best SAST/SCA tools and a signed SBOM have any chance of making this go away. Ever. That doesn’t mean we give up and lose hope — we just change our perspective and focus more on resilience to these attacks than pure prevention. I don’t have all the answers — not even close — but there are three aspects I think we should explore more.

First, let’s make it harder on threat actors. Let’s increase their costs. How? Well, aside from all the improved security scanning over the past few years, I like the idea Daniel Miessler recently mentioned in a conversation and noted in his newsletter: use AI to automatically perform open source intelligence (OSINT) on OSS contributors. Do they have a history outside that code repo? Any real human interactions? This will be far from perfect, but it will likely increase the cost of building a persona which looks sufficiently real.

We also have compromises in commercial software (hello, SolarWinds). Vendors need to explore better internal code controls, sourcing, and human processes. For example: require YubiKeys from all devs, add side-channel notifications and approvals of commits, and I suspect there are some new and innovative scanning approaches we can take as AI evolves (until it evolves past humanity and enslaves us all), e.g., “this may not be a known security defect, but it looks weird compared to this developer’s history, so maybe ping another future energy source human to review it”. I’m also a fan of making critical devs work on dedicated machines separate from the ones they use for email and web browsing, to reduce phishing/malware as a vector. No, I haven’t ever had anyplace I’ve worked approve that, but I *have* heard of some shops which pulled it off.

The final part is preparing for the next XZ that slips through and is eventually triggered. Early detection, rapid remediation, and all the other hard, expensive things. SBOM/SCA/DevSecOps are key here: you MUST be able to figure out where you are using any particular software package (see the sketch at the end of this post), and be able to implement compensating defenses (e.g., firewalls) and patch quickly. This is NOT SIMPLE AT SCALE, but it’s your best bet as the downstream customer for these things.

None of what I suggested is easy. I think this is the next phase of the Assume Breach mindset. You can’t cure EBV. You can’t prevent all possible negative outcomes. But you can reduce some risks, detect others earlier, and react aggressively when those first cancer cells show up.
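On the “figure out where you are using any particular software package” point, even a crude search across SBOMs goes a long way. A minimal sketch, assuming you already export CycloneDX JSON SBOMs (one per service in a sboms/ directory; both the layout and the package name are placeholders):

#!/bin/bash
# Rough sketch: find which services include a given component, using CycloneDX JSON SBOMs.
# Assumes one SBOM per service in ./sboms/<service>.json; adjust paths to your own pipeline.

PACKAGE="${1:-xz-utils}"   # component name to hunt for, e.g. xz-utils or liblzma

for sbom in ./sboms/*.json; do
  # CycloneDX stores dependencies in a top-level "components" array with name/version fields.
  matches=$(jq -r --arg pkg "$PACKAGE" \
    '.components[]? | select(.name == $pkg) | "\(.name) \(.version)"' "$sbom")
  if [ -n "$matches" ]; then
    echo "$sbom:"
    echo "$matches"
  fi
done

A search like this only tells you where the package shows up; the hard part is still having current SBOMs for everything you run.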


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.