AWS Cloud Incident Analysis Query Cheatsheet

I’ve been teaching cloud incident response with Will Bengtson at Black Hat for a few years now, and one of the cool side effects of running training classes is that we are forced to document our best practices and make them simple enough to explain. (BTW — you should definitely sign up for the 2024 version of our class before the price goes up!) One of the more amusing moments was the first year we taught the class, when I realized I was trying to hand-write all the required CloudTrail log queries in front of the students, because I had only prepared a subset of what we needed. As I wrote in my RECIPE PICKS post, you really only need a handful of queries to find 90% of what you need for most cloud security incidents. Today I want to tie together the RECIPE PICKS mnemonic with the sample queries we use in training. I will break this into two posts — today I’ll load up the queries, and in the next post I’ll show a sample analysis using them.

A few caveats:

  • These queries are for AWS Athena running on top of CloudTrail. This is the least common denominator — anyone with an AWS account can run Athena. You will need to adapt them if you use another tool or a SIEM, but those should just be syntactical changes. Obviously you’ll need to do more work for other cloud providers, but this information is available on every major platform.
  • These are only the queries we run on the CloudTrail logs. RECIPE PICKS includes other information these queries don’t cover, or that doesn’t cleanly match a single query. I’ll write other posts over time showing more examples of how to gather that data, but none of it takes very long.
  • In class we spend a lot of time adjusting the queries for different needs. For example, when searching for entries on a resource you might need to look in responseElements or requestParameters. I’ll try to knock out a post on how that all works soon, but the TL;DR is: sometimes the ID you query on is used in the API call (request); other times you don’t have it yet, and AWS returns it in the response.
  • RECIPE PICKS is not meant to be done in order. Like a lot of the mnemonics I use in paramedic work, it’s there to make sure you don’t miss anything, not to define an order of operations.

With that out of the way, here’s a review of RECIPE PICKS (Canva FTW):

Now let’s go through the queries. Remember, I’ll have follow-on posts with more detail — this is just the main reference post to get started. A few things to help you understand the queries:

  • For each of these queries, you need to replace anything between <> with your proper identifiers (and strip out the <>).
  • A "%" is a wildcard in SQL, so just think "*" in your head and you’ll be just fine.
  • You’ll see me pulling different information in different examples (e.g., event name). In real life you might want to pull all the table fields, or different fields. In class we play around with different collection and sorting options for the specific incident, but that is too much for a single blog post.

Resource

If I have a triggering event associated with a resource, I like to know its current configuration. This is largely to figure out whether I need to stop the bleed and take immediate action (e.g., if something was made public or shared to an unknown account). There is no single query because this data isn’t in CloudTrail. You can review the resource in the console, or run a describe/get/list API call.

Events

Gather every API call involving a resource.
This example is for a snapshot, based on the snapshot ID:

SELECT useridentity.arn, eventname, sourceipaddress, eventtime, resources
FROM <your table name>
WHERE requestparameters LIKE '%<snapshot id>%'
   OR responseelements LIKE '%<snapshot id>%'
ORDER BY eventtime

Changes

Changes is a combination of the before and after state of the resource, and the API call which triggered the change associated with the incident. This is another one you can’t simply query from CloudTrail, and you won’t have a change history without the right security controls in place. That history comes from one of:

  • AWS Config
  • A CSPM/CNAPP with a historical inventory
  • A cloud inventory tool (if it keeps a history)

Many CSPM/CNAPP tools include a history of changes. This is the entire reason for the existence of AWS Config (well, based on the pricing there may be additional motivations). My tool (FireMon Cloud Defense) auto-correlates API calls with a change history, but if you don’t have that support in your tool you may need to do a little manual correlation. If you don’t have a change history this becomes much harder. Worst case: you read between the lines. If an API call didn’t error, you can assume the requested change went through, and then figure out the state.

Identity

Who or what made the API call? CloudTrail stores all this in the useridentity element, which is structured as:

useridentity STRUCT<
  type: STRING,
  principalid: STRING,
  arn: STRING,
  accountid: STRING,
  invokedby: STRING,
  accesskeyid: STRING,
  userName: STRING,
  sessioncontext: STRUCT<
    attributes: STRUCT<
      mfaauthenticated: STRING,
      creationdate: STRING>,
    sessionissuer: STRUCT<
      type: STRING,
      principalId: STRING,
      arn: STRING,
      accountId: STRING,
      userName: STRING>,
    ec2RoleDelivery: STRING,
    webIdFederationData: MAP<STRING, STRING>
  >
>

The data you’ll see will vary based on the API call and how the entity authenticated. Me? I keep it simple at this point, and just query useridentity.arn as shown in the query above. This provides the Amazon Resource Name we are working with.

Permissions

What are the permissions of the calling identity? This defines the first part of the IAM blast radius, which is the damage it can do. The API calls are different between users and roles, and here’s a quick CLI script that can pull IAM policies for a user — but if you have console access, that may be easier:

#!/bin/bash

# Function to get policies attached to a user
get_user_policies() {
  local user_arn=$1
  local user_name=$(aws iam get-user --user-name $(echo $user_arn | awk -F/ '{print $NF}') --query 'User.UserName' --output text)

  echo "User Policies for $user_name:"
  # (The post is truncated here; listing the user's managed and inline policies is the obvious next step.)
  aws iam list-attached-user-policies --user-name "$user_name" --output table
  aws iam list-user-policies --user-name "$user_name" --output table
}
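The script above handles IAM users. If the calling identity is an assumed role (check useridentity.sessioncontext.sessionissuer.arn), the equivalent pulls use the role APIs instead. Here is a minimal sketch, not part of the class script, assuming you have already extracted the role name from that ARN:

#!/bin/bash
# Rough sketch: list the policies on a role to scope its blast radius.
# <role name> is the friendly name (the part after the final "/" in the role ARN).
role_name="<role name>"

echo "Managed policies attached to $role_name:"
aws iam list-attached-role-policies --role-name "$role_name" --output table

echo "Inline policies on $role_name:"
aws iam list-role-policies --role-name "$role_name" --output table

From there you still need to pull the policy documents themselves (aws iam get-policy-version for managed policies, aws iam get-role-policy for inline), but the listing is usually enough to gauge how bad the blast radius might be.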


Let Your Devs and Admins See the Vulns

A year or so ago I was on an application security program assessment project in one of those very large enterprises. We were working with the security team, and they had all the scanners, from SAST/SCA to DAST to vulnerability assessment, but their process was really struggling. It took a long time for bugs to get fixed, things were slow to get approved and deployed, and remediating in-production vulnerabilities could also be slow and inefficient. At one point I asked how vulnerabilities (anything discovered after deployment) were being communicated back to the developers and admins. “Oh, that data is classified as security sensitive so they aren’t allowed access.” Uhh… okay. So you are not letting the people responsible for creating and fixing the problem know about the problem? How’s that going for you?

This came up in a conversation today about providing cloud deployment administrators access to the CSPM/CNAPP. In my book this is often an even worse gap, since a large percentage of organizations I work with do not allow the security team change access to cloud deployments, yet issues there are often immediately exploitable over the Internet (or you have a public data exposure… just read the Universal Cloud Threat Model, okay?). Here are my recommendations:

  • Give preference to security tools with an RBAC model that lets you give devs/admins access to only their stuff. If you can’t get that granularity, this doesn’t work well.
  • Communicate discovered security issues in a manner compatible with the team’s tools and workflows. This is often ChatOps (Slack/Teams) or the ticketing/tracking systems (JIRA/ServiceNow) they already use (see the sketch at the end of this post).
  • Groom findings before communicating!!! Don’t overload them with low-severity findings or false positives. Focus on critical/high if you are inserting yourself into their workflow, and spend the time to weed out both true false positives (the tool made a mistake) and irrelevant positives (you have a compensating control, or it doesn’t matter). On the cloud side this is easier than with code, but it still matters and takes a little time.
  • Allow the admins/leads (at least) direct access to the scanner/assessment tool, but only for things they own. The tools will have more context than an autogenerated alert or ticket. This also lets the team see the wider range of lower-severity issues they still might want to know about.

And one final point: email and spreadsheets are not your friends! When the machines finally come for the humans, the first wave of their attack will probably be email and spreadsheets.

One nuance arises when you are dealing with less-trusted devs/admins, which often means “outsourcing”. Look, the ideal is that you trust the people building the things your business runs on, but I know that isn’t how the world always runs. In those cases you will want to invest more in grooming and communications, and probably not give them any direct access to your tooling.

I’ve written a lot on appsec and cloudsec over the years, and worked on a ton of projects. This issue has always seemed obvious to me, but I still encounter it a fair bit. I think it’s a holdover from the days when security was focused on controlling all the defenses. But time has proven we can only do so much from the outside, and security really does require everyone to do their part. That’s impossible if you can’t see the bigger security picture. Most of you know this, but if this post helps just one org break through this barrier, then it is worth the 15 minutes it took to write.
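For the ChatOps route in particular, the plumbing is usually trivial. Here is a minimal sketch, assuming a hypothetical Slack incoming webhook owned by the team responsible for the account; the webhook path and finding text are placeholders, not a real integration:

#!/bin/bash
# Push one groomed, high-severity finding to the owning team's Slack channel.
# The webhook URL is a placeholder; each team gets its own, scoped to their channel.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/<your/webhook/path>"

# Example finding text (made up for illustration).
FINDING="Critical: storage bucket owned by your team is publicly readable. Details in the scanner under your project."

curl -s -X POST \
  -H 'Content-type: application/json' \
  --data "{\"text\": \"${FINDING}\"}" \
  "$SLACK_WEBHOOK_URL"

The point isn’t the one-liner; it’s that the groomed finding lands where the team already works, instead of in a spreadsheet attached to an email.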


New Accidental Research Release: The Universal Cloud Threat Model (UCTM)

The conversation went something like this:

Me: “Hey Chris, want to co-present at RSA? I have this idea around how we fix things when we get dropped into a new org and they have a cloud security mess.”
Chris: “Sure, you want to write up the description and submit it?”
Me: “Yep, on it!”

[A couple months later]

Chris: “So what’s this Universal Cloud Threat Model you put in the description?”
Me: “Oh, I just thought we’d make fun of all the edgy cloud security attack research, since nearly every attack is just the same 3 things over and over.”
Chris: “Yeah, sounds about right. Want to hop on a quick call to map out the slides?”

[A two-hour spontaneous Zoom call later]

Chris: “Crap, I think we need to write a paper.”
Me: “Really?”
Chris: “Yeah, this is good stuff.”
Me: “Fine. But only if we can put my cat in as a threat actor. He just broke a bowl and is making a move on my bourbon.”
Chris: “Sure, what’s his name?”
Me: “Goose”
Chris: “Well what did you expect?”

You can download the UCTM here. And read Chris’ absolutely epic announcement post in the voice of Winston Churchill!


Sisense: Learning Lessons Before the Body Hits the Ground

Look, we don’t yet know what really happened at Sisense. Thanks to Brian Krebs and CISA, combined with the note sent out by the CISO (bottom of this post), it’s pretty obvious the attackers got a massive trove of secrets. Just look at that list of what you have to rotate. It’s every cred you ever had, every cred you ever thought of, and the creds of your unborn children and/or grandchildren. Brian’s article has basically one sentence that describes the breach:

Sisense declined to comment when asked about the veracity of information shared by two trusted sources with close knowledge of the breach investigation. Those sources said the breach appears to have started when the attackers somehow gained access to the company’s code repository at Gitlab, and that in that repository was a token or credential that gave the bad guys access to Sisense’s Amazon S3 buckets in the cloud.

And as to where the data ended up? Sure sounds like the dark web to me:

On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”

So if (and that’s a very big if, this early) the first quote is correct, then here’s what probably happened:

  • Someone at Sisense stored IAM user keys in GitLab. Probably in code, but that’s an open question. Could have been in the pipeline config.
  • Sisense also stored a vast trove of user credentials that either were not encrypted before being put in S3 (I’ll discuss that in a moment), or were encrypted with the decryption key in the code.
  • Bad guys got into GitLab.
  • Bad guys (or girls, I’m not sexist) tested keys and discovered they could access S3. Maybe this was obvious because the keys were in the code used to access S3, or maybe it was non-obvious and they tried good old ListBuckets.
  • Attackers (is that better, people?) downloaded everything from S3 and put it on a private server or somewhere in the depths of Tor.

We don’t know the chain of events that led to the key/credential being in GitLab. There’s also the chance it was more advanced, and the attacker stole session credentials from a runner or something, and a static key wasn’t a factor. I am not going to victim blame yet — let’s see what facts emerge. The odds are decent this is more complex than what’s emerged so far.

HOWEVER

Some of you will go to work tomorrow, and your CEO will ask, “How can we make sure this never happens to us?” None of this is necessarily easy at scale, but here’s where to start:

  • Scan all your repositories for cloud credentials. This might be as simple as turning on a feature of your platform, or you might have to use a static analysis tool or credential-specific scanner. If you find any, get rid of them. If you can’t immediately get rid of them, hop on over to the associated IAM policy and restrict it to the necessary source IP address.
  • Don’t think S3 encryption will save you. First, it’s always encrypted. The trick is to encrypt with a key that isn’t accessible to the same identity that has access to the data. So… that’s not something most people do. Encryption is almost always implemented incorrectly, stopping no threat other than a stolen hard drive.
  • If you manage customer credentials, that sh** needs to be locked down. Secrets managers, dedicated IAM platforms, whatever. It’s possible in this case that not all the credentials were in an S3 bucket and there were other phases of the attack. But if they were in an S3 bucket, that is… not a good look.
  • Sensitive S3 buckets should have an IP-based resource policy on them. All the IAM identities (users/roles) with access should also have IP restrictions. Read up on the AWS Data Perimeter. (A rough sketch of the repo scan and an IP-restricted bucket policy is at the end of this post.)

Get rid of access keys. Scan your code and repositories for credentials. Lock down where data can be accessed from. And repeat after me: every cloud security failure is an IAM failure, and every IAM failure is a governance failure.

I’m really sorry for any of you using Sisense. That list isn’t going to be easy to get through.
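To make the first and last recommendations concrete, here is a rough sketch. The bucket name and CIDR are placeholders, and an overly tight deny policy can lock out your own automation (and requests arriving through VPC endpoints won’t match a public IP allowlist), so test carefully before using anything like this in production:

#!/bin/bash
# Quick-and-dirty credential sweep: long-term AWS access key IDs start with "AKIA".
# A real secrets scanner will catch far more, but this finds the worst offenders fast.
git grep -nE 'AKIA[0-9A-Z]{16}' || echo "No obvious AWS access keys in the working tree"

# Example IP-based resource policy on a sensitive bucket (names and CIDR are placeholders).
# This denies every S3 action on the bucket unless the request comes from the listed CIDR.
aws s3api put-bucket-policy --bucket <sensitive-bucket> --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyOutsideCorpNetwork",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::<sensitive-bucket>",
      "arn:aws:s3:::<sensitive-bucket>/*"
    ],
    "Condition": {
      "NotIpAddress": { "aws:SourceIp": "<your CIDR, e.g. 203.0.113.0/24>" }
    }
  }]
}'

The AWS Data Perimeter guidance mentioned above goes much further (organization-wide SCPs, aws:PrincipalOrgID, VPC endpoint conditions); treat this as the floor, not the goal.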


You are infected with Epstein-Barr. You are also infected with the next XZ.

Nearly everyone in the United States (and probably elsewhere) is infected with the Epstein-Barr virus at some point in their life. Most people will never develop symptoms, although a few end up with mono. Even without symptoms you carry this invasive genetic material for life. There’s no cure, and EBV causes some people to develop cancers, and possibly Multiple Sclerosis, Chronic Fatigue Syndrome, and other problems. Those later diseases are likely caused by some other precipitating event or infection that “triggers” a reaction with EBV. Look, I have most of a molecular biology degree and I’m a paramedic, and I won’t pretend to fully understand it all. The tl;dr is that EBV is genetic material floating around your body for life, and at some point it activates or interacts with something else and causes badness. (Me write good! Use words!)

As I’ve been reading about the XZ Initiative (I’m using “initiative” deliberately, due to the planning and premeditation) the same week the CISA CSRB released their scathing report on Microsoft, it’s damn clear that our software supply chain issues are as deep as the emptiness of my cat’s soul. (I mean, I love him, and I’m excited he’s coming back from the hospital this afternoon, but I couldn’t come up with a more-amusing analogy.) If you aren’t up to date on all things XZ, I suggest reading Matt Johansen’s rollup in his Vulnerable U newsletter.

Here’s how EBV and XZ relate, at least in my twisted mind. XZ was clearly premeditated, well planned, sophisticated, and designed to spread itself slowly under the radar for years before being triggered. There is absolutely no chance this approach hasn’t already been used by multiple threat actors. As much as I hate FUD and hyperbole, I am 100% confident there is code in tools and services I use which has been similarly compromised. We didn’t miraculously catch the first-ever attempt just because one Microsoft dev is anal-retentive about performance; XZ is simply the first such exploit which got caught. If I were a cybercriminal or government operative, I would already have several of these long-term attacks underway. You are welcome to believe our record is 1 for 1. I think it’s 1 catch of N attacks, and N scares me.

I also do not believe we can eliminate this threat vector. I don’t think the best SAST/SCA tools and a signed SBOM have any chance of making this go away. Ever. That doesn’t mean we give up and lose hope — we just change our perspective and focus more on resilience to these attacks than on pure prevention. I don’t have all the answers — not even close — but there are three aspects I think we should explore more.

First, let’s make it harder on threat actors and increase their costs. How? Well, aside from all the improved security scanning of the past few years, I like the idea Daniel Miessler recently mentioned in a conversation and noted in his newsletter: use AI to automatically perform open source intelligence (OSINT) on OSS contributors. Do they have a history outside that code repo? Any real human interactions? This will be far from perfect, but it will likely increase the cost of building a persona which looks sufficiently real.

We also have compromises in commercial software (hello, SolarWinds). Vendors need to explore better internal code controls, sourcing, and human processes. E.g. require YubiKeys from all devs, with side-channel notifications and approvals of commits; and I suspect there are some new and innovative scanning approaches we can take as AI evolves (until it evolves past humanity and enslaves us all). E.g. “this may not be a known security defect, but it looks weird compared to this developer’s history, so maybe ping another future energy source human to review it”. I’m also a fan of making critical devs work on dedicated machines, separate from the ones they use for email and web browsing, to reduce phishing/malware as a vector. No, I haven’t ever had anyplace I’ve worked approve that, but I *have* heard of some shops which pulled it off.

The final part is preparing for the next XZ that slips through and is eventually triggered. Early detection, rapid remediation, and all the other hard, expensive things. SBOM/SCA/DevSecOps are key here: you MUST be able to figure out where you are using any particular software package, and be able to implement compensating defenses (e.g., firewalls) and patch quickly. This is NOT SIMPLE AT SCALE, but it’s your best bet as the downstream customer for these things.

None of what I suggested is easy. I think this is the next phase of the Assume Breach mindset. You can’t cure EBV. You can’t prevent all possible negative outcomes. But you can reduce some risks, detect others earlier, and react aggressively when those first cancer cells show up.


The 14th Annual RSAC Disaster Recovery Breakfast Is on!

Over 15 years ago (pre-Blip) I wanted to do something fun and casual for friends and Securosis readers at the annual RSA Conference… something that I, as a budding entrepreneur, could actually afford. I started calling around and found a little place called Jillian’s right near the conference willing to open up early and serve breakfast for a reasonable rate. We ended up with around 50 people dropping in and out over those few hours, mostly just sitting around a table talking about whatever. Little did I know that our Disaster Recovery Breakfast would outlast Jillian’s, and, it seems, downtown San Francisco? I also never thought it would at one point peak at around 300 people and inspire dozens of copycats. But one thing never changed — the casual atmosphere, the chance to talk without having to scream into someone’s ear, and the great conversations fueled by coffee (and the occasional Irish coffee).

Once again, we’re back! Like last year we are hosting at the Pink Elephant, which is just a few minutes’ walk away and totally worth it if you want breakfast burritos or an omelette. This year we have two of our long-standing partners helping us out, plus a new (old) face. Here are the details:

The 14th Annual Securosis Disaster Recovery Breakfast
Thursday, May 9, 8-11 AM
Pink Elephant
142 Minna St
San Francisco, CA 94105

Come meet IANS Faculty and leadership, the LaunchTech team, members from the Vulnerable U community, the illustrious founder of Securosis (that’s me, duh), and whoever walks in the door for casual, no-marketing conversations. Drop in and out as you like, and you can even grab a coffee to go! An RSVP at rsvp@securosis.com is appreciated but not required; feel free to just show up, but if you don’t RSVP we might run out of bacon.


It’s Time for a Microsoft Trustworthy Cloud Initiative

“All cloud security failures are IAM failures, and all IAM failures are governance failures.” — me on Twitter (too many years ago to find)

CISA just released their report on the big Summer 2023 Microsoft Exchange Online intrusion. You could call it blistering, but I call it more of a third-degree plasma burn. It’s also the kind of validation I wish never had to happen. Like many other cloud security professionals, I have been concerned with the security of Microsoft’s cloud (Azure/Office). When I first started using Azure I noticed it tended towards more-open and less-secure defaults. For example, the default for running a VM in a VNet was… no Network Security Groups. The VM would be wide open to the Internet for both inbound and outbound traffic. In AWS and GCP you can’t even deploy anything without an SG attached. (The portal does now try to get you to deploy with an NSG.) Other examples? The Azure activity log doesn’t record Read activity, so you can’t identify reconnaissance. Then there are the series of security flaws discovered by the teams at Wiz, Orca, and others.

The report has great detail, but the structural issues and recommendations are the real highlights. Here are the ones I think stand out — they have implications (both good and bad) beyond Microsoft:

  • It’s a governance failure: The Board concluded that Microsoft’s security culture was inadequate (page 17).
  • Because features and innovation are prioritized over security: as written in stone by the first cave dwellers.
  • Other CSPs have better security practices: Don’t blame me, it’s item 3 on page 17, and no surprise to those of us who do this for a living.
  • Microsoft did not correct inaccurate information and still does not know what happened: This means multiple failures at multiple levels. Page 17, again.
  • There has been more than one nation-state breach: We knew this, and they refer to Midnight Blizzard. The mistakes there are also… troubling.
  • The Board believes Microsoft has deprioritized security and risk management: Bottom of page 18.
  • The Board recommends Microsoft slow innovation until they fix security: It’s been done before, but I’m not sure how Copilot feels about that.

The report then mentions the Microsoft Secure Future Initiative. I wrote on LinkedIn when that came out that it seemed inadequate. It’s like a Band-Aid when you need a tourniquet. The report goes into more detail on some specific security practices it recommends changing, but it also seems to indicate the Board considers other cloud providers to be doing a better job with security around keys, tokens, and credentials. I can only assume they also know about SAS tokens. I mean, this report is rough, and anyone using Azure and Office needs to read it. And yes, I do use both myself for various things, but I’m not… a bank or the United States Government.

Outside Microsoft specifically, there are some things in the report that make us cloud security types scream “I KNEW IT! I TOLD YOU SO!!!” at our screens:

  • NIST needs to update 800-53 for cloud: Page 21, and if you know me you’ve heard me complaining about that for years.
  • M&A is a security risk: Okay, Chris Farris and I are literally days from publishing a thing which might just call M&A a threat.
  • CSPs need to stop charging for security-relevant logs: I’m screaming religious words right now. Which is weird, since I’m an atheist.
  • CSPs should be transparent and report incidents and ALL vulnerabilities: Another one that’s an issue beyond Microsoft.
  • CSPs and the government should have better victim notification: This is interesting and unexpected. They straight up call for non-spoofable mobile notifications.
  • The government is watching, and should use FedRAMP and its buying power to incentivize change: The original Trustworthy Computing Initiative was largely the result of serious government… threats?… to look at alternative operating systems. It’s time for a replay.

It’s time for a Microsoft Trustworthy Cloud Initiative. Especially if they want us to trust them to be the leading AI provider. And FREE THE LOGS!!!

Adding a link to Joseph Menn’s Washington Post article. He’s banned in Russia, so you know you can trust him.


Resolve 90% of Cloud Incidents with RECIPE PICKS

As any long-time reader knows, I constantly abuse my past experiences and hobbies to try and make my current work sound WAY more interesting than it probably is. Or maybe it’s just an ego thing; I don’t want to think too hard about it. But on occasion, lessons from my parallel lives actually inspire some original work.

As a paramedic and a pilot I have had to memorize many dozens of mnemonics, and I’ve forgotten many more. Mnemonics are proven to be highly effective memory devices, even in the midst of intense stress like flying a plane or working a 9-1-1 call. For example, I learned “SAMPLE” for taking a patient’s history probably 30 years ago, and I still use it today because in the insanity that is some calls it can be easy to lose track and forget a fundamental. This way I always remember to ask about Signs and symptoms, Allergies, Medications, Prior medical history, Last oral intake, and Event (why did they call us today?). Having issues ventilating an intubated patient? Use DOPE. Accidentally put your airplane into a spin? Use PARE (Power, Aileron, Rudder, Elevator). The more you drill these the better they work. I memorized RAKETS for my private pilot checkride, but I definitely need to look that one up (it’s used to figure out whether you can still fly a plane with a broken part).

We don’t really use these in infosec, and I think it’s time to change that. Thus I present to you RECIPE PICKS for cloud incident response. This one hit me yesterday on an internal dev review call in one window while finishing my paramedic recertification in an open browser tab. For 4 years now, here is how I’ve taught what to look for first in a cloud incident:

I have the students leave that one up when we start the scenarios and live fire exercises. But standing in the shower I came up with a much better way to remember what to do. NOTE: the order doesn’t matter; as with SAMPLE, it’s to make sure you don’t miss anything (and the formatting breaks a little at the end, sorry):

  Resource (current config/state)
  Events (API call(s) on that resource)
  Changes (diff, plus associated API calls)
  Identity (who made the triggering change or API call)
  Permissions (of the identity; informs the blast radius)
  Entitlements (of the resource; e.g. its IAM role or managed identity)
  Public (is it public?)
  IP (all API calls from that IP address)
  Caller (all other API calls from the calling identity)
  tracK (look for indications of a pivot; e.g. role chaining)
  forenSics (on a resource, or digging into resource logs)

These steps shouldn’t be done in order, except the last two probably need to be the last two (especially the forensics). This is all based on the process I’ve figured out over the years, and I estimate you can probably close 90% of incidents relatively quickly by pulling this data. (A couple of example Athena queries are sketched at the end of this post.) I’m definitely going to start trying to build more of these into my trainings, and I’ll do some more blog posts in the coming weeks on how to use RECIPE PICKS. I’d also be remiss if I didn’t link over to a work blog post on how our platform does most of this automatically on every incident.

Let me know what you think and if I missed anything. Just email rmogull@securosis.com, since I have comments turned off due to all the ridiculous spam.
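To make “IP” and “Caller” concrete, here is a minimal sketch of the corresponding Athena queries over CloudTrail, run through the AWS CLI. The database, table, results bucket, and everything in <> are placeholders to swap for your own; the same SELECT statements work as-is in the Athena console.

#!/bin/bash
# IP: every API call from a suspect source IP address.
aws athena start-query-execution \
  --query-execution-context Database=<your database> \
  --result-configuration OutputLocation=s3://<your athena results bucket>/ \
  --query-string "SELECT eventtime, eventname, useridentity.arn, errorcode
                  FROM <your table name>
                  WHERE sourceipaddress = '<suspect ip>'
                  ORDER BY eventtime"

# Caller: every other API call made by the identity behind the triggering event.
aws athena start-query-execution \
  --query-execution-context Database=<your database> \
  --result-configuration OutputLocation=s3://<your athena results bucket>/ \
  --query-string "SELECT eventtime, eventname, sourceipaddress, errorcode
                  FROM <your table name>
                  WHERE useridentity.arn = '<calling identity arn>'
                  ORDER BY eventtime"

# Each call returns a QueryExecutionId; pull the output with:
# aws athena get-query-results --query-execution-id <id>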


Check out the shiny new Cloud Security Maturity Model 2.0!

I’m pretty excited about this one. We are finally releasing version 2.0 of the Cloud Security Maturity Model. This is the culmination of nearly 9 months of research and analysis, and a massive update from the original released in 2020. The tl;dr is that this version is not only updated to reflect current cloud security practices, but it also includes around 100 cloud security control objectives to use as Key Performance Indicators — each matched 1:1 (where possible) with a technical control you can assess (AWS for now — we plan to expand to Azure and GCP next). You can download it here — no registration wall, and it includes the spreadsheet and PDFs.

  • The CSMM 2.0 was developed by Securosis (that’s us!) and IANS Research in cooperation with the Cloud Security Alliance. Version 2.0 underwent a public peer review process at the CSA and internal review at IANS. We will keep updating it based on public feedback.
  • The model includes nearly 100 control objectives and controls, organized into 12 Categories in 3 Domains.
  • IANS released a free diagnostic self-assessment survey tool. You can quickly and easily generate a custom maturity report.
  • FireMon added a free CSMM dashboard to Cloud Defense, which will automatically assess, rate, and track your cloud maturity using the CSMM! It’s really cool. But I’m biased, because I pushed hard to build it.

Okay, that’s what it is, but here’s why you should care. When Mike and I first built the CSMM we designed it more as a discussion tool to describe the cloud journey. Then we started using it with clients and realized it also worked well as a framework to organize a cloud security program. Two of the big issues with cloud governance we’ve seen in the decade-plus we’ve been doing this are:

  • Existing security frameworks have been extended to cloud, but not designed for cloud, which creates confusion because they lack clear direction. They don’t tell you “do this for cloud” — they tell you “add this cloud stuff”. We saw a need for a cloud-centric view.
  • Security teams quickly get tossed into cloud, and while tooling has improved immensely over time, those tools flood you with data and don’t tell you where to start. We don’t lack tools, but we do lack priorities.

Version 2.0 of the CSMM was built directly to address these issues. We reworked the CSMM to work as a cloud security framework. What does that mean? The model focuses on the 12 main categories of cloud security activities, which you can use to organize your program. The maturity levels and KPIs then help define your goals and guide your program without the minutiae of handling individual misconfigurations.

What’s the difference between the Diagnostic and the Dashboard? The IANS diagnostic is where you should start. It’s a survey tool anyone can fill out without technical access to their deployments. The objective of the diagnostic is to help you quickly self-assess your program and then, using that, determine your maturity objectives. Let’s be realistic — not all organizations can or should be at “Level 5”. The diagnostic helps set realistic goals and timelines based on where you are now.

The FireMon Cloud Defense CSMM Dashboard is a quantitative, real-time assessment and tracking tool. Once you integrate it with your cloud accounts you’ll have a dashboard to track maturity for the entire organization, different business units, and even specific accounts. It’s the tool to track how you are meeting the goals established with the diagnostic. It’s self-service and covers as many AWS accounts as you have (Azure will be there once the CSMM adds Azure controls). You can also just use the CSMM spreadsheet. Options are good. Free options are better.

Finally, please send me your feedback. These are living documents and tools, and we plan to keep them continuously updated.

The usual disclosure: I’m an IANS faculty member and I manage the Cloud Defense product. But both of these are available absolutely free, no strings attached, as is the model itself.


I Broke the 3-2-1 Rule and Almost Paid The Price!

This post isn’t about some fancy new research. Consider it a friendly nudge to floss.

I’m pretty Type A about backing up, and have data going back 20+ years at this point. I’m especially particular about my family photos. Until yesterday (this is called foreshadowing) my strategy was:

  • Time Machine running on a Drobo for my main Mac. Drobo as a company is dead, but this is a direct-attached 5D, which has worked well and has enough capacity that I can lose drives and recover (which has happened).
  • The Drobo as mass storage for the large files I don’t store on my SSD. Archives, VMs, videos.
  • A WD MyBook with 12 TB, also directly connected to my Mac, with data replicated from the Drobo using Carbon Copy Cloner.
  • Backblaze for cloud backups, with a personal encryption key.
  • iCloud (I’m on the 6TB plan) for all my photos and related iCloud stuff. iCloud is synced across multiple systems.
  • Box for Securosis corporate documents.
  • Some older S3/Glacier archives.
  • Probably more. I’m old and forget things.

My entire house could burn down and I shouldn’t lose anything. But I broke the 3-2-1 rule. The 3-2-1 rule of backups is 3 copies of everything, at least 2 of them local, and 1 offsite.

My Drobo died. Completely and suddenly. Not a single drive, but the entire thing. And the moment it happened I couldn’t remember whether I was backing up ALL of the Drobo anywhere else. It was RAID — what were the odds of losing the entire device? I knew I needed to replace it soon because the drivers weren’t being updated, but I kept putting it off. Well okay, I should be fine with my CCC backups… except that wasn’t set up as a scheduled job, and I was only replicating one of the Drobo partitions. The other partitions? Well, one of them had my in-progress CloudSLAW video for next week and a demo video for the new CSMM feature we are releasing at work (remember, foreshadowing). Two time-sensitive things I REALLY didn’t want to recreate.

Cloud/Backblaze to the Rescue, and My New Strategy

It turns out I really was sending everything from every drive to the cloud, and keeping versions for a year. It cost me just over $100 (for a single machine). I’ve never thought much about it, but all the data was there. The clincher was fast, selective restore. I was able to select just what I needed, including the video files, and download a .zip in less than an hour. Then I ordered a Synology, and I’ll go through the longer restore process once that arrives.

Does this mean I can skip keeping 2 local versions on separate devices? And doesn’t RAID count as 2 devices? Nope and nope. But here’s my new strategy and reasoning, an evolution of the 3-2-1 rule:

  • Family photos and things I never want to lose are stored on 2-4 local devices and at least 2 different cloud providers, with occasional archives to a third provider. My iCloud Photos sync to my Mac. That’s backed up via Time Machine and to the (soon-to-arrive) Synology. It also goes to Backblaze, and a couple times a year I archive to S3 (see the sketch at the end of this post).
  • All critical business documents are in 2 cloud services. That’s Box, and since I sync the files locally they also land in the cloud backups of my local drive.
  • Code and other documents are in places like GitHub and OneDrive, depending on which hat I’m wearing. I just make sure there are 2 of everything at 2 different services.
  • A bootable image of my working Mac. I use Carbon Copy Cloner for this. I’m not as religious about it because I can fully work off my laptop when needed.
  • Archived and media files are single copies on the RAID, but the RAID is backed up to the cloud, from where I can selectively restore. These are the things I am okay with not having right away.
  • UPDATE: I will now keep my working video files on a second local drive. This will be directly attached to my Mac, and backed up to both the cloud and the new RAID (Synology), which will be network attached instead of directly connected.

So, 3-5 copies of all files: 1-3 local based on priority, and 1-3 in cloud, also based on priority. Baby pics are 3 local and 3 in different cloud services. The full system is 2 local, 1 bootable. Work documents are at 2 cloud services, at least one with versioning. Large “working” (media) files are 2 local, one on fast storage and the other on RAID. Mass storage is 1 local (RAID) and 1 versioned copy in cloud. All critical work applications should be on 2 systems (laptop/desktop, and for me I do a ton on iPad).

I lucked out this time. I really did not remember sending the Drobo files to Backblaze, and had a brief panic attack. And I hadn’t used selective restore previously, which helped me rapidly find and download the working files I needed.

I’m gonna go floss now.
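For the occasional S3 archive piece, the simplest version is a one-liner along these lines; the bucket name and paths are placeholders, not my actual setup:

#!/bin/bash
# Sync the local photo library to an S3 archive bucket (placeholder names).
# GLACIER or DEEP_ARCHIVE storage classes keep the bill tiny for data you
# only touch in a disaster.
aws s3 sync "$HOME/Pictures" "s3://<your-archive-bucket>/photos/" \
  --storage-class DEEP_ARCHIVE

Keep in mind DEEP_ARCHIVE restores take hours and an explicit restore request, which is fine for worst-case archives but not for time-sensitive working files.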


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.