
Building a Vendor IT Risk Management Program: Understanding Vendor IT Risk

Outsourcing is nothing new. Industries have been embracing service providers for years, for functions they either couldn't or didn't want to perform themselves. This necessarily involved integrating business systems and providing those third-party vendors with access to corporate networks and computer systems. The risk was generally deemed manageable and rationalized by the business need for those integrated processes. Until it wasn't. The post-mortem on a recent very high-profile data breach indicated the adversary got into the retailer's network, not through its own systems, but through a trusted connection with a third-party vendor. Basically the attacker owned a small service provider, and used that connection to gain a foothold within the real target's environment. The path of least resistance into your environment may no longer be through your front door. It might be through a back door (or window) you left open for a trading partner.

Business will continue to take place, and you will need to provide access to third parties. Saying 'no' is not an option. But you can no longer just ignore the risks vendors present. They dramatically expand your attack surface, which now includes the environments of all the third parties with access to your systems. Ugh. This could be thousands of different vendors. No, we aren't forgetting that most of you don't have the skills or resources to stay on top of your own technology infrastructure – not to mention critical data moving to cloud resources. Now you also need to worry about all those other organizations you can neither control nor effectively influence. Horrifying. This is when you expect Tom Cruise to show up, because this sounds like the plot of the latest Mission: Impossible sequel. But unfortunately this is your lot in life.

Yet there is hope, because threat intelligence services can now evaluate the IT risk posed by your trading partners, without needing access to their networks. In our new Building a Vendor Risk Management Program series we will go into why you can no longer ignore vendor risk, and how these services can actually pinpoint malicious activity on your vendors' networks. But just having that information is (no surprise) not enough. To efficiently and effectively manage vendor risk you need a systematic program to evaluate dangers to your organization and objectively mitigate them. We would like to thank our friends at BitSight Technologies, who have agreed to potentially license the content in this series upon completion. As always, we will write the series using our Totally Transparent Research methodology, in a totally objective and balanced way.

Regulation

You know something has been a problem for a while when regulators establish guidance to address it. Back in 2013 the regulators overseeing financial institutions in the US seemed to get religion about the need to assess and monitor vendor risk, and IT risk was a subset of the guidance they produced. Of course, as with most regulation, enforcement has been spotty, and the guidance didn't really offer a prescriptive description of what a 'program' consists of. It's not like the 12 (relatively) detailed requirements you get with the PCI-DSS. In general, the guidance covers some pretty straightforward concepts. First you should actually write down your risk management program, and then perform proper due diligence in selecting a third party. I guess you figure out what 'proper' means when the assessor shows up and lets you know that your approach was improper.
Next you need to monitor vendors on an ongoing basis, and have contingency plans in case one screws up and you need to get out of the deal. Finally you need program oversight and documentation, so you know your program is operational and effective. Not brain surgery, but also not very specific. The most detail we have found comes from the OCC (Office of the Comptroller of the Currency), which recommends an assessment of each vendor's security program in its Risk Management Guidance:

Information Security: Assess the third party's information security program. Determine whether the third party has sufficient experience in identifying, assessing, and mitigating known and emerging threats and vulnerabilities. When technology is necessary to support service delivery, assess the third party's infrastructure and application security programs, including the software development life cycle and results of vulnerability and penetration tests. Evaluate the third party's ability to implement effective and sustainable corrective actions to address deficiencies discovered during testing.

No problem, right? Especially for those of you with hundreds (or even thousands) of vendors within the scope of assessment. We'll add our standard disclaimer here: compliance doesn't make you secure, and it cannot make your vendors secure either. But it does give you a reason to allocate some funding to assessing your vendors and making sure you understand how they affect your attack surface and exploitability.

The Need for a Third-Party Risk Program

Our long-time readers won't be surprised that we prescribe a program to address a security need. Managing vendor IT risk is no different. To achieve consistent results, and be able to answer your audit committee about vendor risk, you need a systematic approach to plan the work, and then work the plan. Here are the key areas of the program we will dig into in this series:

  • Structuring the V(IT)RM Program: First we'll sketch out a vendor risk management program, starting with executive sponsorship, and defining governance and policies that make sense for each type of vendor you are dealing with. In this step you will also define risk categories and establish guidelines for assigning vendors to each category.
  • Evaluating Vendor Risk: When assessing vendors you have limited information about their IT environments. This post will dig into how to balance the limitations of what vendors self-report against external information you can glean about their security posture and malicious activity. (A simple scoring sketch appears after this list.)
  • Ongoing V(IT)R Monitoring and Communication: Once you have identified the vendors presenting the greatest risk, and taken initial action, how do you communicate your findings to vendors and internal management? This is especially important for vendors which present significant
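To make the evaluation step above a bit more concrete, here is a minimal sketch of how self-reported questionnaire results might be blended with externally observed signals to assign a vendor to a risk category. Everything in it (field names, weights, and thresholds) is hypothetical and purely illustrative; it is not a recommended scoring model.

```python
# Hypothetical sketch: blend self-reported and externally observed vendor data
# into a simple risk tier. Weights and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class VendorAssessment:
    name: str
    questionnaire_score: float   # 0-100, from the vendor's self-reported answers
    external_rating: float       # 0-100, from an external security rating service
    observed_incidents: int      # externally observed issues (e.g., botnet traffic seen)
    data_access: str             # "none", "internal", or "regulated"

def risk_tier(v: VendorAssessment) -> str:
    # Weight external observation more heavily than self-reported answers,
    # since vendors tend to present their own programs favorably.
    blended = 0.4 * v.questionnaire_score + 0.6 * v.external_rating
    blended -= 10 * v.observed_incidents        # penalize observed malicious activity
    if v.data_access == "regulated":
        blended -= 15                           # extra scrutiny for regulated data access
    if blended >= 75:
        return "low"
    if blended >= 50:
        return "medium"
    return "high"

# 0.4*85 + 0.6*60 = 70, minus 10 for one incident, minus 15 for regulated data = 45
print(risk_tier(VendorAssessment("Acme Payroll", 85, 60, 1, "regulated")))  # -> high
```

The point is simply that the external view acts as a check on what the vendor tells you, and that the resulting category drives how much ongoing scrutiny each vendor gets.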


Firestarter: The Rugged vs. SecDevOps Smackdown

After a short review of the RSA Security Conference, Rich, Mike, and Adrian debate the value of using labels like “Rugged DevOps” or “SecDevOps”. Rich sees them as different, Mike wonders if we really need them, and Adrian has been tracking their reception on the developer side of the house. Okay, it’s pathetic as smackdowns go, but you wouldn’t have read this far if we didn’t give it an interesting title. Watch or listen:


SIEM Kung Fu: Getting Started and Sustaining Value

As we wrap up this series on SIEM Kung Fu, we have discussed SIEM Fundamentals and some advanced use cases to push your SIEM beyond its rather limited out-of-the-box capabilities. To make the technology more useful over time, you should also revisit your SIEM operational processes. Many failed SIEM projects over the past 10 years have not been technology failures. More stumble over a lack of understanding of the time and resources needed to get value from the SIEM in early deployments, and of the ongoing effort required to keep it current and tuned. So a large part of SIEM Kung Fu is just making sure you have the people and process in place to leverage the technology effectively and sustainably.

Getting Started

As a matter of practice you should be focused on getting quick value out of any new technology investment, and SIEM is no exception. Even if you have had the technology in place for years, it’s useful to take a fresh look at the implementation to see if you missed any low-hanging fruit that’s there for the taking. Let’s assume you already have the system up and running, are aggregating log and event sources (including things like vulnerability data and network flows), and have already implemented some out-of-the-box policies. You already have the system in place – you are just underutilizing it.

Adversaries

For a fresh look at SIEM we recommend you start with adversaries. We described adversary analysis in detail in the CISO’s Guide to Advanced Attackers (PDF). Start by determining who is most likely to attempt to compromise your environment, and defining a likely attacker mission. Then profile potential adversaries to determine the groups most likely to attack you. At that point you can get a feel for the Tactics, Techniques, and Procedures (TTPs) those adversaries are most likely to use. This information typically comes from a threat intelligence service, although some information sharing groups can also offer technical indicators to focus on. Armed with these indicators you engage your SIEM to search for them. This is a form of hunting, which we will detail later in this post, and you may well find evidence of active threat actors in your environment. That isn’t a great outcome for your organization, but it does prove the value of security monitoring. At that point you can triage the alerts generated by your SIEM searches to figure out whether you are dealing with false positives or a full-blown incident. Among the millions of indicators you could search for, we suggest you start with the attacks of your most likely adversaries. Odds are you’ll find lots of things if you search for anything and everything; by initially focusing on adversaries you restrict your search to the attack patterns most likely to be used against you.

Two Tracks

Once you have picked the low-hanging fruit from adversary analysis, focus shifts toward putting advanced use cases into a systematic process that is consistent and repeatable. Let’s break up the world into two main categories of SIEM operations to describe the different usage models: reactive and proactive.

Reactive

Reactive usage of SIEM should be familiar, because that’s how most security teams function. It’s the alert/triage/respond cycle. The SIEM fires an alert, your tier 1 analyst figures out whether it’s legitimate, and then you figure out how to respond – typically via escalation to tier 2. You can do a lot to refine this process as well, so even if you are reacting you can do it more efficiently.
Here are a few tips:

  • Leverage Threat Intel: As we described above under adversary analysis, and in our previous post, you can benefit from the misfortune of others by integrating threat intelligence into your SIEM searches. If you see evidence of a recent attack pattern (provided by threat intel) within your environment, you can get ahead of it. We described this in our Leveraging Threat Intel in Security Monitoring paper. Use it – it works. (A simple indicator-matching sketch appears at the end of this post.)
  • User Behavioral Analytics (UBA): You can also figure out the relative severity of a situation by tracking the attack to user activity. This involves monitoring activity (and establishing the baselines/profiles described in our last post) not just by device, but also by aggregating data and profiling activity for individuals. For example, instead of just monitoring the CEO’s computer, tablet, and smartphone independently, you can look at all three devices to establish a broader profile of the CEO’s activity. Then if you see any of her devices acting outside that baseline, that triggers an alert you can triage/investigate.
  • Insider Threat: You can also optimize some of your SIEM rules around insiders. During many attacks an adversary eventually gains a foothold in your environment and becomes an insider. You can optimize your SIEM rules to look for activity specifically targeting things you know would be valuable to insiders, such as sensitive data (both structured and unstructured). UBA is also useful here, because you are profiling an insider and can watch for them doing strange reconnaissance, or possibly moving an uncharacteristically large amount of data.
  • Threat Modeling: Yes, advanced SIEM users still work through the process of looking at specific, high-value technology assets and figuring out the best ways to compromise them. This is predominantly used in the “external stack attack” use case described in the last post. By analyzing the ways to break an application (or technology stack), SOC analysts can build SIEM rules from those attack patterns, to detect evidence an asset is being targeted.

Keep in mind that you need to consistently review your SIEM ruleset, add new attack patterns/use cases, and prune rules that are no longer relevant. The size of your ruleset correlates with the performance and responsiveness of your SIEM, so you need to balance looking for everything (and crushing the system) against the chance of missing something. This is a key part of the ongoing maintenance required to keep your SIEM relevant and valuable. Whether you get new rules from a threat intelligence vendor, drinking buddies, or conferences, new rules require time to refine thresholds and determine relevance to your organization. So we reiterate that SIEM
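As a rough illustration of the indicator-driven searching described above, here is a minimal sketch of matching aggregated events against a small set of threat intel indicators. The event field names and indicator values are hypothetical, and any real SIEM does this natively and at far greater scale, but it shows the basic shape of the matching logic.

```python
# Hypothetical sketch: scan aggregated events for matches against threat intel
# indicators. Field names and indicator values are illustrative only.

bad_indicators = {
    "ips": {"203.0.113.45", "198.51.100.7"},          # documentation-range addresses
    "domains": {"malicious.example.com"},
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},
}

def match_indicators(event: dict) -> list:
    """Return the indicator types that matched this event."""
    hits = []
    if event.get("dest_ip") in bad_indicators["ips"]:
        hits.append("ip")
    if event.get("dns_query") in bad_indicators["domains"]:
        hits.append("domain")
    if event.get("file_hash") in bad_indicators["hashes"]:
        hits.append("hash")
    return hits

events = [
    {"host": "laptop-17", "dest_ip": "203.0.113.45", "dns_query": None, "file_hash": None},
    {"host": "web-02", "dest_ip": "192.0.2.10", "dns_query": "updates.example.org", "file_hash": None},
]

for e in events:
    hits = match_indicators(e)
    if hits:
        # Matches become alerts to triage, not verdicts by themselves.
        print(f"ALERT: {e['host']} matched indicators: {hits}")
```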


Incite 3/9/2016: Star Lord

Everything is a game nowadays. Not like Words with Friends (why yes, since you ask – I do enjoy getting my ass kicked by the women in my life) or even Madden Mobile (which the Boy plays constantly) – I’m talking about gamification. In our security world, the idea is that rank and file employees will actually pay attention to security stuff they don’t give a rat’s ass about… if you make it all into a game. So get departments to compete for who can do best in the phishing simulation. Or give a bounty to the team with the fewest device compromises due to surfing pr0n. Actually, though, it might be more fun to post the link that compromised the machine in the first place. The employee with the nastiest NSFW link would win. And get fired… But I digress.

I find that I do play these games. But not on my own device. I’m kind of obsessed with Starbucks’ loyalty program. If you accumulate 12 stars you get a free drink. It’s a great deal for me. I get a large brewed coffee most days. I don’t buy expensive lattes, and I get the same star for every drink I buy. And if I have the kids with me, I’ll perform 3 or 4 different transactions, so I can get multiple stars. When I get my reward drink, I get a 7 shot Mocha. Yes, 7 shots. I’m a lot of fun in the two hours after I drink my reward.

And then Starbucks sends out promotions. For a while, if you ordered a drink through their mobile app, you’d get an extra star. So I did. I’d sit in their store, bust open my phone, order the drink, and then walk up to the counter and get it. Win! Extra star! Sometimes they’d offer 3 extra stars if you bought a latte drink, an iced coffee, and a breakfast sandwich within a 3-day period. Well, a guy’s gotta eat, right? And I was ordering the iced coffee anyway in the summer. Win! Three bonus stars. Sometimes they’d send a request for a survey and give me a bunch of stars for filling it out. Win! I might even be honest on the survey… but probably not. As long as I get my stars, I’m good. Yes, I’m gaming the system for my stars. And I have two reward drinks waiting for me, so evidently it’s working. I’m going to be in Starbucks anyway, and drinking coffee anyway – I might as well optimize for free drinks. Oh crap, what the hell have I become? A star whore? Ugh. Let’s flip that perspective. I’m the Star Lord. Yes! I like that. Who wants to be Groot?

Pretty much every loyalty program gets gamed. If you travel like I do, you have done the Dec 30 or 31 mileage run to make the next level in a program. You stay in a crappy Marriott 20 miles away from your meeting, instead of the awesome hotel right next to the client’s office. Just to get the extra night. You do it. Everyone does. And now it’s a cat and mouse game. The airlines change their programs every 2-3 years, to force customers to find new ways to optimize mileage accumulation. Starbucks is changing their program to reward customers based on what they spend. The nerve of them. Now it will take twice as long to get my reward drinks. Until I figure out how to game this version of the program. And I will, because to me gaming their game is the game.

–Mike

Photo credit: “Star-Lord ord” from Dex

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Feb 17 – RSA Conference – The Good, Bad and Ugly Dec 8 – 2015 Wrap Up and 2016 Non-Predictions Nov 16 – The Blame Game Nov 3 – Get Your Marshmallows Oct 19 – re:Invent Yourself (or else) Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Securing Hadoop Architectural Security Issues Architecture and Composition Security Recommendations for NoSQL platforms SIEM Kung Fu Advanced Use Cases Fundamentals Building a Threat Intelligence Program Success and Sharing Using TI Gathering TI Introduction Recently Published Papers Threat Detection Evolution Building Security into DevOps Pragmatic Security for Cloud and Hybrid Networks EMV Migration and the Changing Payments Landscape Applied Threat Intelligence Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security The Future of Security Incite 4 U An expensive lie: Many organizations don’t really take security seriously. It has never been proven that breaches cause


SIEM Kung Fu: Advanced Use Cases

Given the advance of SIEM technology, the use cases described in the first post of our SIEM Kung Fu series are very achievable. But with the advent of more packaged attack kits leveraged by better organized (and funded) adversaries, and the insider threat, you need to go well beyond what comes out of the [SIEM] box, and what can be deployed during a one-week PoC, to detect real advanced attacks. So as we dig into more advanced use cases we will tackle how to optimize your SIEM to both a) detect advanced attacks and b) track user activity to identify possible malicious insider behavior. There is significant overlap between these two use cases. Ultimately, in almost every successful attack, the adversary gains presence on the network and is therefore technically an insider. But let’s take adversaries out of play here, because in terms of detection it doesn’t matter whether the actor is external or internal to your organization. They want to get your stuff. So we’ll break up the advanced use cases by target. One path attacks the application stack directly (from the outside), establishing a direct path to the data center without requiring any lateral movement to achieve the mission. The other path is to compromise devices (typically through an employee), escalate privileges, and move laterally to achieve the mission. Both can be detected by a properly utilized SIEM.

Attacking Employees

The most prominent attack vector we see in practice today is the advanced attack, also known as an APT or a kill chain, among other terms. Regardless of what you call it, this is a process in which an employee device is compromised, and then used as a launching point to systematically move deeper within an organization – to find, access, and exfiltrate critical information. Detecting this kind of attack requires looking for anomalous behavior at a variety of levels within the environment. Fortunately employees (and their devices) should be reasonably predictable in what they do, which resources they access, and their daily traffic patterns. In a typical device-centric attack an adversary follows a predictable lifecycle: perform reconnaissance, send an exploit to the device, escalate privileges, and then use that device as a base for more reconnaissance, more exploits, and burrowing further into the environment. We have spent a lot of time on how threat detection needs to evolve and how to catch these attacks using network-based telemetry. Leveraging your SIEM to find these attacks is similar; it involves understanding the trail the adversary leaves, the resulting data you can analyze, and the patterns to look for. An attacker’s trail is based specifically on change. During any attack the adversary changes something on the device being attacked. Whether it’s the device configuration, new user accounts, increased account privileges, or just unusual traffic flows, the SIEM has access to all this data to detect attacks. Initial usage of SIEM technology was entirely dependent on infrastructure logs, such as those from network and security devices. That made sense, because SIEM was initially deployed to stem the flow of alerts streaming in from firewalls, IDS, and other network security devices. But that offered a very limited view of activity, and eventually became easy for adversaries to evade. So over the past decade many additional data sources have been integrated into the SIEM to provide a much broader view of your environment.
  • Endpoint Telemetry: Endpoint detection has become very shiny in security circles. There is a ton of interest in doing forensics on endpoints, and if you are trying to figure out how the proverbial horse left the barn, endpoint telemetry is great. Another view is that devices are targeted in virtually every attack, so highly detailed data about exactly what’s happening on an endpoint is critical – not just to incident response, but also to detection. And this data (or the associated metadata) can be instrumental when watching for the kind of change that may indicate an active threat actor.
  • Identity Information: Inevitably, once an adversary has presence in your environment, they will go after your identity infrastructure, because that is usually the path of least resistance to valuable data. So you need access to identity stores; watch for new account creation and new privilege entitlements, both of which are likely to identify attacks in process.
  • Network Flows: The next step in the attack is to move laterally within the environment, and move data around. This leaves a trail on the network that can be detected by tracking network flows. Of course full packet capture provides the same information and more granularity, with a greater demand for data collection and analytics.
  • Threat Intelligence: Finally, you can leverage external threat data and IP reputation to pinpoint egress network traffic that may be headed places you know are bad. Exfiltration now typically includes proprietary encryption, so you aren’t likely to catch the act through content analysis; instead you need to track where data is headed. You can also use threat intelligence indicators to watch for specific new attacks in your environment, as we have discussed ad nauseam in our threat intelligence and security monitoring research.

The key to using this data to find advanced attacks is to establish a profile of what’s normal within your environment, and then look for anomalous activity. We know anomaly detection has been under discussion in security circles for decades, but it is still one of the top ways to figure out when attackers are doing their thing in your environment. Of course keeping your baseline current and minimizing false positives are the keys to making a SIEM useful for this use case. That requires ongoing effort and tuning. No security monitoring tool just works – so go in with your eyes open about the amount of work required.

Multiple Data Points

Speaking of minimizing false positives, how can you do that? More SIEM projects fail due to alert exhaustion than for any other reason, so don’t rely on any single data point to produce a verdict that an alert is legitimate and demands investigation. (A small corroboration sketch follows below.) Reduction of false positives is even more critical because of the skills gap which
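To illustrate the multiple-data-point idea, here is a minimal, purely hypothetical sketch of corroborating signals from several of the sources above before deciding how far to escalate an alert. The signal names, weights, and thresholds are invented for illustration; the point is simply that no single data point should produce a verdict on its own.

```python
# Hypothetical sketch: corroborate an alert across several data sources before
# escalating, to cut down on false positives. Weights and thresholds are illustrative.

def corroborated_severity(signals: dict) -> str:
    """signals maps data-source names to booleans: did that source see something odd?"""
    weights = {
        "endpoint_change": 2,          # e.g., config change or new service on a device
        "new_privileged_account": 3,   # identity store shows a new entitlement
        "anomalous_netflow": 2,        # unusual lateral movement or egress traffic
        "ti_indicator_match": 3,       # destination appears on a known-bad list
    }
    score = sum(w for name, w in weights.items() if signals.get(name))
    if score >= 6:
        return "escalate to tier 2"
    if score >= 3:
        return "triage"
    return "log only"

# A single odd endpoint change stays low priority...
print(corroborated_severity({"endpoint_change": True}))                  # log only
# ...but the same change plus a new privileged account and odd egress escalates.
print(corroborated_severity({"endpoint_change": True,
                             "new_privileged_account": True,
                             "anomalous_netflow": True}))                # escalate to tier 2
```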


Incite 2/29/2016: Leap Day

Today is leap day, the last day of February in a leap year. That means the month of February has 29 days. It happens once every 4 years. I have one friend (who I know of) with a birthday on Leap Day. That must have been cool. You feel very special every four years. And you just jump on the Feb 28 bandwagon to celebrate your birthday in non-leap years. Win/win.

The idea of a four-year cycle made me curious. What was I doing during leap day in 2012? Turns out I was doing the same thing I’ll be doing today – running between meetings at the RSA Conference. This year, leap day is on Monday, and that’s the day I usually spend at the America’s Growth Capital Conference, networking with CEOs and investors. It’s a great way to take the temperature of the money side of the security industry. And I love to moderate the panels, facilitating debate between leaders of the security industry. Maybe I’ll even interject an opinion or two during the event. That’s been known to happen.

Then I started looking back at my other calendar entries for 2012. The boy was playing baseball. Wow, that seems like a long time ago – it feels like he’s been playing lacrosse forever. The girls were dancing, and they had weekend practices getting ready for their June Disney trip. XX1 was getting ready for her middle school orientation. Now she’s in high school. The 4 years represent less than 10% of my life, but a full third of the twins’ existence. That’s a strange thought.

And have I made progress professionally? I think so. Our business has grown. We’ll probably have three times the number of people at the Disaster Recovery Breakfast, if that’s any measure of success. The cloud security work we do barely provided beer money in 2012, and now it’s the future of Securosis. I’ve deepened relationships with some clients and stopped working with others. Many of my friends have moved to different gigs. But overall I’m happy with my professional progress.

Personally I’m a fundamentally different person. I have described a lot of my transformation here in the Incite, or at least its results. I view the world differently now. I was figuring out which mindfulness practices worked for me back in 2012. That was also the beginning of a multi-year process to evaluate who I was and what changes I needed for the next phase of my life. Over the past four years, I have done a lot of work personally and made those changes. I couldn’t be happier with the trajectory of my life right now.

So this week I’m going to celebrate with many close friends. Security is what I do, and this week is one of the times we assemble en masse. What’s not to love? Even cooler is that I have no idea what I’ll be writing about in 2020. My future is unwritten, and that’s very exciting. I do know that by the next time a leap year comes along, XX1 will be midway through college. The twins will be driving (oy, my insurance bill!). And in all likelihood, I’ll be at the RSA Conference hanging out with my friends at the W, waiting patiently for a drink. Most things change, but some stuff stays the same. And there is comfort in that.

–Mike

Photo credit: “60:366” from chrisjtse

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), along with deep dives into cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the post or download the guide directly (PDF). It’s that time of year again!
The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Dec 8 – 2015 Wrap Up and 2016 Non-Predictions Nov 16 – The Blame Game Nov 3 – Get Your Marshmallows Oct 19 – re:Invent Yourself (or else) Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Securing Hadoop Architectural Security Issues Architecture and Composition Security Recommendations for NoSQL platforms SIEM Kung Fu Fundamentals Building a Threat Intelligence Program Success and Sharing Using TI Gathering TI Introduction Recently Published Papers Threat Detection Evolution Building Security into DevOps Pragmatic Security for Cloud and Hybrid Networks EMV Migration and the Changing Payments Landscape Applied Threat Intelligence Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security The Future of Security Incite 4 U Phisherman’s dream: Brian Krebs has written a lot about small and mid-sized companies being targets for scammers over the last


Presenting the RSA Conference Guide 2016

Apparently the RSA Conference folks failed to regain their senses after letting us have free rein last year to post our RSA Conference Guide to the conference blog. We changed the structure this year, and here is how we explained it in the introductory post of the Guide:

In previous years the RSAC-G followed a consistent format. An overview of top-level trends and themes you would see at the show, a deep dive into our coverage areas, and a breakout of what’s on the show floor. We decided to change things up this year. The conference has grown enough that our old format doesn’t make as much sense. And we are in the middle of shaking up the company, so we might as well update the RSAC-G while we’re at it. This year we’ll still highlight main themes, which often set the tone for the rest of the security presentations and marketing you see throughout the year. But instead of deep dives into our coverage areas, we are focusing on projects and problems we see many clients tackling. When you go to a conference like RSA, it isn’t really to learn about technology for technology’s sake – you are there to learn how to solve (or at least manage) particular problems and projects. This year our deep dives are structured around the security problems and projects we see topping priority lists at most organizations. Some are old favorites, and others are just hitting the radar for some of you.

We hope the new structure is a bit more practical. We want you to be able to pop open the Guide, find something at the top of your list, jump into that section, and know where to focus your time. Then we take all that raw content and format it into a snazzy PDF with a ton of meme goodness, so you can pop the guide onto your device and refer to it during the show. Without further ado, we are excited to present the entire RSA Conference Guide 2016 (PDF).

Just so you can get a taste of the meme awesomeness of the published Guide, check out this image. That’s right. We may be changing the business a bit, but we aren’t going to get more politically correct, that’s for sure. And it’s true. Most n00b responders soil their pants a bit until they get comfortable during incidents.

And in case you want to check out the posts on the RSAC blog:

Introduction
  • The Securosis Guide to the RSA Conference 2016: The FUD Awakens!

Key Themes (yes, all the key themes have a Star Wars flavor, just because we can)
  • Threat Intelligence & Bothan Spies
  • R2DevOps
  • Escape from Cloud City
  • The Beginning of the End(point) for the Empire
  • Training Security Jedi
  • Attack of the (Analytics) Clones

Deep Dives
  • Cloud Security
  • Threat Protection
  • Data Security


Summary: The Cloud Horizon

By Adrian

Two weeks ago Rich sketched out some changes to our Friday Summary, including how the content will change. But we haven’t spelled out our reasons. Our motivation is simple. In a decade, over half your systems will be in some cloud somewhere. The Summary will still be about security, but we’ll focus on security for cloud services, cloud applications, and how DevOps techniques intertwine with each. Rather than rehash on-premise security issues we have covered (ad nauseam) for 9 years, we believe it’s far more helpful to IT and security folks to discuss what is on the near horizon which they are not already familiar with. We can say with certainty that most of what you’ve learned about “the right way to do things” in security will be challenged by cloud deployments, so we are tuning the Summary to increase understanding of the changes in store, and what to do about them. Trends, features, tools, and even some code. We know it’s not for everybody, but if you’re seriously interested, you can subscribe directly to the Friday Summary.

The RSA conference is next week, so don’t forget to get a copy of Securosis’s Guide to the RSA Conference. But be warned; Mike’s been at the meme generator again, and some things you just can’t unsee. Oh, and if you’re interested in attending the Eighth Annual Securosis Disaster Recovery Breakfast at RSA, please RSVP. That way we know how much bacon to order. Or Bloody Marys to make. Something like that.

Top Posts for the Week

  • CSA Summit at RSA Conference
  • Docker Containers as a Service walkthrough
  • Scheduling SSH jobs using AWS Lambda
  • Transparency and Auditing on AWS
  • Introducing custom authorizers in Amazon API Gateway
  • S3 Lifecycle Policies, Versioning & Encryption: AWS Security
  • AWS Basic Security Checklist
  • CloudWatch Logs Subscription Consumer + Elasticsearch + Kibana Dashboards
  • Securely Accessing Customer AWS Accounts with Cross-Account IAM Roles
  • Red Hat Brings DevOps to the Network with New Ansible Capabilities
  • Introducing the Fastly Security Speaker Series
  • Account Separation and Mandatory Access Control
  • Customizing CloudFormation With Python
  • Tidas: a new service for building password-less apps
  • NXLog Open Source Log Management tool
  • Why the FBI’s request to Apple will affect civil rights for a generation
  • Staying on top of the DevOps game in 2016
  • Continuous Web Security Testing with CircleCI
  • Spotify Moves Itself Onto Google’s Cloud–Lucky for Google
  • Continuous Delivery and Effective Feature Flagging with LaunchDarkly – AWS Startup Collection
  • Design Patterns using Amazon DynamoDB
  • Using Amazon API Gateway with microservices deployed on Amazon ECS
  • 8 Common AWS Security Issues – and How to Fix Them
  • Using Roles to Secure Your Environment: Part 2
  • Automate EBS Snapshots using a Lambda function
  • Attending RSA in San Francisco? Visit the AWS Pop-up Loft for Security Talks!
  • Amazon CTO On Encryption: “Evil Players Will Get Access To These Backdoors”
  • IBM previews new tools for developing with Swift in the cloud

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, so if you have submissions please email them to info@securosis.com. Alerts literally drive DevOps. One may fire off a cloud-based service, or it might indicate a failure a human needs to look at.
When putting together a continuous integration pipeline, or processing cloud services, how do you communicate status? SMS and email are the common output formats, and developer tools like Slack or bug tracking systems tend to be the endpoints, but it’s hard to manage and integrate the streams of automated outputs. And once you get one message for a particular event type, you usually don’t want to see that event again for a while. You can create a simple web console, or use AWS to stream to specified recipients, but that’s all manual setup. Things like Slack can help with individuals, teams, and third parties, but managing them is frankly a pain in the ass. As you scale up cloud and DevOps processes it’s easy to get overwhelmed. One of the tools I was looking at this week was (x)matters, which provides an integration and management hub for automated messages. It can understand messages from multiple sources and offers aggregation to avoid over-pinging users. I have not seen many products addressing this problem, so I wanted to pass it along. (A tiny suppression sketch appears at the end of this post.)

Securosis Blog Posts this Week

  • Firestarter: RSA Conference – the Good, Bad, and the Ugly.
  • Securing Hadoop: Technical Recommendations.
  • Securing Hadoop: Enterprise Security For NoSQL.

Other Securosis News and Quotes

I posted a piece at Macworld on the FBI vs. Apple that has gotten a lot of attention. It got linked all over the place and I did a bunch of interviews, but I won’t spam you with them. We are posting our whole RSA Conference Guide as posts over at the RSA Conference blog – here are the latest:

  • Securosis Guide: Training Security Jedi
  • Securosis Guide: The Beginning of the End(point) for the Empire
  • Securosis Guide: Escape from Cloud City

Training and Events

We are giving multiple presentations at the RSA Conference:

  • Rich and Mike are giving Cloud Security Accountability Tour
  • Rich is co-presenting with Bill Shinn of AWS: Aspirin as a Service: Using the Cloud to Cure Security Headaches
  • David Mortman is presenting: Learning from Unicorns While Living with Legacy; Docker: Containing the Security Excitement; Docker: Containing the Security Excitement (Focus-On); and Leveraging Analytics for Data Protection Decisions
  • Rich is giving a presentation on Rugged DevOps at Scale at DevOps Connect the Monday of RSAC

We are running two classes at Black Hat USA:

  • Cloud Security Hands-On (CCSK-Plus)
  • Advanced Cloud Security and Applied SecDevOps
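Returning to the alert-aggregation problem from the Tool of the Week section above: here is a minimal, purely illustrative sketch of suppressing repeat notifications for the same event type within a time window, which is the basic behavior you want from any messaging hub so automated pipelines don’t ping people repeatedly. The window length and delivery call are placeholders, not a reference to any particular product.

```python
# Hypothetical sketch: suppress duplicate notifications for the same event type
# within a time window. The window and the delivery call are placeholders.

import time

SUPPRESSION_WINDOW = 15 * 60          # seconds; notify once per event type per window
_last_sent = {}                       # event type -> timestamp of last notification

def notify(event_type, message):
    """Send a notification unless the same event type fired recently."""
    now = time.time()
    last = _last_sent.get(event_type)
    if last is not None and now - last < SUPPRESSION_WINDOW:
        return False                  # suppressed duplicate
    _last_sent[event_type] = now
    print(f"[notify] {event_type}: {message}")   # stand-in for Slack/SMS/email delivery
    return True

notify("build-failure", "pipeline step 'deploy' failed")    # delivered
notify("build-failure", "pipeline step 'deploy' failed")    # suppressed this time
```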


Do We Have a Right to Security?

Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even feasibility are all irrelevant. Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities. Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers. Everything, all of it, boils down to a single question. Do we have a right to security?

This isn’t the government vs. some technology companies. It’s the government vs. your right to fundamental security in the digital age. Vendors like Apple have hit the point where some of the products they make, for us, are so secure that it is nearly impossible, if not impossible, to crack them. As a lifetime security professional, this is what my entire industry has been dreaming of since the dawn of computers. Secure commerce, secure communications, secure data storage. A foundation to finally start reducing all those data breaches, to stop China, Russia, and others from wheedling their way into our critical infrastructure. To make phones so secure they almost aren’t worth stealing, since even the parts aren’t worth much. To build the secure foundation for the digital age that we so lack, and so desperately need. So an entire hospital isn’t held hostage because one person clicked on the wrong link.

The FBI, DOJ, and others are debating whether secure products and services should be legal. They hide this in language around warrants and lawful access, and scream about terrorists and child pornographers. What they don’t say, what they never admit, is that it is impossible to build in back doors for law enforcement without creating security vulnerabilities. It simply can’t be done. If Apple, the government, or anyone else has master access to your device, to a service, or to communications, that is a security flaw. It is impossible for them to guarantee that criminals or hostile governments won’t also gain such access. This isn’t paranoia, it’s a demonstrable fact. No company or government is completely secure. And this completely ignores the fact that if the US government makes security illegal here, that destroys any concept of security throughout the rest of the world, especially in repressive regimes. Say goodbye to any possibility of new democracies. Never mind the consequences here at home. Access to our phones and our communications these days isn’t like reading our mail or listening to our phone calls – it’s more like listening to whispers to our partners at home. Like tracking how we express our love to our children, or fight the demons in our own minds.

The FBI wants this case to be about a single phone used by a single dead terrorist in San Bernardino, to distract us from asking the real question. It will not stop at this one case – that isn’t how law works. They are also teaming with legislators to make encrypted, secure devices and services illegal. That isn’t conspiracy theory – it is the stated position of the Director of the FBI. Eventually they want systems to access any device or form of communications, at scale. As they already have with our phone system. Keep in mind that there is no way to limit this to consumer technologies; it will have to apply to business systems as well, undermining corporate security. So ignore all of that and ask yourself, do we have a right to security?
To secure devices, communications, and services? Devices secure from criminals, foreign governments, and yes, even our own? And by extension, do we have a right to privacy? Because privacy without security is impossible. Because that is what this fight is about, and there is no middle ground, no mystery answer hiding in a research project, no compromise. I am a security expert. I have spent 25 years in public service and most definitely don’t consider myself a social activist. I am amused by conspiracy theories, but never take them seriously. But it would be unconscionable for me to remain silent when our fundamental rights are under assault by elements within our own government.


Building a Threat Intelligence Program: Gathering TI

[Note: We received some feedback on the series that prompted us to clarify what we meant by scale and context towards the end of the post. See? We do listen to feedback on the posts. – Mike]

We started documenting how to build a Threat Intelligence program in our first post, so now it’s time to dig into the mechanics of thinking more strategically and systematically about how to benefit from the misfortune of others and make the best use of TI. It’s hard to use TI you don’t actually have yet, so the first step is to gather the TI you need.

Defining TI Requirements

A ton of external security data is available. The threat intelligence market has exploded over the past year. Not only are dozens of emerging companies offering various kinds of security data, but many existing security vendors are trying to introduce TI services as well, to capitalize on the hype. We also see a number of new companies with offerings to help collect, aggregate, and analyze TI. But we aren’t interested in hype – what new products and services can improve your security posture? With no lack of options, how can you choose the most effective TI for you? As always, we suggest you start by defining your problem, and then identifying the offerings that would help you solve it most effectively. Start with your primary use case for threat intel. Basically, what is the catalyst to spend money? That’s the place to start. Our research indicates this catalyst is typically one of a handful of issues:

  • Attack prevention/detection: This is the primary use case for most TI investments. Basically you can’t keep pace with adversaries, so you need external security data to tell you what to look for (and possibly block). This budget tends to be associated with advanced attackers, so if there is concern about them within the executive suite, this is likely the best place to start.
  • Forensics: If you have a successful compromise, you will want TI to help narrow the focus of your investigation. This process is outlined in our Threat Intelligence + Incident Response research.
  • Hunting: Some organizations have teams tasked to find evidence of adversary activity within the environment, even if existing alerting/detection technologies are not finding anything. These skilled practitioners can use new malware samples from a TI service effectively, and can also use the latest information about adversaries to look for them before they act overtly (and trigger traditional detection).

Once you have identified primary and secondary use cases, you need to look at potential adversaries. Specific TI sources – both platform vendors and pure data providers – specialize in specific adversaries or target types. Take a similar approach with adversaries: understand who your primary attackers are likely to be, and find providers with expertise in tracking them. The last part of defining TI requirements is to decide how you will use the data. Will it trigger automated blocking on active controls, as described in Applied Threat Intelligence? Will data be pumped into your SIEM or other security monitors for alerting, as described in Threat Intelligence and Security Monitoring? Will TI only be used by advanced adversary hunters? You need to answer these questions to understand how to integrate TI into your monitors and controls (a small sketch of this decision appears at the end of this post). When thinking about threat intelligence programmatically, think not just about how you can use TI today, but also about what you want to do further down the line. Is automatic blocking based on TI realistic?
If so, that raises different considerations than just monitoring. This aspirational thinking can demand flexibility that gives you better options moving forward. You don’t want to be tied to a specific TI data source, and maybe not even to a specific aggregation platform. A TI program is about how to leverage data in your security program, not how to use today’s data services. That’s why we suggest focusing on your requirements first, and then finding optimal solutions.

Budgeting

After you define what you need from TI, how will you pay for it? We know, that’s a pesky detail, but as you set up a TI program it is important to figure out which executive sponsors will support it, and whether that funding source is sustainable. When a breach happens, a ton of money gets spent on anything and everything to make it go away. There is no resistance to funding security projects, until there is – which tends to happen once the road rash heals a bit. So you need to line up support for using external data, and ensure you have a funding source that sees the value of the investment now and in the future. Depending on your organization, security may have its own budget to spend on key technologies; in that case you just build the cost into the security operations budget, because TI is sold on a subscription basis. If you need to associate specific spending with specific projects, you’ll need to find the right budget sources. We suggest you stay as close to advanced threat prevention/detection as you can, because that’s the easiest case to make for TI.

How much money do you need? Of course that depends on the size of your organization. At this point many TI data services are priced at a flat annual rate, which is great for a huge company which can leverage the data. If you have a smaller team you’ll need to work with the vendor on lower pricing or different pricing models, or look at lower cost alternatives. For TI platform expenditures, which we will discuss later in the series, you will probably be looking at a per-seat cost. As you are building out your program it makes sense to talk to some TI providers to get preliminary quotes on what their services cost. Don’t get these folks engaged in a sales cycle before you are ready, but you need a feel for current pricing – that is something any potential executive sponsor needs to know. While we are discussing money, this is a good point to start thinking about how to quantify the
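As promised above, here is a minimal sketch of the “how will you use the data” decision: routing each indicator to blocking, alerting, or hunting based on the primary use case and the confidence the provider assigns. The confidence field, thresholds, and action names are hypothetical; the point is to make the integration decision explicit up front, not to prescribe specific numbers.

```python
# Hypothetical sketch: route threat intel indicators to an action based on the
# primary use case and provider-assigned confidence. Thresholds are illustrative.

def ti_action(indicator, use_case):
    confidence = indicator.get("confidence", 0)   # 0-100, as reported by the provider
    if use_case == "prevention" and confidence >= 90:
        return "block"        # push to active controls (e.g., egress filtering)
    if confidence >= 60:
        return "alert"        # raise a SIEM alert for the reactive triage process
    return "hunt"             # low confidence: leave it for the hunters to chase

print(ti_action({"type": "ip", "value": "203.0.113.45", "confidence": 95}, "prevention"))   # block
print(ti_action({"type": "domain", "value": "bad.example.com", "confidence": 70}, "detection"))  # alert
```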


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.