Securosis

Research

SecMon State of the Union: Refreshing Requirements

Now that you understand the use cases for security monitoring, our next step is to translate them into requirements for your strategic security monitoring platform. In other words, now that you have an idea of the problem(s) you need to solve, what capabilities do you need to address them? Part of that discussion is inevitably about what you don’t get from your existing security monitoring approach – this research wouldn’t be very interesting if your existing tools were all peachy.

Visibility

We made the case that Visibility Is Job #1 in our Security Decision Support series. Maintaining sufficient visibility across all the moving pieces in your environment is getting harder. So when we boil it down to a set of requirements, it looks like this:

Aggregate Existing Security Data: We could have called this requirement “same as it ever was,” because all your security controls generate a bunch of data you need to collect – kind of like the stuff you were gathering in the early days of SEM (Security Event Management) or log management 15 years ago. Given all the other things on your plate, what you don’t want is to have to worry about integrating your security devices, or figuring out how to scale a solution to the size of your environment. To be clear, security data aggregation has become a commodity, so this is really table stakes for any solution you consider.

Data Management: Amazingly enough, when you aggregate a bunch of security data, you need to manage it. So data management is still a thing. We don’t need to go back to SIEM 101, but aggregating, normalizing, reducing, and archiving security data is a core function for any security monitoring platform – regardless of whether it started life as SIEM or a security analytics product.
One thing to consider (which we will dig into more when we get to procurement strategies) is the cost of storage, because some emerging cloud-based pricing models can be punitive when you significantly increase the amount of security data collected.

Embracing New Data Sources: In the old days the complaint was that vendors did not support all the devices (security, networking, and computing) in the organization. As explained above, that’s less of an issue now. But consuming and integrating cloud monitoring, threat intelligence, business context (such as asset information and user profiles), and non-syslog events all drive a clear need for streamlined integration, to get value from additional data faster.

Seeing into the Cloud

When considering the future requirements of a security monitoring platform, you need to understand how it will track what’s happening in the cloud, because it seems the cloud is here to stay (yes, that was facetious). Start with API support, the lingua franca of the cloud. Any platform you choose must be able to make API calls to the services you use, and/or pull information and alerts from a CASB (Cloud Access Security Broker) to track use of SaaS within your organization. You’ll also want to understand the architecture involved in gathering data from multiple cloud sources. You definitely use multiple SaaS services, and likely have many IaaS (Infrastructure as a Service) accounts, possibly with multiple providers, to consider. All these environments generate data which needs to be analyzed for security impact, so you should define a standard cloud logging and monitoring approach, and likely centralize aggregation of cloud security data. You should also consider how cloud monitoring integrates with your on-premises solution. For more detail please see our paper on Monitoring the Hybrid Cloud.
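To make the normalization and cloud-aggregation requirements concrete, here is a minimal Python sketch which maps two very different sources – a syslog-style line and a CloudTrail-style JSON record – into one common event schema. The schema and the syslog layout are simplified illustrations of our own, not any vendor’s actual model (the CloudTrail field names are real, but heavily abridged).

```python
import json

# Hypothetical common schema: every source is mapped to these fields.
COMMON_FIELDS = ("timestamp", "source", "user", "action", "src_ip")

def normalize_syslog(line):
    """Parse a simplified syslog-style line: '<ts> <host> <user> <action> <ip>'."""
    ts, host, user, action, ip = line.split()
    return {"timestamp": ts, "source": host, "user": user,
            "action": action, "src_ip": ip}

def normalize_cloudtrail(record):
    """Map an abridged CloudTrail-style JSON record to the common schema."""
    ev = json.loads(record)
    return {"timestamp": ev["eventTime"],
            "source": "cloudtrail",
            "user": ev["userIdentity"]["userName"],
            "action": ev["eventName"],
            "src_ip": ev["sourceIPAddress"]}
```

Emitting every source in the same shape is what lets downstream analysis, reduction, and archiving treat security data uniformly, regardless of where it originated.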
For specific considerations regarding different cloud environments:

Private cloud/virtualized data center: There are differences between monitoring your existing data center and a highly virtualized environment. You can tap the physical network within your data center for visibility. But for the abstracted layer above that – which contains virtualized networks, servers, and storage – you need proper access and instrumentation in the cloud environment to see what happens within virtual devices. You can also route network traffic within your private cloud through an inspection point, but the cost in architectural flexibility is substantial. The good news is that security monitoring platforms can now generally monitor virtual environments by installing sensors within the private cloud.

IaaS: The biggest and most obvious challenge in monitoring IaaS is reduced visibility, because you don’t control the physical stack. You are largely restricted to logs provided by your cloud service provider. IaaS vendors abstract the network, limiting your ability to see network traffic and capture network packets. You can run all network traffic through a cloud-based choke point for collection, regaining a faint taste of the visibility available inside your own data center, but again that sacrifices much of the architectural flexibility attracting you to the cloud. You also need to figure out where to aggregate and analyze the logs collected from both the cloud service and individual instances. These decisions depend on a number of factors – including where your technology stacks run, the kinds of analyses you need to perform, and what expertise you have available on staff.

SaaS: Basically, you see what your SaaS provider shows you, and not much else. Most SaaS vendors provide logs to pull into your security monitoring environment.
They don’t provide visibility into the vendor’s technology stack, but you are able to track your employees’ activity within the service – including administrative changes, record modifications, login history, and increasingly application activity. You can also pull information from a CASB, which polls SaaS APIs and analyzes egress web logs for further detail.

Threat Detection

The key to threat detection in this new world is the ability to detect attacks you know about (rules-based), attacks you haven’t seen yet but someone else has (threat intelligence driven), and unknown attacks which cause anomalous activity by your users or devices (security analytics). The patterns you are trying to detect can be pretty much anything – including command and control, fraud, system misuse, malicious insiders, reconnaissance, and even data exfiltration. So there is no lack of stuff to look for – the question is what do you need to detect
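To make the three detection styles concrete, here is a deliberately toy Python sketch combining a static rule, a threat intelligence lookup, and a statistical baseline. The event shape, thresholds, and indicator list are illustrative assumptions, not anyone’s production logic.

```python
from statistics import mean, stdev

KNOWN_BAD_IPS = {"203.0.113.7"}  # stand-in for a threat intelligence feed

def detect(events, login_history):
    """Return (style, detail) alerts using all three detection approaches."""
    alerts = []
    failures = {}
    for ev in events:
        # Threat-intel driven: source IP appears on a known-bad list
        if ev["src_ip"] in KNOWN_BAD_IPS:
            alerts.append(("threat_intel", ev["src_ip"]))
        # Rules-based: 5 or more failed logins from one source
        if ev["action"] == "login_failure":
            failures[ev["src_ip"]] = failures.get(ev["src_ip"], 0) + 1
            if failures[ev["src_ip"]] == 5:
                alerts.append(("rule", ev["src_ip"]))
    # Security analytics (toy version): today's login count vs. a baseline
    for user, counts in login_history.items():
        baseline, today = counts[:-1], counts[-1]
        if stdev(baseline) > 0 and (today - mean(baseline)) / stdev(baseline) > 3:
            alerts.append(("anomaly", user))
    return alerts
```

Real analytics products obviously use far more sophisticated models, but the division of labor is the same: known patterns, known-bad indicators, and deviations from a learned baseline.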


SecMon State of the Union: Focus on Use Cases

When we revisited the Security Monitoring Team of Rivals it became obvious that the overlap between SIEM and security analytics has passed the point of no return. So with a Civil War brewing, our key goal is to determine what will be your strategic platform for security monitoring. This requires you to shut out the noise of fancy analytics and colorful visualizations, and focus on the problem you are trying to solve now, with an eye to how it will evolve in the future. That means getting back to use cases. The use cases for security monitoring tend to fall into three major buckets:

Security alerts
Forensics and response
Compliance reporting

Let’s go into each of these to make sure you have a clear handle on success today, and how each will change in the future. After we work through the use cases, we’ll cover the pros and cons of how each combatant (SIEM vs. Security Analytics) addresses them. There isn’t really any clean way to categorize the players, so let’s just jump into the use cases.

Security Alerts

Traditional SIEM was based on looking for patterns you knew to be attacks. You couldn’t detect things you didn’t yet recognize as attacks, and keeping the rules current with dynamic attacks was a challenge. So many customers didn’t receive the value they needed. In response a new generation of security analytics products appeared, applying advanced mathematical techniques to security data to identify and analyze anomalous activity, giving customers hope that they would be able to detect attacks not covered by their existing rules. Today any security monitoring platform needs the ability to detect and alert on the following attack vectors:

Commodity Malware: Basically these are known attacks, likely with a Metasploit module available to allow even the least sophisticated attackers to use them.
Although not sexy, this kind of attack is still prevalent, because adversaries don’t use advanced attacks unless they need to.

Advanced Attacks: You have to assume you haven’t seen an advanced attack before, so you are very unlikely to have a rule in your security monitoring platform to find it.

User Behavior Analysis: Another way to pinpoint attacks is to look for strange user activity. At some point in an attack a device will be compromised, and that device will act in an anomalous way, which provides an opportunity to detect it.

Insider Threat Detection: The last use case we’ll describe overlaps with UBA, because it’s about figuring out whether you have a malicious insider stealing data or causing damage. The insider tends to be a user (thus the overlap with UBA). Yet this use case is less about malware (because the user is already inside the perimeter) and more about profiling employee behavior and looking for signs of malicious intent, such as reconnaissance and exfiltration.

But the telemetry used to drive security monitoring tools today is much broader than in the past. The first generation of the technology – SIEM – was largely driven by log data, and possibly some network flows and vulnerability information. Now, given the disruption of cloud and mobility, a much broader set of data is needed. For instance, there are SaaS applications in your environment which you need to factor into your security monitoring. There are likely IoT devices as well, whether shop-floor sensors or multi-function printers with operating systems which can be compromised. Those also need to be watched. And finally, mobile endpoints are full participants in the technology ecosystem nowadays, so gathering telemetry from those devices is an important aspect of monitoring as well.
So aside from the main attack vectors, the fact that corporate data lives both inside the perimeter and across a bunch of SaaS services and mobile devices makes it much harder to build a comprehensive security monitoring environment. We described this need for enterprise visibility in our Security Decision Support series.

Forensics and Response

The forensics and response use case comes into play after an attack, when the organization is trying to figure out what happened and assess the damage. The key functions required for response tend to be sophisticated search and the ability to drill down into an attack quickly and efficiently. Skilled responders are very scarce, so they need to leverage technology where possible to streamline their efforts. Given that scarcity, a heavy dose of enrichment (adding threat intel to case files) and even potential attack remediation must be increasingly automated. So it’s not just about equipping the responders – it’s about helping scale their activity.

Compliance Reporting

This use case is primarily focused on providing the information needed to make the auditor go away as quickly as possible, with minimal customization and tuning of reports. Every organization has to deal with different compliance and regulatory hierarchies, as well as internal controls reporting, so success entails having the tool map specific controls to regulations, and substantiate that the controls are actually in place and operational. Seems pretty simple, right? It is, until you have to spend two days in Excel cleaning up the stuff that came out of your tool. You could pay an assessor to go through all your stuff and make sense of things, but that may not be the best use of your or their time – nor can you ensure they’ll reach the right conclusions regarding your controls. As we look to the future, compliance reporting won’t change that much.
But the data you need to feed into a platform to generate your substantiation will expand substantially. It’s all about visibility, as mentioned above. As your organization embraces cloud computing and mobility, you will need to make sure you have logs and appropriate telemetry from the controls protecting those functions, so you can substantiate your security activity.

Assessing the Combatants

Given the backdrop of these use cases and what’s needed for the future, we need to perform a general assessment of SIEM and security analytics. To be clear, this isn’t an apples to apples comparison –


The Security Profession Needs to Adopt Just Culture

Yesterday Twitter revealed they had accidentally stored plain-text passwords in some log files. There was no indication the data was accessed, and users were warned to update their passwords. There was no known breach, but Twitter went public anyway, and was excoriated in the press and… on Twitter. This is a problem for our profession and industry. We get locked into a cycle where any public disclosure of a breach or security mistake results in:

People ripping the organization apart on social media without knowing the facts.
Vendors issuing press releases claiming their product would have prevented the issue, without knowing the facts.
Press articles focusing on the worst case scenario without any sort of risk analysis… or facts.
Plenty of voices saying how simple it is to prevent the problem, without any concept of the complexity or scale of even simple controls (remember kids, simple doesn’t scale).

To be clear, there are cases where organizations are negligent and try to cover up their errors. If a press release says things like “very sophisticated attack”, infosec fairies deservedly lose their wings. But more often than not we focus on blame rather than cause. This is true both in public and in internal investigations. This is a problem many industries have faced; two in particular have performed extensive research and adopted a concept called Just Culture. It’s time for security to formally adopt Just Culture, including adding it to certifications and training programs. Aviation and healthcare are two professions/industries which use Just Culture, to different degrees. My background and introduction are on the healthcare side, so that’s where I draw from. First, read this paper available through the National Institutes of Health: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3776518/. The focus in Just Culture is to identify and correct the systemic cause, not to blame the individual. Here are some choice quotes: People make errors. Errors can cause accidents.
In healthcare, errors and accidents result in morbidity and adverse outcomes and sometimes in mortality. One organizational approach has been to seek out errors and identify the responsible individual. Individual punishment follows. This punitive approach does not solve the problem. People function within systems designed by an organization. An individual may be at fault, but frequently the system is also at fault. Punishing people without changing the system only perpetuates the problem rather than solving it.

…

A just culture balances the need for an open and honest reporting environment with the end of a quality learning environment and culture. While the organization has a duty and responsibility to employees (and ultimately to patients), all employees are held responsible for the quality of their choices. Just culture requires a change in focus from errors and outcomes to system design and management of the behavioral choices of all employees.

…

In a just culture, both the organization and its people are held accountable while focusing on risk, systems design, human behavior, and patient safety.

The focus is on systemic risk first, and individual… later. This is something we face in healthcare/rescue every day, where many errors result from the system more than the person. For example, in some prehospital systems it isn’t uncommon to have two medications with vastly different effects in very similar packaging, resulting in medication errors which can be fatal. The answer isn’t better training but better packaging. Fix the system – don’t expect perfect behavior. Let’s apply this to Twitter. Plain-text passwords were stored in logs. This is bad, but there are many ways it could have happened. Think of all the levels of logging and software components they have, and all the places passwords might have fallen into logs.
Using a Just Culture approach, we should reward Twitter for their honesty, and learn what techniques they used to detect the exposed data, what allowed it to be saved in those logs, and why it went undiscovered for so long. What system issues caused the problem, and how can we prevent them moving forward? Not “Twitter was stupid and got hacked” (because apparently they weren’t). Just Culture is about fostering an open culture of safety, where mistakes – even individual mistakes – are used to improve overall system resilience. It’s our time.


SecMon State of the Union: Revisiting the Team of Rivals

Things change. That’s the only certainty in technology today, and certainly in security. Back when we wrote our Security Analytics Team of Rivals paper, SIEM and Security Analytics offerings were different and did not really overlap. It was more about how they could coexist than about choosing one over the other. But nowadays the overlap is significant, so you see existing SIEM players basically bundling in security analytics capabilities, and security analytics players positioning their products as next-generation SIEM. As usual, customers are caught in the middle, trying to figure out what is truth and what is marketing puffery. So Securosis is again here to help you figure out which end is up. In this Security Monitoring (SecMon) State of the Union series we will offer some perspective on the use cases which make sense for SIEM, and where security analytics makes a difference. Before we get started we’d like to thank McAfee for once again licensing our security monitoring research. It’s great that they believe an educated buyer is the best kind, and appreciate our Totally Transparent Research model.

Revisiting Security Analytics

Security analytics remains a fairly perplexing market, because almost every company providing security products and/or services claims to perform some kind of analytics. So to level-set, let’s revisit how we defined Security Analytics (SA) in the Team of Rivals paper. A SA tool should offer:

Data Aggregation: It’s impossible to analyze without data. Of course there is some question whether a security analytics tool needs to gather its own data, or can just integrate with an existing security data repository like your SIEM.

Math: We joke a lot that math is the hottest thing in security lately, especially given how early SIEM correlation and IDS analysis were based on math too. But this new math is different, based on advanced algorithms and modern data management, to find patterns within data volumes which were unimaginable 15 years ago.
The key difference is that you no longer need to know what you are looking for to find useful patterns – a critical limitation of today’s SIEM. Modern algorithms can help you spot unknown unknowns. Looking only for known and profiled attacks (signatures) is clearly a failed strategy.

Alerts: These are the main output of security analytics, so you want them prioritized by importance to your business.

Drill down: Once an alert fires, an analyst needs to dig into the details, both for validation and to determine the most appropriate response. So analytics tools must be able to drill down and provide additional detail to facilitate response.

Learn: This is the tuning process, and any offering needs a strong feedback loop between responders and the folks running it. You must refine analytics to minimize false positives and wasted time.

Evolve: Finally, the tool must improve over time, because adversaries are not static. This requires a threat intelligence research team at your security analytics provider, constantly looking for new categories of attacks and providing new ways to identify them.

These attributes are the requirements of a SA tool. But over the past year we have seen these capabilities not just in security analytics tools, but also appearing in more traditional SIEM products. Though to be clear, “traditional SIEM” is really a misnomer, because none of the market leaders are built on 2003-era RDBMS technology or sitting still waiting to be replaced by new entrants with advanced algorithms. In this post and the rest of this series we will discuss how well each tool matches up to the emerging use cases (many of which we discussed in Evolving to Security Decision Support), and how technologies such as the cloud and IoT impact your security monitoring strategy and toolset.

Wherefore art thou, Team of Rivals?

The lines between SIEM and security analytics have blurred, as we predicted, so what should we expect vendors to do?
First understand that any collaboration and agreements between SIEM and security analytics vendors are deals of convenience, solving the short-term problems of the SIEM vendor not having a good analytics story and the analytics vendor not having enough market presence to maintain growth. The risk to customers is that buying a SA solution bundled with your SIEM can be problematic if the vendor acquires a different technology and eventually forces a migration to their in-house solution. This underscores the challenge of vendor selection as markets shift and collapse. We are pretty confident the security monitoring market will play out as follows over the short term:

SIEM players will offer broader and more flexible security analytics.
Security analytics players will spend a bunch of time filling out SIEM reporting and visualization feature sets to go after replacement deals.
Customers will be confused and unsure whether they need SIEM, security analytics, or both.

But that story ends with confused practitioners, and that’s not where we want to be. So let’s break the short-term reality down a couple different ways.

Short-term plan: You are where you are…

The solution you choose for security monitoring should suit the emerging use cases you’ll need to handle, and the questions you’ll need to answer about your security posture over time. Yet you almost certainly already have security monitoring technology installed, so you are where you are. Moving forward requires a clear understanding of how your current environment impacts your path forward.

SIEM-centric

If you are a large company or under any kind of compliance/regulatory oversight – or both – you should be familiar with SIEM products and services, because you’ve been using them for over a decade. Odds are you have selected and implemented multiple SIEM solutions, so you understand what SIEM does well… and not so well.
You have no choice but to compensate for its shortcomings because you aren’t in a position to shut it off or move to a different platform. So at this point your main objective is to get as much value out of the existing SIEM as you can. Your path is pretty straightforward. First refine the alerts coming out of the system to increase the signal from the SIEM and focus your team on triaging and investigating real attacks. Then


Firestarter: The RSA 2018 Episode

This week Rich, Mike, and Adrian talk about what they expect to see at the RSA Security Conference, and whether it really means anything. As we do in most of our RSA Conference related discussions, the focus is less on what to see and more on what industry trends we can tease out, and their potential impact on the regular security practitioner. For example, what happens when blockchain and GDPR collide? Do security vendors finally understand cloud? What kind of impact does DevOps have on the security market? Plus we list where you can find us, and, as always, don’t forget to attend the Tenth Annual Disaster Recovery Breakfast!


Complete Guide to Enterprise Container Security *New Paper*

The explosive growth of containers is not surprising, because the technology (most obviously Docker) alleviates several problems in deploying applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for micro-services, generalized management, and horizontal scalability – all of which containers help provide. When a single technology enables us to address several technical problems at once, it is very compelling. But this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design), and gives security pros nightmares. We run more code faster, but must in turn accept a loss of visibility inside the containers. This raises the question: “How can we introduce security without losing the benefits of containers?” This research effort was designed to confront all aspects of container security, from developer desktops to production deployments, and to illustrate the numerous places where security controls and monitoring can be introduced into the ecosystem. Tools and technologies are available to run containers with high security, and with strong confidence that they are no less secure than any other applications. We also have access to capabilities which validate security claims through scans and reports on the security controls. We would like to thank Aqua Security and Tripwire for licensing this research and participating in some of our initial discussions. As always, we welcome comments and suggestions. If you have questions please feel free to email us: info at securosis.com. You can download all or part of this research from the website of either licensee, grab a copy from our Research Library, or just download the paper directly: Complete Guide to Enterprise Container Security (PDF).
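As a tiny taste of the kind of pre-deployment check the paper covers in depth, here is a sketch (our own illustration, not drawn from the paper or any scanner’s actual ruleset) that lints a Dockerfile for two common issues: running as root, and an unpinned base image.

```python
def lint_dockerfile(text):
    """Return a list of warnings for a couple of common Dockerfile issues."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    warnings = []
    # Containers without a USER directive run as root by default
    if not any(l.upper().startswith("USER ") for l in lines):
        warnings.append("no USER directive: container will run as root")
    # 'latest' (or no tag) makes builds non-reproducible and hard to audit
    for l in lines:
        if l.upper().startswith("FROM "):
            image = l.split()[1]
            if ":" not in image or image.endswith(":latest"):
                warnings.append("unpinned base image: " + image)
    return warnings
```

Checks like this run in the build pipeline, before the loss of runtime visibility described above ever comes into play.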


Evolving to Security Decision Support: Laying the Foundation

As we resume our series on Evolving to Security Decision Support, let’s review where we’ve been so far. The first step in making better security decisions is ensuring you have full visibility of your enterprise assets, because if you don’t know assets exist, you cannot make intelligent decisions about protecting them. Next we discussed how threat intelligence and security analytics can be brought to bear to get both internal and external views of your attack environment, again with the goal of turning data into information you can use to better prioritize efforts. Once you reach this stage, you have the basic capabilities to make better security decisions. Then the key is to integrate these practices into your day-to-day activities. This requires process changes and a focus on instrumentation within your security program, to track effectiveness and constantly improve performance.

Implementing SDS

To implement Security Decision Support you need a dashboard of sorts to track all the information coming into your environment, and to help decide what to do and why. You need a place to visualize alerts and determine their relative priority. This entails tuning your monitors to your particular environment, so prioritization improves over time. We know – the last thing you want is another dashboard to deal with: yet another place to collect security data, which you need to keep current and tuned. But we aren’t saying this needs to be a new system. You have a bunch of tools in place which could provide these capabilities – your existing SIEM, security analytics product, and vulnerability management service, to name a few. So you may already have a platform in place whose advanced capabilities have yet to be implemented or fully utilized. That’s where the process changes come into play. But first things first. Before you worry about which tool will do this work, let’s go through the capabilities required to implement this vision.
The first thing you need in a decision support platform to visualize security issues is, well, data. So what will feed this system? You need to understand your technology environment, so integration with your organizational asset inventory (usually a CMDB) provides devices and IP addresses. You’ll also want information from your enterprise directory, which provides people, and can be used to understand a specific user’s role and what their entitlements should be. Finally you need security data from security monitors – including any SIEM, analytics, vulnerability management, EDR, hunting, and IPS/IDS tools. You’ll also need to categorize both devices and users based on their importance and risk to the organization. Not to say some employees are more important than others as humans (everyone is important – how’s that for political correctness?), but some employees pose more risk to the organization than others. That’s what you need to understand, because attacks against high-risk employees and systems should be dealt with first. We tend to opt for simplicity here, suggesting three categories with very original names:

High: These are the folks and systems which, if compromised, would cause a bad day for pretty much everyone. Senior management fits into this category, as do resources and systems with access to the most sensitive data in your enterprise. This category poses risk to the entire enterprise.

Medium: These employees and systems will cause problems if stolen or compromised, but the damage would be contained – these folks can only access data for a business unit or location, not the entire enterprise.

Low: These people and systems don’t have access to much of import. To be clear, there is enterprise risk associated with this category, but it’s indirect: an adversary could use a low-risk device or system to gain a foothold in your organization, and then attack assets in a higher-risk category.
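The categorization above can feed a simple priority score. The following Python sketch combines an asset’s risk category with alert severity; the weights and field names are arbitrary illustrations you would tune to your environment, not a standard scoring model.

```python
# Illustrative weights: asset risk category x alert severity -> priority score
CATEGORY_WEIGHT = {"high": 3, "medium": 2, "low": 1}
SEVERITY_WEIGHT = {"critical": 10, "warning": 5, "info": 1}

def priority(alert, asset_categories):
    """Score an alert by the risk category of its target and its severity."""
    category = asset_categories.get(alert["asset"], "low")  # default: low risk
    return CATEGORY_WEIGHT[category] * SEVERITY_WEIGHT[alert["severity"]]

def triage(alerts, asset_categories):
    """Order the alert queue by descending priority instead of arrival order."""
    return sorted(alerts, key=lambda a: priority(a, asset_categories),
                  reverse=True)
```

Sorting the queue by a score like this, rather than first-come-first-served, is the essence of having the machine tell you what to focus on.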
We recommend you categorize adversaries and attack types as well. Threat intelligence can help you determine which tactics are associated with which adversaries, and perhaps prioritize specific attackers (and tactics) by their motivation to attack your environment. Once this is implemented you will have a clear sense of what needs to happen first, based on the type of attack and adversary, and the importance of the device, user, and/or system. It’s a kind of priority score, but security marketeers call it a risk score. This is analogous to a quantitative financial trading system: you want to take most of the emotion out of decisions, so you can get down to what is best for the organization. Many experienced practitioners push back on this concept, preferring to make decisions based on their gut – or even worse, using a FIFO (First In, First Out) model. We’ll just point out that pretty much every major breach over the last 5 years produced multiple alerts of an attack in progress, and opportunities to deal with it before it became a catastrophe. But for whatever reason, those attacks weren’t dealt with. So having a machine tell you what to focus on can go a long way toward ensuring you don’t miss major attacks. The final output of a Security Decision Support process is a decision about what needs to happen – meaning you will need to actually do the work. So integration with a security orchestration and automation platform can help make changes faster and more reliably. You will probably want to send the required task(s) to a work management system (trouble ticketing, etc.) to route to Operations and track remediation.

Feedback Loop

We call Security Decision Support a process, which means it needs to adapt and evolve to both your changing environment and new attacks and adversaries. You want a feedback loop integrated with your operational platform, learning over time. As with tuning any other system, you should pay attention to:

False Negatives: Where did the system miss?
Why? A false negative is something to take very seriously, because it means you didn’t catch a legitimate attack. Unfortunately you might not know about a false negative until you get a breach notification. Many organizations have started threat hunting to find active adversaries their security monitoring system miss. False Positives: A bit
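The risk scoring described earlier – blending attack type, adversary, and asset importance into a single priority so alerts aren’t worked FIFO – can be sketched in a few lines. The categories and weights below are illustrative assumptions, not any vendor’s actual model:

```python
# Sketch of a risk/priority score combining attack severity, adversary
# priority, and asset importance. All categories and weights here are
# invented for illustration, not a standard scoring model.

ATTACK_SEVERITY = {"commodity_malware": 3, "credential_theft": 7, "ransomware": 9}
ADVERSARY_PRIORITY = {"opportunistic": 2, "organized_crime": 6, "targeted": 9}
ASSET_IMPORTANCE = {"workstation": 2, "file_server": 5, "payment_db": 10}

def risk_score(attack: str, adversary: str, asset: str) -> float:
    """Weighted blend of the three factors, on a 0-10 scale."""
    return round(0.4 * ATTACK_SEVERITY[attack]
                 + 0.3 * ADVERSARY_PRIORITY[adversary]
                 + 0.3 * ASSET_IMPORTANCE[asset], 1)

# Work alerts highest-score-first instead of First In, First Out:
alerts = [
    ("commodity_malware", "opportunistic", "workstation"),
    ("credential_theft", "targeted", "payment_db"),
]
ranked = sorted(alerts, key=lambda a: risk_score(*a), reverse=True)
```

The point isn’t the specific arithmetic – it’s that a consistent, machine-computed ordering takes the emotion (and the FIFO queue) out of deciding what to work on first.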


The TENTH Annual Disaster Recovery Breakfast: Are You F’ing Kidding Me?

What was the famous Bill Gates quote? “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” Well, we at Securosis can actually gauge that accurately, given this is the TENTH annual RSA Conference Disaster Recovery Breakfast. I think pretty much everything has changed over the past 10 years. Except that stupid users still click on things they shouldn’t. And auditors still give you a hard time about stuff that doesn’t matter. And breaches still happen. But we aren’t fighting for budget or attention much anymore. If anything, they beat a path to your door. So there’s that. It’s definitely a “be careful what you wish for” situation. We wanted to be taken seriously. But probably not this seriously. We at Securosis are actually more excited for the next 10 years, and having been front and center on this cloud thing, we believe over the next decade the status quo of both security and operations will be fundamentally disrupted. And speaking of disruption, we’ll also be previewing our new company – DisruptOPS – at breakfast, if you are interested. We remain grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the insanity that is the RSAC. By Thursday it’s very nice to have a place to kick back, have some quiet conversations, and grab a nice breakfast. Or don’t talk to anyone at all and embrace your introvert – we get that too. The DRB happens only because of the support of CHEN PR, LaunchTech, CyberEdge Group, and our media partner Security Boulevard. Please make sure to say hello and thank them for helping support your recovery. As always the breakfast will be Thursday morning (April 19) from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open. You know how Mike likes the hair of the dog.
Please remember what the DR Breakfast is all about. No spin, no magicians (since booth babes were outlawed) and no plastic lightsabers (much to Rich’s disappointment) – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. We are confident you will enjoy the DRB as much as we do. To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.


Evolving to Security Decision Support: Data to Intelligence

As we kicked off our Evolving to Security Decision Support series, the point we needed to make was the importance of enterprise visibility to the success of your security program. Given all the moving pieces in your environment – including the usage of various clouds (SaaS and IaaS), mobile devices, containers, and eventually IoT devices – it’s increasingly hard to know where all your critical data is and how it’s being used. So enterprise visibility is necessary, but not sufficient. You still need to figure out whether and how you are being attacked, as well as whether and how data and/or apps are being misused. Nobody gets credit just for knowing where you can be attacked. You get credit for stopping attacks and protecting critical data. Ultimately that’s all that matters. The good news is that many organizations already collect extensive security data (thanks, compliance!), so you have a base to work with. It’s really just a matter of turning all that security data into actual intelligence you can use for security decision support.

The History of Security Monitoring

Let’s start with some historical perspective on how we got here, and why many organizations already perform extensive security data collection. It all started in the early 2000s with the first SIEMs, deployed to make sense of the avalanche of alerts coming from firewalls and intrusion detection gear. You remember those days, right? SIEM evolution was driven by the need to gather logs and generate reports to substantiate controls (thanks again, compliance!). So SIEM products focused more on gathering and storing data than actually making sense of it. You could generate alerts on things you knew to look for, which typically meant you got pretty good at finding attacks you had already seen. But you were pretty limited in your ability to detect attacks you hadn’t seen.
SIEM technology continues to evolve, but mostly to add scale and data sources to keep up with the number of devices and amount of activity to be monitored. But that doesn’t really address the fact that many organizations don’t want more alerts – they want better alerts. To provide better alerts, two separate capabilities have come together in an interesting way:

Threat Intelligence: SIEM rules were based on looking for what you had seen before, so you were limited in what you could look for. What if you could leverage attacks other companies have seen, and look for those attacks, so you could anticipate what’s coming? That’s the driver for external threat intelligence.

Security Analytics: The other capability isn’t exactly new – it’s using advanced math to look at the security data you’ve already collected to profile normal behaviors, and then look for stuff that isn’t normal and might be malicious. Call it anomaly detection, machine learning, or whatever – the concept is the same. Gather a bunch of security data, build mathematical profiles of normal activity, then look for activity that isn’t normal.

Let’s consider both of these capabilities to gain a better understanding of how they work, and then we’ll be able to show how powerful integrating them can be for generating better alerts.

Threat Intel Identifies What Could Happen

Culturally, over the past 20 years, security folks were generally the kids who didn’t play well in the sandbox. Nobody wanted to appear vulnerable, so data breaches and successful attacks were the dirty little secret of security. Sure, they happen – but not to us. Yeah, right. There were occasional high-profile issues (like SQL*Slammer) which couldn’t be swept under the rug, but they hit everyone, so weren’t that big a deal. But over the past 5 years a shift has occurred within security circles, borne out of necessity, as most such things are.
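The profile-normal-then-flag-outliers concept behind security analytics can be illustrated with a deliberately simple statistical baseline – a stand-in for the far richer models real products use. The daily login counts below are invented sample data:

```python
import statistics

# Sketch of anomaly detection: profile "normal" activity from historical
# data, then flag activity that deviates far from that baseline.
# Real security analytics uses much richer models; these daily login
# counts are invented sample data.

baseline = [42, 38, 45, 40, 44, 39, 43, 41, 40, 42]  # logins/day, historical
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(todays_count: int, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations from normal."""
    return abs(todays_count - mean) / stdev > threshold

is_anomalous(41)   # within the normal profile
is_anomalous(400)  # far outside it - worth an alert
```

The math gets fancier in practice (per-user baselines, seasonality, multiple features), but the shape of the idea is the same: no signature required, just a model of normal and an alert when reality diverges from it.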
Security practitioners realized no one is perfect, and we can collectively improve our ability to defend ourselves by sharing information about adversary tactics and specific indicators from those attacks. This is something we dubbed “benefiting from the misfortune of others” a few years ago. Everyone benefits because once one of us is attacked, we all learn about that attack and can look for it. So the modern threat intelligence market emerged.

In terms of the current state of threat intel, we typically see the following types of data shared within commercial services, industry groups/ISACs, and open source communities:

Bad IP Addresses: IP addresses which behave badly – for instance by participating in a botnet or acting as a spam relay – should probably be blocked at your egress filter, because you know no good will come from communicating with that network. You can buy a blacklist of bad IP addresses, probably the lowest-hanging fruit in the threat intel world.

Malware Indicators: Next-generation attack signatures can be gathered and shared to look for activity representative of typical attacks. You know these indicate an attack, so being able to look for them within your security monitors helps keep your defenses current.

The key value of threat intel is to accelerate the human, as described in our Introduction to Threat Operations research. But what does that even mean? To illustrate, let’s consider retrospective search. This involves being notified of a new attack via a threat intel feed, and using those indicators to mine your existing security data to see whether you saw the attack before you knew to look for it. Of course it would be better to detect the attack when it happens, but the ability to go back and search old security data for new indicators shortens the detection window.

Another use of threat intel is to refine your hunting process.
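The retrospective search idea described above fits in a few lines: when a threat intel feed delivers new indicators, mine the security data you already collected for earlier matches. The event records and indicator values here are invented for illustration:

```python
# Sketch of retrospective search: new indicators from a threat intel
# feed are matched against already-collected security data to find
# attacks that happened before you knew to look for them.
# Event records and indicator values are invented for illustration.

stored_events = [
    {"ts": "2018-01-03", "src_ip": "10.1.1.5", "dst_ip": "198.51.100.7"},
    {"ts": "2018-01-09", "src_ip": "10.1.1.9", "dst_ip": "203.0.113.44"},
    {"ts": "2018-02-14", "src_ip": "10.1.2.2", "dst_ip": "192.0.2.10"},
]

new_indicators = {"203.0.113.44", "198.18.0.99"}  # fresh feed of bad IPs

def retrospective_search(events, indicators):
    """Return historical events matching newly learned indicators."""
    return [e for e in events if e["dst_ip"] in indicators]

hits = retrospective_search(stored_events, new_indicators)
# Any hit means the attack was already in your environment before the
# intel arrived - the detection window shrinks after the fact.
```

A real deployment runs this against a SIEM or security data lake rather than an in-memory list, but the mechanic is the same: old data, new questions.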
This involves having a hunter learn about a specific adversary’s tactics, and then undertake a hunt for that adversary. It’s not like the adversary is going to send out a memo detailing its primary TTPs, so threat intel is the way to figure out what they are likely to do. This makes the hunter much more efficient (“accelerating the human”) by focusing on typical tactics used by likely adversaries. Much of the threat intel available today is focused on data to be pumped into traditional controls, such as SIEM and egress


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.