Today’s post discusses the changing needs and requirements organizations have for security management, which is just a fancy way of saying “Here’s why customers are unhappy.” The following items are the main discussion points when we speak with end users, and the big-picture reasons motivating SIEM users to consider alternatives.
The Changing Needs of Security Management
- Malware/Threat Detection: Malware is by far the biggest security issue enterprises face today. It is driving many of the changes rippling through the security industry, including SIEM and security analytics. SIEM is designed to detect security events, but malware is designed to be stealthy and evade detection. You may be looking for malware, but you don’t always know what it looks like. Basically you are hunting for anomalies that kinda-sorta could be an attack, or just odd stuff that may look like an infection. The days of simple file-based detection are gone – or at least anything resembling the simple malware signature of a few years ago. Detecting new and novel forms of advanced malware requires adding different data sources to the analysis and observing patterns across different event types. You also need to leverage emerging security analytics capabilities to examine data in new and novel ways. Even if you do all this, it might still not be enough. This is why feeding third-party threat intelligence into the SIEM is becoming increasingly common – it lets organizations look for attacks already happening to others.
- Cloud & Mobile: As firms move critical data into cloud environments and offer mobile applications to employees and customers, the definition of system now encompasses use cases outside the classical corporate perimeter, changing the definition and scope of infrastructure to monitor. Compounding the issue is the difficulty in monitoring mobile devices – many of which you do not fully control, and it’s even harder because of the lack of effective tools to gather telemetry and metrics from the devices. Even more daunting is the lack of visibility (basically log and event data) of what’s happening within your cloud service providers. Some cloud providers cannot provide infrastructure logs to customers, because their event streams combine events from all their customers. In some cases they cannot provide logs because there is simply no ‘network’ to tap because it’s virtual, and existing data collectors are useless. In other cases the cloud provider is simply not willing to share the full picture of events, and you may be prohibited contractually from capturing events. The net result is that you need to tackle security monitoring and event analysis in a fundamentally different fashion. This typically involves collecting the events you can gather (application, server, identity and access logs) and massaging them into your SIEM. For Infrastructure as a Service (IaaS) environments, you should look at adding your own cloud-friendly collectors in the flow of application traffic.
- General Analytics: If you collect a mountain of data from all IT systems, much more information is available than just security events. This is a net positive, but it cuts both ways – some event analysis platforms are set up for IT operations first, with security and business operations teams piggybacking off that investment. In that case analysis, reporting, and visualization tools must not only be accessible to a wider audience (like security), but also optimized to do true correlation and analysis.
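To make the cloud collector idea a bit more concrete, here is a minimal sketch of what such a collector might look like, assuming your SIEM accepts CEF-style messages over syslog and your cloud application emits JSON events. The SIEM address, vendor/product fields, and sample event are hypothetical placeholders for illustration, not any specific product’s API.

```python
# Minimal sketch of a "cloud-friendly" collector, assuming the SIEM listens
# for syslog over UDP and application events arrive as JSON dictionaries.
# SIEM_HOST, the CEF vendor/product fields, and the sample event are all
# hypothetical -- adapt them to your environment.
import logging
import logging.handlers

SIEM_HOST = "siem.example.com"   # assumption: your SIEM's syslog listener
SIEM_PORT = 514

logger = logging.getLogger("cloud-collector")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=(SIEM_HOST, SIEM_PORT)))

def to_cef(event: dict) -> str:
    """Flatten a JSON application event into a CEF-style syslog message."""
    extensions = " ".join(f"{k}={v}" for k, v in event.items())
    return ("CEF:0|ExampleVendor|CloudApp|1.0|"
            f"{event.get('action', 'unknown')}|Cloud application event|5|{extensions}")

# Example: an identity/access event pulled from a cloud provider's audit log.
sample_event = {"user": "alice", "action": "login", "src": "203.0.113.7", "outcome": "success"}
logger.info(to_cef(sample_event))
```

The point is less the specific format than the pattern: pull whatever events the provider exposes, normalize them into something your SIEM already understands, and push them into the existing collection pipeline.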
What Customers Really Want
These examples are what we normally call “use cases”, which reflect the business drivers creating the motivation to take action. These situations are significant enough (and customers unhappy enough) to consider jettisoning current solutions and going through the pain of re-evaluating requirements and current deficiencies. Those bullet points do represent the high-level motivations, but they don’t tell the whole story: they are the business reasons firms are looking, but they don’t capture why many current platforms fail to meet expectations. For that we need to take a slightly more technical look at the requirements.
Deeper Analysis Requires More Data
To address the use cases described above, especially malware analysis, more data is required. That necessarily means more event volume – such as capturing and storing full packet streams, even if only for a short period. It also means more types of data – such as human-readable data mixed in with machine logs, and telemetry from networks and other devices. And it means gathering and storing complex data types, such as binary or image files, which are not easy to parse, store, or even categorize.
The Need for Information Requires Better and More Flexible Analysis
Simple correlation of events – who, what, and when – is insufficient for the kind of security analysis required today. This is not only because those attributes cannot distinguish bad from good on their own, but also because data analysis approaches are fundamentally evolving. Most customers we speak with want to profile normal traffic and usage; that profile helps them understand how systems are being used, and also helps detect anomalies likely to indicate misuse.
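As a rough illustration of what “profile normal, flag anomalies” can mean in practice, here is a toy sketch, assuming you can export per-account hourly event counts from your SIEM. The sample history and three-sigma threshold are hypothetical; real profiling uses many more dimensions (source, destination, time of day, data volume, and so on).

```python
# Toy baseline-and-anomaly example: profile "normal" as mean and standard
# deviation of historical counts, then flag observations far above normal.
from statistics import mean, stdev

def build_profile(history: list[int]) -> tuple[float, float]:
    """Baseline an account's normal activity as (mean, standard deviation)."""
    return mean(history), stdev(history)

def is_anomalous(observed: int, baseline: tuple[float, float], sigmas: float = 3.0) -> bool:
    """Flag the observation if it falls more than `sigmas` deviations above normal."""
    mu, sd = baseline
    return sd > 0 and observed > mu + sigmas * sd

# Hypothetical history: failed logins per hour for one account.
history = [2, 1, 0, 3, 2, 1, 2, 0, 1, 2, 3, 1]
baseline = build_profile(history)
print(is_anomalous(40, baseline))  # True -- a spike worth an analyst's attention
print(is_anomalous(3, baseline))   # False -- within the normal profile
```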
There is some fragmentation in how customers use analysis – some choose to leverage SIEM for more real-time alerting and analysis; others want big-picture visibility, created by combining many different views for an overall sense of activity. Some customers want fully automated threat detection, while others want more interactive ad hoc and forensic analysis. To make things even harder for the vendors, today’s hot analysis methods could very well be irrelevant a year or two down the road. Many customers want to make sure they can update analytics as requirements develop – optimized but hard-wired analytics are now a liability rather than an advantage.
The Velocity of Attacks Requires Threat Intelligence
When we talk about threat intelligence we do not limit the discussion to things like IP reputation or ‘fingerprint’ hashes of malware binaries – those features are certainly in wide use, but the field of threat intelligence includes far more. Some threat intelligence feeds look at social connections or credit histories of customers connecting to retail sites. Some services ‘scrape’ known hacker sites for indicators of pending DoS attacks. Others highlight specific botnet command-and-control (C&C) activity that identifies infected systems within your networks. You can also analyze uploaded binary files and images to profile malware, then leverage those indicators within your environment. But these feeds do not come in syslog format – they are custom feeds which require integration into the SIEM, as well as analytic adjustments to take advantage of them.
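To illustrate the kind of integration work involved, here is a minimal sketch of normalizing a hypothetical JSON indicator feed into a lookup table that events can be matched against. The feed format, field names, and event structure are assumptions for illustration; real feeds arrive in vendor-specific or standardized formats (STIX/TAXII, for example) and each needs its own parsing.

```python
# Minimal sketch: turn a (hypothetical) JSON threat intelligence feed into a
# simple value -> label lookup, then tag events whose fields match an indicator.
import json

RAW_FEED = """
[
  {"type": "ip",   "value": "198.51.100.23", "label": "botnet-c2"},
  {"type": "hash", "value": "44d88612fea8a8f36de82e1278abb02f", "label": "known-malware"}
]
"""

def load_indicators(raw: str) -> dict[str, str]:
    """Normalize the feed into a flat indicator lookup."""
    return {item["value"]: item["label"] for item in json.loads(raw)}

def enrich(event: dict, indicators: dict[str, str]) -> dict:
    """Tag an event if any of its string fields match a known indicator."""
    hits = [indicators[v] for v in event.values() if isinstance(v, str) and v in indicators]
    return {**event, "threat_intel": hits}

indicators = load_indicators(RAW_FEED)
event = {"src_ip": "198.51.100.23", "dst_ip": "10.0.0.5", "action": "connect"}
print(enrich(event, indicators))  # tagged with ['botnet-c2']
```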
The Sheer Volume of Data Requires Enhanced Speed, Scale, and Accuracy
When you collect more data from more sources, the volume of information to parse, manage, and inspect grows dramatically. The scale of SIEM clusters continues to increase rapidly to accommodate the additional data, and the data and equipment dedicated to the SIEM grow substantially faster than other IT functions. This creates another problem for security event notification: if event capture increases by a factor of 10 and accuracy stays the same, the number of false positives grows 10-fold – which is unacceptable. As SIEM architectures scale, filtering and analysis must become both much faster and more accurate, to avoid further overwhelming the already overwhelmed security personnel.
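A quick back-of-the-envelope illustration of that scaling problem, with purely hypothetical numbers: at a fixed false-positive rate, alert volume grows linearly with the events you collect.

```python
# Hypothetical numbers only: a constant false-positive rate applied to 10x the
# event volume produces 10x the spurious alerts an analyst must triage.
fp_rate = 0.001  # assumed: 0.1% of events trigger a spurious alert
for events_per_day in (1_000_000, 10_000_000):
    print(f"{events_per_day:>12,} events/day -> {int(events_per_day * fp_rate):,} false alerts/day")
```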
The Skills Gap Requires Better Automation and Efficiency
Co-sourcing SIEM capabilities with third-party Security Operations Centers (SOCs), or even fully outsourcing the function, is increasingly common by necessity. Large firms continue to have trouble staffing, training, and retaining an adequate security operations team. Worse, because security is a growing problem across all industries, there is more demand on an already small pool of talented SIEM operators and security experts, making it harder to recruit from the outside.
As one of our readers pointed out in a comment on this series: “if you have the resources to invest in a SIEM, you might as well invest in various open-source ‘big data’ technologies.” Many customers tell us they turned to big data systems to supplement their SIEM platforms. But the key limitation is not funding – it is finding the talent to architect big data solutions and to develop the security analysis scripts and policies already embedded in SIEM. Customers are embracing big data infrastructure and big data style capabilities – both directly in-house and through third-party SOCs and SIEM vendors. The shift toward using big data to fill SIEM gaps underscores the need to “do more with more”, as discussed above, but the gating factor is often the talent to run it.
In the next post, we’ll talk about how SIEMs are evolving to handle these stringent demands.
Reader interactions
3 Replies to “Security Management 2.5: Changing Needs”
@Paul – interesting observation, and my feeling is you’re on target. When we see today’s analytics, customers show us trend data that in essence builds a profile defining what normal is, and we also see outliers that suggest something is wrong – either operationally or from a security standpoint. I’d not considered adjusting the terminology to reflect the move away from binary ‘good’ or ‘bad’, but you’re right – SOCs are using a more analog representation of events. I’ll keep that in mind.
Thanks,
Adrian
Great post. Totally agree with the general gist, and it’s 100% in line with what we’re looking to do.
Only thing I might comment on is your characterization of “accuracy”. I’m starting to think the term “false positive” has run its course, since the sliding scale between good and bad is now so great. The way we’re currently talking about this assumes there’s an automaton in the corner that acts like a fire alarm, rather than a system that acts as a source of knowledge for an analyst/responder/investigator.
What we’re all trying to do is make sure analysts get to focus on *the most* important issues. Most things that we call “false positives” are not completely innocuous, they’re just not important enough to spend time on – and that varies based upon what the organization is trying to protect and how much “time” they have at their disposal to deal with it.
So rather than “greater accuracy” I think we’re talking about “better prioritization”, and thinking differently about what it means to be notified (e.g. what other “stuff” were you doing when you were notified, and how did you receive the stimulus to go and deal with the issue).
I think it raises some important issues around how we staff and scope Security Operations and Incident Response teams, and how we design tools to be more around “leading an analyst towards the most pressing issues of the day/hour/minute” rather than being a system that will tap you on the shoulder when something is wrong but otherwise leave you alone.
Great background on the market forces driving the evolution of SIEM solutions. Looking forward to some of the comments explaining the value security teams receive.