This post discusses evolutionary changes in SIEM, focusing on how underlying platform capabilities have evolved to meet the requirements discussed in the last post. To give you a sneak peek, it is all about doing more with more data. The change we have seen in these platforms over the past few years has been mostly under the covers. It’s not sexy, but this architectural evolution was necessary to make sure the platforms scaled and could perform the needed analysis moving forward. The problem is that most folks cannot appreciate the boatload of R&D required to give many of these platforms a proverbial brain transplant. We will start with the major advancements.

Architectural Evolution

To be honest, we downplayed the importance of SIEM’s under-the-hood changes in our previous paper. The “brain transplant” was the key change that enabled a select few vendors to address the performance and scalability issues plaguing the first generation of platforms, which were built on relational databases (RDBMS). For simplicity’s sake we skipped over the technical details of how and why. Now it’s time to explore that evolution.

The fundamental change is that SIEM platforms are no longer built around a single massive centralized service. They now take a distributed approach: a cooperative cluster of servers independently collects, digests, and processes events, and policies are distributed across those nodes to handle the load more effectively and efficiently. If you need to support more locations or pump in a bunch more data, just add nodes to the cluster. If this sounds like big data, that’s because it essentially is – several platforms leverage big data technologies under the hood.
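
To make that architecture concrete, here is a minimal sketch – purely hypothetical, not any vendor’s actual implementation – of partitioning events across a cluster of processing nodes by hashing the event source, so adding capacity is just a matter of adding a node.

    import hashlib

    # Hypothetical cluster of processing nodes; the names are illustrative only.
    nodes = ["collector-1", "collector-2", "collector-3"]

    def node_for(event_source):
        # Hash the event source so the same device always lands on the same node.
        digest = hashlib.sha256(event_source.encode()).hexdigest()
        return nodes[int(digest, 16) % len(nodes)]

    events = [
        {"source": "fw-nyc-01", "msg": "deny tcp 10.1.1.5:443"},
        {"source": "dc-lon-02", "msg": "failed login for admin"},
    ]

    for event in events:
        print(node_for(event["source"]), "handles", event["msg"])

    # Scaling out is simply adding another node; a real platform would
    # rebalance existing assignments more carefully than this sketch does.
    nodes.append("collector-4")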

The net result is parallel event processing resources deployed ‘closer’ to event sources, faster event collection, and systems designed to scale without massive reconfiguration. This architecture enables different deployment models, and better accommodates distributed IT systems, cloud providers, and virtual environments – which increasingly constitute the fabric of modern technology infrastructure. The secret sauce making it all possible is distributed system management. It is easy to say “big data”, but much harder to do heavy-duty security analysis at scale. Later, when we discuss proof-of-concept testing and final decision-making, we will show how to substantiate these claims.

The important parts, though, are the architectural changes to enable scaling and performance, and support for more data sources. Without this shift nothing else matters.

Serving Multiple Use Cases

The future of security management is not just about detecting advanced threats and malware, although that is the highest-profile use case. We still need to get work done today, which means adding value to the operations team, as well as to compliance and security functions. This typically involves analyzing vulnerability assessment information so security teams can ensure basic security measures are in place. Patch and configuration data can be analyzed similarly to help operations teams keep pace with dynamic – and increasingly virtual – environments. We have even seen cases where operations teams detected application DoS attacks through infrastructure event data. This kind of derivative security analysis is the precursor to allowing risk and business analytics teams to make better business decisions – to redeploy resources, take applications offline, etc. – by leveraging data collected from the SIEM.
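
To illustrate that kind of derivative analysis, here is a hypothetical sketch that flags a possible application DoS by counting connection events per application per minute; the field names and threshold are assumptions for illustration, not any product’s schema.

    from collections import Counter

    # Hypothetical connection events already collected by the SIEM;
    # field names and values are invented for illustration.
    events = [
        {"app": "webstore", "minute": "12:01"},
        {"app": "webstore", "minute": "12:01"},
        {"app": "payroll", "minute": "12:01"},
        # ...a real stream would contain thousands of records per minute...
    ]

    THRESHOLD = 1000  # assumed baseline; in practice derived from history

    per_app_minute = Counter((e["app"], e["minute"]) for e in events)
    for (app, minute), count in per_app_minute.items():
        if count > THRESHOLD:
            print(f"possible DoS against {app} at {minute}: {count} connections")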

Enhanced Visibility

Attackers continually shift their strategies to evade detection, increase efficiency, and maximize the impact of their attacks. Historically one of SIEM’s core value propositions has been an end-to-end view, enabled by collecting all sorts of different log files from devices all around the enterprise. Unfortunately that turned out not to be enough – log files and NetFlow records rarely contain enough information to detect or fully investigate an attack. We needed better visibility into what is actually happening within the environment, rather than expecting analysts to wade through zillions of event records to figure out when the organization is under attack.

We have seen three technical advances which, taken together, provide the evolutionary benefit of much better visibility into the event stream. In no particular order they are more (and better) data, better analysis techniques, and better visualization capabilities.

  • More and Better Data: Collection of application events, full packet capture – not just metadata – and other sources that taxed older SIEM systems. In many cases the volume or format of the data was incompatible with the underlying data management engine.
  • Better Analysis: These new data sources enable more detailed analysis, longer retention, and broader coverage, which together provide better depth and context for our analyses – see the sketch after this list.
  • Better Visualization: Enhanced analysis, combined with advanced programmatic interfaces and better visualization tools, substantially improves the experience of interrogating the SIEM. Old-style dashboards, with simplistic pie charts and bar graphs, have given way to complex data representations that much better illuminate trends and highlight anomalous activity.
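
As a toy example of the ‘Better Analysis’ point above, here is a hypothetical sketch that flags hours whose event volume deviates sharply from the historical mean; real platforms baseline far more dimensions than simple counts.

    import statistics

    # Hypothetical hourly event counts for a single log source (invented numbers).
    hourly_counts = [980, 1010, 995, 1020, 1005, 4800, 990, 1015]

    mean = statistics.mean(hourly_counts)
    stdev = statistics.stdev(hourly_counts)

    for hour, count in enumerate(hourly_counts):
        z = (count - mean) / stdev
        if abs(z) > 2:  # assumed threshold; tuning is environment-specific
            print(f"hour {hour}: {count} events looks anomalous (z = {z:.1f})")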

These might look like simple incremental enhancements to existing capabilities, but combined they deliver a major improvement in visibility.

Decreased Time to Value

The most common need voiced by SIEM buyers is to have their platforms provide value without requiring major customization and professional services. Customers are tired of buying SIEM toolkits, and then spending time and money to build a custom SIEM system tailored to their particular environment. As we mentioned in our previous post, collecting an order of magnitude more data requires a similar jump in analysis capabilities – the alternative is to be drowned in a sea of alerts.
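
As a toy illustration of that math, here is a hypothetical sketch that collapses a flood of raw alerts into one aggregated alert per rule and source; a real SIEM correlates across many more dimensions, but the principle – fewer, richer alerts – is the same.

    from collections import defaultdict

    # Hypothetical raw alerts; in practice these could number in the millions per day.
    raw_alerts = [
        {"rule": "brute-force", "source": "10.0.0.5"},
        {"rule": "brute-force", "source": "10.0.0.5"},
        {"rule": "brute-force", "source": "10.0.0.5"},
        {"rule": "malware-beacon", "source": "10.0.2.9"},
    ]

    aggregated = defaultdict(int)
    for alert in raw_alerts:
        aggregated[(alert["rule"], alert["source"])] += 1

    for (rule, source), count in aggregated.items():
        print(f"{rule} from {source}: {count} raw alerts collapsed into one")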

The same math applies to deployment and management – monitoring many more types of devices and analyzing data in new ways means platforms need to be easier to deploy and manage simply to maintain the old level of manageability. The good news is that SIEM platform vendors have made significant investments to support more devices and offer better installation and integration, which combined make deployment less resource intensive.

As these platforms integrate the new data sources and enhanced visibility described above, the competitiveness of a platform comes down to the simplicity and intuitiveness of its management interface, and the availability of out-of-the-box policies and reports which make use of the new data types. But given the need for increasingly sophisticated analysis, and the shortage of people with the skills to perform it, organizations need a way to kickstart their analysis functions. This can make the biggest difference in time to value for these new platforms.

Hybrid Deployments and Streamlined Integration

Very few firms are actually in the business of delivering security, so it has become common to leverage third parties either to manage an on-premises SIEM remotely, or to run the SIEM entirely from an off-site Security Operations Center (SOC). These alternative deployment models enable a handful of experts to help set up and manage a SIEM, and provide expert policy development and analysis unavailable in most organizations.

We also see several interesting hybrid deployment models appearing. One uses a managed service to handle first-level alerting and alert validation. The internal team also collects and manages overlapping data within its own monitoring and analysis platforms, for forensic analysis and more advanced uses of security data. This allows the organization to allocate its skilled staff to real issues, leaving simple alert reduction and initial analysis to a commodity security monitoring provider.

We also see hybrids which leverage external intelligence feeds to alert internal SOC staff to potential areas of trouble. These intelligence sources provide a much broader set of indicators to drive analysis or tune monitoring, based on what is actually happening out in the wild. That kind of data can’t be found internally because your organization hasn’t seen the attacks yet. It is an excellent way to benefit from the misfortune of others who have already been targeted.
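
To make that concrete, here is a hypothetical sketch of matching an external indicator feed against internal connection logs; the indicator values and log fields are invented for illustration, and any real feed would also carry context such as confidence and age.

    # Hypothetical indicators from an external intelligence feed (invented values).
    bad_ips = {"203.0.113.7", "198.51.100.23"}

    # Hypothetical outbound connection records pulled from internal logs.
    connections = [
        {"src": "10.1.4.22", "dst": "198.51.100.23", "port": 443},
        {"src": "10.1.4.30", "dst": "93.184.216.34", "port": 80},
    ]

    for conn in connections:
        if conn["dst"] in bad_ips:
            print(f"host {conn['src']} contacted known-bad {conn['dst']}:{conn['port']}")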

With the new customer use cases defined, along with the technical advances that have emerged to address the requirements, we are ready to take a critical look at your requirements in the next post. We will walk you through a process to assess what is and isn’t working with your existing platform, understand the impact of these new requirements, and then prioritize deficiencies into an actionable list to drive the decision of whether to move to another platform.
