Monitoring up the Stack: Platform Considerations

By Adrian Lane
So far in the Monitoring up the Stack series, we have focused on additional data types and analysis techniques that extend security monitoring, providing a deeper and better perspective on what’s happening. That added value is all good, but we all know there is no free lunch. So now let’s look at some of the problems, challenges, and extra work that come along with deeper monitoring goodness. We know most of you who have labored with scalability and configuration challenges in your SIEM product were waiting for the proverbial other shoe to drop. Each new data type and its associated analysis impact the platform, so in this post we will discuss some of these considerations and think a bit about how to work around the potential issues.
To be fair, it’s not all bad news. Some additional data sources are already integrated with the SIEM (as in the case of identity and database activity monitoring), minimizing deployment concerns. However, most options for application, database, user, and file monitoring are not offered as fully integrated features. Monitoring products sometimes need to be set up in parallel – yep, that means another product to deploy, configure, and manage. You’ll configure the separate monitor to feed some combination of events, configuration details, and/or alerts to the SIEM platform – but the integration likely stops there. And each type of monitoring we have discussed has its own idiosyncrasies and special deployment requirements, so the blade cuts both ways. Adding hard-to-get data and real-time analysis for these additional data sources comes at a cost. But what fun would it be if everything was standardized and worked out of the box? So you know what you’re getting yourself into, the following is a checklist of platform issues to consider when adding these additional data types to your monitoring capabilities.
Scalability: When adding monitoring capabilities, integrated or standalone, you need additional processing power. SIEM solutions offer distributed models to leverage multi-tier or multi-platform deployments, which may provide the horsepower to process additional data types. You may need to reconfigure your collection and/or analysis architecture to redistribute compute power for these added capabilities. Alternatively, many application and/or database monitoring approaches utilize software agents on the target platform. In some cases this is to access data otherwise not available, or to remove network latency from analysis response times, as well as to distribute the processing load across the organization. Of course there is a downside to agents: overhead and memory consumption can impact the target platform, on top of the normal installation & management headaches. The point is that you need to be aware of the extra work being performed and where, and you will need to absorb that requirement on the target platforms or add horsepower to the SIEM system. Regardless of the deployment model you choose, you will need additional storage to accommodate the extra data collected. You may already be monitoring some application events through syslog, but transaction history can increase event volume per application by an order of magnitude. All monitoring platforms can be set to filter out events by policy, but filtering too much defeats the purpose of monitoring these other sources in the first place.
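To make the filtering trade-off concrete, here is a minimal sketch of policy-based event filtering at the collector. The policy structure, event fields, and event types are all hypothetical illustrations, not taken from any particular SIEM or monitoring product – the point is that a filter can drop routine transaction noise by severity while whitelisting event types you never want to lose:

```python
# Hypothetical sketch: policy-based filtering of collected events before
# they are forwarded to the SIEM. All names and thresholds are illustrative.

SEVERITY = {"debug": 0, "info": 1, "notice": 2, "warning": 3, "error": 4, "critical": 5}

# Example filter policy: drop low-severity transaction noise, but always
# keep event types that matter regardless of their severity.
POLICY = {
    "min_severity": "warning",
    "always_keep": {"auth_failure", "schema_change", "priv_escalation"},
}

def should_forward(event: dict, policy: dict) -> bool:
    """Decide whether a collected event gets forwarded to the SIEM."""
    if event.get("type") in policy["always_keep"]:
        return True
    level = SEVERITY.get(event.get("severity", "info"), 1)
    return level >= SEVERITY[policy["min_severity"]]

events = [
    {"type": "txn_commit", "severity": "info"},      # routine noise: dropped
    {"type": "auth_failure", "severity": "info"},    # whitelisted: kept
    {"type": "disk_error", "severity": "error"},     # severe enough: kept
]
forwarded = [e for e in events if should_forward(e, POLICY)]
```

Tightening `min_severity` cuts storage and processing costs, but every event dropped here is invisible to correlation downstream – which is exactly the over-filtering risk described above.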
Integration: There are three principal integration points to consider. The first is how to get data into the SIEM and integrated with other event types, and the second is how to configure the monitors regarding what to look for. Fully integrated SIEM systems account for both policy management and normalization / correlation of events. While you may need to alter some of your correlation rules and reports to take advantage of these new data types, it can all be performed from a single management console. Standalone monitoring systems can easily be configured to send events, configuration settings, and alerts directly to a SIEM, or drop the data into files for batch processing. SIEM platforms are adept at handling data from heterogeneous sources, so you just change the correlation, event filtering, and data retention rules to account for the additional data. The second – and most challenging – part of integration is sharing policies & reports between the two systems (SIEM and standalone monitor). Keep in mind that things like configuration analysis, behavioral monitoring, and file integrity monitoring all work by comparing current results against reference values. Unlike hard-coded attribute comparisons in most SIEM platforms, these reference values change over time (by definition). Policies need to be flexible enough to handle these dynamic values, so if your SIEM platform can’t accommodate them you’ll need to use the monitoring platform’s interface for policies, reporting, and data management. We see this with most Database Activity Monitoring deployments, where the SIEM is not flexible enough to alert properly, so customers need to maintain separate rule bases in the two products. Whenever a rule changes on either side, this disconnection requires manual verification that settings remain consistent between the two platforms.
Some monitoring tools have import and export features, so you can create a master policy set for all servers, and provide policy reports that detail which rules are active for audit purposes. The third point to consider is that most monitoring systems leverage smart agents, with agent deployment and maintenance managed from the console. Most SIEM platforms leverage a web-based management platform, which facilitates management from a central location, or even merging consoles. However, many standalone monitoring systems for content, file integrity, and web application monitoring are Windows-specific applications that can’t easily be merged and must be managed as standalone applications.
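The manual rule-consistency check described above can be partially automated if both products can export their active rules. A minimal sketch, assuming each export can be reduced to a rule-name-to-definition mapping (the rule names and definition syntax here are invented for illustration):

```python
# Hypothetical sketch: detecting rule drift between a SIEM and a standalone
# monitor by diffing their exported rule sets. All rule content is made up.

siem_rules = {
    "failed_logins_burst": "count(auth_failure) > 5 within 60s",
    "admin_table_access": "user not in dba_group and table = 'credit_cards'",
}
monitor_rules = {
    "failed_logins_burst": "count(auth_failure) > 10 within 60s",  # threshold changed on one side
    "admin_table_access": "user not in dba_group and table = 'credit_cards'",
    "after_hours_export": "bulk_select and hour not in business_hours",  # never ported to the SIEM
}

def diff_rules(a: dict, b: dict):
    """Return rules present only in a, only in b, and rules whose definitions differ."""
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    changed = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return only_a, only_b, changed

only_siem, only_monitor, changed = diff_rules(siem_rules, monitor_rules)
```

A diff like this won’t reconcile the two rule bases for you – the syntaxes rarely map one-to-one – but it flags what needs human review after each policy change, which beats eyeballing both consoles.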
Analysis: Each new data type needs its own set of analysis policies, alerting rules, dashboards, and reports. This is really where the bulk of the effort is spent to make these broader data sources available and effective. It’s not just that we have new types of data being collected – the flexibility of flat-file event storage used within SIEM products adapts readily enough – but that monitoring tools should leverage more than merely attribute analysis. To detect SQL injection attacks, data exfiltration, or even something as simple as spam, we need to do more with the data we have. Content analysis, behavioral analysis, and contextual analysis – three of the most common options – look at the same events differently. The SIEM platform must have the flexibility to incorporate these analysis techniques, either as part of the remote data collectors, or as add-on functions within the platform. Lower-end platforms won’t have this and probably don’t need to, but leveraging these additional monitoring capabilities within SIEM requires an architecture flexible enough to incorporate different analytics engines. When we refer to the SIEM platform, this is what we are talking about: it’s basically an analysis engine, and must be flexible enough to take lots of data and provide multivariate correlation and alerting.
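To illustrate the difference between attribute comparison and content analysis, here is a deliberately simplistic sketch. The attribute rule matches fixed field values the way a typical SIEM rule does; the content rule inspects the query text itself. The patterns and field names are illustrative only – a real SQL injection detector needs far more than a few regexes:

```python
# Hypothetical sketch: attribute comparison vs. content analysis on the
# same event stream. Patterns are toy examples, not a production detector.
import re

# Attribute comparison (typical SIEM-style rule): match on fixed field values.
def attribute_match(event: dict) -> bool:
    return event.get("src_ip") == "10.0.0.99" and event.get("status") == 500

# Content analysis: inspect the captured query text for injection markers.
SQLI_PATTERNS = [
    re.compile(r"'\s*or\s+1\s*=\s*1", re.IGNORECASE),   # tautology: ' OR 1=1
    re.compile(r";\s*drop\s+table", re.IGNORECASE),     # stacked statement
    re.compile(r"union\s+select", re.IGNORECASE),       # UNION-based extraction
]

def content_match(query: str) -> bool:
    return any(p.search(query) for p in SQLI_PATTERNS)
```

The attribute rule fires only on values it was told about in advance; the content rule fires on what the data actually says, regardless of source address or response code. That is the gap the analytics engines discussed above have to close.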
Other considerations: Application monitors are more likely to intercept sensitive data, as they dig around in places built-in SIEM collectors don’t look. You may not care about the privacy of syslog data on the network, but that won’t fly for application, database, or identity traffic. You need to secure application requests and database queries, because some of this information is private and therefore protected by any number of regulations. SIEM stores data securely once it has collected it, and offers the option to encrypt stored data (with a performance cost). If you need to encrypt the event stream as it is routed to the SIEM platform, you’ll need to set up the platform – or the collector software itself – to secure data transmissions. Collection architecture also needs to account for the intended use case – for instance, application and database monitors used to block activity or perform virtual patching must be deployed “in front” of the platform they monitor. And to monitor web applications in the DMZ, the collectors must support different network addressing and tunnel data between the collector and SIEM; you might also need to alter your network topology.
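As a sketch of what securing the event stream looks like on the collector side, the following wraps syslog-style forwarding in TLS. The hostname, port, certificate path, and source host are placeholders; in practice most collectors and SIEMs offer TLS transport (e.g. RFC 5425-style syslog over TLS) as a configuration option rather than custom code:

```python
# Hypothetical sketch: forwarding monitor events to a SIEM collector over
# TLS-wrapped syslog. Host, port, and CA file are placeholder values.
import socket
import ssl
from datetime import datetime, timezone

def format_syslog(app: str, msg: str, facility: int = 13, severity: int = 5) -> str:
    """Build a minimal RFC 5424-style syslog line; PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"<{pri}>1 {ts} dbmonitor01 {app} - - - {msg}"

def send_encrypted(host: str, port: int, lines: list, ca_file: str) -> None:
    """Open a TLS connection to the SIEM collector and stream the events."""
    ctx = ssl.create_default_context(cafile=ca_file)  # verifies the collector's certificate
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            for line in lines:
                tls.sendall((line + "\n").encode("utf-8"))

event = format_syslog("dbmon", 'query="SELECT * FROM accounts" user=app_svc')
# send_encrypted("siem.example.com", 6514, [event], "/etc/pki/siem-ca.pem")
```

Note that encrypting in transit protects the query text on the wire, but the plaintext still lands in the SIEM’s event store – which is where the at-rest encryption option (and its performance cost) mentioned above comes in.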
But as we have discussed throughout this series, there are plenty of reasons to extend monitoring beyond the traditional security and network devices. With the growing popularity of application and database attacks you cannot afford not to monitor these additional data sources. So at least go into the project with your eyes open as to how these additional data types will impact your existing monitoring infrastructure.
That concludes the technical details of this series. If you feel we left anything out that should be discussed within this series, just let us know in the comments. We’ll wrap the series next week by introducing a phased approach to adding these additional data types to address the threats we talked about in the threats post.