In the last post on Data Collection we introduced the complicated process of gathering data. Now we need to understand how to put it into a manageable form for analysis, reporting, and long-term storage for forensics.

Aggregation

SIEM platforms collect data from thousands of different sources because these events provide the data we need to analyze the health and security of our environment. In order to get a broad end-to-end view, we need to consolidate what we collect onto a single platform. Aggregation is the process of moving data and log files from disparate sources into a common repository. Collected data is placed into a homogeneous data store – typically purpose-built flat file repositories or relational databases – where analysis, reporting, and forensics occur, and archival policies are applied.

The process of aggregation – compiling these dissimilar event feeds into a common repository – is fundamental to Log Management and most SIEM platforms. Data aggregation can be performed by sending data directly into the SIEM/LM platform (which may be deployed in multiple tiers), or an intermediary host can collect log data from the source and periodically move it into the SIEM system. Aggregation is critical because we need to manage data in a consistent fashion: security, retention, and archive policies must be systematically applied. Perhaps most importantly, having all the data on a common platform allows for event correlation and data analysis, which are key to addressing the use cases we have described.
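To make the intermediary-collector approach concrete, here is a minimal Python sketch: a host gathers local log files and pushes each line, tagged with its source, to a central listener. The file paths and the collector.example.com address are assumptions for illustration; a production log shipper would also batch, buffer, and handle failures.

    import socket

    # Hypothetical local sources this intermediary host gathers before forwarding.
    LOCAL_SOURCES = ["/var/log/auth.log", "/var/log/httpd/access_log"]
    CENTRAL_COLLECTOR = ("collector.example.com", 514)   # assumed central listener

    def forward_logs():
        """Read each local source and push its lines to the central repository."""
        with socket.create_connection(CENTRAL_COLLECTOR) as conn:
            for path in LOCAL_SOURCES:
                with open(path, "rb") as log_file:
                    for line in log_file:
                        # Tag each record with its origin so the central platform
                        # knows which source it came from.
                        conn.sendall(path.encode() + b": " + line)

    if __name__ == "__main__":
        forward_logs()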

There are some downsides to aggregating data onto a common platform. The first is scale: analysis becomes exponentially harder as the data set grows. Centralized collection means huge data stores, greatly increasing the computational burden on the SIEM/LM platform. Technical architectures can help scale, but ultimately these systems require significant horsepower to handle an enterprise’s data. Systems that utilize central filtering and retention policies require all data to be moved and stored – typically multiple times – increasing the burden on the network.

Some systems scale using distributed processing, where filtering and analysis occur outside the central repository, typically at the distributed data collection point. This reduces the compute burden on the central server and allows processing to occur on smaller, more manageable data sets. It does require that policies, along with the code to process them, be distributed and kept current throughout the network. Distributed agent processes are a handy way to “divide and conquer”, but increase IT administration requirements. This strategy also adds a computational burden on the data collection points, degrading their performance and potentially slowing them enough to drop incoming data.
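For illustration, the sketch below shows what distributed filtering might look like in Python: the collection point applies a locally cached policy and forwards only matching events, shrinking what the central server has to store and analyze. The policy format and field names are assumptions, not any product’s actual configuration.

    # Locally cached filter policy, pushed out from the central platform.
    FILTER_POLICY = {
        "min_severity": 4,                       # drop informational noise locally
        "interesting_programs": {"sshd", "su"},  # forward only these programs
    }

    def passes_policy(event: dict) -> bool:
        """Return True if this event should be forwarded to the central SIEM."""
        return (event.get("severity", 0) >= FILTER_POLICY["min_severity"]
                and event.get("program") in FILTER_POLICY["interesting_programs"])

    def filter_batch(events: list) -> list:
        """Apply the distributed policy to a batch collected at this node."""
        return [e for e in events if passes_policy(e)]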

Data Normalization

If the process of aggregation is to merge dissimilar event feeds into one common platform, normalization takes it one step further by reducing the records to just common event attributes. As we mentioned in the data collection post, most data sources collect exactly the same base event attributes: time, user, operation, network address, and so on. Facilities like syslog not only group the common attributes, but provide means to collect supplementary information that does not fit the basic template. Normalization is where known data attributes are fed into a generic template, and anything that doesn’t fit is simply omitted from the normalized event log. After all, to analyze we want to compare apples to apples, so we throw away the oranges for the sake of simplicity.
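A simple Python sketch of that template-and-omit step, using illustrative field names rather than any vendor’s actual schema:

    # The generic template: only these attributes survive normalization.
    NORMALIZED_FIELDS = ("timestamp", "user", "operation", "src_ip", "dst_ip")

    def normalize(raw_event: dict) -> dict:
        """Reduce a raw event to the common attributes every source shares."""
        return {field: raw_event.get(field) for field in NORMALIZED_FIELDS}

    raw = {
        "timestamp": "2010-05-20T14:02:11Z",
        "user": "asmith",
        "operation": "login_failure",
        "src_ip": "10.1.4.22",
        "auth_method": "publickey",   # supplementary detail -- the "orange"
    }
    print(normalize(raw))             # auth_method is omitted from the output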

Depending upon the SIEM or Log Management vendor, the original non-normalized records may be kept in a separate repository for forensics purposes until they are archived or deleted, or they may simply be discarded. In practice, discarding original data is a bad idea, since the full records are required for any kind of legal enforcement. Thus, most products keep the raw event logs for a user-specified period prior to archival. In some cases, the SIEM platform keeps a link to the original event in the normalized event log, which provides ‘drill-down’ capability to easily reference extra information collected from the device.
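The drill-down link can be as simple as an identifier stored in the normalized record. The Python sketch below shows the idea; the in-memory dictionaries are stand-ins for whatever raw and normalized repositories a platform actually uses.

    import uuid

    raw_store = {}         # stand-in for the raw-event repository kept for forensics
    normalized_store = []  # stand-in for the normalized event log

    def ingest(raw_event: dict, normalizer) -> None:
        """Keep the full record, and store the normalized version with a link back."""
        raw_id = str(uuid.uuid4())
        raw_store[raw_id] = raw_event      # retained until archival or deletion
        normalized = normalizer(raw_event)
        normalized["raw_id"] = raw_id      # the drill-down link
        normalized_store.append(normalized)

    def drill_down(normalized_event: dict) -> dict:
        """Pull the original record back up from the raw repository."""
        return raw_store[normalized_event["raw_id"]]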

Normalization allows for predictable and consistent storage for all records, and indexes these records for fast searching and sorting, which is key when battling the clock in investigating an incident. Additionally, normalization allows basic and consistent reporting and analysis to be performed on every event regardless of the data source. When the attributes are consistent, event correlation and analysis – which we will discuss in our next post – are far easier.
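Consistent attributes are what make that indexing cheap. A minimal sketch, assuming normalized events are plain dictionaries that all share the same fields:

    from collections import defaultdict

    def build_index(events: list, field: str) -> dict:
        """Index normalized events by one common attribute, e.g. 'user' or 'src_ip'."""
        index = defaultdict(list)
        for event in events:
            index[event.get(field)].append(event)
        return index

    # e.g. build_index(normalized_store, "user")["asmith"] returns that user's
    # events without scanning the whole store.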

Technically normalization is no longer a requirement on current platforms. Normalization was a necessity in the early days of SIEM, when storage and compute power were expensive commodities, and SIEM platforms used relational database management systems for back-end data management. Advances in indexing and searching unstructured data repositories now make it feasible to retain the full original records, eliminating normalization overhead.

Enriching the Future

In reality, we are seeing a number of platforms doing data enrichment: adding supplemental information (like geo-location, transaction numbers, application data, etc.) to logs and events to enhance analysis and reporting. Enabled by cheap storage and Moore’s Law, and driven by the ever-increasing demand to collect more information to support security and compliance efforts, enrichment is something we expect more platforms to adopt. Data enrichment requires a highly scalable technical architecture, purpose-built for multi-factor analysis and scale, making tomorrow’s SIEM/LM platforms look very similar to current business intelligence platforms.
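As a simple illustration, the sketch below bolts geo-location onto a normalized event. The lookup table is a hypothetical stand-in for a real geo-IP feed or service.

    GEO_LOOKUP = {   # assumed lookup table; a real platform would query a geo-IP feed
        "10.1.4.22": {"country": "US", "city": "Phoenix"},
    }

    def enrich(event: dict) -> dict:
        """Return a copy of the event with geo-location attached when available."""
        enriched = dict(event)
        geo = GEO_LOOKUP.get(event.get("src_ip"))
        if geo:
            enriched["geo"] = geo
        return enriched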

But that just scratches the surface in terms of enrichment, because data from the analysis can also be added to the records. Examples include identity matching across multiple services or devices, behavioral detection, transaction IDs, and even rudimentary content analysis. It is somewhat like having the system take notes and extrapolate additional meaning from the raw data, making the original record more complete and useful. This is a new concept for SIEM, so what enrichment will ultimately encompass is anyone’s guess. But as the core functions of SIEM have standardized, we expect vendors to introduce new ways to derive additional value from the sea of data they collect.


Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction.
  2. Use Cases, Part 1.
  3. Use Cases, part 2.
  4. Business Justification.
  5. Data Collection.