In our first post on Leveraging Threat Intelligence in Security Monitoring (TISM), Benefiting from the Misfortune of Others, we discussed threat intelligence as a key information source for shortening the window between compromise and detection. Now we turn to the security monitoring side – specifically, how monitoring processes need to adapt to take advantage of threat intelligence.
We will start with the general monitoring process first documented in our Network Security Operations Quant research. This is a good starting point because it lays out all the gory details involved in monitoring. Its focus is firewalls and IPS devices, but expanding it to include the other key devices which require monitoring isn't a huge deal.
Network Security Monitoring
Plan

In this phase we define the depth and breadth of our monitoring activities. These are not one-time tasks but processes to revisit every quarter, as well as after any incident that triggers a policy review.
- Enumerate: Find all the security, network, and server devices which are relevant to the security of the environment.
- Scope: Decide which devices are within scope for monitoring. This involves identifying the asset owner; profiling the device to understand data, compliance, and policy requirements; and assessing the feasibility of collecting data from it.
- Develop Policies: Determine the depth and breadth of the monitoring process. This consists of two parts: organizational policies (which devices will be monitored and why) and device & alerting policies (which data will be collected from each device type – potentially any network, security, computing, application, or data capture/forensics device).
For device types in scope, device and alerting policies are developed to detect potential incidents which require investigation and validation. Defining these policies involves a QA process to test the effectiveness of alerts. A tuning process must be built into alerting policy definitions – over time alert policies need to evolve as the targets to defend change, along with adversaries’ tactics.
Finally, monitoring is part of a larger security operations process, so policies are required for workflow and incident response. They define how monitoring information is leveraged by other operational teams and how potential incidents are identified, validated, and investigated.
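To make the Plan phase concrete, the organizational and device & alerting policies described above can be captured as reviewable data rather than tribal knowledge. The sketch below is purely illustrative – all device names, field names, and thresholds are hypothetical, not part of any particular product:

```python
# Illustrative sketch: monitoring policies as data, so the quarterly
# review and tuning loop has something concrete to revise.
# Every name and threshold here is a hypothetical example.

MONITORING_POLICIES = {
    "fw-dmz-01": {
        "owner": "netops",          # identified during the Scope step
        "in_scope": True,
        "collect": ["deny_logs", "config_changes"],
        "alerts": [
            # device & alerting policy: fire on an outbound deny spike
            {"name": "outbound-deny-spike", "threshold": 100, "window_s": 300},
        ],
    },
    "ips-core-01": {
        "owner": "secops",
        "in_scope": True,
        "collect": ["signature_hits"],
        "alerts": [
            {"name": "repeated-sig-hit", "threshold": 10, "window_s": 60},
        ],
    },
}

def devices_in_scope(policies):
    """Return the devices the organizational policy puts in scope."""
    return [name for name, p in policies.items() if p["in_scope"]]
```

Keeping policies in a structured form like this makes the QA and tuning feedback loop auditable: each quarterly review produces a diff, not a rewrite.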
Monitor

In this phase monitoring policies are put to use, gathering data and analyzing it to identify areas for validation and investigation. All collected data is stored for compliance, trending, and reporting as well.
- Collect: Collect alerts and log records based on the policies defined under Plan. This can be performed within a single-element manager or abstracted into a broader Security Information and Event Management (SIEM) system for multiple devices and device types.
- Store: Collected data must be stored for future access, for both compliance and forensics.
- Analyze: The collected data is analyzed to identify potential incidents based on alerting policies defined in Phase 1. This may involve numerous techniques, including simple rule matching (availability, usage, attack traffic policy violations, time-based rules, etc.) and/or multi-factor correlation based on multiple device types.
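The Analyze step above can be sketched in a few lines. This is a minimal, hypothetical example – real SIEM correlation engines are far richer – showing simple rule matching on one device type, then a two-factor correlation across device types. The event field names are assumptions for illustration:

```python
from collections import defaultdict

def match_simple_rules(events, threshold=5):
    """Simple rule matching: flag any source IP with more than
    `threshold` denied connections in the event batch."""
    denies = defaultdict(int)
    for e in events:
        if e["action"] == "deny":
            denies[e["src_ip"]] += 1
    return {ip for ip, count in denies.items() if count > threshold}

def correlate(fw_events, ips_events, threshold=5):
    """Multi-factor correlation: a source is flagged only if it trips
    the firewall deny rule AND appears in an IPS signature hit."""
    noisy_sources = match_simple_rules(fw_events, threshold)
    ips_sources = {e["src_ip"] for e in ips_events}
    return noisy_sources & ips_sources
```

The point of the second function is the one made above: requiring corroboration from multiple device types cuts the false positive rate of any single rule.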
Action

When an alert fires in the Analyze step, this phase kicks in to investigate and determine whether further action is necessary.
- Validate/Investigate: If and when an alert is generated, it must be investigated to validate the attack. Is it a false positive? Is it a real issue that requires further action? If the latter, move to the Action phase. If this was not a ‘good’ alert, do policies need to be tuned?
- Action/Escalate: Take action to remediate the issue. This may involve a hand-off or escalation to Operations.
After a few alert validations it is time to determine whether policies must be changed or tuned. This must be a recurring feedback loop rather than a one-time activity – networks and attacks are both dynamic, and require ongoing diligence to ensure monitoring and alerting policies remain relevant and sufficient.
What Has Changed
Security monitoring has undergone significant change over the past few years. We have detailed many of these changes in our Security Management 2.5 series, but we will highlight a few of the more significant aspects. The first is having to analyze much more data from many more sources. We will go into detail later in this post.
Next, the kind of analysis performed on the collected data is different. Setting up rules for a security monitoring environment was traditionally a static process – you would build a threat model and then define rules to look for that kind of attack. This approach requires you to know what to look for. For reasonably static attacks this approach can work.
Nowadays planning around static attacks will get you killed. Tactics change frequently and malware changes daily. Sure, there are always patterns of activity to indicate a likely attack, but attackers have gotten proficient at evading traditional SIEMs. Security practitioners need to adapt detection techniques accordingly.
So you need to rely much more on detecting activity patterns, and looking for variations from normal patterns to trigger the alerts and investigation. But how can you do that kind of analysis on what could be dozens of disparate data sources? Big data, of course. Kidding aside, that is actually the answer, and it is not overstating to say that big data technologies will fundamentally change how security monitoring is done – over time.
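The shift from static rules to pattern-based detection described above can be illustrated with a toy baseline model. This is a sketch under simplifying assumptions (a single numeric metric, a normal-ish distribution) – production anomaly detection over dozens of data sources is far more involved:

```python
import statistics

def baseline(history):
    """Build a baseline of 'normal' from historical observations of a
    metric, e.g. hourly outbound bytes for a server."""
    return statistics.mean(history), statistics.pstdev(history)

def is_anomalous(value, history, sigmas=3.0):
    """Flag an observation that deviates from the baseline by more
    than `sigmas` standard deviations -- a variation from the normal
    pattern, rather than a match against a predefined attack."""
    mean, stdev = baseline(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) > sigmas * stdev
```

The key contrast with the static approach: nothing here encodes what an attack looks like, only what normal looks like, so novel tactics can still trip the alert.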
Broadening Data Sources
In Security Management 2.5: Platform Evolution, we explained that to keep pace with advanced attackers, security monitoring platforms must do more with more data. Having more data opens up very interesting possibilities. You can integrate data from identity stores to trace behavior back to users. You can pull information from applications to look for application misuse, or gaming legitimate application functionality including search and shopping carts. You can pull telemetry from server and endpoint devices, to search for specific indicators of compromise – which might represent a smoking gun and point out a successful attack.
We have always advocated for collecting more data, and monitoring platforms are beginning to develop capabilities to take advantage of additional data for analytics. As we mentioned, security monitoring platforms are increasingly leveraging advanced data stores, supporting much different (and more advanced) analytics to find patterns among many different data sources.
That doesn’t mean the tools will magically do big data analytics on a zillion data sources overnight – monitoring platforms are just now integrating these technologies. But the future is promising – this kind of unstructured analysis is critical to detecting nimble attacks from innovative attackers.
As we continue pushing things forward, threat intelligence (TI) fits into the mix by opening up a wealth of additional possibilities for analysis. As another data source, security monitoring platforms need both to automate ingestion of TI and to dynamically tune rules, reports, and visualizations to handle data that changes hourly – and perhaps even more frequently.
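Automated TI ingestion can be sketched simply. The feed format below (a CSV of indicators) is a hypothetical stand-in – real feeds use formats like STIX or vendor-specific JSON – but the pattern holds: re-parse the feed on each cycle so the match set tracks a source that changes hourly:

```python
import csv
import io

# Hypothetical sample feed; real TI feeds vary widely in format.
SAMPLE_FEED = """indicator,type
198.51.100.7,ip
evil.example.net,domain
203.0.113.42,ip
"""

def load_indicators(feed_text):
    """Parse a feed into a set for O(1) lookups during analysis.
    Called on every refresh cycle so rules track the current feed."""
    reader = csv.DictReader(io.StringIO(feed_text))
    return {row["indicator"] for row in reader}

def match_events(events, indicators):
    """Return events whose destination matches a current indicator."""
    return [e for e in events if e["dst"] in indicators]
```

Because the indicator set is rebuilt from the feed rather than baked into static rules, the alerting logic never needs to be rewritten when the feed changes – only re-run.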
Our next post will talk about what is required to gather threat intelligence, and offer an updated security monitoring process model that factors in TI and the platform evolution we have been discussing.