As we wrap up our Network-based Threat Detection series, we have already covered why prevention isn’t good enough and how to find indications of an attack in progress based on what you see on the network. Our last post worked through adding context to collected data to allow some measure of prioritization for alerts. To finish things off we will discuss additional sources of context and how to make alerts operationally useful.
Leveraging Threat Intelligence for Detection
This analysis is still restricted to your organization. You are gathering data from your networks and adding context from your enterprise systems. That is great, but not enough. Factoring data from other organizations into your analysis can help you refine it and prioritize your activities more effectively. Yes, we are talking about using threat intelligence in your detection process.
For prevention, threat intel can be useful to decide which external sites should be blocked on your egress filters, based on reputation and possibly adversary analysis. This approach helps ensure devices on your network don’t communicate with known malware sites, bot networks, phishing sites, watering hole servers, or other places on the Internet you want nothing to do with. Recent conversations with practitioners indicate much greater willingness to block traffic – so long as they have confidence in the alerts.
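As a rough illustration of that preventative use, the Python sketch below checks egress flow records against a simple reputation blocklist. The field names and the inline blocklist are assumptions for illustration; a real deployment would consume a commercial or open source feed and enforce the result on your egress filtering gear.

```python
# Minimal sketch: flag egress flows whose destinations appear on a threat intel
# reputation list. Field names and the inline blocklist are illustrative; in
# practice the list comes from a TI feed and blocking happens at the egress filter.
def flag_egress(flows, blocklist):
    """Yield flows whose destination matches a known-bad domain."""
    bad = {d.lower() for d in blocklist}
    for flow in flows:
        if flow["dst_host"].lower() in bad:
            yield flow

blocklist = ["evil-c2.example", "phish-landing.example"]   # assumed feed content
flows = [
    {"src_ip": "10.1.2.3", "dst_host": "evil-c2.example"},
    {"src_ip": "10.1.2.4", "dst_host": "www.example.org"},
]
for hit in flag_egress(flows, blocklist):
    print(f"block candidate: {hit['src_ip']} -> {hit['dst_host']}")
```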
But this series isn’t called Network-based Threat Prevention, so how does threat intelligence help with detection? TI provides a view of the network traffic patterns used in attacks on other organizations. Learning about these patterns enables you to look for them (Domain Generation Algorithms, for example) within your own environment. You might also see indicators of the internal reconnaissance or lateral movement typically used by certain adversaries, and use them to identify attacks in progress. Watching for bulk file transfers, for example, or types of file encryption known to be used by particular crime networks, could yield insight into exfiltration activities.
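To make the DGA example concrete, here is a minimal Python sketch that flags domains whose leftmost label is long and high-entropy – a common, if crude, DGA heuristic. The thresholds and sample queries are assumptions for illustration; production detection relies on richer features and allow-listing.

```python
# Minimal sketch: flag DNS queries that look algorithmically generated (DGA-like)
# using simple length and character-entropy heuristics. Thresholds are illustrative.
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain, min_len=12, min_entropy=3.5):
    label = domain.split(".")[0]          # examine the leftmost label only
    if len(label) < min_len:
        return False
    return shannon_entropy(label) >= min_entropy

queries = ["mail.google.com", "xkqj3h9zlm2p7vt4.net", "intranet.corp.local"]
for q in queries:
    if looks_like_dga(q):
        print(f"possible DGA domain: {q}")
```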
Just as the burden of proof is far lower in civil litigation than in criminal litigation, the bar for useful accuracy is far lower in detection than in prevention. When you are blocking network traffic for prevention, you had better be right. Users get cranky when you block legitimate network sessions, so you will be conservative about what you block. That means you will inevitably miss something – the dreaded false negative, a real attack that got through. But firing an alert provides more leeway, so you can be a bit less rigid.
That said, you still want to be close – false positives are still very expensive. This is where the approach mapped out in our last post comes into play. If you see something that looks like an attack based on external threat intel, you apply the same contextual filters to validate and prioritize.
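As a simple illustration of that validation step, the sketch below combines an indicator match with contextual attributes (asset criticality, user privilege) into a priority score. The field names and weights are hypothetical; the point is that TI-driven alerts flow through the same prioritization logic as everything else.

```python
# Minimal sketch: prioritize a TI-driven alert using enterprise context.
# Weights and field names are illustrative assumptions, not a standard scoring model.
def prioritize(alert):
    score = alert["indicator_confidence"]          # how much we trust the TI match
    if alert.get("asset_criticality") == "high":   # e.g. finance server, exec laptop
        score += 0.3
    if alert.get("privileged_user"):               # admin or service account involved
        score += 0.2
    return min(score, 1.0)

alert = {
    "device": "srv-db-01",
    "indicator_confidence": 0.6,
    "asset_criticality": "high",
    "privileged_user": True,
}
print(f"priority score: {prioritize(alert):.2f}")   # 1.00 -> investigate first
```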
Retrospection
What happens when you don’t know an attack is actually an attack when the traffic enters your network? This happens every time a truly new attack vector emerges. Obviously you don’t know about it, so your network controls will miss it and your security monitors won’t know what to look for. No one has seen it yet, so it doesn’t show up in threat intel feeds. So you miss, but that’s life. Everyone misses new attacks. The question is: how long do you miss it?
One of the most powerful concepts in threat intelligence is the ability to take newly discovered indicators and retrospectively search your security data to see whether an attack has already hit you. When you get a new threat intel indicator you can search your network telemetry (using your fancy analytics engine) to see whether you have seen it before. This isn’t optimal, because you already missed, but it’s much better than waiting for the attacker to take the next step in the attack chain. In the security game nothing is perfect. But leveraging the hard-won experience of other organizations makes your own detection faster and more accurate.
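A minimal sketch of that retrospection step follows, assuming telemetry stored as JSON-lines DNS records; the layout and field names are illustrative, and a real deployment would query the analytics engine rather than scan flat files.

```python
# Minimal sketch of retrospection: sweep previously collected telemetry for a
# newly published indicator. The JSON-lines layout and field names are assumptions.
import json

SAMPLE_LOG = "\n".join([
    '{"timestamp": "2015-05-01T10:22:03Z", "src_ip": "10.1.2.3", "dst_host": "evil-c2.example"}',
    '{"timestamp": "2015-05-01T10:25:11Z", "src_ip": "10.1.2.4", "dst_host": "www.example.org"}',
])

def retrospective_search(log_lines, new_indicators):
    """Return historical records that match indicators we only learned about today."""
    indicators = {i.lower() for i in new_indicators}
    hits = []
    for line in log_lines:
        record = json.loads(line)
        if record.get("dst_host", "").lower() in indicators:
            hits.append(record)
    return hits

for m in retrospective_search(SAMPLE_LOG.splitlines(), ["evil-c2.example"]):
    print(f"{m['timestamp']}: {m['src_ip']} contacted {m['dst_host']}")
```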
A Picture Is Worth a Thousand Words
At this point you have alerts, and perhaps some measure of prioritization for them. But one of the most difficult tasks is deciding how to navigate the hundreds or thousands of alerts generated in networks at scale. That’s where visualization techniques come into play. A key criterion for choosing a detection offering is getting information presented in a way that makes sense to you and will work with your organization’s culture.
Some like the traditional user experience, which looks like a Top 10 list of potentially compromised devices, with a grid showing the details of each alert. Another way to visualize detection data is a heat map showing devices and potential risks visually, offering drill-down into indicators and alert causes. There is no right or wrong here – it is just a question of what will be most effective for your security operations team.
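As a rough sketch of the heat map approach, the snippet below plots made-up alert severity scores for a handful of devices across risk categories, assuming matplotlib is available; any real product would offer drill-down rather than a static chart.

```python
# Minimal sketch of a device/risk heat map. Devices, risk categories, and scores
# are made-up illustrative data; matplotlib availability is assumed.
import matplotlib.pyplot as plt
import numpy as np

devices = ["laptop-042", "srv-db-01", "kiosk-7", "vp-finance-mac"]
risks = ["C&C traffic", "Recon", "Lateral movement", "Exfiltration"]
scores = np.array([
    [0.9, 0.2, 0.6, 0.1],
    [0.1, 0.7, 0.8, 0.4],
    [0.3, 0.1, 0.2, 0.0],
    [0.6, 0.4, 0.5, 0.9],
])

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="Reds", vmin=0, vmax=1)
ax.set_xticks(range(len(risks)))
ax.set_xticklabels(risks, rotation=30, ha="right")
ax.set_yticks(range(len(devices)))
ax.set_yticklabels(devices)
fig.colorbar(im, label="alert severity")
plt.tight_layout()
plt.show()
```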
Operationalizing Detection
As compelling as network-based threat detection is conceptually, a bunch of integration needs to happen before it can provide value and increase your security program’s effectiveness. There are two sides to integration: the data you need for detection, and the information about alerts sent to other operational systems. The former includes the connections to identity systems and external threat intelligence that drive the analytics used for detection. The latter includes the ability to pump alert and contextual data to your SIEM or other alerting system to kick off your investigation process.
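On the outbound side, a minimal sketch of pushing an enriched alert to a SIEM might look like the following. The collector address, transport (JSON over syslog), and alert fields are assumptions; most SIEMs also accept CEF/LEEF or a REST API.

```python
# Minimal sketch: forward an enriched alert to a SIEM over syslog. The collector
# address and the alert fields shown are illustrative assumptions.
import json
import logging
import logging.handlers

SIEM_HOST, SIEM_PORT = "siem.internal.example", 514   # assumed collector address

logger = logging.getLogger("netdetect")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=(SIEM_HOST, SIEM_PORT)))

alert = {
    "device": "laptop-042",
    "user": "jsmith",                 # identity context pulled from the directory
    "indicator": "evil-c2.example",   # threat intel indicator that matched
    "severity": "high",
    "evidence": "periodic beaconing, 5-minute interval",
}
logger.info(json.dumps(alert))        # hands the alert to the SIEM for investigation
```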
If you get comfortable enough with your detection results you can even configure workarounds such as IPS blocking rules based on these alerts. You might prevent compromised devices from communicating at all, block C&C traffic, or block exfiltration traffic. As described above, prevention demands minimization of false positives, but disrupting attackers can be extremely valuable. Similarly, integration with Network Access Control can move a compromised device onto a quarantine network until it can be investigated and remediated.
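For the blocking workaround, a heavily simplified sketch is shown below: it turns a high-confidence alert into an egress drop rule on a Linux gateway. The dry-run default, the severity gate, and the use of iptables (rather than your IPS or firewall management API) are illustrative choices.

```python
# Minimal sketch: turn a high-confidence alert into a temporary egress block.
# In practice you would push the rule through your firewall/IPS management API
# and expire it automatically; iptables on a gateway is used here for illustration.
import subprocess

def block_destination(bad_ip, dry_run=True):
    """Drop forwarded traffic to a known C&C address."""
    cmd = ["iptables", "-I", "FORWARD", "-d", bad_ip, "-j", "DROP"]
    if dry_run:
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)

# Only act on alerts that cleared the prioritization bar described earlier.
alert = {"indicator_ip": "203.0.113.45", "severity": "high"}
if alert["severity"] == "high":
    block_destination(alert["indicator_ip"])
```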
For network forensics you might integrate with a full packet capture/network forensics platform. In this use case, when a device shows potential compromise, traffic to and from it could be captured for forensic analysis. Such captured network traffic may provide a proverbial smoking gun. This approach could also make you popular with the forensics folks, because you would be able to provide the actual packets from the attack. Prioritized alerts enable you to be more precise and efficient about what traffic to capture, and ultimately what to investigate.
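Here is a minimal sketch of triggering such a targeted capture, assuming tcpdump on a sensor rather than a dedicated full packet capture platform; the interface name, output path, and file naming are illustrative.

```python
# Minimal sketch: when a device trips a high-priority alert, start a targeted
# capture of its traffic for the forensics team. Interface, output path, and the
# use of tcpdump (rather than a capture platform's API) are assumptions.
import datetime
import subprocess

def start_capture(suspect_ip, iface="eth0", out_dir="/var/captures"):
    """Capture all traffic to/from the suspect host into a timestamped pcap."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    pcap = f"{out_dir}/{suspect_ip}-{stamp}.pcap"
    cmd = ["tcpdump", "-i", iface, "-w", pcap, "host", suspect_ip]
    return subprocess.Popen(cmd), pcap          # caller terminates the process when done

proc, path = start_capture("10.1.2.3")
print(f"capturing suspect traffic to {path}; terminate the process to stop")
```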
Automation of these functions is still in its infancy, but we expect all sorts of security automation to emerge within the short-term planning horizon (18-24 months). We will increasingly see security controls reconfigured based on alerts, network traffic redirected, and infrastructure quarantined and pulled offline for investigation. Attacks hit too fast to respond any other way, but automation scares many security professionals. We expect this shift to play out over the next 5-7 years, but have no doubt that it will happen.
When to Remediate, and When Not to
It may be hard to believe, but there are real scenarios where you might not want to immediately remediate a compromised device. The first – and easiest to justify – is when it is part of an ongoing investigation; HR, legal, senior management, law enforcement, or anyone else may mandate that the device be observed but otherwise left alone. There isn’t much wiggle room in this scenario. With the remediation decision out of your hands, and the risk of leaving an actively compromised device on your network deemed acceptable, you take reasonable steps to monitor the device closely and ensure it cannot exfiltrate data.
Another scenario where remediation may not be appropriate is when you need to study and profile your adversary – and the malware and command and control apparatus they use – through direct observation. Obviously you need a sophisticated security program to undertake a detailed malware analysis process (as described in Malware Analysis Quant). But clearly understanding and identifying indicators of compromise can help you find other compromised devices, and deploy workarounds and other infrastructure protections such as IPS rules and HIPS signatures.
That said, in most cases you will just want to pull the device off the network as quickly as possible, take a forensic image, and then reimage it. That is usually the only way to ensure the device is clean before letting it back into the general population. If you are going to follow an observation scenario, however, both the scenario and its decision tree need to be documented and agreed on as part of your incident response plan.
With that we wrap up our Network-based Threat Detection series. We will be assembling this series into a white paper, and posting it in the Research Library soon. As always, if you see something here that doesn’t make sense or doesn’t reflect your experience or issues, please let us know in the comments. That kind of feedback makes our research more impactful.