Today we launch our next blog series, on a topic we believe is critical to success in today’s threat environment: network security analysis. It’s a rather grand and nebulous term, but consider this the next step on the path that started with Incident Response Fundamentals and continued with React Faster and Better.

The issues are pretty straightforward. We cannot assume we can stop the attackers, so we have to plan for a compromise. The difference between success and failure comes down to how quickly you can isolate the attack, contain the damage, and then remediate the issue. So we build our core security philosophy around monitoring critical networks and devices, which positions us to find the root cause of any attack.

Revisiting Monitor Everything

Back in early 2010, we published a set of Network Security Fundamentals, one of which was Monitor Everything. If you read the comments at the bottom of that post, you’ll see divergent opinions about what ‘everything’ means to different folks, but nobody really disagrees with broad monitoring as a core tenet of security nowadays. We can thank the compliance gods for that.

To understand the importance of monitoring everything, let’s excerpt some research I published back in early 2008 that is still relevant today.

New attacks are happening at a fast and furious pace. It is a fool’s errand to spend time trying to anticipate where the issues are. REACT FASTER first acknowledges that all attacks cannot be stopped. Thus, focus remains on understanding typical traffic and application usage trends and monitoring for anomalous behavior, which could indicate an attack. By focusing on detecting attacks earlier and minimizing damage, security professionals both streamline their activities and improve their effectiveness.

That post then discusses some data sources you can (and should) monitor, including firewalls, IDS/IPS, vulnerability scans, network flows, device configurations, and content security devices. But we are still looking at this data in terms of profiling what has happened, using that as a baseline, then watching for variations beyond tolerance and alerting when you see them.
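
To make the baseline-and-tolerance idea concrete, here is a minimal Python sketch for a single metric. The hourly byte counts, the three-standard-deviation tolerance, and the function names are all hypothetical; a real monitoring deployment profiles many metrics across many segments and applications.

    from statistics import mean, stdev

    def build_baseline(hourly_byte_counts):
        """Profile 'normal' traffic volume from historical observations."""
        return mean(hourly_byte_counts), stdev(hourly_byte_counts)

    def outside_tolerance(observed_bytes, baseline_mean, baseline_stdev, tolerance=3.0):
        """Flag the current hour if it deviates beyond tolerance (in standard deviations)."""
        if baseline_stdev == 0:
            return False
        return abs(observed_bytes - baseline_mean) / baseline_stdev > tolerance

    # A handful of 'normal' hourly observations, then one suspicious hour
    history = [1_200_000, 1_150_000, 1_300_000, 1_250_000, 1_180_000, 1_220_000]
    mu, sigma = build_baseline(history)
    if outside_tolerance(9_500_000, mu, sigma):
        print("Traffic volume outside tolerance - investigate")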

We still fundamentally believe in this approach. It’s clearly the place to start for most organizations, where any data is more than they collect today. But for maturing security organizations, let’s examine why logs are only the start.

Logs are not enough

Back when I was in the SIEM space, it was clear that event logs are a great basis for compliance reporting, because they effectively substantiate implemented controls. As long as the logs are not tampered with, at least. But when you are working to isolate a security issue, the logs tell you what happened, but lack the depth to truly understand how it happened. Isolating a security attack using log data requires having logs from all points in the path between attacker and target. If you aren’t capturing information from the application servers, databases, and applications themselves, visibility is severely impaired.

Contrast that against the ability to literally replay an attack from a full network packet capture. You could follow along as the attacker broke your stuff. See the path they took to traverse your network, the exploits they used to compromise devices, the data they exfiltrated, and how they covered their tracks by tampering with the logs. Of course this assumes you are capturing the right network traffic along the attacker’s path, and it might not be feasible to capture all traffic all the time. But if you implement a full network packet capture sandwich (as we described in the React Faster and Better series), your incident responders have much more information to work with. We’ll discuss how to deploy the technology to address some of these issues later in this series. Given that you need additional data to do your job, where should you look?
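
As a small illustration of what ‘replay’ looks like at the simplest level, the Python/scapy sketch below walks every packet a hypothetical suspect address sent, in capture order, and prints where it went and what it carried. The file name and address are made up; a real capture platform indexes this ahead of time rather than scanning a pcap linearly.

    from scapy.all import rdpcap, IP, TCP, Raw

    CAPTURE_FILE = "core_switch_span.pcap"   # hypothetical capture file
    SUSPECT_IP = "10.1.2.3"                  # hypothetical attacker foothold

    # Walk the suspect's traffic in capture order: targets, ports, and payloads
    for pkt in rdpcap(CAPTURE_FILE):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[IP].src == SUSPECT_IP:
            payload = bytes(pkt[Raw].load) if pkt.haslayer(Raw) else b""
            print(f"{float(pkt.time):.3f}  -> {pkt[IP].dst}:{pkt[TCP].dport}  {len(payload)} bytes")
            if payload:
                print("    ", payload[:80])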

The Network Doesn’t Lie

For the purposes of this discussion, let’s assume time starts at the moment an attacker gains a foothold in your network. That could be by compromising a device (through whatever means) already on the network, or by having a compromised device connect to the internal network. At that point the attacker is in the house, so the clock is ticking. What do they do next? An attacker will try to move through your environment to achieve their ultimate goal, whether that be compromising a specific data store or adding to their bot army, or whatever.

There are about a zillion specific things the attacker could do, and 99% of them depend on the network in some way. They can’t find other targets without using the network to locate them. They can’t attack a target without trying to connect to it, right? Furthermore, even if they are able to compromise the ultimate target, the attackers must then exfiltrate the data. So they will try to use the network to move the data.

They need the network, pure and simple. Which means they will leave tracks, but only if you are looking. This is why we favor (as described in React Faster and Better) capturing as much of the full network packet data as possible. Attackers could compromise network devices and delete log records. They could generate all sorts of meaningless traffic to confuse network behavioral analysis.

But they can’t alter the packet stream as it’s captured, which becomes the linchpin of the data you’ll collect to perform this advanced network security analysis.

Data is not information

But just collecting data isn’t enough. You need to use the data to draw conclusions about what’s happening in your environment. That requires indexing the data, supplementing and enriching it with additional context, alerting on the data, and then searching through the data to pursue an investigation. This is all technically demanding. Just capturing the full network packet stream requires a purpose-built data store, which does some black magic to digest and index network traffic at sufficient speed to provide usable, actionable information to shorten the exploit window.
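
The sketch below illustrates those steps in miniature: index flow records by destination, enrich them with asset context, and run an investigation-style search. Every name, address, and record here is hypothetical; a commercial full packet capture store does this at line rate against far richer data.

    from collections import defaultdict

    # Enrichment context: what each internal address actually is (hypothetical)
    ASSET_CONTEXT = {
        "10.0.5.20": {"role": "database", "owner": "finance"},
        "10.0.9.14": {"role": "workstation", "owner": "marketing"},
    }

    flow_index = defaultdict(list)   # index flow records by destination

    def index_flow(src, dst, dport, byte_count, first_seen):
        """Store one flow record, enriched with what we know about the destination."""
        record = {"src": src, "dst": dst, "dport": dport, "bytes": byte_count,
                  "first_seen": first_seen,
                  "dst_context": ASSET_CONTEXT.get(dst, {"role": "unknown"})}
        flow_index[dst].append(record)

    def search(dst, min_bytes=0):
        """Investigation query: everything that talked to dst and moved enough data."""
        return [r for r in flow_index[dst] if r["bytes"] >= min_bytes]

    index_flow("10.0.9.14", "10.0.5.20", 1433, 48_000_000, "2011-09-22T02:14:00Z")
    print(search("10.0.5.20", min_bytes=10_000_000))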

To get an idea of the magnitude of this challenge, note that many SIEM platforms struggle to handle 10,000-15,000 events per second. Here we are talking about capturing 10-100 Gbps of honest-to-goodness network traffic – not 100KB log records. Don’t try this on your SIEM or log aggregation device.
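
For back-of-envelope purposes only, here is what even the low end of that range implies for a capture store, using nothing but the 10 Gbps figure above:

    # Rough arithmetic on sustained capture at the low end of the range above
    gbps = 10
    bytes_per_second = gbps * 1_000_000_000 / 8      # 1.25 GB/s
    tb_per_hour = bytes_per_second * 3600 / 1e12     # ~4.5 TB per hour
    tb_per_day = tb_per_hour * 24                    # ~108 TB per day, before any indexing
    print(f"{tb_per_hour:.1f} TB/hour, {tb_per_day:.0f} TB/day at {gbps} Gbps")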

Leveraging the information

Most folks only think about enhanced data collection in terms of forensics because that is the obvious use case. We won’t neglect forensics, but let’s keep in mind other use cases for which full packet streams are invaluable. These include:

  • Improving Alerts: If you build out a specific threat model and enumerate it in a tool (as described in our Understanding and Selecting SIEM paper), alerts based on log data can point you in the right direction to determine whether you are under attack. But what if you could look for specific attack strings or parse database calls as they happen? You would have greater precision and detect attacks faster. It’s not easy, given the specificity of what you need to look for, but it’s possible – see the sketch after this list.
  • Malware Analysis: We have been talking for years about the folly of traditional anti-malware approaches, which are enforced on the endpoint. At that point it is generally too late to do anything but know which machines you need to clean up. What if you could look into network traffic and see known malware coming into the network? This requires knowing what to look for, but is much better than simply capturing events from your AV console – which tells you what already happened, rather than what is about to happen.
  • Breach Confirmation: Given the difficulty of isolating an attack preemptively, how do you know what really happened? From a copy of all traffic sent to a specific device, you can generally tell quickly whether you have a problem – as well as what’s wrong and how serious it is. Again, this is not available from traditional event or configuration logs.
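
To make the first two use cases a bit more tangible, here is a toy Python sketch that scans captured payloads for known attack strings or malware download patterns as traffic is collected. The signatures, addresses, and function names are illustrative only, not a real rule set, and a production system does this kind of inspection at line rate rather than with regular expressions.

    import re

    # Illustrative patterns only - real deployments rely on curated signature feeds
    SIGNATURES = {
        "sql_injection": re.compile(rb"union\s+select|or\s+1=1", re.IGNORECASE),
        "suspected_dropper": re.compile(rb"/dropper\.php\?id=|\x4d\x5a\x90\x00"),
    }

    def inspect_payload(payload, src, dst):
        """Return an alert record for every signature that matches this payload."""
        return [{"signature": name, "src": src, "dst": dst}
                for name, pattern in SIGNATURES.items() if pattern.search(payload)]

    # Example: a captured HTTP request heading toward a database front end
    sample = b"GET /report.php?id=1 UNION SELECT username,password FROM users"
    print(inspect_payload(sample, "192.0.2.10", "10.0.5.20"))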

We do not claim that capturing full network traffic is as good as a time machine – it doesn’t tell you what’s happening before it happens. Nor do we position full packet capture as an alternative to SIEM/Log Management. We believe in using both, because the objective is to close the exploit window as quickly as possible and contain the damage. Capturing network traffic is rapidly becoming a must-have capability for organizations which want to be more effective.

We will dig into the specifics of capturing full network traffic, the analysis needed to leverage it, and the additional use cases that really illuminate how valuable the full packet capture stream can be – not just when you are dealing with an incident, but in everyday practice.

We also should thank Solera Networks for sponsoring this research. As with all our blog series, we will use our Totally Transparent Research method to keep everything objective and above board, and we welcome comments on the posts – please help keep us honest.
