In our last post, New Data for New Attacks, we delved into the types of data we want to systematically collect, through both log record aggregation and full packet capture. As we’ve said many times, data isn’t the issue – it’s the lack of actionable information for prioritizing our efforts. That means we must more effectively automate analysis of this data and draw the proper conclusions about what is at risk and what isn’t.

Automate = Tools

As much as we always like to start with process (since that’s where most security professionals fail), automation is really about tools. And there are plenty of tools to bring to bear on setting alerts to let you know when something is funky. You have firewalls, IDS/IPS devices, network monitors, server monitors, performance monitors, DLP, email and web filtering gateways … and that’s just the beginning. In fact there is a way to monitor everything in your environment. Twice. And many organizations pump all this data into some kind of SIEM to analyze it, but this continues to underscore that we have too much of the wrong kind of data, at least for incident response.

So let’s table the tools discussion for a few minutes and figure out what we are really looking for…

Threat Modeling

Regardless of the tool being used to fire alerts, you need to 1) know what you are trying to protect; 2) know what an attack on it looks like; and 3) understand relative priorities of those attacks. Alerts are easy. Relevant alerts are hard. That’s why we need to focus considerable effort early in the process on figuring out what is at risk and how it can be attacked.

So we will take a page from Security 101 and spend some time building threat models. We’ve delved into this process in gory detail in our Network Security Operations Quant research, so we won’t repeat it all here, but these are the key steps:

  1. Define what’s important: First you need to figure out what critical information/applications will create the biggest issues if compromised.
  2. Model how it can be attacked: It’s always fun to think like a hacker, so put on your proverbial black hat and think about ways to exploit and compromise the first (most critical) asset you just identified.
  3. Determine the data those attacks would generate: Those attacks will result in specific data patterns that you can look for using your analysis tools. This isn’t always an attack signature – it may be the effect of the attack, such as excessive data egress or bandwidth usage.
  4. Set alert thresholds: Once you establish the patterns, figure out when to actually trigger an alert. This is an art, and most organizations start with fairly broad thresholds, knowing they will generate more alerts initially (a minimal sketch of one such threshold appears after this list).
  5. Optimize thresholds: Once your systems start hammering you with alerts, you’ll be able to tune the system by tightening the thresholds to focus on real alerts and increase the signal-to-noise ratio.
  6. Repeat for next critical system/data: Each critical information source/application will have its own set of attacks to deal with. Once you’ve modeled one, go back and repeat the process. You can’t do everything at once, so don’t even try. Start with the most critical stuff, get a quick win, and then expand use of the system.
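
To make step 4 concrete, here is a minimal sketch of what a single threat-model scenario might look like once it is expressed as an alert rule, using excessive data egress from a critical database as the example. The field names, host name, and 500 MB threshold are illustrative assumptions, not recommendations, and your SIEM or monitoring tool will have its own rule syntax.

```python
# Hypothetical sketch: a threshold alert for one threat-model scenario --
# excessive data egress from a critical database segment. All names and
# numbers here are assumptions for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class EgressSample:
    host: str            # monitored asset, e.g. the primary customer database
    bytes_out: int       # bytes sent off-segment during the sample window
    window_minutes: int  # length of the sample window

# Start with a deliberately broad threshold (step 4), then tighten it
# as you learn what normal looks like (step 5).
EGRESS_THRESHOLD_BYTES = 500 * 1024 * 1024   # assumed 500 MB per window

def evaluate(sample: EgressSample) -> Optional[str]:
    """Return an alert message if the sample exceeds the egress threshold."""
    if sample.bytes_out > EGRESS_THRESHOLD_BYTES:
        return (f"ALERT: {sample.host} sent {sample.bytes_out / 2**20:.0f} MB "
                f"off-segment in {sample.window_minutes} minutes")
    return None

if __name__ == "__main__":
    # A 750 MB burst in 15 minutes trips the (intentionally broad) threshold.
    print(evaluate(EgressSample("db-custdata-01", 750 * 1024 * 1024, 15)))
```

Tightening that threshold (or making it time-of-day aware) is exactly the optimization work described in step 5.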

Keep in mind that the larger your environment, the more intractable modeling everything becomes. You will never know where all the sensitive stuff is. Nor can you build a threat model for every known attack. That’s why the idea underlying all our research is to determine what’s really important and work hard to protect those resources.

Once the threat models are built, our monitoring tool(s) – which include element managers, analysis tools like SIEM, and even content monitoring tools like DLP – can (and should) be configured to alert on the scenarios in those models.

More Distant Early Warning

We wish the threat models could be comprehensive, but inevitably you’ll miss something – accept this. Fortunately there are other places to glean useful intelligence, which can be factored into your analysis and potentially surface attacks your threat models don’t cover.

  1. Baselines: Depending on the depth of your monitoring, you can and should establish baselines for your critical assets. That could mean network activity on protected segments (using NetFlow), or perhaps transaction types (SQL queries on a key database), but you need some way to define normal for your environment. Then you can start alerting on activities that deviate from normal (a minimal sketch appears after this list).
  2. Vendor feeds: These feeds come from your vendors – mostly IDS/IPS – because they have research teams tasked with staying on top of emerging attacks. Admittedly this is reactive and built on known attacks, but the vendors spend significant resources keeping their tools current. Keep in mind you’ll want to tailor these signatures to your organization/industry – obviously you don’t need to look for SCADA attacks if you don’t have those control systems, but deciding which signatures you do need takes a bit more work.
  3. Intelligence sharing: Larger organizations see a wide variety of stuff, mostly because they are frequently targeted and have the staff to see attack patterns. Many of these folks do a little bit of co-opetition and participate in sharing groups (like FS-ISAC) to leverage each other’s experiences. This could be a formal deal or just informal conversations over beers every couple weeks. Either way, it’s good to know what other peer organizations are seeing.
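
For the baseline idea above, here is a minimal sketch, assuming you can get per-window byte counts for a protected segment (from NetFlow or anything similar). The window size, history length, and 3-sigma rule are assumptions to illustrate the approach, not tuned values.

```python
# Hypothetical sketch of a simple baseline: track per-window NetFlow byte
# counts for a protected segment and flag windows that deviate sharply from
# recent history. Window sizes and the 3-sigma rule are assumed for
# illustration, not tuned recommendations.

from collections import deque
from statistics import mean, stdev

class SegmentBaseline:
    def __init__(self, history_windows: int = 96, sigmas: float = 3.0):
        # e.g. 96 x 15-minute windows = roughly one day of history
        self.history = deque(maxlen=history_windows)
        self.sigmas = sigmas

    def observe(self, bytes_seen: int) -> bool:
        """Record a new window; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 20:  # need some history before judging "normal"
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(bytes_seen - mu) > self.sigmas * sigma:
                anomalous = True
        self.history.append(bytes_seen)
        return anomalous

if __name__ == "__main__":
    baseline = SegmentBaseline()
    # Quiet history, then a sudden spike that should stand out.
    for window_bytes in [100_000, 120_000] * 15 + [5_000_000]:
        if baseline.observe(window_bytes):
            print(f"Anomalous window: {window_bytes} bytes")
```

The same pattern works for other baselines, such as counting distinct query types against a key database per window.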

The point is that there are many places to leverage data and generate alerts. No one information source can identify all emerging attacks. You’re best served by using many, then establishing a method to prioritize alerts which warrant investigation.

Visualization

Just about every organization – particularly large enterprises – generates more alerts than it has the capability to investigate. If you don’t, there’s a good chance you aren’t alerting enough. So prioritization is a key skill that governs success or failure. We generally advocate tiered response, where a first tier of analysts handles the initial alerts. Then additional tiers of experts come into play when needed, depending on the sophistication and criticality of the attack.

But how do you figure out what warrants a response in the first place, and where to look for answers? One key tool we’ve seen for prioritization is alert visualization. It could be a topology map to pinpoint areas of the network under attack, or alert categorization by class of device (good for recognizing something like a weaponized Windows exploit making the rounds). The idea is to have a mechanism to detect patterns which would not be obvious simply from scanning the event/alert stream.
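
As a small illustration of the categorization idea, here is a hedged sketch that rolls alerts up by device class and signature, so a cluster (say, many Windows servers firing the same signature) stands out before anyone reads the raw stream. The alert fields and class names are made up for the example.

```python
# Hypothetical sketch: aggregate alerts by device class and signature so
# clusters jump out before anyone scans the raw event stream. Field names
# and sample alerts are invented for illustration.

from collections import Counter

alerts = [
    {"device": "win-fs-01", "device_class": "windows_server", "signature": "smb_exploit"},
    {"device": "win-fs-02", "device_class": "windows_server", "signature": "smb_exploit"},
    {"device": "core-rtr-1", "device_class": "router", "signature": "config_change"},
]

by_class = Counter((a["device_class"], a["signature"]) for a in alerts)
for (device_class, signature), count in by_class.most_common():
    print(f"{device_class:>16} {signature:<16} {count} alerts")
```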

We’ll deal with visualization and how to prioritize and escalate response later in this series when we roll up our sleeves and start dealing with potential incidents.
