As we dive back into the Threat Intelligence Program, we have summarized why a TI program is important and how to gather intelligence. Now we need a programmatic approach for using TI to improve your security posture and accelerate your response & investigation functions.

To reiterate (because it has been a few weeks since the last post), TI allows you to benefit from the misfortune of others, meaning it’s likely that other organizations will get hit with attacks before you, so you should learn from their experience. Like the old quote, “Wise men learn from their mistakes, but wiser men learn from the mistakes of others.” But knowing what’s happened to others isn’t enough. You must be able to use TI in your security program to gain any benefit.

First things first. We have plenty of security data available today. So the first step in your program is to gather the appropriate security data to address your use case. That means taking a strategic view of your data collection process, both internally (collecting your data) and externally (aggregating threat intelligence). As described in our last post, you need to define your requirements (use cases, adversaries, alerting or blocking, integrating with monitors/controls, automation, etc.), select the best sources, and then budget for access to the data.

This post will focus on using threat intelligence. First we will discuss how to aggregate TI, then how to use it to solve key use cases, and finally how to tune your ongoing TI gathering process to get maximum value from the TI you collect.

Aggregating TI

When aggregating threat intelligence the first decision is where to put the data. You need it somewhere it can be integrated with your key controls and monitors, and provide some level of security and reliability. Even better if you can gather metrics regarding which data sources are the most useful, so you can optimize your spending. Start by asking some key questions:

  • To platform or not to platform? Do you need a standalone platform or can you leverage an existing tool like a SIEM? Of course it depends on your use cases, and the amount of manipulation & analysis you need to perform on your TI to make it useful.
  • Should you use your provider’s portal? Each TI provider offers a portal you can use to get alerts, manipulate data, etc. Will it be good enough to solve your problems? Do you have an issue with some of your data residing in a TI vendor’s cloud? Or do you need the data to be pumped into your own systems, and how will that happen?
  • How will you integrate the data into your systems? If you do need to leverage your own systems, how will the TI get there? Are you depending on a standard format like STIX/TAXII? Do you expect out-of-the-box integrations? (A minimal TAXII pull is sketched below.)

Obviously these questions are pretty high-level, and you’ll probably need a couple dozen follow-ups to fully understand the situation.
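
For illustration, here is a minimal sketch of pulling STIX indicator objects from a TAXII 2.1 collection, assuming the open source taxii2-client library; the server URL, credentials, and collection path are hypothetical, and your provider's endpoint and authentication will differ.

```python
# Minimal sketch: pull STIX indicator objects from a TAXII 2.1 collection.
# Assumes the open source taxii2-client library (pip install taxii2-client);
# the URL and credentials below are hypothetical placeholders.
from taxii2client.v21 import Collection

collection = Collection(
    "https://taxii.example-provider.com/api1/collections/indicators/",
    user="your-account",
    password="your-api-key",
)

envelope = collection.get_objects()  # TAXII envelope (dict) of STIX objects
for obj in envelope.get("objects", []):
    if obj.get("type") == "indicator":
        # STIX indicators carry a pattern such as "[ipv4-addr:value = '203.0.113.7']"
        print(obj.get("name"), obj.get("pattern"))
```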

Selecting the Platform

In a nutshell, if you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should probably consider a separate intelligence platform or ‘clearinghouse’ to manage TI feeds. Assuming that’s the case, here are a few key criteria for selecting a stand-alone threat intelligence platform:

  1. Open: The TI platform’s task is to aggregate information, so it must be easy to get information into it. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX, which make integration relatively straightforward. But make sure any platform you select supports the data feeds you need, so you can use the data that’s important to you without being restricted by your platform.
  2. Scalable: You will use a lot of data in your threat intelligence process, so scalability is essential. But computational scalability is likely more important than storage scalability – you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof of concept testbed is a close match for your production environment, and that you can extrapolate how the platform will scale in your production environment.
  3. Search: Threat intelligence, like the rest of security, doesn’t lend itself to absolute answers. So make TI the beginning of your process of figuring out what happened in your environment, and leverage the data for your key use cases as we described earlier. One clear requirement for all use cases is search. Be sure your platform makes searching all your TI data sources easy.
  4. Scoring: Using Threat Intelligence is all about betting on which attackers, attacks, and assets are most important to worry about, so a flexible scoring mechanism offers considerable value. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate a useful urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program.
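
To make the scoring idea concrete, here is a minimal sketch of an urgency calculation combining asset criticality, source reliability, and attack severity; the weights and red/yellow/green thresholds are illustrative assumptions you would tune to your own program, not a standard.

```python
# Minimal sketch of a TI urgency score; the weights and thresholds are
# illustrative assumptions to be tuned for your own environment.

def urgency(asset_criticality: float, source_reliability: float,
            attack_severity: float) -> str:
    """Each input is normalized to 0.0-1.0; returns red/yellow/green."""
    score = (0.4 * asset_criticality +
             0.3 * source_reliability +
             0.3 * attack_severity)
    if score >= 0.7:
        return "red"
    if score >= 0.4:
        return "yellow"
    return "green"

# Example: a critical asset, a fairly reliable feed, a moderate attack.
print(urgency(asset_criticality=0.9, source_reliability=0.7, attack_severity=0.5))  # red
```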

Key Use Cases

Our previous research has focused on how to address these key use cases, including preventative controls (FW/IPS), security monitoring, and incident response. But a programmatic view requires expanding the general concepts around use cases into a repeatable structure, to ensure ongoing efficiency and effectiveness.

The general process to integrate TI into your use cases is consistent, with some variations we will discuss below under specific use cases.

  1. Integrate: The first step is to integrate the TI into the tools for each use case, which could be security devices or monitors. That may involve leveraging the management consoles of the tools to pull in the data and apply the controls. For simple TI sources such as IP reputation, this direct approach works well. For more complicated data sources you’ll want to perform some aggregation and analysis on the TI before updating rules running on the tools. In that case you’ll expect your TI platform to integrate with the tools.
  2. Test and Trust: The key concept here is trustable automation. You want to make sure any rule changes driven by TI go through a testing process before being deployed for real. That involves monitor mode on your devices, and ensuring changes won’t cause excessive false positives or take down any networks (in the case of preventative controls). Given the general resistance of many network operational folks to automation, it may be a while before everyone trusts automatic changes, so factor that into your project planning. (A sketch of this staged promotion follows this list.)
  3. Tuning via Feedback: In our dynamic world the rules that work today and the TI that is useful now will need to evolve. So you’ll constantly be tuning your TI and rulesets to optimize for effectiveness and efficiency. You are never done, and will constantly need to tune and assess new TI sources to ensure your defenses stay current.
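
As a sketch of the test-and-trust idea, the code below stages a TI-driven rule in monitor mode and promotes it to blocking only once its observed false positive rate stays under a threshold; the Rule structure, 1% threshold, and 7-day observation window are all illustrative assumptions.

```python
# Sketch of trustable automation: run a TI-driven rule in monitor mode first,
# and promote it to blocking only after it proves itself. The Rule class,
# 1% FP threshold, and 7-day window are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Rule:
    indicator: str          # e.g. an IP or domain from a TI feed
    mode: str = "monitor"   # "monitor" -> alert only, "block" -> enforce
    hits: int = 0
    false_positives: int = 0
    days_observed: int = 0

def review(rule: Rule, max_fp_rate: float = 0.01, min_days: int = 7) -> Rule:
    """Promote a monitor-mode rule to blocking once it has earned trust."""
    if rule.mode == "monitor" and rule.days_observed >= min_days and rule.hits > 0:
        if rule.false_positives / rule.hits <= max_fp_rate:
            rule.mode = "block"
    return rule

rule = Rule(indicator="203.0.113.7", hits=200, false_positives=1, days_observed=10)
print(review(rule).mode)  # "block": 0.5% FP rate over 10 days of monitoring
```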

From a programmatic standpoint, you can look back to our Applied Threat Intelligence research for granular process maps detailing how to integrate threat intelligence with each use case.

Preventative Controls

The idea when using TI within a preventative control is to use external data to identify what to look for before it impacts your environment. By ‘preventative’ we mean any control that is inline and can prevent attacks, not just alert. These include:

  • Network security devices: This category encompasses firewalls (including next-generation models) and Intrusion Prevention Systems. But you might also include devices such as web application firewalls, which operate at different levels in the stack but are inline and can block attacks.
  • Content security devices/services: Web and email filters can also function as preventative controls because they inspect traffic as it passes through, and can enforce policies to block attacks.
  • Endpoint security technologies: Protecting an endpoint is a broad mandate, and can include traditional endpoint protection (anti-malware) and newfangled advanced endpoint protection technologies such as isolation and advanced heuristics.

We want to use TI to block recognized attacks, but not crater your environment with false positives, or adversely impact availability.

So the greatest sensitivity, and the longest period of test and trust, will be for preventative controls. You only get one opportunity to take down your network with an automated TI-driven rule set, so make sure you are ready before you deploy blocking rules operationally.
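
As one hedged example of that caution, the sketch below filters TI indicators before they ever reach an inline device: it drops low-confidence and stale entries and refuses to block anything on an allowlist. The indicator fields, 80-point confidence floor, and 30-day freshness window are assumptions for illustration.

```python
# Sketch: filter TI indicators before generating inline block rules.
# The indicator fields, confidence floor, freshness window, and allowlist
# are all illustrative assumptions.
from datetime import datetime, timedelta, timezone

ALLOWLIST = {"198.51.100.10"}  # e.g. partners, critical internal services

def blockable(indicators):
    """Return only the indicator IPs safe to push as inline block rules."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    rules = []
    for ind in indicators:
        if ind["confidence"] < 80:        # too risky to block inline
            continue
        if ind["last_seen"] < cutoff:     # stale intel ages out
            continue
        if ind["ip"] in ALLOWLIST:        # never block business-critical hosts
            continue
        rules.append(ind["ip"])
    return rules

fresh = datetime.now(timezone.utc) - timedelta(days=2)
print(blockable([{"ip": "203.0.113.7", "confidence": 95, "last_seen": fresh}]))
```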

Security Monitoring

Our next case uses Threat Intelligence to make security monitoring more effective. As we’ve written countless times, security monitoring is necessary because you simply cannot prevent everything, so you need to get better and faster at responding. Improving detection is critical to effectively shortening the window between compromise and discovery.

Why is this better than just looking for well-established attack patterns like privilege escalation or reconnaissance, as we learned in SIEM school? The simple answer is that TI data represents attacks happening right now on other networks: attacks you otherwise wouldn’t see or know to look for until it’s too late. In a security monitoring context leveraging TI enables you to focus your validation/triage efforts, detect attacks faster and more effectively, and ultimately make better use of scarce resources, which need to be directed at the most important current risks.

  • Aggregate Security Data: The foundation for any security monitoring process is internal security data. So before you can worry about external threat intel, you need to enumerate devices to monitor in your environment, scope out the kinds of data you will get from them, and define collection policies and correlation rules. Once this data is available in a repository for flexible, fast, and efficient search and analysis you are ready to start integrating external data.
  • Security Analytics: Once the TI is integrated, you let the advanced math of your analytics engine do its magic, correlating and alerting on situations that warrant triage and possibly deeper investigation.
  • Action/Escalation: Once you have an alert, and have gathered data about the device and attack, you need to determine whether the device was actually compromised or the alert was a false positive. Once you verify an attack you’ll have a lot of data to send to the next level of escalation – typically an incident response process.
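
To illustrate the correlation step, here is a minimal sketch matching aggregated log events against a set of TI indicators; the event fields and indicator set are assumptions for illustration, and a real SIEM would correlate across many more fields (domains, hashes, URLs) at far larger scale.

```python
# Sketch: flag log events whose destination appears in a TI indicator set.
# Event fields and indicators are illustrative assumptions; a real SIEM
# correlates across many fields and sources at scale.
malicious_ips = {"203.0.113.7", "203.0.113.42"}   # loaded from your TI platform

events = [
    {"src": "10.0.0.5", "dst": "203.0.113.7", "action": "allowed"},
    {"src": "10.0.0.9", "dst": "93.184.216.34", "action": "allowed"},
]

alerts = [e for e in events if e["dst"] in malicious_ips]
for alert in alerts:
    print(f"TI hit: {alert['src']} -> {alert['dst']} ({alert['action']})")
```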

The margin for error is a bit larger when integrating TI into a monitoring context than a preventative control, but you still don’t want to generate a ton of false positives and have operational folks running around chasing them. Testing and tuning processes remain critical (are you getting the point yet?) to ensure that TI provides sustainable benefit instead of just creating more work.

Incident Response

Similar to the way threat intelligence helps with security monitoring, you can use TI to focus investigations on the devices most likely to be impacted, help identify adversaries, and lay out their tactics to streamline your response. Just to revisit the general steps of an investigation, here’s a high-level view of incident response:

  • Phase 1: Current Assessment: This involves triggering your process and escalating to the response team, then triaging the situation to figure out what’s really at risk. A deeper analysis follows to prove or disprove your initial assessment and figure out whether it’s a small issue or a raging fire.
  • Phase 2: Investigate: Once the response process is fully engaged you need to get the impacted devices out of harm’s way by quarantining them and taking forensically clean images for chain of custody. Then you can start to investigate the attack more deeply to understand your adversary’s tactics, build a timeline of the attack, and figure out what happened and what was lost.
  • Phase 3: Mitigation and Clean-up: Once you have completed your investigation you can determine the appropriate mitigations to eradicate the adversary from your environment and clean up the impacted parts of the network. The goal is to return to normal business operations as quickly as possible. Finally you’ll want a post-mortem after the incident is taken care of, to learn from the issues and make sure they don’t happen again.

The same concepts apply as in the other use cases. You’ll want to integrate the TI into your response process, typically looking to match indicators and tactics against specific adversaries to understand their motives, profile their activities, and get a feel for what is likely to come next. This helps you understand the level of mitigation necessary, and determine whether you need to involve law enforcement.
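
A minimal sketch of that matching step follows; the adversary profiles and indicator-to-adversary mapping are invented for illustration, since real attribution draws on much richer TI than simple indicator overlap.

```python
# Sketch: map indicators observed during an investigation to adversary
# profiles from your TI. The profiles below are invented for illustration.
ADVERSARY_PROFILES = {
    "group-alpha": {"indicators": {"203.0.113.7", "evil-updates.example"},
                    "typical_next_steps": "credential theft, lateral movement"},
    "group-beta":  {"indicators": {"198.51.100.99"},
                    "typical_next_steps": "data staging, exfiltration over DNS"},
}

def match_adversaries(observed: set) -> dict:
    """Return candidate adversaries ranked by overlapping indicator count."""
    overlap = {name: len(profile["indicators"] & observed)
               for name, profile in ADVERSARY_PROFILES.items()}
    return {k: v for k, v in sorted(overlap.items(), key=lambda i: -i[1]) if v}

print(match_adversaries({"203.0.113.7", "unrelated.example"}))  # {'group-alpha': 1}
```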

Optimizing TI Spending

The final aspect of the program for today’s discussion is the need to optimize which data sources you use – especially the ones you pay for. Your system should be tuned to normalize and reduce redundant events, so you’ll need a process to evaluate the usefulness of your TI feeds. Obviously you should avoid overlap when buying feeds, so understand how each intelligence vendor gets their data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? Categorize vendors by their tactics to help pick the best fit for your requirements.

Once the initial data sources are integrated into your platform and/or controls you’ll want to start tracking effectiveness. How many alerts are generated by each source? Are they legitimate? The key here is the ability to track this data, and if these capabilities are not built into the platform you are using, you’ll need to manually instrument the system to extract this kind of data. Sizable organizations invest substantially in TI data, and you want to make sure you get a suitable return on that investment.
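
As a simple sketch of the kind of instrumentation that helps here, the code below computes a true positive rate and a cost per useful alert for each feed; the feed names, counters, and costs are all hypothetical numbers for illustration.

```python
# Sketch: score each TI feed by how many of its alerts proved legitimate,
# and what each useful alert costs. All numbers below are hypothetical.
feeds = {
    "feed-a": {"annual_cost": 50_000, "alerts": 1200, "true_positives": 300},
    "feed-b": {"annual_cost": 80_000, "alerts": 4000, "true_positives": 80},
}

for name, f in feeds.items():
    tp_rate = f["true_positives"] / f["alerts"]
    cost_per_tp = f["annual_cost"] / f["true_positives"]
    print(f"{name}: {tp_rate:.0%} useful, ${cost_per_tp:,.0f} per useful alert")
```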

At this point you have a systematic program in place to address your key use cases with threat intelligence. But taking your TI program to the next level requires you to think outside your contained world. That means becoming part of a community to increase the velocity of your feedback loop, and be a contributor to the TI ecosystem rather than just a taker. So our next post will focus on how you can securely share what you’ve learned through your program to help others.
