As we wrap up our Applied Threat Intelligence series, we have already defined TI and worked our way through a number of the key use cases (security monitoring, incident response, and preventative controls) where TI can help improve your security program, processes, and posture. The last piece of the puzzle is building a repeatable process to collect, aggregate, and analyze the threat intelligence. This should include a number of different information sources, as well as various internal and external data analyses to provide context to clarify what the intel means to you.

As with pretty much everything in security, handling TI is not “set and forget”. You need to build a repeatable process to select data providers and continually reassess the value of those investments. You will need to focus on integration; as we described, data isn’t helpful if you can’t use it in practice. And your degree of comfort in automating processes based on threat intelligence will impact day-to-day operational responsibilities.

First you need to decide where the threat intelligence function will fit organizationally. Larger organizations tend to formalize an intelligence group, while smaller entities need to add intelligence gathering and analysis to the task lists of existing staff. Out of all the things that could land on a security professional’s plate, an intelligence research responsibility isn’t bad. It provides exposure to cutting-edge attacks and makes a difference in your defenses, so that’s how you should sell it to overworked staffers who don’t want yet another thing on their to-do lists.

But every long journey begins with the first step, so let’s turn our focus to collecting intel.

Gather Intelligence

Early in the intelligence gathering process you focused your efforts with an analysis of your adversaries. Who they are, what they are most likely to try to achieve, and what kinds of tactics they use to achieve their missions – you need to tackle all these questions. With those answers you can focus on intelligence sources that best address your probable adversaries. Then identify the kinds of data you need. This is where the previous three posts come in handy. Depending on which use cases you are trying to address you will know whether to focus on malware indicators, compromised devices, IP reputation, command and control indicators, or something else.

Then start shopping. Some folks love to shop, others not so much. But it’s a necessary evil; fortunately, given the threat intelligence market’s recent growth, you have plenty of options. Let’s break down a few categories of intel providers, with their particular value:

  • Commercial: These providers employ research teams to perform proprietary research, and tend to attain high visibility by merchandising findings with fancy exploit names and logos, spy thriller stories of how adversary groups compromise organizations and steal data, and shiny maps of global attacks. They tend to offer particular strength regarding specific adversary classes. Look for solid references from your industry peers.
  • OSINT: Open Source Intelligence (OSINT) providers specialize in mining the huge numbers of information security sources available on the Internet. Their approach is all about categorization and leverage because there is plenty of information available free. These folks know where to find it and how to categorize it. They normalize the data and provide it through a feed or portal to make it useful for your organization. As with commercial sources, the question is how valuable any particular source is to you. You already have too much data – you only need providers who can help you wade through it.
  • ISAC: There are many Information Sharing and Analysis Centers (ISAC), typically built for specific industries, to communicate current attacks and other relevant threat data among peers. As with OSINT, quality can be an issue, but this data tends to be industry specific so its relevance is pretty well assured. Participating in an ISAC obligates you to contribute data back to the collective, which we think is awesome. The system works much better when organizations both contribute and consume intelligence, but we understand there are cultural considerations. So you will need to make sure senior management is okay with it before committing to an ISAC.

Another aspect of choosing intelligence providers is figuring out whether you are looking for generic or company-specific information. OSINT providers are more generic, while commercial offerings can go deeper – various ‘Cadillac’ offerings even include analysts dedicated specifically to your organization, proactively searching grey markets, carder forums, botnets, and other places for intelligence relevant to you.

Managing Overlap

With disparate data sources it is a challenge to ensure you don’t waste time on multiple instances of the same alert. One key to determining overlap is an understanding of how the intelligence vendor gets their data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to help you pick the best for your requirements.
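A quick way to get a feel for overlap during a trial is to normalize indicators from each feed and measure how much of one feed the other already covers. The sketch below uses made-up feed contents and a trivial normalizer; real feeds would be parsed from their own formats (CSV, JSON, STIX) into plain indicator strings first.

```python
# Sketch: estimate indicator overlap between two feeds before paying for both.

def normalize(indicator: str) -> str:
    """Canonicalize an indicator so trivial differences don't hide overlap."""
    return indicator.strip().lower().rstrip(".")  # e.g. trailing dot on domains

def overlap_ratio(feed_a, feed_b) -> float:
    """Fraction of the smaller feed already covered by the other (0.0-1.0)."""
    a = {normalize(i) for i in feed_a}
    b = {normalize(i) for i in feed_b}
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Hypothetical trial data from two providers
feed_a = ["evil.example.com.", "198.51.100.7", "BadDomain.example.net"]
feed_b = ["evil.example.com", "203.0.113.9", "baddomain.example.net"]
print(overlap_ratio(feed_a, feed_b))  # 2 of the 3 smaller-feed entries overlap
```

A high ratio suggests the second feed adds little beyond what you already get, which is exactly the question to answer before renewing.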

To choose between vendors you need to compare their services for comprehensiveness, timeliness, and accuracy. Sign up for trials of a number of services and monitor their feeds for a week or so. Does one provider consistently identify new threats earlier? Is their information correct? Do they provide more detailed and actionable analysis? How easy will it be to integrate their data into your environment for your use cases?
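Timeliness is the easiest of these criteria to quantify during a trial: record when each provider first published each indicator, then count how often each provider was earliest. The timestamps and provider names below are made up for illustration.

```python
from datetime import datetime

# Sketch: count how often each provider published a shared indicator first.
first_seen = {
    "evil.example.com": {"ProviderA": datetime(2015, 3, 1, 9, 0),
                         "ProviderB": datetime(2015, 3, 1, 14, 30)},
    "198.51.100.7":     {"ProviderA": datetime(2015, 3, 2, 8, 0),
                         "ProviderB": datetime(2015, 3, 2, 7, 15)},
    "bad.example.net":  {"ProviderA": datetime(2015, 3, 3, 10, 0),
                         "ProviderB": datetime(2015, 3, 3, 16, 0)},
}

wins = {}
for times in first_seen.values():
    earliest = min(times, key=times.get)       # provider with earliest sighting
    wins[earliest] = wins.get(earliest, 0) + 1
print(wins)  # {'ProviderA': 2, 'ProviderB': 1}
```

Over a week-long trial this gives you a defensible number to put next to each vendor, instead of a gut feel.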

Don’t fall for marketing hyperbole about proprietary algorithms, Big Data analysis, or staff linguists penetrating hacker dens and other stories straight out of a spy novel. It all comes down to data, and how useful it is to your security program. Buyer beware, and make sure you put each intelligence provider through its paces before you commit.

Our last point to stress is the importance of short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many of these intelligence companies are startups, and might not be around in 3 or 4 years. Once you identify a set of core intelligence feeds, longer deals can be cut, but we recommend against doing that before your TI process matures and your intelligence vendor establishes a track record addressing your needs.

To Platform or Not to Platform

Now that you have chosen intelligence feeds, what will you do with the data? Do you need a stand-alone platform to aggregate all your data? Will you need to stand up yet another system in your environment, or can you leverage something in the cloud? Will you actually use your intelligence providers’ shiny portals? Or do you expect alerts to show up in your existing monitoring platforms, or be sent via email or SMS?

There are many questions to answer as part of operationalizing your TI process. First you need to figure out whether you already have a platform in place. Existing security providers (specifically SIEM and network security vendors) now offer threat intelligence ‘supermarkets’ to enable you to easily buy and integrate data into their monitoring and control environments. Even if your vendors don’t offer a way to easily buy TI, many support standards such as STIX and TAXII to facilitate integration.
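To make the integration point concrete: STIX content is structured data, so pulling indicators out of it is straightforward. The minimal sketch below extracts indicator patterns from a STIX 2.x-style JSON bundle using only the standard library; the bundle is a toy example, not real vendor output, and a production integration would use a proper STIX library.

```python
import json

# Toy STIX 2.x-style bundle: one indicator and one malware object.
bundle = json.loads("""
{
  "type": "bundle",
  "id": "bundle--0001",
  "objects": [
    {"type": "indicator",
     "id": "indicator--0001",
     "pattern": "[domain-name:value = 'evil.example.com']",
     "valid_from": "2015-03-01T00:00:00Z"},
    {"type": "malware",
     "id": "malware--0001",
     "name": "ExampleRAT"}
  ]
}
""")

# Keep only indicator objects and pull out their detection patterns.
patterns = [obj["pattern"] for obj in bundle["objects"]
            if obj["type"] == "indicator"]
print(patterns)  # ["[domain-name:value = 'evil.example.com']"]
```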

We are focusing on applied threat intelligence, so the decision hinges on how you will use the threat intelligence. If you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should probably consider a separate intelligence platform or ‘clearinghouse’ to manage TI feeds.

Selecting the Platform

If you decide to look at stand-alone threat intelligence platforms there are a couple key selection criteria to consider:

  1. Open: The TI platform’s task is to aggregate information, so it must be easy to get information into it. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX which make integration relatively straightforward. But make sure any platform you select supports the data feeds you need – you should be able to use the data that’s important to you, without restriction by your platform.
  2. Scalable: You will use a lot of data in your threat intelligence process, so scalability is important. But computational scalability is likely more important – you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof of concept is a close match to your production environment.
  3. Search: Threat intelligence (like the rest of security) doesn’t lend itself to absolute answers. So make TI the start of your process of figuring out what happened in your environment, and leverage the data for your particular use cases as we described earlier. One clear requirement, for all use cases, is search. So make sure your platform makes it easy to search all your TI data sources.
  4. Urgency Scoring: Applied Threat Intelligence is all about betting on which attackers, attacks, and assets are the most important to worry about, so you will find considerable value in a flexible scoring mechanism. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate an urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program.
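To illustrate the last criterion, here is a minimal sketch of a red/yellow/green urgency score combining the three factors named above. The weighting (multiplying three 1–3 factors) and the thresholds are illustrative choices, not a standard; tune them to your own program’s sophistication.

```python
# Sketch: a simple traffic-light urgency score from three 1-3 factors.

def urgency(asset_criticality: int, source_reliability: int,
            attack_severity: int) -> str:
    """Each input runs 1 (low) to 3 (high); returns 'red'/'yellow'/'green'."""
    score = asset_criticality * source_reliability * attack_severity  # 1-27
    if score >= 18:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"

print(urgency(3, 3, 2))  # critical asset + reliable feed -> 'red'
print(urgency(1, 2, 2))  # low-value asset -> 'green'
```

Even a coarse score like this lets analysts work the queue top-down instead of chronologically.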

Determining Relevance in the Heat of Battle

So how can you actually use the threat intelligence you painstakingly collected and aggregated? Relevance to your organization depends on the specifics of the threat, and whether it can be used against you. Focus on real potential exploits – a vulnerability which does not exist in your environment is not a real concern. For example you probably don’t need to worry about financial malware if you don’t hold or have access to credit card data. That doesn’t mean you shouldn’t pay any attention to these attacks – many exploits leverage a variety of interesting tactics, which might become a part of a relevant attack in the future. Relevance encompasses two aspects:

  1. Attack surface: Are you vulnerable to the specific attack vector? Weaponized Windows 2000 exploits aren’t relevant if you don’t have any Windows 2000 systems. Once you have patched all instances of a specific vulnerability on your devices you get a respite from worrying about the exploit. Your asset base and internally collected vulnerability information provide this essential context. Note that attack surface may not be restricted to your own assets and environment. Service providers, business partners, and even customers represent indirect risks – if one of them is compromised, an attacker might have a direct path to you.
  2. Intelligence Reliability: You need to keep re-evaluating each threat intelligence feed to determine its usefulness. A feed which triggers many false positives is less relevant. On the other hand, if a feed usually nails a certain type of attack, you should take those warnings particularly seriously.
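The attack surface test lends itself to automation: cross-reference incoming indicators against your asset inventory and drop the ones that can’t apply to you. The sketch below uses hypothetical field names (`platform`, `ioc`) and a toy inventory; real indicator records and inventories will look different.

```python
# Sketch: filter out indicators whose target platform isn't in your inventory.

inventory = {"windows-2012", "rhel-6", "ubuntu-14.04"}  # hypothetical assets

indicators = [
    {"id": "ind-1", "platform": "windows-2000", "ioc": "203.0.113.9"},
    {"id": "ind-2", "platform": "rhel-6", "ioc": "evil.example.com"},
]

# Keep only indicators that could actually hit something you run.
relevant = [i for i in indicators if i["platform"] in inventory]
print([i["id"] for i in relevant])  # ['ind-2']
```

Filtered-out indicators shouldn’t necessarily be discarded – park them, since tactics from irrelevant attacks have a way of showing up in relevant ones later.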

Constantly Evaluating Intelligence

How can you determine the reliability of a TI source? Threat data ages very quickly and TI sources such as IP reputation can change hourly. Any system you use to aggregate threat intelligence should be able to report on the number of alerts generated from each TI source, without hurting your brain building reports. These reports show value from your TI investment – it is a quick win if you can show how TI identified an attack earlier than you would have detected it otherwise. Additionally, if you use multiple TI vendors, these reports enable you to compare them based on actual results.
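The per-source report described above can be as simple as counting alerts and confirmed detections per feed. This sketch assumes a hypothetical alert log where each alert records its TI source and an analyst verdict; the feed names and verdicts are made up.

```python
from collections import Counter

# Sketch: per-feed alert volume vs. confirmed (true positive) detections.
alerts = [
    {"source": "FeedA", "verdict": "true_positive"},
    {"source": "FeedA", "verdict": "false_positive"},
    {"source": "FeedB", "verdict": "true_positive"},
    {"source": "FeedB", "verdict": "true_positive"},
]

totals = Counter(a["source"] for a in alerts)
hits = Counter(a["source"] for a in alerts if a["verdict"] == "true_positive")

for source in totals:
    print(f"{source}: {hits[source]}/{totals[source]} alerts confirmed")
# FeedA: 1/2 alerts confirmed
# FeedB: 2/2 alerts confirmed
```

Run this regularly rather than once – threat data ages quickly, so a feed that earned its keep last quarter may not this quarter.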

Marketing Success Internally

Over time, as with any security discipline, you will refine your verification/validation/investigation process. Focus on what worked and what didn’t, and tune your process accordingly. It can be bumpy when you start really using TI sources – typically you start by receiving a large number of alerts, and following them down a bunch of dead ends. It might remind you, not so fondly, of the SIEM tuning process. But security is widely regarded as overhead, so you need a Quick Win with any new security technology.

TI will find stuff you didn’t know about and help you get ahead of attacks you haven’t seen yet. But that success story won’t tell itself, so when the process succeeds – likely early on – you will need to publicize it early and often. A good place to start is discovery of an attack in progress. You can show how you successfully detected and remediated the attack thanks to threat intelligence. This illustrates that you will be compromised (which must be constantly reinforced to senior management), so success is a matter of containing damage and preventing data loss. The value of TI in this context is in shortening the window between exploit and detection.

You can also explain how threat intelligence helped you evolve security tactics based on what is happening to other organizations. For instance, if you see what looks like a denial of service (DoS) attack on a set of web servers, but already know from your intelligence efforts that DoS is a frequent decoy to obscure exfiltration activities, you have better context to be more sensitive to exfiltration attempts. Finally, to whatever degree you quantify the time you spend remediating issues and cleaning up compromises, you can show how much you saved by using threat intelligence to refine efforts and prioritize activities.

As we have discussed throughout this series, threat intelligence can even the battle between attackers and defenders, to a degree. But to accomplish this you must be able to gather relevant TI and leverage it in your processes. We have shown how to use TI both a) at the front end of your security process (in preventative controls) to disrupt attacks, and b) to more effectively monitor and investigate attacks – both in progress and afterwards. We don’t want to portray any of this as ‘easy’, but nothing worthwhile in security is easy. It is about constantly improving your processes to favorably impact your security posture.