Network Security Fundamentals: Correlation

By Mike Rothman

In the last Network Security Fundamentals post, we talked about monitoring (almost) everything and how that drives a data/log aggregation and collection strategy. It’s great to have all that cool data, but now what?

That brings up the ‘C word’ of security: correlation. Most security professionals have tried and failed to get sufficient value from correlation relative to the cost, complexity, and effort involved in deploying the technology. Understandably, trepidation and skepticism surface any time you bring up the idea of real-time analysis of security data. As usual, it comes back to a problem with management of expectations.

First we need to define correlation: using more than one data source to identify patterns, because the information in any single data source is not enough to understand what is happening, or not enough to make a policy enforcement decision. In a security context, that means using log records (or other types of data) from more than one device to figure out whether you are under attack, what that attack means, and how severe it is.
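To make the definition concrete, here is a minimal sketch of cross-source correlation in Python. The record format, field names, and five-minute window are illustrative assumptions, not any particular SIEM's implementation:

```python
from datetime import datetime, timedelta

# Hypothetical, simplified log records; real SIEM inputs vary widely.
firewall_denies = [
    {"src": "10.0.0.5", "time": datetime(2010, 3, 1, 9, 0)},
    {"src": "10.0.0.5", "time": datetime(2010, 3, 1, 9, 1)},
]
auth_failures = [
    {"src": "10.0.0.5", "time": datetime(2010, 3, 1, 9, 2)},
]

def correlate(denies, failures, window=timedelta(minutes=5)):
    """Flag sources that appear in BOTH logs within the time window --
    the simplest form of multi-source correlation."""
    alerts = []
    for d in denies:
        for f in failures:
            if d["src"] == f["src"] and abs(f["time"] - d["time"]) <= window:
                alerts.append(d["src"])
    return sorted(set(alerts))

print(correlate(firewall_denies, auth_failures))  # ['10.0.0.5']
```

Neither log alone tells you much; together, a source that is both probing the firewall and failing logins is worth a look.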

The value of correlation is obvious. Unfortunately networks typically generate tens of thousands of data records an hour or more, which cannot be analyzed manually. So sifting through potentially millions of records and finding the 25 you have to worry about represents tremendous time savings. It also provides significant efficiencies when you understand threats in advance, since different decisions require different information sources. The technology category for such correlation is known as SIEM: Security Information and Event Management.

Of course, vendors had to come along and screw everything up by positioning correlation as the answer to every problem in security-land. Probably the cure for cancer too, but that’s beside the point. In fairness, end users enabled this behavior by hearing what they wanted to hear. A vendor said (and still says, by the way) they could set alerts which would tell the user when they were under attack, and we believed. Shame on us.

Ten years later, correlation is achievable, but it’s not cheap, easy, or comprehensive. If you implement correlation with awareness and realistic expectations, though, you can achieve real value.

Making Correlation Work 4 U

I liken correlation to how an IPS can and should be used. You have thousands of attack signatures available to your IPS. That doesn’t mean you should use all of them, or block traffic based on thousands of alerts firing. Once again, Pareto is your friend. Maybe 20% of your signatures should be implemented, focusing on the most important and common use cases that concern you and are unlikely to trigger many false positives. The same goes for correlation. Focus on the use cases and attack scenarios most likely to occur, and build the rules to detect those attacks. For the stuff you can’t anticipate, you’ve got the ability to do forensic analysis, after you’ve been pwned (of course).
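The "enable the focused 20%" idea can be expressed as a simple filter. The signature statistics and field names below are hypothetical, purely to illustrate the selection logic:

```python
# Hypothetical signature stats; a real IPS exports these differently.
# "relevance" = how much this attack type matters to us (an assumption),
# "false_positive_rate" = observed fraction of bogus alerts.
signatures = [
    {"name": "sql-injection", "relevance": 0.9, "false_positive_rate": 0.01},
    {"name": "ancient-worm", "relevance": 0.1, "false_positive_rate": 0.00},
    {"name": "generic-anomaly", "relevance": 0.8, "false_positive_rate": 0.40},
]

def enabled_rules(sigs, min_relevance=0.5, max_fp=0.05):
    """Keep only signatures that matter to us and rarely cry wolf."""
    return [s["name"] for s in sigs
            if s["relevance"] >= min_relevance
            and s["false_positive_rate"] <= max_fp]

print(enabled_rules(signatures))  # ['sql-injection']
```

The same pruning discipline applies to correlation rules: a small, relevant, low-noise rule set beats an exhaustive one.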

There is another more practical reason for being careful with the rules. Multi-factor correlation on a large dataset is compute intensive. Let’s just say a bunch of big iron was sold to drive correlation in the early days. And when you are racing the clock, performance is everything. If your correlation runs a couple days behind reality, or if it takes a week to do a forensic search, it’s interesting but not so useful. So streamlining your rule base is critical to making correlation work for you.

Defining Use Cases

Every SIEM/correlation platform comes with a bunch of out-of-the-box rules. But before you ever start fiddling with a SIEM console, you need to sit down in front of a whiteboard and map out the attack vectors you need to watch. Go back through your last 4-5 incidents and lay those out. How did the attack start? How did it spread? What data sources would have detected the attack? What kinds of thresholds need to be set to give you time to address the issue?
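One way to capture that whiteboard exercise is as structured use-case definitions, written down before you ever touch the console. A sketch, with purely hypothetical field names and values:

```python
# Hypothetical use-case records distilled from incident post-mortems;
# the schema is illustrative, not any vendor's format.
use_cases = [
    {
        "name": "brute-force VPN login",
        "attack_vector": "repeated auth failures from one source",
        "data_sources": ["vpn_concentrator", "auth_server"],
        "threshold": {"failures": 10, "window_minutes": 5},
        "success_criteria": "alert fires with time left to respond",
    },
]

def missing_sources(use_case, collected):
    """List data sources a use case needs that we aren't collecting yet --
    a quick gap check before building the rule."""
    return [s for s in use_case["data_sources"] if s not in collected]

print(missing_sources(use_cases[0], ["auth_server"]))  # ['vpn_concentrator']
```

Writing use cases this way also gives you the success criteria and documentation hooks you'll need later to justify the spend.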

If you don’t have this kind of data for your incidents, then you aren’t doing a proper post-mortem, but that’s another story. Suffice it to say 90% of the configuration work of your correlation rules should be done before you ever touch the keyboard. If you haven’t had any incidents, go and buy a lottery ticket – maybe you’ll hit it big before your number comes up at work and you are compromised.

A danger of not properly defining use cases is the inability to quantify the value of the product once implemented. Given the amount of resources required to get a correlation initiative up and running, you need all the justification you can get. The use cases strictly define what problem you are trying to solve, establish success criteria (in finding that type of attack), and provide the mechanism to document the attack once detected. Then your CFO will pipe down when he or she asks what you did with all that money.

Also be wary of vendor ‘consultants’ hawking lots of professional service hours to implement your SIEM. As part of the pre-sales proof of concept process, you should set up a bunch of these rules. And to be clear, until you have a decent dataset and can do some mining using your own traffic, paying someone $3,000 per day to set up rules isn’t the best use of their time or your money.

Gauging Effectiveness

Once you have an initial rule set, you need to start analyzing the data. Regardless of the tool, there will be tuning required, and that tuning takes time and effort. When the vendor says their tool doesn’t need tuning or can be fully operational in a day or week, don’t believe them.

First you need to establish your baselines. You’ll see patterns in the logs coming from your security devices and this will allow you to tighten the thresholds in your rules to only fire alerts when needed. A few SIEM products analyze network flow traffic and vulnerability data as well, allowing you to use that data to make your rules smarter based on what is actually happening on your network, instead of relying on generic rules provided as a lowest common denominator by your vendor.
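A minimal baselining sketch, assuming the common mean-plus-k-standard-deviations approach — an assumption here, since vendors implement baselining in many different ways:

```python
import statistics

# Hypothetical hourly event counts gathered during a tuning period.
hourly_counts = [120, 135, 118, 142, 130, 125, 128, 133]

def alert_threshold(baseline_counts, k=3.0):
    """Set the alert threshold at mean + k standard deviations of the
    baseline -- a simple, generic baselining heuristic (an assumption,
    not any specific SIEM's algorithm)."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.stdev(baseline_counts)
    return mean + k * stdev

def should_alert(count, baseline_counts, k=3.0):
    """Fire only when the observed count clears the tuned threshold."""
    return count > alert_threshold(baseline_counts, k)

print(should_alert(500, hourly_counts))  # True
print(should_alert(140, hourly_counts))  # False
```

Tightening `k` per rule is exactly the kind of tuning that takes time and traffic — which is why "fully operational in a day" claims don't hold up.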

For a deeper description of making correlation work, you should check out Rocky DeStefano’s two posts (SIEM 101 & SIEM 201) on this topic. Rocky has forgotten more about building a SOC than you probably know, so read the posts.

Putting the Security Analyst in a Box

I also want to deflate this idea that a SIEM/correlation product can provide a “security analyst in a box.” That is an old wives’ tale created by SIEM vendors in an attempt to justify their technology, versus adding a skilled human security analyst. Personally, I’ll take a skilled human who understands how things should look over a big honking correlation engine every time. To be clear, the data reduction and simple correlation capabilities of a SIEM can help make a human better at what they do – but cannot replace them. And any marketing that makes you think otherwise is disingenuous and irresponsible.

All that said, analysis of collected network security data is fundamental to any kind of security management, and as I dig into my research agenda through the rest of this year I’ll have lots more to say about SIEM/Log Management.


In a vegetarian town, a little red meat goes a long way. I live in Boulder and know quite a few vegan/veggie-inclined folk. Good people, but they lack pinkness in their cheeks. My red meat reference is twofold: first, I like it. Second, I think SIEM-based correlation is absolutely silly and a waste of time. Perhaps there are those among us who like combing through tens of thousands of events. There is on occasion, I admit, a need for these kinds of exercises. My position is that with a proactive perimeter packet-monitoring process, much of the correlation madness can be simplified and reduced without all the SIEM cycle burn. And for the record, I abhor what Michael Vick did, but I am willing to stand up for my beliefs and suffer rabid dog attacks from the acolytes of SIM/SIEM correlation. Correlating tens of thousands or millions or billions of records is like undergoing surgery without anesthesia. For me personally, “billions and billions of anything”, with all due respect to Carl Sagan (who I think would agree with me), pretty much commits one to a life sentence of the mundane. I hope that’s not too harsh. I also think all IRS agents, accountants, and tax attorneys should be set free from slavery.

I believe that there’s been so much hype, everyone is conditioned to bark when the correlation bell is rung. Maybe I’m the rabid one? Administrators should become intimately familiar with their networks. The only advantage they have over the ne’er-do-well is they know what SHOULD be on the network. Executives, for that matter, should stop regarding IT operations as a cost and start measuring it in bottom-line contribution. The potentates authorized the tech-tool investment, so why not expect a return and specify the use? “You can do this. You can’t do that. And if you do what you shouldn’t, here’s the consequence.”

Buying a box or series of boxes that alert when things go wrong creates a never-ending alarm-response continuum. When alarms sound it’s too late. Becoming familiar with the network is a more efficient way to identify anomalies. It should be a comfortable process, like sitting in your living room and reading a book. Familiarity means when someone moves your stuff you know about it. “Hey, who moved my stuff!” When machines match signatures, it only takes one degree right of the decimal point to cause problems. Mistakes can occur any time. Are alarm thresholds that go unmet, and slow trends that never trigger, unimportant just because the machine didn’t act? If one has a feel for their environment, these symptoms have texture and can be identified. “Somebody is moving my stuff and I know who!”

I think correlation has merit but not in the SIEM context. Linear correlation, by my way of thinking, is the purest and most sensible correlation strategy, and begins at the perimeter. All communication vectors, both in and out, should map to well-defined business processes. If a port serves no well-defined purpose it shouldn’t be open. If traffic is bouncing off the inside of the firewall, address the machines in question. A misconfiguration or compromise exists. A machine spewing packets, though blocked at the firewall, can spread its evil seed should the firewall port be turned on. It also burns resources.
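A rough sketch of this linear-correlation check — the port-to-process map, addresses, and field names are all made up for illustration:

```python
# Hypothetical mapping of allowed perimeter ports to business processes.
port_map = {25: "mail", 80: "web", 443: "web-ssl"}

observed_open_ports = [25, 80, 443, 3389]

def unjustified_ports(open_ports, allowed):
    """Ports open at the perimeter with no mapped business purpose."""
    return [p for p in open_ports if p not in allowed]

# Internal hosts whose outbound traffic the firewall keeps dropping --
# a sign of misconfiguration or compromise, per the argument above.
inside_denies = [
    {"src": "192.168.1.44", "dst_port": 6667},
    {"src": "192.168.1.44", "dst_port": 6667},
]

def noisy_internal_hosts(denies, min_hits=2):
    """Internal sources repeatedly bouncing off the inside of the firewall."""
    counts = {}
    for d in denies:
        counts[d["src"]] = counts.get(d["src"], 0) + 1
    return [host for host, n in counts.items() if n >= min_hits]

print(unjustified_ports(observed_open_ports, port_map))  # [3389]
print(noisy_internal_hosts(inside_denies))  # ['192.168.1.44']
```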

Every device has an associated correlation pattern of two-way communication including source and destination, frequency, amplitude, content, and impact. These are easy-to-grasp correlations, and when one becomes familiar with the LAN-scape, identifying outliers becomes second nature.

Next, authorized users. These represent users granted network access, from inside or out. Already we’ve jumped from SIEM to access control devices, illustrating how ugly the correlation game becomes. There will always be Internet scanning and probing. It’s an opportunistic thing. Minimizing one’s business risk includes keeping minimalist firewall rules AND watching the guests. Even a madam keeps an eye on her clientele… knowing that at any moment business can go south very quickly. I digress. Yes, monitoring use is necessary because there’s no way to claim 100% network security and compliance by deploying technology alone. Management is a process. There is NO substitute for “knowing.”

Everything on the network is communications, whether it’s a watchdog pulse, system command, Web page, FTP, or email. I’m suggesting that only authorized communications and content, which serve the business goals, should enter and leave the network. Today, this control concept is embodied in lots of proprietary gear including DLP, SWG, UTM, and XYZ. It’s just more of the same. Though there are huge complex networks where such alphabet solutions have appeal, ultimately someone has to look at the data and make a judgment call: yes or no, does it go?

Correlate we must, the bank shall not bust. Regular effort pays big, snort is the sound of a pig. To alarm is to react. To be proactive is to know your network and your users. Speak with them. Set expectations. Teach. It’s not about who has the bigger hammer. A small hammer in the hands of a skilled tinker can produce remarkable things. Big hammers, with a punitive ring, make lots of noise but sort of suck for fine detailed work. The correlation equation involves people where fine detail works best. It’s the malleable stuff that squeezes through all those hard machine controls. It’s only through observation and knowing that issues can be spotted and addressed before an alarm goes off.

Linear correlation is pretty effective based on my experience and a heck of a lot less resource intensive than SIEM stuff. But hey, just as some people abhor red meat, others like to suffer mind numbing amounts of network events. Go figure. To each their own, but for me I like Congruity Inspector linearity.


By smithwill
