Threat Detection Evolution: Quick Wins
As we wrap up this series on Threat Detection Evolution, we'll work through a quick scenario to illustrate how these concepts come together to improve your ability to detect attacks. Let's assume you work for a mid-sized super-regional retailer with 75 stores, 6 distribution centers, and an HQ. Your situation may be a bit different, especially if you work in a massive enterprise, but the general concepts are the same.

Each of your locations is connected via an Internet-based VPN that works well. You've been gradually upgrading the perimeter network at HQ and within the distribution centers by implementing NGFW technology and turning on IPS on those devices. Each store has a low-end security gateway that provides separate networks for internal systems (requiring domain authentication) and customer Internet access. There are minimal IT staff and capabilities outside HQ. A technology lead is identified for each location, but they can barely tell you which lights are blinking on the boxes, so the entire environment is built to be remotely managed.

In terms of other controls, the big project over the past year has been deploying whitelisting on all fixed-function devices in distribution centers and stores, including PoS systems and warehouse computers. Tuning the environment so whitelisting didn't break systems was a major undertaking, but after a period of bumpiness the technology is working well. The high-profile retail attacks of 2014 freed up budget for the whitelisting project, but aside from that your security program is right out of the PCI-DSS playbook: simple logging, vulnerability scanning, IPS, and AV deployed to pass your PCI assessment, but not much more.

Given the sheer number of breaches reported by retailer after retailer, you know that the fact you haven't suffered a successful compromise is mostly good luck. Getting ahead of PoS attacks with whitelisting has helped, but you've been doing this too long to assume you are secure. You know the simple logging and vulnerability scanning you do can easily be evaded, so you decide it's time to think more broadly about threat detection. But with so many different technologies and options, how do you get started? What do you do first?

Getting Started

The first step is always to leverage what you already have. The good news is that you've been logging and vulnerability scanning for years. The data isn't particularly actionable, but it's there, so you can start by aggregating it into a common place. Fortunately you don't need to spend a ton of money to aggregate your security data. Maybe it's a SIEM, or perhaps an offering that aggregates your security data in the cloud. Either way, you start by putting all your security data in one place, getting rid of duplicate records, and normalizing your data sources so you can run analysis against a common dataset (a minimal sketch of that normalization step appears below).

Once your data is in one place, you can start setting up alerts to detect common attack patterns. The good news is that all the aggregation technologies (SIEM and cloud-based monitoring) offer options. Some capabilities are more sophisticated than others, but you'll be able to get started with out-of-the-box capabilities. Even open source tools offer alerting rules to get you started, and security monitoring vendors invest significantly in research to define and optimize the rules that ship with their products. One of the most straightforward attack patterns to look for is privilege escalation after obvious reconnaissance. Yes, this is simple detection, but it illustrates the concept. Now that you have server and IPS logs in one place, you can look for increased network port scans (usually indicating reconnaissance) followed by privilege escalation on a server on one of the networks being scanned. This is a typical rule/policy that ships with a SIEM or security monitoring service, but you could just as easily build it into your own system to get started.
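To make the aggregation step concrete, here is a minimal sketch in Python of what normalization and deduplication look like. The source names and field mappings are hypothetical stand-ins for real feeds; a commercial SIEM or cloud service does this mapping for you, but the idea is the same: map heterogeneous records onto one schema and drop exact duplicates.

```python
import hashlib

# Two hypothetical raw feeds with different field names, standing in for
# real firewall and server authentication logs.
RAW = [
    ("firewall",    {"ts": "2015-07-01T10:02:11Z", "src": "10.1.4.7",
                     "dst": "10.2.0.5", "action": "deny"}),
    ("server_auth", {"time": "2015-07-01T10:03:40Z", "client_ip": "10.1.4.7",
                     "host": "10.2.0.5", "event_type": "priv_escalation"}),
    ("firewall",    {"ts": "2015-07-01T10:02:11Z", "src": "10.1.4.7",
                     "dst": "10.2.0.5", "action": "deny"}),  # forwarded twice
]

def normalize(source, record):
    """Map a raw record from a named source onto one common schema."""
    if source == "firewall":
        return {"timestamp": record["ts"], "src_ip": record["src"],
                "dst_ip": record["dst"], "event": record["action"]}
    if source == "server_auth":
        return {"timestamp": record["time"], "src_ip": record.get("client_ip", ""),
                "dst_ip": record["host"], "event": record["event_type"]}
    raise ValueError("unknown source: " + source)

def dedupe_key(event):
    """Identical events hash identically, so forwarded duplicates drop out."""
    return hashlib.sha256(repr(sorted(event.items())).encode()).hexdigest()

seen, normalized = set(), []
for source, record in RAW:
    event = normalize(source, record)
    key = dedupe_key(event)
    if key not in seen:              # keep only the first copy of each event
        seen.add(key)
        normalized.append(event)

print(len(normalized))  # 2 -- the duplicated firewall record was dropped
```

Once everything shares a schema, each downstream rule can be written once rather than once per log format.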
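And here is a minimal sketch of the reconnaissance-then-escalation correlation itself, run over normalized events like those above. The event names, the 24-hour window, and the sample records are all assumptions for illustration; a shipping SIEM rule would be considerably more nuanced.

```python
from datetime import datetime, timedelta
from ipaddress import ip_address, ip_network

WINDOW = timedelta(hours=24)   # assumed correlation window

# Hypothetical normalized events: an IPS scan detection, then a server
# audit-log entry showing escalation on a host inside the scanned range.
EVENTS = [
    {"timestamp": "2015-07-01T02:10:00Z", "event": "port_scan",
     "target_net": "10.2.0.0/24"},
    {"timestamp": "2015-07-01T09:45:00Z", "event": "priv_escalation",
     "host": "10.2.0.17"},
]

def ts(e):
    """Parse the ISO-style timestamp used in the sample events."""
    return datetime.strptime(e["timestamp"], "%Y-%m-%dT%H:%M:%SZ")

def correlate(events):
    """Yield an alert when privilege escalation follows a recent scan
    of the network that host sits on."""
    scans = [e for e in events if e["event"] == "port_scan"]
    for e in events:
        if e["event"] != "priv_escalation":
            continue
        host = ip_address(e["host"])
        for s in scans:
            in_net = host in ip_network(s["target_net"])
            recent = timedelta(0) <= ts(e) - ts(s) <= WINDOW
            if in_net and recent:
                yield ("ALERT: privilege escalation on %s after scan of %s"
                       % (e["host"], s["target_net"]))

for alert in correlate(EVENTS):
    print(alert)
```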
Odds are that once you start looking for these patterns you'll find something. Let's assume you don't, because you've done a good job so far on security fundamentals. After working through your first batch of alerts, the next step is to look for assets in your environment which you don't know about. That entails either active or passive discovery of devices on the network (both approaches are sketched below). Start by scanning your entire address space to see what's there. You probably shouldn't do that during business hours, but a habit of checking consistently, perhaps weekly or monthly, is helpful. In between active scans you can also passively listen for network devices sending traffic, by either looking at network flow records or deploying a passive scanning capability specifically to look for new devices.

Let's say you discover your development shop has been testing out private cloud technologies to make better use of hardware in the data center. The only reason you noticed was passive discovery of a new set of devices communicating with back-end datastores. Armed with this information, you can meet with that business leader to make sure they took proper precautions to deploy their systems securely.

Between alerts generated by your new rules and dealing with the technology initiative you didn't know about, you feel pretty good about your new threat detection capability. But you're still looking for things you already know to look for. What really scares you is what you don't know to look for.

More Advanced Detection

To find activity you don't know about, you first need to define normal for your environment; traffic that deviates from 'normal' is a good indicator of potential attack. Activity outliers are a good place to start, because network traffic and transaction flows tend to be reasonably stable in most environments. So you begin with anomaly detection, spending a week or so training your detection system and setting baselines for network traffic and system activity. Once you start getting alerts based on anomalies, you will spend a bit of time refining thresholds and reducing alert noise. This tuning period may be irritating, but it's a necessary evil to optimize the system (a sketch of the baseline-and-threshold idea follows the discovery examples below).
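First, the active side of discovery. This sketch assumes nmap is installed, that 10.0.0.0/16 stands in for your address space, and that known_assets.txt is a hypothetical one-IP-per-line inventory file; it runs a ping sweep and reports anything alive that isn't in inventory.

```python
import subprocess

# Assumed: nmap is installed, this address space is yours to scan, and
# known_assets.txt is a one-IP-per-line inventory file (both hypothetical).
ADDRESS_SPACE = "10.0.0.0/16"
KNOWN_ASSETS = "known_assets.txt"

def ping_sweep(cidr):
    """Return the set of live IPs from an nmap ping sweep (-sn: no port scan)."""
    out = subprocess.run(
        ["nmap", "-sn", cidr, "-oG", "-"],   # -oG -: greppable output to stdout
        capture_output=True, text=True, check=True,
    ).stdout
    live = set()
    for line in out.splitlines():
        if line.startswith("Host:") and "Status: Up" in line:
            live.add(line.split()[1])        # second token is the address
    return live

with open(KNOWN_ASSETS) as f:
    inventory = {line.strip() for line in f if line.strip()}

for ip in sorted(ping_sweep(ADDRESS_SPACE) - inventory):
    print("unknown device:", ip)             # investigate, then update inventory
```

Scheduling something like this off-hours via cron gives you the weekly or monthly cadence described above.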
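The passive side can be as simple as diffing flow records against the same inventory. Here the flows.csv export and its src_ip/dst_ip columns are assumptions; real flow collectors vary, but any of them can tell you which addresses are talking.

```python
import csv

# Assumed: flow records exported to flows.csv with src_ip/dst_ip columns,
# checked against the same hypothetical inventory file as the active sweep.
FLOWS = "flows.csv"
KNOWN_ASSETS = "known_assets.txt"

with open(KNOWN_ASSETS) as f:
    inventory = {line.strip() for line in f if line.strip()}

new_talkers = set()
with open(FLOWS, newline="") as f:
    for row in csv.DictReader(f):
        for ip in (row["src_ip"], row["dst_ip"]):
            if ip not in inventory:          # on the wire but not in inventory
                new_talkers.add(ip)

for ip in sorted(new_talkers):
    print("device seen in flows but not in inventory:", ip)
```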
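Finally, a minimal sketch of the baseline-and-threshold idea behind anomaly detection. Real products model far more than hourly byte counts, and the numbers here are invented, but the mechanics are the same: learn what normal looks like during the training period, then alert on large deviations.

```python
import statistics

# Hypothetical training data: total bytes per hour on one network segment,
# collected during the week-long baselining period.
baseline = [52e6, 48e6, 61e6, 55e6, 50e6, 58e6, 47e6, 53e6]

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)
THRESHOLD = 3.0   # assumed: alert beyond 3 standard deviations

def check(hourly_bytes):
    """Flag an hour whose traffic volume falls outside the baseline band."""
    score = abs(hourly_bytes - mean) / stdev
    if score > THRESHOLD:
        return ("ANOMALY: %.0f bytes is %.1f standard deviations from baseline"
                % (hourly_bytes, score))
    return None

print(check(54e6))    # None -- within the normal range
print(check(210e6))   # flagged -- say, a large unexpected outbound transfer
```

THRESHOLD is exactly the knob you turn during that tuning period: raising it cuts noise, at the cost of missing subtler deviations.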