In New Data for New Attacks we discussed why there is usually too much data early in the process. Then we talked about leveraging the right data to alert and trigger the investigative process. But once the incident response process kicks in, too much data is rarely the problem, so now let’s dig deeper into the most useful data for the initial stages of incident response. At this early stage, when we don’t yet know what we are dealing with, it’s all about triaging the problem. That usually means confirming the issue with additional data sources and isolating the root cause.
We assume that at this stage of investigation a relatively unsophisticated analyst is doing the work. So these investigation patterns can and should be somewhat standard and based on common tools. At this point the analyst is trying to figure out what is being attacked, how the attack is happening, how many devices are involved, and ultimately whether (and what kind of) escalation is required.
Once you understand the general concept behind the attack, you can dig a lot deeper with cool forensics tools. But at this point we are trying to figure out where to dig. The best way to stage this discussion is to focus on the initial alert, and then on the kinds of data that would validate the issue and provide the what, how, and how many answers we need at this stage. There are plenty of places we might see the first alert, so let’s go through each in turn.
Network
If one of your network alerts fires, what then? It becomes all about triangulating the data to pinpoint which devices are in play and what the attack is doing. The process below isn’t comprehensive, but it represents the kinds of additional data you’d look for and why.
- Attack path: The first thing you’ll do is check out the network map and figure out whether there is a geographic or segment focus to the network alerts. Basically you are trying to figure out what is under attack and how. Is this a targeted attack, where only specific addresses are generating the funky network traffic? Or is it reconnaissance that may indicate some kind of worm proliferating? Or is it command and control traffic, which might indicate zombies or persistent attackers? (A minimal bucketing sketch appears after this list.)
- Device events/logs/configurations: Once we know which IP addresses are in play, we can dig into those specific devices and figure out what is happening and/or what changed. At this stage of investigation we are looking for obvious stuff: new accounts, new executables, or configuration changes that indicate some kind of issue with the device. For the sake of both automation and integrity, this data tends to be centrally stored in one or more system management platforms (SIEM, CMDB, Endpoint Protection Platform, Database Activity Monitor, etc.).
- Egress path and data: Finally, we want to figure out what information is leaving your network and (presumably) going into the hands of the bad guys, and how. While we aren’t concerned with a full analysis of every line item, we want a general sense of what’s headed out the door and an understanding of how it’s being exfiltrated.
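To make the attack path triage concrete, here is a minimal sketch of the kind of bucketing a first-level analyst (or a script in the playbook) might run over network alerts. The alert records, the internal 10.x addressing convention, and the thresholds are all hypothetical assumptions; the point is simply to separate targeted traffic, reconnaissance, and command and control patterns.

```python
# Hypothetical triage sketch: bucket network alerts by source to see
# whether the traffic looks targeted, like reconnaissance, or like
# command and control / exfiltration. Records and thresholds are made up.
from collections import Counter

alerts = [
    {"src": "10.1.1.5", "dst": "10.2.0.12", "dst_port": 443},
    {"src": "10.1.1.5", "dst": "10.2.0.13", "dst_port": 443},
    {"src": "10.1.1.5", "dst": "10.2.0.14", "dst_port": 443},
    {"src": "10.3.7.9", "dst": "203.0.113.50", "dst_port": 8080},
]

def classify(alerts):
    by_src = {}
    for a in alerts:
        by_src.setdefault(a["src"], []).append(a)
    for src, hits in by_src.items():
        dsts = {a["dst"] for a in hits}
        ports = Counter(a["dst_port"] for a in hits)
        if len(dsts) > 2:
            # one source touching many hosts suggests recon or a worm
            print(f"{src}: possible reconnaissance ({len(dsts)} targets)")
        elif any(not d.startswith("10.") for d in dsts):
            # traffic to an external host suggests C&C or an egress path
            print(f"{src}: possible command and control / exfiltration path")
        else:
            # a handful of specific internal targets suggests a targeted attack
            print(f"{src}: targeted traffic to {sorted(dsts)}, ports {dict(ports)}")

classify(alerts)
```

In a real environment the same grouping would run against your IDS/SIEM alert feed rather than a hard-coded list, but the classification questions are the same ones posed above.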
Endpoint
The endpoint may alert first if it’s some kind of drive-by download or targeted social engineering attack. You can also see this kind of activity when a mobile device does something bad outside your network, then connects to your internal network and wreaks havoc.
- Endpoint logs/configurations: Once you receive an alert that something funky is happening on an endpoint, the first thing to do is investigate the device to figure out what’s happening. You are looking for new executables on the device, or a configuration change that indicates a compromise (a minimal baseline-comparison sketch follows this list).
- Network traffic: Another place to look when you get an endpoint alert is the network traffic originating from and terminating on the device. Analyzing that traffic can give you an idea of what is being targeted. Is it a back-end data store? Is it other devices? How and where is the device getting instructions? Also be aware of exfiltration activities, which indicate not only a successful compromise, but also a breach. The objective is to profile the attack and understand the attacker’s objectives and tactics.
- Application targets: Likewise, if it’s obvious a back-end data store is being targeted, you can look at the transaction stream to decipher what the objective is and how widely the attack has spread. You also need to understand the target to figure out whether and how remediation should occur.
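As a sketch of the endpoint logs/configurations step above, the following compares executables on a device against a known-good baseline of hashes. The scan directory, the file extensions, and the source of the baseline are assumptions; in practice that baseline would come from your endpoint protection platform or configuration management store, gathered before the incident.

```python
# Hypothetical sketch: flag executables on an endpoint that are not in a
# known-good baseline of SHA-256 hashes. Paths and baseline source are
# assumptions; adapt to what your EPP/CMDB actually provides.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def new_executables(scan_dir: str, baseline: set[str]):
    """Yield files under scan_dir whose hashes are not in the baseline."""
    for p in Path(scan_dir).rglob("*"):
        if p.is_file() and p.suffix in {".exe", ".dll", ".so"}:
            digest = sha256(p)
            if digest not in baseline:
                yield p, digest

# The baseline would be loaded from trusted storage captured pre-incident.
baseline_hashes: set[str] = set()  # hypothetical: load from your config store
for path, digest in new_executables("/opt/app", baseline_hashes):
    print(f"UNKNOWN EXECUTABLE {path} sha256={digest}")
```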
Upper Layers
If the first indication of an attack happens at the application layer (including databases, application servers, DLP, etc.) – which happens more and more, due to the nature of application-oriented attacks – then it’s about quickly understanding the degree of compromise and watching for data loss.
- Network traffic: Application attacks are often all about stealing data, so at the network layer you are looking primarily for signs of exfiltration (a minimal volume-based sketch appears after this list). Secondarily, understanding the attack path will help you discover which devices are compromised and weigh short- and longer-term remediation options.
- Application changes: Is your application functioning normally? Or is the bad guy inserting malware on pages to compromise your customers? While you won’t perform a full application assessment at this point, you need to look for key indicators of the bad guy’s activities that might not show up through network monitoring.
- Device events/logs/configurations: As with the other scenarios, understanding to what degree the devices involved in the application stack are compromised is important for damage assessment.
- Content monitors: Given the focus of most application attacks on data theft, you’ll want to consult your content monitors (DLP, as well as outbound web and email filters) to gauge whether the attack has compromised data and to what degree. This information is critical for determining the amount of escalation required.
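Here is a minimal sketch of the volume-based check referenced above: summing outbound bytes per destination from flow records and flagging unusually large transfers. The record format and the 100 MB threshold are assumptions; a real check would compare against your own egress baseline and run alongside your DLP and outbound filter verdicts, not replace them.

```python
# Hypothetical sketch: gauge possible exfiltration by summing outbound
# bytes per (source, destination) pair from flow records. The records
# and the threshold below are made up for illustration.
from collections import defaultdict

flows = [
    {"src": "10.2.0.12", "dst": "198.51.100.7", "bytes_out": 250_000_000},
    {"src": "10.2.0.12", "dst": "198.51.100.7", "bytes_out": 180_000_000},
    {"src": "10.1.1.8",  "dst": "192.0.2.33",   "bytes_out": 4_096},
]

THRESHOLD = 100 * 1024 * 1024  # flag anything over ~100 MB to one destination

totals = defaultdict(int)
for f in flows:
    totals[(f["src"], f["dst"])] += f["bytes_out"]

for (src, dst), total in totals.items():
    if total > THRESHOLD:
        print(f"possible exfiltration: {src} -> {dst}, {total / 1e6:.0f} MB out")
```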
Incident Playbook
Obviously there are infinite combinations of data you can look at to figure out what is going on (and whether you’ll need to investigate and/or escalate), but we recommend that the first steps in the process be scripted and somewhat standardized. The higher up the response pyramid you go, the more leeway the analysts need to do what they think is right. But the only way to make sure the right information is provided to each succeeding level of escalation is to be very specific and clear about what data is required before escalating an issue.
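One way to make “specific and clear” enforceable is to encode the required data directly in the playbook, so an alert cannot be escalated until the checklist is satisfied. The sketch below is hypothetical in its structure and field names; the idea of a machine-checkable escalation gate is what matters.

```python
# Hypothetical playbook sketch: each alert type lists the data that must
# be collected before escalation. Names are illustrative, not prescriptive.
PLAYBOOK = {
    "network": ["network map segment", "alerting device IPs",
                "device logs/configs", "egress capture summary"],
    "endpoint": ["endpoint logs/config diff", "new executable hashes",
                 "network traffic from the device"],
    "application": ["transaction logs", "page integrity check",
                    "content monitor (DLP) report"],
}

def missing_for_escalation(alert_type: str, collected: set[str]) -> list[str]:
    """Return the data items still missing before escalation is allowed."""
    return [item for item in PLAYBOOK[alert_type] if item not in collected]

missing = missing_for_escalation("endpoint", {"endpoint logs/config diff"})
if missing:
    print("Do not escalate yet; still need:", ", ".join(missing))
```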
Chain of Custody
Depending on the type and objective of the attack, you may want to consider prosecution, which entails a certain amount of care with data handling and integrity. This becomes really important at higher levels in the escalation process, but it’s a good habit to make sure that any evidence gathered (even for escalation) is collected in a way that does not preclude prosecution. So part of your incident playbook and analyst training should specify how to gather forensically acceptable data:
- Isolating the machine(s): Depending on what you find on a device, you may want to take a clean image for law enforcement before continuing your investigation.
- Investigation management: Many larger organizations use some kind of case management tool to manage the investigation process within a workflow. The first level of response populates this tool, which provides structure for the entire investigation. For smaller organizations this may be overkill, but it’s worth defining in some detail how data will be collected, and where it will be stored, to ensure proper handling.
- Ensuring log file and data integrity: There are many rules about how to handle log records to ensure their integrity. NIST has a good guide (PDF) covering what that typically involves, but ultimately your legal team will need to define the specifics for your organization. Your logging infrastructure must meet those integrity requirements (a minimal hash-chaining sketch follows this list).
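To illustrate one piece of this, here is a minimal sketch of a hash-chained evidence manifest: each entry records a file’s SHA-256 digest plus the hash of the previous entry, so later tampering with any file or manifest entry is detectable. This is an illustration of the idea only, not a substitute for the controls your legal team and the NIST guidance require; the field names are assumptions.

```python
# Hypothetical sketch: append evidence files to a hash-chained manifest.
# Each entry's hash covers the previous entry's hash, forming a chain.
import hashlib
import json
import time

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def append_evidence(manifest: list[dict], path: str, collector: str) -> None:
    prev = manifest[-1]["entry_hash"] if manifest else "0" * 64
    entry = {
        "path": path,
        "sha256": sha256_file(path),
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "collector": collector,
        "prev_entry_hash": prev,  # chains this entry to the one before it
    }
    # Hash the entry itself (without its own hash field) and record it.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    manifest.append(entry)

manifest: list[dict] = []
append_evidence(manifest, "/var/log/auth.log", collector="analyst-1")
print(json.dumps(manifest, indent=2))
```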
Once you have this initial data collected, in a forensically acceptable manner, then you need to get into actual investigation and analysis. Additional techniques and tools are required to do this correctly, especially given the new kinds of attacks we are seeing, so that’s what we will discuss in the next few posts.