Now that we have the inputs (both internal and external) to our incident response/management process, we are ready to go operational. So let's map out the IR/M process in detail, to show where threat intelligence and other security data allow you to respond faster and more effectively.

Trigger and Escalate

The incident management process starts with a trigger, and the basic information you gather varies based on what triggered the alert. Alerts may come from all over the place, including any of your monitoring systems and the help desk. Nobody has a shortage of alerts – the problem is finding the critical ones and taking immediate action.

Not all alerts require a full incident response – much of what you deal with day to day is handled by existing security processes. Incident response/management is about situations that fall outside your normal background noise. Where do you draw the line? That depends entirely on your organization. In a small business a single system infected with malware might require a response, because all devices have access to critical information. A larger company might handle the same infection within standard operational processes. Regardless of where the line is drawn, communication is critical. All parties must be clear on which situations require a full incident investigation and which do not, before anyone needs to decide whether to pull the trigger.

For any incident you need a few key pieces of information early to guide next steps:

- What triggered the alert?
- If someone was involved or reported it, who are they?
- What is the reported nature of the incident?
- What is the reported scope of the incident? This is basically the number and nature of systems, data, and people involved.
- Are any critical assets involved?
- When did the incident occur, and is it ongoing?
- Are there any known precipitating events for the incident? Is there a clear cause?

Gather what you can from this list to provide an initial picture of what's going on. When the initial responder judges an incident to be more serious, it's time to escalate. You should have guidelines for escalation, such as:

- Involvement of designated critical data or systems.
- Malware infecting more than a set number of systems.
- Sensitive data detected leaving the organization.
- Unusual traffic or behavior that could indicate an external compromise.

Once you escalate it is time to assign an appropriate resource, request additional resources if needed, and begin the response with triage.
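Guidelines like these lend themselves to being codified, so the escalation decision is consistent rather than ad hoc. Here is a minimal sketch; the alert fields, criteria, and threshold below are illustrative assumptions, not a prescription – substitute your own policy.

```python
# Minimal sketch of codifying escalation guidelines. Field names, criteria,
# and the threshold are illustrative assumptions; adjust to your policy.
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str                          # what triggered it (IDS, DLP, help desk, ...)
    nature: str                          # e.g. "malware", "anomalous_traffic"
    affected_systems: list[str] = field(default_factory=list)
    critical_assets_involved: bool = False
    sensitive_data_leaving: bool = False
    possible_external_compromise: bool = False

MALWARE_THRESHOLD = 5                    # hypothetical: escalate past 5 infected systems

def should_escalate(alert: Alert) -> bool:
    """True if the alert meets any of the escalation guidelines."""
    return (
        alert.critical_assets_involved
        or (alert.nature == "malware" and len(alert.affected_systems) > MALWARE_THRESHOLD)
        or alert.sensitive_data_leaving
        or alert.possible_external_compromise
    )

# Usage (hypothetical alert):
# alert = Alert(source="endpoint_av", nature="malware",
#               affected_systems=["wks-014", "wks-027", "wks-101"])
# if should_escalate(alert):
#     ...  # assign an incident handler and begin triage
```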
Response Triage

Before you can do anything, you need to define accountabilities within the team. That means specifying the incident handler – the responsible party until a new responsible party is designated. You also need to line up resources to help, based on the answers to the questions above, to make sure you have the right expertise and context to work through the incident. We have more detail on staffing the response in our Incident Response Fundamentals series.

The next step is to narrow down the scope of data you need to analyze. As discussed in the last post, you spend considerable time collecting events and logs, as well as network and endpoint forensics. That is a tremendous amount of data, so narrowing the scope of what you investigate is critical. You might filter on the segments attacked, or on logs from the application in question. Perhaps you will pull forensics only from endpoints at a certain office, if you believe the incident was contained there. This is all about keeping the data mining process manageable. With all this shiny big data technology, do you need to actually move the data? Of course not, but you do need flexible filters, so only items relevant to this incident show up in your forensic search results. Time is of the essence in any response, so you cannot afford to get bogged down in meaningless and irrelevant results as you work through the collected data.

Analyze

Once your filters are in place you can start analyzing the data to answer several questions:

- Who is attacking you?
- What tactics are they using?
- What is the extent of the potential damage?

You may have an initial idea based on the alert that triggered the response, but now you need to prove that hypothesis. This is where threat intelligence plays a huge role in accelerating your response. Based on the indicators you found, a TI service can help identify a potentially responsible party – or at least narrow the field to a handful of candidates. Every adversary has preferred tactics, so whether through adversary analysis (discussed in Really Responding Faster) or actual indicators, you want to leverage external information to understand the attacker and their tactics. It is a bit like having a crystal ball, allowing you to focus your efforts on what the attacker likely did, and where.

Then you need to size up, or scope out, the damage. This comes down to the responder's initial impressions as they roll up to the scene. The goal is to take the initial information provided and expand on it as quickly as possible, to determine the true extent of the incident. To establish scope you dig into the data to identify the systems, networks, and data involved. You won't be able to pinpoint every affected device at this point – the goal is to get a handle on how big a problem you might be facing, and to generate ideas on how best to mitigate it.

Finally, based on the incident handler's initial assessment, you need to decide whether the incident requires a formal investigation, with potential law enforcement involvement. If so you need to start thinking about chain of custody for the evidence, so you can prove the data was not tampered with, and about tracking the incident in a case management system. Some organizations treat every incident this way, and that's fine. But not all organizations have the resources or capabilities for that, in which case the incident handler needs to make that call early in the process.
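Proving evidence was not tampered with starts with hashing it at collection time and logging who handled it. Here is a minimal sketch, assuming file-based evidence and a simple append-only JSON log; the paths, field names, and log format are hypothetical, and a real case management system handles far more (access control, signatures, transfer history).

```python
# Minimal chain-of-custody sketch: hash evidence files at collection time
# and append a custody record. All names and paths here are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the evidence file so any later tampering is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
            h.update(chunk)
    return h.hexdigest()

def record_evidence(path: Path, handler: str,
                    log: Path = Path("custody_log.jsonl")) -> None:
    """Append a custody record: who collected what, when, and its hash."""
    entry = {
        "evidence": str(path),
        "sha256": sha256_of(path),
        "collected_by": handler,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Usage (hypothetical evidence image):
# record_evidence(Path("images/web01-memdump.raw"), handler="jsmith")
```

Re-hashing the evidence later and comparing against the logged value is how you demonstrate it was not altered between collection and analysis.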