You may have noticed we’ve renamed the React Faster and Better series to Incident Response Fundamentals. Securosis shows you how the security research sausage gets made, and sometimes it’s messy. We started RFAB with the idea that it would focus on advanced incident response tactics and the like. As we started writing, it was clear we first had to document the fundamentals. We tried to do both in the series, but it didn’t work out. So Rich and I re-calibrated and decided to break RFAB up into two distinct series.

The first, now called Incident Response Fundamentals, goes into the structure and process of responding to incidents. The follow-up series, which will be called React Faster and Better, will delve deeply into some of the advanced topics we intended to cover. But enough of that digression.

When we left off, we had covered what you need from a structural standpoint (command principles, roles and organizational structure, and response infrastructure), the preparatory steps from an infrastructure perspective (data collection/monitoring) before the attack, what to do during the attack (trigger, escalate, and size up; then contain, investigate, and mitigate), and finally what happens after the attack (mop up, analyze, and QA), to give a broad view of the entire incident response process.

But many of you are likely thinking, “That’s great, but where do I start?” And that is a very legitimate question. It’s unlikely you’ll be able to eat the elephant in one bite, so you will need to break the process into logical phases and adopt them incrementally. After integrating the smaller pieces for a while, you will be able to adopt the entire process effectively. After lots of practice, that is.

So here are some ideas on how you can break up the process into logical groups:

  1. Monitor more: The good news is that monitoring typically falls under the control of the tech folks, so this is something you can (and should) do immediately. Perhaps it’s about adding more infrastructure components to the monitoring environment, or maybe databases or applications. We continue to be fans of monitoring everything (yes, Adrian, we know: where practical), so the more data the better; a minimal collection sketch follows this list. Get this going ASAP.
  2. Install the organization: Here is where you need all your persuasive powers, and then some. This takes considerable coercion within the system, and it doesn’t happen overnight. Why? Because everyone needs to buy into both the process and their response responsibilities and accountabilities. It’s not easy to get folks to step up on the record, even if they have been doing so informally. So you should get this process going ASAP as well, and the coercion (you can call it ‘persuasion’) can happen concurrently with the additional monitoring.
  3. Standardize the analysis: One of the key aspects of a sustainable process is that it’s bigger than any one person, which takes some level of formality and, even more important, documentation. So you and your team should document how investigations get done for network attacks, endpoint attacks, and database/application attacks as well. You may want to consult an external resource for some direction here, but ultimately this kind of documentation allows you to scale your response infrastructure and set expectations for what needs to get done, and how, in the heat of battle. This again can be driven by the technical folks.
  4. Stage a simulation: Once the powers that be agree to the process and organizational model, congratulations. Now the work can begin: it’s time to practice. We will point out over and over again that seeing a process on the whiteboard is much different than executing it in a high-stress situation. So we recommend you run simulations periodically (perhaps without letting the team know it’s a simulation) and see how things go. You’ll quickly find the gaps in the process/organization (and there are always gaps) and have an opportunity to fix things before the attacks start happening for real.
  5. Start using (and improving) it: At this point, the entire process should be ready to go. Good luck. You won’t do everything right, but hopefully the thought you’ve put into the process, the standard analysis techniques, and the practice will allow you to contain the damage faster, minimizing downtime and economic impact. That’s the hope anyway. But remember, it’s critical to ensure the QA/post-mortem happens so you can learn and improve the process for next time. And there is always a next time.
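To make the “monitor more” step a bit more concrete, here is a minimal sketch of a central collection point for newly onboarded log sources. It assumes those sources (network gear, database servers, application servers) can forward syslog over UDP; the port, file path, and tagging scheme are placeholder assumptions, and a real deployment would feed a SIEM or log management platform rather than a flat file.

```python
import socket
from datetime import datetime, timezone

# Placeholder settings: 5514 avoids the privileged standard syslog port (514),
# and the flat file stands in for whatever log platform you actually use.
LISTEN_ADDR = ("0.0.0.0", 5514)
LOG_PATH = "central-events.log"

def run_collector():
    """Receive UDP syslog messages and append them to a central spool."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    with open(LOG_PATH, "a", encoding="utf-8") as spool:
        while True:
            data, (src_ip, _port) = sock.recvfrom(8192)
            # Tag each event with receive time and source address so analysts
            # can tell which newly monitored component generated it.
            event = data.decode("utf-8", errors="replace").rstrip()
            spool.write(f"{datetime.now(timezone.utc).isoformat()} {src_ip} {event}\n")
            spool.flush()

if __name__ == "__main__":
    run_collector()
```

The point isn’t the tooling; it’s that every new source you point at a collector like this is one more data set your responders can draw on when the trigger fires.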

With that, we’ll put a ribbon on the Incident Response Fundamentals series and start working on the next set of posts covering advanced incident response.
