We spent the first few posts in this series on understanding what our data collection infrastructure should look like and how to organize our incident response capability: incident command, roles and organizational structure, and response infrastructure. Now we’ll turn to getting ready to detect an attack. It turns out many of your operational activities are critical to incident response, and this post provides the context to show why.
Operationally, we believe parts of the Pragmatic Data Security process, which Rich and Adrian have been pushing for years, represent the key operational activities needed Before the Attack:
We’ve been beating the drum for a formal data classification step for as long as I can remember, and we are mostly still evangelizing the need to understand what is important to your organization. Historically security folks have treated almost all data equally, which drove a uniform set of security controls applied across the entire organization. But in the real world some information resources are very important to your organization, and most aren’t. We recommend building a security environment to lock down the minority of data whose loss would result in senior people looking for other jobs, and doing your best for everything else.
This is critical for incident response because it both helps prioritize your monitoring infrastructure (never mind the rest of your security) and prioritizes your response effort when an incident triggers. The last thing you want to waste time on during a response is figuring out whether the incident involves an important asset.
The first step is to define what is important. The only way to do that is to get out of your chair and go ask the folks who drive the business. Whoever you ask will think their pet data and projects are the most important, so a key skill is deciphering the difference between what folks think is important and what really is important. Then confirm that with senior decision makers. If arbitration is required (to define protection priorities), senior folks will do that.
It’s key to know what data is important, but that information isn’t useful until you know where it is. So the next step is to discover where the data is. This means looking in files, on networks, within databases, on endpoints, etc. Yes, automation can be very helpful in this discovery process, but whether you use tools or not, you still have to figure out where the data is before you can build an architecture to protect it.
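To make the discovery step concrete, here is a minimal sketch of automated content discovery: walking a directory tree and flagging files that contain a sensitive-data pattern. The SSN regex and function names are illustrative assumptions, not from any particular discovery product; real tools handle many more formats, file types, and repositories.

```python
import os
import re

# Hypothetical pattern: US Social Security numbers, as one example of
# "important" data identified during classification.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def discover_sensitive_files(root_dir):
    """Walk a directory tree and return paths of files containing SSN-like strings."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    if SSN_PATTERN.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable file: skip it and keep scanning
    return hits
```

A production scan would also cover databases, endpoints, and network shares, but the idea is the same: you can’t protect data until you know where it lives.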
After discovery, we recommend you establish baselines within your environment to represent normal behavior. We realize normal doesn’t really mean much, because it’s only normal at a particular point in time. What we are really trying to establish is a pattern of normalcy, which then enables us to recognize when things aren’t normal. You can develop baselines for all sorts of things:
- Application activity: Normally derived from transaction and application logs.
- Database activity: Mostly SQL queries, gathered via database activity monitoring gear and/or database logs.
- Network activity: Typically involves analyzing flow data, but can also be network and security log/event analysis.
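Whatever the data source, the mechanics of baselining are similar. The sketch below, under the assumption that activity has been reduced to periodic counts (say, network flows per hour), summarizes history as a mean and standard deviation and flags observations that fall too far outside it. The function names and the 3-sigma threshold are illustrative choices, not prescriptions.

```python
import statistics

def build_baseline(samples):
    """Summarize historical activity counts (e.g., flows per hour) as (mean, stdev)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean  # no historical variance: any change is notable
    return abs(value - mean) / stdev > threshold
```

Real environments need per-segment, time-of-day, and seasonal baselines, but even a crude version gives you something to compare against when deciding whether activity is abnormal.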
Obviously there is much more to discovery and baselining than we can put into this series. If you want to dig deeper, you can check out our reports on Content Discovery and Database Activity Monitoring. We also recently did a series on advanced monitoring, which includes a great deal of information on monitoring applications and identity. The point is that there is no lack of data, but focusing collection efforts and understanding normal behavior are the first steps to reacting faster.
The next step to preparing for the inevitable incident involves implementing an ongoing monitoring process for all the data you are collecting. Again, you won’t monitor devices, systems, and applications specifically for incident response. But the efforts you make for monitoring can (and will) be leveraged when investigating each incident.
The key to any monitoring initiative is to both effectively define and maintain the rules used to monitor the infrastructure. We detailed a 9-step process for monitoring in our Network Security Operations Quant research project, providing a highly granular view of monitoring. Getting to that level of granularity is overkill for this series, but we do recommend you check it out and adopt many of those practices.
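To illustrate what “define and maintain the rules” looks like in practice, here is a minimal sketch of rule-based event monitoring. It assumes events have already been parsed into dictionaries from whatever logs you collect; the rule names and event fields are hypothetical, not drawn from any specific monitoring product.

```python
# Each rule pairs a name with a predicate over a parsed log event.
# Keeping rules as data makes them easy to review, tune, and retire.
RULES = [
    {"name": "repeated-auth-failure",
     "match": lambda e: e.get("type") == "auth" and e.get("result") == "failure"},
    {"name": "db-admin-login",
     "match": lambda e: e.get("type") == "db" and e.get("user") == "admin"},
]

def evaluate(event):
    """Return the names of all rules the event triggers."""
    return [rule["name"] for rule in RULES if rule["match"](event)]
```

The maintenance burden lives in that rule list: rules that never fire, or fire constantly, need pruning or tuning, which is why rule review is an ongoing process rather than a one-time setup task.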
But don’t lose sight of why you are monitoring these critical assets: to both gather the data and ensure the systems are available. Those are usually the first indications you will get of an incident, and the information gathered through monitoring gives you the raw material to analyze, investigate, and isolate the root cause of the attack and remediate quickly. In terms of the Pragmatic Data Security cycle, we left out Secure and Protect, but this series is focused on how to detect an attack as quickly as possible (React Faster) and respond effectively to contain the damage (React Better). Defense is a totally different ballgame.
But let’s not get ahead of ourselves. The attack hasn’t even happened. So far we have discussed the foundation we need to be ready for the inevitable attack. In the next posts we’ll jump into action once we have an indication that an attack is underway.