I read Nassim Taleb’s “Black Swan” a few years ago and it was very instructive for me. I wrote about it a few times in a variety of old Incites (here and here), and the key message I took away was the futility of trying to build every scenario into a threat model, defensive posture, or security strategy.
In fact, I made that very point yesterday in the NSO Quant post on Defining Policies. You can’t model every threat, so don’t try. Focus on the highest perceived risk scenarios based on your knowledge, and work on what really represents that risk. What brings me back to this topic is Alex’s post: Forget trying to color the Swan, focus on what you know. Definitely read the post, as well as the comments.
Alex starts off by trying to clarify what a Black Swan is and what it isn’t, and whether our previous models and distributions apply to the new scenario. My own definition is that a Black Swan breaks the mold. We haven’t seen it before, therefore we don’t really understand its impact – not ahead of time anyway. But ultimately I don’t think it matters whether our previous distributions apply or not. Whether a Swan is Black, Creme, Plaid, or Gray is inconsequential when you have a situation and it needs to be dealt with. This gets back to approaches for dealing with incidents and what you can do when you don’t entirely understand the impact or the attack vector.
Dealing with the Swan involves doing pattern matching as part of your typical validation activity. You know something is funky, so the next step is to figure out what it is. Is it something you’ve seen before? If so, you have history for how to deal with it. That’s not brain surgery.
If it’s something you haven’t seen before, it gets interesting. Then you need to have some kind of advanced response team mobilized to figure out what it is and what needs to be done. Fun stuff like advanced forensics and reverse engineering could be involved. Cool. Most importantly, you need to assess whether your existing defenses are sufficient or if other (more dramatic) controls are required. Do you need to pull devices off the network? Shut down the entire thing?
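The triage flow described above can be sketched as a simple decision routine. This is purely a conceptual illustration: the playbook table and the escalation string are hypothetical stand-ins, not any real incident-response tooling.

```python
# Minimal sketch of the triage flow described above.
# KNOWN_PLAYBOOKS and the escalation path are hypothetical
# illustrations, not a real incident-response toolkit.

# Previously seen attack patterns mapped to documented responses.
KNOWN_PLAYBOOKS = {
    "mass_port_scan": "run network-scan playbook",
    "phishing_campaign": "run phishing playbook",
}

def triage(signature: str) -> str:
    """Decide how to handle a validated anomaly."""
    if signature in KNOWN_PLAYBOOKS:
        # Seen it before: we have history for how to deal with it.
        return KNOWN_PLAYBOOKS[signature]
    # Uncharted territory: mobilize the advanced response team,
    # and explicitly ask whether existing controls are sufficient.
    return "escalate: forensics/reversing; reassess defenses"

print(triage("mass_port_scan"))      # known pattern -> existing playbook
print(triage("never_seen_before"))   # unknown -> escalate and reassess
```

The point of the structure is that the interesting branch is the second one: the lookup is routine, while the miss is where the worst-case questions (pull devices off the network? shut it all down?) have to get asked.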
Black Swans have been disruptive throughout history because folks mostly thought their existing contingencies and/or defenses were sufficient. They were wrong, and by the time they realized it, it was far too late. The idea is to assess your defenses early enough to make a difference.
It’s like those people who always play through the worst case scenario, regardless of how likely that scenario is. It makes me crazy because they get wrapped up in scenarios that aren’t going to happen, and they make everyone else around them miserable as they are doing it. But that doesn’t mean there isn’t a place for worst case scenario analysis. This is one of them. At the point you realize you are in uncharted territory, you must start running the scenarios to understand contingencies and go-forward plans in the absence of hard data and experience.
That’s the key message here. Once you know there is an issue, and it’s something you haven’t seen before, you have to start asking some very tough questions. Questions about survivability of devices, systems, applications, etc. Nothing can be out of bounds. You don’t hear too much about the companies/people that screw this up (except maybe in case studies) because they are no longer around. There, that’s your pleasant thought for today.
3 Replies to “Color-blind Swans and Incident Response”
100% right, Mike. That’s why my blog is called “Security _Balance_” 🙂
@Augusto –
The excerpt you mention is from the post yesterday on Defining Monitoring Policies and specifically addresses the need to build threat models when trying to define correlation policies.
I agree with your other points (and it’s really the focus of much of my network security philosophy). But when building a defensive strategy/architecture (and we do need to do that, if only to prevent what we can) we do have to build risk scenarios.
The answer is both. I don’t think we can think exclusively about reacting faster, or only about trying to prevent every attack. As with most of life, the answer is somewhere in the middle, but I’ll always lean toward visibility and reacting faster, if only because that allows us to address attacks we haven’t seen before.
Hope that clarifies things a bit.
Mike, I must say it’s interesting to read “You focus on the highest perceived risk scenarios, based on your knowledge and work of what really represents that risk” when you are talking about black swans. One of the key aspects of the Black Swan is that it won’t fit your knowledge and work on what really represents the highest perceived risk scenarios!
What we should be taking from the Black Swan concept in Infosec is that we should work to protect our assets independently of risk scenarios, based on the idea that we cannot properly predict them. That’s why I like it when you say REACT FASTER. Things like being prepared to react faster no matter the nature, type, and impact of the incident you are dealing with. In my opinion that means being able to quickly apply changes to the environment and existing controls without causing even more problems, and to have as much visibility as possible into what is happening on the network (Bejtlich NSM style). You will need to put some prioritization on that effort, but keep in mind that every time we do, we increase the chances of unexpected outcomes, as we might be wrong in the assumptions behind that prioritization. It’s the famous “e-mail is not critical” assumption that we keep hearing all the time 🙂