Being knee-deep in a bunch of research projects doesn’t leave me enough time to comment on the variety of interesting posts I see each week. Of course we try to highlight them, both in the Incite (with some commentary) and in the Friday Summary. But some posts deserve a better, more detailed treatment. We haven’t done an analysis, but I’d guess we find a pretty high percentage of what Richard Bejtlich writes interesting. Here’s a little hint: it’s because he’s a big-brained dude.
Early this week he posted a Security Effectiveness Model to document some of his ideas on threat-centric vs. vulnerability-centric security. I’d post the chart here but without Richard’s explanations it wouldn’t make much sense. So check out the post. I’ll wait.
When I took a step back, Richard’s labels didn’t mean much to me. But there is an important realization in that Venn diagram. Richard presents a taxonomy to understand the impact of the bets we make every day. No, I’m not talking about heading off to Vegas on a bender that leaves you… well, I digress. But the reality is that security people make bets every day. Lots of them.
We bet on what’s interesting to the attackers. We bet on what defenses will protect those interesting assets. We bet on how stupid our employees are (they remain the weakest link). We also bet on how little we can do to make the auditors go away, since they don’t understand what we are trying to do anyway.
And you thought security was fundamentally different from trading on Wall Street?
Here’s the deal. A lot of those bets are wrong, and Richard’s chart shows why. With limited resources we have to make difficult choices. So we start by guessing what will be interesting to attackers (Richard’s Defensive Plan). Then we try to protect those things (Live Defenses). Ultimately we won’t know everything that’s interesting to attackers (Threat Actions). We do know we can’t protect everything, so some of the stuff we think is important will go unprotected. Oh well.
Even better, we won’t be right about what we assume the attackers want, nor about which defenses will work. Not entirely. So some of the stuff we think is important isn’t, and some of our defenses protect things that aren’t important. As in advertising, a portion of our security spend is wasted – we just don’t know which portion. Oh well. We’ll also miss some of the things the attacker thinks are important. That makes it pretty easy for them, eh? Oh well.
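To make those overlaps concrete, here’s a minimal sketch of the Venn logic using Python sets. The asset names (crm_db, payment_system, and so on) are hypothetical examples of my own, not anything from Richard’s post:

```python
# Minimal sketch of the Venn-diagram logic using Python sets.
# The asset names are hypothetical examples, not from Richard's model.

defensive_plan = {"crm_db", "source_code", "hr_records", "payment_system"}  # what we guess attackers want
live_defenses = {"crm_db", "payment_system", "hr_records"}                  # what we actually protect
threat_actions = {"source_code", "payment_system", "build_server"}          # what attackers actually target

# Important (to us) but left undefended: planned for, never protected
planned_not_defended = defensive_plan - live_defenses

# Wasted spend: defended, but attackers don't care
defended_not_targeted = live_defenses - threat_actions

# Blind spots: attackers want it, we never planned for it
targeted_not_planned = threat_actions - defensive_plan

# The "win" region: planned, defended, and actually targeted --
# and even here a persistent attacker may still get through
contested = defensive_plan & live_defenses & threat_actions

print("Planned but undefended:", planned_not_defended)          # {'source_code'}
print("Defended but not targeted:", defended_not_targeted)      # {'crm_db', 'hr_records'}
print("Targeted but never planned for:", targeted_not_planned)  # {'build_server'}
print("Contested ground:", contested)                           # {'payment_system'}
```

Every non-empty region outside the three-way intersection is a lost bet of one kind or another; the intersection is the only ground where our plan, our defenses, and the attacker’s interest actually meet.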
And what about when we are right? When we think something will be a target, and the attackers actually want it? And we have it defended? Well, we can still lose – a persistent attacker will still get their way, regardless of what we do. Isn’t this fun?
But the reason I agree so closely with most of what Richard writes is pretty simple. We both recognize the ultimate end result, which he summed up pretty crisply on Twitter (there are some benefits to a 140-character limit):
“Managing risk,” “keeping the bad guys out,” “preventing compromise,” are all failed concepts. How fast can you detect and correct failures?
and http://twitter.com/taosecurity/status/108527362597060608:
The success of a security program then ultimately rests w/ the ability to detect & respond to failures as quickly & efficiently as possible.
React Faster and Better, anyone?
One Reply to “Making Bets”
You know, if this mindset continues, I might have to rethink my stance on honeypots!
I don’t consider honeypots or things like them to be of any use to most orgs, unless you have a vested interest in security or threat intelligence, which isn’t even a minor goal for the vast majority of orgs.
But if you want to focus heavily on detection and response, I’d argue that you need more exercises (both live and controlled) to test your skills. I’d include a test/attackable environment in that regimen.
Of course, the painful reality is that this takes time from teams that are already time-strapped.