As some of you know, I’ve always been pretty critical of quantitative risk frameworks for information security, especially the Annualized Loss Expectancy (ALE) model taught in most of the infosec books. It isn’t that I think quantitative is bad, or that qualitative is always materially better, but I’m not a fan of funny math.
Let’s take ALE. The key to the model is that your annual predicted losses are the losses from a single event (the Single Loss Expectancy) times the Annualized Rate of Occurrence. This works well for some areas, such as shrinkage and laptop losses, but is worthless for most of information security. Why? Because we don’t have any way to measure the value of information assets.
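For anyone who hasn’t seen the arithmetic spelled out, here is a minimal sketch of the model; the laptop figures are invented purely for illustration:

```python
# Minimal ALE sketch -- the laptop figures below are invented for illustration only.

def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = SLE x ARO: the expected annual loss from one class of event."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Lost laptops, one of the cases where replacement cost is actually measurable:
sle = 2_500   # assumed cost to replace one laptop (hardware plus rebuild time)
aro = 12      # assumed number of laptops lost per year
print(annualized_loss_expectancy(sle, aro))  # 30000
```

The math itself is trivial – the problem is the inputs.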
Oh, sure, there are plenty of models out there that fake their way through this, but I’ve never seen one that is consistent, accurate, and measurable. The closest we get is Lindstrom’s Razor, which states that the value of an asset is at least as great as the cost of the defenses you place around it. (I consider that an implied or assumed value, which may bear no correlation to the real value).
I’m really only asking for one thing out of a valuation/loss model:
The losses predicted by a risk model before an incident should equal, within a reasonable tolerance, those experienced after an incident.
In other words, if you state that X asset has $Y value, when you experience a breach or incident involving X, you should experience $Y + (response costs) losses. I added “within a reasonable tolerance” since I don’t think we need complete accuracy, but we should at least be in the ballpark. You’ll notice this also means we need a framework, process, and metrics to accurately measure losses after an incident.
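To put that requirement in concrete terms, here is a hypothetical back-test; the function name, tolerance, and dollar figures are mine, purely for illustration, not any standard:

```python
# Hypothetical back-test of a valuation model against a measured incident.
# The names, figures, and the 25% tolerance are all invented for illustration.

def valuation_holds(stated_asset_value, response_costs, measured_losses, tolerance=0.25):
    """True if the pre-incident valuation plus response costs lands within
    tolerance of the losses actually measured after the incident."""
    predicted = stated_asset_value + response_costs
    return abs(predicted - measured_losses) <= tolerance * measured_losses

print(valuation_holds(stated_asset_value=1_000_000,
                      response_costs=200_000,
                      measured_losses=4_000_000))  # False -- the valuation missed badly
```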
If someone comes into my home and steals my TV, I know how much it costs to replace it. If they take a work of art, maybe there’s an insurance value or similar investment/replacement cost (likely based on what I paid for it). If they steal all my family photos? Priceless – since they are impossible to replace and I can’t put a dollar sign on their personal value. What if they come in and make a copy of my TV, but don’t steal it? Er… Umm… Ugh.
I don’t think this is an unreasonable position, but I have yet to see a risk framework with a value/loss model that meets this basic requirement for information assets.
34 Replies to “FireStarter: The Only Value/Loss Metric That Matters”
@Anders –
I think that we agree about many things.
I haven’t ever said anything much about being exact – I wouldn’t think that exactness is attainable.
Regarding accuracy, there are degrees. To drive from Dallas to Austin, I don’t really need a high degree of accuracy – just a little bit of data will enable me to get the job done – get onto I-35, as opposed to I-45 or I-20, and go south as opposed to any other direction.
Are TJX or Heartland misrepresenting their costs in their SEC filings? I don’t think so – they are reporting what they have actually spent and estimating future expenses by creating reserves. If you read successive filings, you see that the reserves go up and down based upon how the cost estimates look in each quarter. Harder to know are the accelerations in CAPEX that may occur because of a breach. Also, cost per record may not be the most relevant common denominator for breach cost – it doesn’t work so well for loss of intellectual property, for example.
The key here is reduction of uncertainty – if you don’t think you can do that enough to make better decisions, then there isn’t much else I can say.
But it’s the reason I advocate quantitative methods, probability distributions, and looking at data across a range of distributions and estimates.
BTW – Have you read Doug Hubbard’s books? “How to Measure Anything” and the “Failure of Risk Management”?
If not, I highly recommend them for giving an interesting perspective on these issues.
These are difficult problems – no question about it – Dan Geer is on record saying that infosec is one of the most intellectually challenging activities out there.
But if we keep saying that nothing is possible, then nothing ever will be possible.
Best regards,
Patrick
Re: medical data – you would not be pleased to know, I don’t think, how much junk gets past peer review – I have reviewed thousands of peer-reviewed clinical articles in medicine – the data presented are often very problematic, but lots of times the article is accepted for publication just because of the author’s name.
@Patrick,
Well, we certainly agree on one thing: No one has perfect data.
But I think we have worse data than, for instance, the medical research field.
I should’ve said “there’s very little reliable information” in my last post. At least that is my impression. I perceive the $x/rec estimates that are published as mostly guesswork. Informed guesses, maybe, but still biased. I question the methods for coming up with these numbers more than I question the various risk calculation methods we put them into afterwards.
That “much of the medical data that is published … leaves a lot to be desired” is no surprise – the same goes for a lot of scientific articles. But I am assuming that they would not be published if the reviewers had no faith in the collection methods.
Can we trust the scientific validity of the TJX or Ponemon data?
If not, then we might see them as interesting, and they might give useful insights or pointers in the right direction.
But the minute we assume a certain accuracy, we’re in trouble.
And that false sense of accuracy occurs as soon as a $ figure lands on some manager’s PowerPoint slide, or we put it into a risk calculation.
We can never eliminate uncertainty; it just seems to me that the uncertainties are currently so large that our attempts to be exact fail.
@Anders –
I disagree –
There is actually quite a lot of information available – check the SEC Filings from TJX, Heartland, and others (Forms 10-Q and 10-K). The numbers they report are informative. Or check the Maine Breach report – it breaks some things out for TJX and Hannaford. Even read the Ponemon reports and try to understand the context of those data. I don’t think that the evidence supports Ponemon’s estimate of approx $200/rec – even so, there is much to be learned there.
Loss magnitude is one of the easier things to develop data for.
Threat capability, threat frequency, and the effectiveness of controls/defenses are much more difficult, but still possible to model.
I think that it depends upon what you think measurement means and is for.
I think that it’s for the purpose of reducing, not eliminating, uncertainty.
That’s also what the statistical functions are for – modeling the uncertainty and variability – two different things, by the way.
I would suggest to you that your statement about statistical functions misses this point.
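As a rough illustration of what I mean by modeling the uncertainty rather than picking a point estimate, here is a toy Monte Carlo sketch; the distributions and parameters are invented and don’t come from any particular framework or dataset:

```python
# Toy Monte Carlo sketch of annual loss -- distributions and parameters are
# invented for illustration, not taken from any framework or real dataset.
import random

def simulate_annual_loss(trials=100_000):
    losses = []
    for _ in range(trials):
        events = random.randint(0, 4)               # uncertain event frequency
        total = sum(random.lognormvariate(11, 1.0)  # variable per-event magnitude
                    for _ in range(events))
        losses.append(total)
    losses.sort()
    return losses[len(losses) // 2], losses[int(len(losses) * 0.95)]

median, p95 = simulate_annual_loss()
print(f"median ~${median:,.0f}, 95th percentile ~${p95:,.0f}")
```

The output is a range with percentiles, not a single dollar figure, which is exactly the point.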
With regard to your last statement that it doesn’t matter, I also strongly disagree.
No one has perfect data – doctors don’t, engineers don’t, scientists don’t, and we don’t.
I worked in medical outcomes research for 17 years and will tell you that much of the medical data that is published in peer-reviewed journals leaves a lot to be desired.
Best regards,
Patrick
Patrick,
since we have no sound method of actually measuring loss magnitude, methods like FAIR, FIRM or whatever are not going to work much better than others.
Applying statistical functions successfully implies knowledge of statistical distributions, but as others have pointed out, there’s very little information available, thus denying us such knowledge.
So until we find that way of getting data that’s both accurate and plentiful, which risk calculation you choose is somewhat insignificant.