As some of you know, I’ve always been pretty critical of quantitative risk frameworks for information security, especially the Annualized Loss Expectancy (ALE) model taught in most of the infosec books. It isn’t that I think quantitative is bad, or that qualitative is always materially better, but I’m not a fan of funny math.
Let’s take ALE. The key to the model is that your annual predicted losses equal the loss from a single event (the single loss expectancy, or SLE) multiplied by the annual rate of occurrence (ARO). This works well for some areas, such as shrinkage and laptop losses, but is worthless for most of information security. Why? Because we don’t have any way to measure the value of information assets.
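To make the arithmetic concrete, here’s the entire model in a few lines of Python (the dollar figures are made up for illustration):

```python
# The entire ALE model, with illustrative (made-up) inputs.
single_loss_expectancy = 50_000   # SLE: dollar loss from one event
annual_rate_of_occurrence = 0.4   # ARO: expected events per year

ale = single_loss_expectancy * annual_rate_of_occurrence
print(f"ALE: ${ale:,.0f}")  # ALE: $20,000
```

The multiplication is trivial; the problem is that for information assets we can’t defensibly produce either input.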
Oh, sure, there are plenty of models out there that fake their way through this, but I’ve never seen one that is consistent, accurate, and measurable. The closest we get is Lindstrom’s Razor, which states that the value of an asset is at least as great as the cost of the defenses you place around it. (I consider that an implied or assumed value, which may bear no correlation to the real value).
I’m really only asking for one thing out of a valuation/loss model:
The losses predicted by a risk model before an incident should equal, within a reasonable tolerance, those experienced after an incident.
In other words, if you state that X asset has $Y value, when you experience a breach or incident involving X, you should experience $Y + (response costs) in losses. I added “within a reasonable tolerance” since I don’t think we need complete accuracy, but we should at least be in the ballpark. You’ll notice this also means we need a framework, process, and metrics to accurately measure losses after an incident.
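As a sketch, the test I’m asking for is nothing fancier than this (the function and the 25% tolerance are placeholders of my own, not part of any framework):

```python
# Hypothetical post-incident check: did the pre-incident valuation match
# the experienced loss (net of response costs) within a tolerance?
def valuation_held_up(predicted_value, experienced_loss, response_costs,
                      tolerance=0.25):  # 25% is an arbitrary placeholder
    actual_asset_loss = experienced_loss - response_costs
    return abs(actual_asset_loss - predicted_value) <= tolerance * predicted_value

# Asset valued at $100k; incident cost $140k, of which $30k was response.
print(valuation_held_up(100_000, 140_000, 30_000))  # True – $110k is within 25%
```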
If someone comes into my home and steals my TV, I know how much it costs to replace it. If they take a work of art, maybe there’s an insurance value or similar investment/replacement cost (likely based on what I paid for it). If they steal all my family photos? Priceless – since they are impossible to replace and I can’t put a dollar sign on their personal value. What if they come in and make a copy of my TV, but don’t steal it? Er… Umm… Ugh.
I don’t think this is an unreasonable position, but I have yet to see a risk framework with a value/loss model that meets this basic requirement for information assets.
34 Replies to “FireStarter: The Only Value/Loss Metric That Matters”
@jeff –
This is a very interesting line of inquiry.
The law is still catching up with the reality, I think.
As I recall, about a year ago, a Federal judge hearing a class action suit against Hannaford Bros. disqualified all but one plaintiff.
The plaintiffs had alleged financial harm due to efforts and worries over possible identity theft. The judge initially took the view that since federal law limited financial loss to $50, and since only one of the plaintiffs (the one he allowed to stay in the suit) had demonstrated any financial damage, the other class plaintiffs had no grounds to sue.
Basically, he seemed to be saying: “No harm – no foul – no right to sue”
But then, some weeks later, the judge reversed his ruling and pushed the issue up to a higher court to decide.
I asked my daughter, a lawyer, about this, and her response was helpful. She said that without any actual damages to work from, no matter how sympathetic a judge might be, there is simply no basis for pulling a number out of the air.
I don’t know where the Hannaford matter stands today – maybe someone else can update.
Patrick
“value/loss model that meets this basic requirement for information assets”
How do you value the loss/damage of a breach of a person’s privacy impacting reputation, employment, or the simple right to privacy, etc.?
Once the information is “out there” (e.g., health information), is it persistent forever?
@ds –
There already is a framework for doing this. You don’t need two models.
It’s called FAIR. It addresses both of the issues you bring up: event frequency and loss magnitude. I don’t think you have risk without both of these.
FAIR uses Monte Carlo simulation, the structured solicitation of subject matter expert opinion, actual data when available, and probability density sampling functions to provide ranges of estimates for a variety of parameters.
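To illustrate the general technique (a bare-bones sketch, not FAIR’s actual model – the triangular distributions and all the parameters below are placeholders):

```python
# Bare-bones Monte Carlo: sample annual event frequency and per-event loss
# magnitude from (placeholder) triangular distributions, then look at the
# distribution of simulated annual losses instead of a single point estimate.
import random

def simulate(trials=100_000):
    annual_losses = []
    for _ in range(trials):
        # Expert estimate: 0 to 5 events per year, most likely 1.
        events = round(random.triangular(0, 5, 1))
        # Expert estimate per event: $10k to $500k, most likely $50k.
        annual_losses.append(sum(random.triangular(10_000, 500_000, 50_000)
                                 for _ in range(events)))
    annual_losses.sort()
    return annual_losses

losses = simulate()
print(f"Median annual loss: ${losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:    ${losses[int(len(losses) * 0.95)]:,.0f}")
```

Ranges in, ranges out – which is the whole point.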
The Jack Jones who has been active in this discussion is the originator of FAIR.
But, you don’t have to use FAIR. Just go to Wikipedia and do a little reading on risk, Monte Carlo simulation, and probability distributions.
Or, buy and read one of Doug Hubbard’s books (“How to Measure Anything” and “The Failure of Risk Management”), download his Excel sample files, and start fooling around.
You don’t have to have a PhD in statistics or Ops Research to build effective models that will help to reduce uncertainty and lead to better decisions.
It will be a good thing when more people realize how easy it is to do this kind of stuff, how defensible it is, and how useful it is.
With regard to sharing information, I wonder?
Do insurance companies share data? Perhaps someone else can clear this up.
Patrick
This fire has turned into a blaze; I haven’t seen this lively a discussion in a while. Fun!
I think I see two points playing together here. The first is a model to predict the value at risk, and the second is a model to predict the probability of a loss occurring.
Predicting the probability of a loss feels easier, and in the physical world it is. Hence, I disagree with Mike’s comment:
>> I agree that we can …
Ben,
I really like that – it speaks to the linguist in me.
Patrick
As per usual, context is everything, eh? Letters have little to no value until formed into words. Words have some value, but not nearly as much (generally) as when they’re chained to form sentences and paragraphs and so on. It’s not the representation, but the contextual interpretation or use that is important.
Kevin,
Very interesting!
I would submit that information has no value at all except in its use or mis-use.
On the positive side, it’s the value of the business process that the information enables that matters.
On the loss side, maybe it’s a bit more complicated – your business process could be compromised due to loss of information or processing capability. Or, your information could fall into the wrong hands and create any number of liability scenarios. Or both.
IT hardware assets, given the rapid pace of change, also have little value, except in their use or mis-use.
Forget about what is carried on a company’s fixed asset ledger as book and depreciated value.
Once you install a server or a data center, what is it really worth if it isn’t doing anything to support business processes? Very little.
Have you ever tried to sell a used server or a data center or software?
This point was driven home to me about 25 years ago when the company I worked for shut down suddenly. It was a small service bureau that had about $1M worth of medium-scale mainframe and DEC minicomputer hardware – that was $1M of cost carried on the books.
At auction, the $1M of hardware fetched less than $40k.
Patrick
Hi Kevin,
I believe you’re right that our business colleagues should be able to tell us (at least roughly) what the business value of the information is. That said, that information is only relevant if the scenario we’re analyzing involves either the loss of that data (as in, it goes away) or damaged integrity of the information. If the data is still in our possession and we’re still able to generate/realize its value in our business processes, then losses tend to be associated with liability (i.e., secondary loss from stakeholder reactions) and the costs associated with responding to the event. Consequently, it becomes important in our analyses to distinguish between the different types of events (confidentiality vs. integrity vs. availability).
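A toy rendering of that distinction (the categories and loss components are illustrative shorthand, not a formal taxonomy):

```python
# Which loss components apply depends on the kind of event. If the data is
# still in your possession (a confidentiality breach), its business value
# isn't lost – the losses come from liability and response instead.
LOSS_COMPONENTS = {
    "confidentiality": ["response costs", "liability / stakeholder reaction"],
    "integrity":       ["response costs", "lost business value of the data"],
    "availability":    ["response costs", "lost business value of the data"],
}

print(LOSS_COMPONENTS["confidentiality"])
```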
Jack
This is a fascinating discussion, but one thing jumps out at me. There has been quite a bit of discussion here about determining loss, the probability of bad things happening, and other associated factors when trying to determine risk, but only one comment/question was made about determining the value of our information. Not the value of loss, but the value of the information to the organization.
Call me naive, but shouldn’t our business partners be able to tell us what the value of their information is? Notice I said their information.
It would seem to me that since we see constant forecasts for revenue and expectations for profit, we (they) should be able to tie that back to the value of the information they maintain in some meaningful manner.
I’m not saying it should be or is easy, but it seems eminently reasonable to me, not that I have accomplished this in my own organization.
It is definitely giving me something to think about though.
Kevin
With regard to actual loss –
In the cases of the big breaches I have followed from 10-K filings as well as press reports, it’s clear to me that these costs unfold over time – years, in fact – and that the estimates and set-asides change and go up and down. Just take a look at the TJX 10-K’s for 2007, 2008, and 2009.
In addition, there are capex components that may be involved – accelerated spending, delayed spending, etc., that make it hard to tell what the costs are.
I don’t really agree about learning from other disciplines – although I have worked in IT for 30 years, what really turned on the lights for me with regard to risk was the 17 of those 30 years that I spent part-time in clinical outcomes research – that’s where I learned my statistics, Bayesian techniques, and decision analysis.
Since then I have accelerated my studies in statistics and quantitative risk analysis. Monte Carlo techniques and probability distributions are not hard to use correctly, even if you cannot do the math by hand.
Mike, very few things in our modern world aren’t black-box-like, wouldn’t you agree? Modern cars are a complete mystery to me, as are iPods and even refrigerators. But they work.
I am probably an exception to the rule – an old-fashioned generalist with fairly deep skill.
I never saw this coming 40 years ago when I graduated from UT Austin with a degree in Classical Greek.