One thing that’s really tweaked me over the years when evaluating data breaches is the complete lack of consistency in cost reporting. On one side we have reports and surveys coming up with “per record” costs, often without any transparency as to where the numbers came from. On the other side are those who try to look at lost share value, or at losses public companies report directly in their financial statements, but I think we all know how inconsistent those numbers are as well.
Also, from what I can tell, in most of the “per record” surveys the biggest chunk (by far) is fuzzy soft costs like “reputation damage”. Not that there are no losses due to reputation damage, but I’ve never seen any justified model that accurately measures those costs over time. Take TJX, for example – they grew sales after their breach.
So here’s a modest proposal for how we could break out breach costs in a more consistent manner:
Per Incident (Hard Costs):
- Incident investigation
- Incident remediation/recovery
- PR/media relations costs
- Optional: Legal fees
- Optional: Compliance violation penalties
- Optional: Legal settlements
Per Record (Hard Costs):
- Notification costs (list creation, printing, postal fees).
- Optional: Customer response costs (help desk per-call costs).
- Optional: Customer protection costs (fraud alerts, credit monitoring).
Per Incident (Soft Costs – i.e., not always directly attributable to the incident): Trending is key here – especially trends that predate the incident.
- Customer Churn (% increase over trailing 6 month rate): 1 week, 1 month, 6 months, 12 months, n months.
- Stock Hit (not sure of best metric here, maybe earnings per share): 1 week, 1 month, 6 months, 12 months, n months.
- Revenue Impact (compared to trailing 12 months): 1 week, 1 month, 6 months, 12 months, n months.
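To make the hard-cost side of this breakdown concrete, here’s a minimal sketch of how the per-incident and per-record categories might be tallied. The field names and dollar figures are illustrative assumptions on my part, not part of the proposal itself:

```python
# Sketch of the proposed breach-cost breakdown.
# All field names and dollar figures are hypothetical examples.
from dataclasses import dataclass


@dataclass
class BreachCosts:
    # Per-incident hard costs
    investigation: float = 0.0
    remediation: float = 0.0
    pr_media: float = 0.0
    legal_fees: float = 0.0            # optional
    compliance_penalties: float = 0.0  # optional
    settlements: float = 0.0           # optional
    # Per-record hard costs (cost per affected record)
    notification_per_record: float = 0.0
    response_per_record: float = 0.0    # optional (help desk)
    protection_per_record: float = 0.0  # optional (credit monitoring)
    records_lost: int = 0

    def per_incident_total(self) -> float:
        return (self.investigation + self.remediation + self.pr_media
                + self.legal_fees + self.compliance_penalties + self.settlements)

    def per_record_total(self) -> float:
        return self.records_lost * (self.notification_per_record
                                    + self.response_per_record
                                    + self.protection_per_record)

    def hard_cost_total(self) -> float:
        return self.per_incident_total() + self.per_record_total()


# Hypothetical 100,000-record breach:
breach = BreachCosts(investigation=250_000, remediation=400_000, pr_media=50_000,
                     notification_per_record=1.50, protection_per_record=10.00,
                     records_lost=100_000)
print(breach.hard_cost_total())  # 700,000 per incident + 1,150,000 per record
```

The point of separating the two methods is visible in the math: per-incident costs are fixed regardless of breach size, while per-record costs scale with the number of records, so reporting one blended “per record” number conflates them.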
I tried to break them out into hard and soft costs (hard being directly tied to the incident, soft being polluted by other factors). Also, I recognize that not every organization can measure every category for every incident.
Not that I expect everyone to magically adopt this for standard reporting, but until we transition to a mechanism like this we don’t have any chance of really understanding breach costs.
7 Replies to “Creating a Standard for Data Breach Costs”
@Rich:
Regarding Adam’s point – the other problem with measuring over a wider period is that the longer you observe, the greater the likelihood that some event other than the breach is impacting share price. Also, depending on which economic theory you buy into, you may not have sound reason to expect it to take longer than a few days for the new information represented by the breach to be reflected in share prices. A decent overview of the methodological issues is available at http://tinyurl.com/mcqpzu (it’s a PDF version of a 1997 article from the Journal of Economic Literature).
May I suggest some additional ideas for “Per Incident”? They are not necessarily simple to quantify.
a) Decreased employee morale and therefore higher staff churn; thus loss of business knowledge and greater recruitment costs. Good employees are perhaps more likely to leave first.
b) We’ve also seen a hidden side-effect in data quality. Future/new customers may trust the organisation less, and thus provide reduced or inaccurate data (as far as they can) in their dealings with the organisation that had the breach. If this is used to support future business decisions, it can have a long-lasting detrimental effect.
c) Increased insurance premiums (e.g. cyber liability insurance), or transactional costs (e.g. credit card charges), or loss of licence (e.g. to trade). These might be in your “Compliance violation penalties” category.
d) A business’s resources are always finite, and the time/effort to respond to the breach will almost certainly delay or cancel other business activities and development, which may have an effect on future profitability.
e) Reduced investor confidence leading to difficulties in raising finance, or an increased cost of this.
Rich,
I have sent you a lengthy response offline regarding FAIR and other issues.
The last time I tried to post something lengthy here it was classified as spam and dumped.
Adam,
If it’s noise after only a few days, is it even relevant data? If we can’t attribute any stock-price impact to the breach beyond a few days, it seems like it then doesn’t matter?
Hi Rich,
On the stock hit, you’ll need to measure in days, at least according to Acquisti, Telang and Friedman’s 2005/6 work. A month later, these things are noise.
For customer & revenue churn, I think it’s key to know not only trending, but volatility of those numbers before the event disclosure.
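The point about pre-disclosure volatility can be sketched like this: with made-up monthly churn figures, a post-breach reading only counts as signal if it falls well outside the trailing mean relative to the trailing standard deviation. The numbers and the two-sigma threshold are illustrative assumptions:

```python
# Sketch: is post-breach churn outside pre-breach volatility?
# All figures and the 2-sigma threshold are made up for illustration.
from statistics import mean, stdev

pre_breach_churn = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4]  # monthly churn %, trailing 6 months
post_breach_churn = 3.5                             # first month after disclosure

baseline = mean(pre_breach_churn)      # trailing average churn
volatility = stdev(pre_breach_churn)   # how noisy churn was before the breach

# Standardize the post-breach reading against the pre-breach baseline
z = (post_breach_churn - baseline) / volatility
print(f"baseline={baseline:.2f}%, volatility={volatility:.2f}, z={z:.1f}")
if z > 2:
    print("Churn increase is outside normal variation; plausibly breach-related.")
```

A churn series that routinely swings by half a point makes a half-point post-breach bump meaningless, which is exactly why trend alone (without volatility) can mislead.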
Patrick,
Good point. I’m trying to keep this focused on the narrow measurements in the victim organization, since the issues get far more complex (and fuzzier) when we expand out of that.
How well is FAIR aligning?
I have been working on something similar.
Distinguishing between those costs that make sense on a per record basis and those that make sense on a per incident basis is very important.
Some breaches can be measured by records lost. Others, like IP theft, cannot be measured in that way, so it’s important to take that into consideration, too.
As a guide, I am using the breakout of loss magnitude provided by FAIR – primary and secondary losses – and the various categories within each.
Also, when talking about the “cost” of a data breach, it’s important to recognize that a number of parties might have costs – the breached entity, business partners, customers, individuals, law enforcement (hence the public at large), shareholders, etc.
So it also becomes a question of whose costs we are talking about.
Best –
Patrick Florer
Dallas