One of our readers, Jon Damratoski, is putting together a DLP program and asked me for some ideas on metrics to track the effectiveness of his deployment. By ‘ask’, I mean he sent me a great list of starting metrics that I completely failed to improve on.
Jon is looking for some feedback and suggestions, and agreed to let me post these. Here’s his list:
- Number of people/business groups contacted about incidents – tie in somehow with user awareness training.
- Remediation metrics to show trend results in reducing incidents – at the start of the DLP deployment we had X events; after talking to people about incidents for 30 days, we now have Y events.
- Trend analysis over 3, 6, & 9 month periods to show how the number of events has reduced as remediation efforts kick in.
- Reduction in the average severity of an event per user, business group, etc.
- Trend: number of broken business policies.
- Trend: number of incidents related to automated business practices (automated emails).
- Trend: number of incidents that generated automatic email.
- Trend: number of incidents that were generated from service accounts (emails, batch files, etc.).
I thought this was a great start, and I’ve seen similar metrics on the dashboards of many of the DLP products.
The only one I have to add to Jon’s list is:
- Average number of incidents per user.
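If you want to start pulling these numbers yourself before they show up on a vendor dashboard, here's a minimal sketch of how a few of them could be computed from raw incident records. The field names and sample data below are hypothetical, not any particular product's export format:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical incident records -- field names and values are illustrative only.
incidents = [
    {"user": "alice", "group": "finance", "severity": 3, "date": datetime(2010, 1, 12)},
    {"user": "bob",   "group": "hr",      "severity": 1, "date": datetime(2010, 2, 3)},
    {"user": "alice", "group": "finance", "severity": 2, "date": datetime(2010, 2, 20)},
]

def average_incidents_per_user(incidents):
    """Average number of incidents per user."""
    counts = defaultdict(int)
    for inc in incidents:
        counts[inc["user"]] += 1
    return sum(counts.values()) / len(counts)

def average_severity_by_group(incidents):
    """Average severity of an event per business group."""
    totals, counts = defaultdict(float), defaultdict(int)
    for inc in incidents:
        totals[inc["group"]] += inc["severity"]
        counts[inc["group"]] += 1
    return {g: totals[g] / counts[g] for g in totals}

def monthly_trend(incidents):
    """Incident counts bucketed by month, for the 3/6/9 month trend lines."""
    buckets = defaultdict(int)
    for inc in incidents:
        buckets[inc["date"].strftime("%Y-%m")] += 1
    return dict(sorted(buckets.items()))

print(average_incidents_per_user(incidents))   # 1.5
print(average_severity_by_group(incidents))    # {'finance': 2.5, 'hr': 1.0}
print(monthly_trend(incidents))                # {'2010-01': 1, '2010-02': 2}
```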
Anyone have other suggestions?
6 Replies to “Some DLP Metrics”
Hi
I found Jon’s metrics a very good starting point, and I also appreciate the points about pattern matching.
Do you think these metrics are still sufficient going into 2014, with the uptake of cloud-based collaboration software?
Kind regards
You can’t accurately measure it, but I do think there’s some value in those figures. At the end of the day we need to present numbers the business understands for the preventive controls in place, and what better metric is there than potential loss prevented? Reputation damage is a tough one, but 201 CMR 17 (the Mass. data breach law) and HITECH do have discrete per-record fines that can be used.
I hate that per-record figure. It’s pretty much fiction, and really easy to poke holes in. A big chunk of the number is “reputation damage”, which you can’t really measure anyway.
These are similar to the metrics we are trying to develop in our environment. Our cleanup centered on the protocol level, so reporting on those elements by number of events is important.
I’ve also been toying with the idea of converting the total number of individual records into a single loss-avoidance dollar figure, using the ~$200 per record estimate. I’m never sure, though, whether those kinds of numbers have an impact with the business.
My two cents: the first three are the most critical, and would be the easiest to get and to explain if I presented them to my CIO. Organizations are different, so that doesn’t necessarily make them the right ones; they’re just what I would personally focus on. (And I’ll probably apply some numbers to this myself, since I’ve got about nine months of data and have been thinking about this very topic.)
Thanks for the post, Rich, and to Jon for sharing.
Cheers
A couple of ideas:
– Identify target values to define success, not just trends. You can change them later as long as you’re transparent about it.
– You do a good job of showing benefit; you might also consider measuring cost/efficiency, e.g. the number or % of false positives, or time spent on investigation.
If the scope of the deployment above is scrubbing email with pattern matching, and assuming you have an approved service for transferring sensitive data, you might want to show the # of users/processes migrated to [secure transfer service].
If your scope includes scanning repositories for data:
– % environment scanned per [period] to show progress of coverage
If you have a policy and a solution defining where sensitive data is supposed to reside:
– % of sensitive data residing in managed repositories. This might be a great one to tie into incident impact.
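To make the cost/efficiency and coverage suggestions above concrete, here is a rough back-of-the-envelope sketch. All figures are purely illustrative; substitute the numbers from your own DLP console:

```python
# All figures are purely illustrative -- swap in your own DLP console's numbers.
alerts_total = 1200          # alerts raised this quarter
alerts_false_positive = 300  # alerts closed as false positives
repos_total = 80             # repositories in scope for discovery scanning
repos_scanned = 52           # repositories scanned so far
records_total = 2_000_000    # sensitive records discovered anywhere
records_managed = 1_400_000  # sensitive records in approved/managed repositories

print(f"False positive rate: {alerts_false_positive / alerts_total:.0%}")         # 25%
print(f"Environment scanned: {repos_scanned / repos_total:.0%}")                  # 65%
print(f"Sensitive data in managed repos: {records_managed / records_total:.0%}")  # 70%
```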