We have spent much of this series on why benchmarking is important. But we also need to point out some situations where benchmarking may not be appropriate. There are clearly situations where you can't benchmark, particularly with granular operational data, which I call Ninja Metrics.
Dependency: Peer Group Data
Most organizations have 'nascent' metrics programs, which may actually be too kind. But not all. Some have embraced detailed programs that gather all sorts of data, mostly focused on operations. This represents the next step of a metrics program, and is illustrated by some of the ideas put forth through our Quant research projects. We have created highly granular process maps (with associated metrics) for Patch Management, Network Security Operations, and Database Security. Each report specifies 50+ distinct metrics you can measure for that discipline. Yes, they are comprehensive.
But there is a clear issue regarding benchmarking at this level. You will have a hard time finding similarly granular data from other companies for comparison. So the key dependency in implementing a benchmarking effort is the availability of peer group data for comparison.
Compare to Yourself
What do world class athletes do when they reach the top of the heap? You know, folks like Michael Phelps, who has basically shattered every record there is to shatter. They start comparing themselves to their past performance. Improvement is measured internally rather than externally. Even if no one else has ever done better, you know you can. And this is what you will likely need to do for the most granular operational functions.
When you take a step back, this makes a lot of sense. The reality is that you aren't necessarily trying to 'win' at operational excellence. You want to improve. That said, it is important to have an idea of where you stand in comparison to everybody else, at least on the high-level operational metrics. But for the most granular metrics, not so much.
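To make the internal comparison concrete, here is a minimal sketch of benchmarking a metric against your own history. The metric name and numbers are hypothetical; the idea is simply to track percent change against your own baseline rather than a peer group.

```python
# Minimal sketch of internal benchmarking: compare this period's value
# of an operational metric against your own historical baseline.
# Metric names and numbers below are hypothetical examples.

def internal_benchmark(history, current):
    """Return percent change of `current` versus the mean of `history`.

    For a 'lower is better' metric (like time to patch), a negative
    result means you improved on your own baseline.
    """
    baseline = sum(history) / len(history)
    return (current - baseline) / baseline * 100

# Example: mean days to deploy a critical patch, last four quarters
past_quarters = [14.0, 12.5, 11.0, 10.5]
this_quarter = 9.0

change = internal_benchmark(past_quarters, this_quarter)
print(f"{change:+.1f}% vs. internal baseline")  # prints "-25.0% vs. internal baseline"
```

The point is not the arithmetic, which is trivial, but the discipline: pick a metric, record it consistently, and compare each period to your own trend line until peer data exists.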
We hope that over time enough companies will start tracking granular operational metrics, and become comfortable enough with benchmarking, to share their data. But that’s not going to happen tomorrow or even the day after. In the meantime you can (and should) continue to push your metrics program forward – just understand your comparisons may need to be internal.
As we wrap up the Benchmarking series, we’ll look at how to get some Quick Wins and see the process in action.