Just before release, the Center for Internet Security sent us a preview copy of the CIS Consensus Metrics. I’m a longtime fan of the Center, and once I heard they were starting this project, I was looking forward to the results.

Overall, I think they did a solid job on a difficult problem. Is it perfect? Is it complete? No, but it’s a heck of a good start. A couple of things stand out:

  • They do a great job of interconnecting different metrics, and showing you how you can leverage a single collected data attribute across multiple higher-level metrics. For example, a single “technology” (product/version) is used in multiple places, for multiple metrics. It’s clear they’ve designed this to support a high degree of automation across multiple workflows, supporting technologies, and operational teams.
  • I like how they break out data attributes from security metrics. Attributes are the feeder data sets we use to create the security metrics. I’ve seen other systems that intermix the data with the metrics, creating confusion.
  • Their selected metrics are a reasonable starting point for characterizing a security program. They don’t cover everything, but that makes it more likely you can collect them in the first place. They make it clear this is a start, with more metrics coming down the road.
  • The metrics are broken out by business function – this version covers incident management, vulnerability management, patch management, application security, configuration management, and financial.
  • The metric descriptions are clear and concise, and show the logic behind them. This makes it easy to build your own moving forward.

There are a few things that could also be improved:

  • The data attributes are exhaustive. Without automated tool support, they will be very difficult to collect.
  • The document suggests prioritization, but doesn’t provide any guidance. A companion paper would be nice.

This isn’t a mind-bending document, and we’ve seen many of these metrics before, but not usually organized together, freely available, well documented, or from a respected third party. I highly recommend you go get a copy.

Now on to the CIS Consensus Metrics and Project Quant

I’ve had some people asking me if Quant is dead thanks to the CIS metrics. While there’s the tiniest bit of overlap, the two projects have different goals, and are totally complementary. The CIS metrics are focused on providing an overview of an entire security program, while Quant is focused on building a detailed operational metrics model for patch management. In terms of value, Quant should provide:

  1. Detailed costs associated with each step of a patch management process, and a model to predict costs associated with operational changes.
  2. Measurements of operational efficiency at each step of patch management to identify bottlenecks/inefficiencies and improve the process.
  3. Overall efficiency metrics for the entire patch management process.
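As a back-of-the-envelope illustration of the first two goals, a per-step cost model only takes a few lines of Python. The step names, hours, and labor rate below are hypothetical, not taken from Quant or CIS:

```python
# Hypothetical per-step cost model for one patch management cycle.
# Step names, hours, and the hourly rate are illustrative assumptions.
steps_hours = {
    "monitor_advisories": 4.0,
    "evaluate_patch": 2.0,
    "test_patch": 8.0,
    "deploy_patch": 6.0,
    "verify_deployment": 3.0,
}
hourly_rate = 75.0  # assumed fully loaded labor cost, in dollars

# Goal 1: detailed cost per step, plus a total for the whole process.
step_costs = {name: hours * hourly_rate for name, hours in steps_hours.items()}
total_cost = sum(step_costs.values())

# Goal 2: the per-step breakdown is what exposes bottlenecks.
bottleneck = max(step_costs, key=step_costs.get)

print(total_cost)   # 23 hours * $75 = 1725.0
print(bottleneck)   # test_patch
```

Once each step is measured separately, predicting the effect of an operational change is just a matter of adjusting one entry and recomputing the total.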

CIS and Quant overlap for the last goal, but not for the first two. If anything, Quant will be able to feed the CIS metrics. The CIS metrics for patch management include:

  • Patch Policy Compliance
  • Patch Management Coverage
  • Mean Time to Patch

I strongly suspect all of these will appear in Quant, but we plan to dig into much greater depth to help the operational folks directly measure and optimize their processes.
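To make those three CIS patch metrics concrete, here is a minimal sketch of how they might be computed from raw patch records. The data structure and numbers are hypothetical; consult the CIS document for the actual metric definitions:

```python
from datetime import datetime

# Hypothetical patch records: (release_date, install_date or None if not
# yet installed). Real systems would track this per host and per patch.
patch_records = [
    (datetime(2009, 4, 1), datetime(2009, 4, 6)),
    (datetime(2009, 4, 1), datetime(2009, 4, 11)),
    (datetime(2009, 4, 1), None),  # still unpatched
]

def patch_coverage(records):
    """Fraction of tracked patches that have actually been installed."""
    installed = [r for r in records if r[1] is not None]
    return len(installed) / len(records)

def mean_time_to_patch(records):
    """Average days from release to installation, over installed patches."""
    deltas = [(done - rel).days for rel, done in records if done is not None]
    return sum(deltas) / len(deltas)

print(round(patch_coverage(patch_records), 2))  # 2 of 3 installed: 0.67
print(mean_time_to_patch(patch_records))        # (5 + 10) / 2 = 7.5 days
```

Patch policy compliance would layer a policy check on top (e.g., was each patch installed within its mandated window), which is why collecting the underlying data attributes once lets you feed several metrics at once.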
