
Introducing Project Quant

Much to our own surprise, we’ve been doing a lot of work on security metrics over the past year. From our work with Mozilla to the Business Justification for Data Security, we’ve found ourselves slicing and dicing numbers and methodologies to develop tools that provide a little more insight into managing security operations, help communicate with the business side, and justify where and what we spend on security. Security, like any maturing industry, is about more than banging away on the latest technologies. We need methodologies to help us optimize our programs, prioritize our efforts, and make sound business (of security) decisions. And I don’t mean only justification metrics to communicate risk and value to get budgets, but internal, operational measurements that directly apply to daily security decisions.

That’s why I’m excited to announce we were approached by Jeff Jones at Microsoft to work with him on a new project around the metrics of patch management. We are handling this one very differently than our other projects, and it’s as much an experiment with a new research process as it is one of security metrics.

As you know, we are incredible sticklers about our objectivity and producing research that’s free of bias (well, except for our own biases). For our other projects, even when they were sponsored by vendors, the sponsor wasn’t involved in creating the research at all. For this project Jeff wanted to be involved, but he also asked for an open, unbiased model that will be useful to the community at large (in other words, he didn’t ask for a sales tool). Rather than having us develop something back at the metrics lab, Jeff asked us to lead an open community project with as much involvement from different corners of the industry as possible.

We feel this fits with our Totally Transparent Research process where all the research is developed out in the open, and everyone gets to contribute, comment on, and review the content during development. We feel this is the best way to reduce bias, and even if there is bias, at least there’s a paper trail. Yes, it’s risky for us to allow direct involvement of the sponsor, but we’re hoping that the process works as we think it will, which also happens to match Microsoft’s project goals.

In this post we’ll describe the process, and in the next post we’ll detail the project goals. We’d like feedback on both, since that helps keep us on track. We’re totally serious: none of us wants a biased or narrowly useful result, and we wouldn’t participate in this project if we didn’t feel we could provide something of value to the community that also fits with our role as independent analysts. Here’s how the process will work:

  1. We are establishing a project landing site here at Securosis which will contain all material and research as it is developed. Right now we have comments set up for feedback, and we should have that switched over to a forums system very soon. [DONE]
  2. Every piece of research will be posted for public comment. No comments will be filtered unless they are spam, totally off topic, or personal insults. On the off chance you don’t see your comment right after posting, it may have gotten stuck in our blog spam filters, so please email me directly to pull it out.
  3. Everyone is encouraged to comment and contribute – including competing vendors – and anonymous comments are supported. We only ask that if you are a vendor with skin in the game (a product related to patch management) that you identify yourself (we’ll call you out if we think you aren’t being open).
  4. All significant contributors will be acknowledged in the final report. The downside is that we won’t be able to compensate you financially, and the project itself will retain ownership rights. Someday we’ll figure out a better way to handle that, and suggestions are appreciated.
  5. All material will be released under a Creative Commons license (TBD).
  6. Spreadsheets will be released in both Excel and open formats. Other written documents will be released as PDFs (no, PDF isn’t technically open, but if you have a real problem with it, email me).
  7. On the back end, we are tagging, archiving, and making public all our project-related emails. We won’t be recording phone calls, but will be releasing meeting notes.
  8. All materials will be consolidated on the project site, with major deliverables also posted to the Securosis blog.

In short, we are developing all research out in the open, soliciting community involvement at every stage, making all the materials public, acknowledging contributors, and eventually releasing the final results for free and public use. The end goal of the project is to deliver a metrics model for patch management response to help organizations assess their costs, optimize their process, and achieve their business goals.
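To make the end goal a little more concrete, here is a minimal sketch of the kind of per-cycle cost metric such a model might produce. The phase names, hours, and hourly rate below are purely illustrative assumptions for this example; they are not part of any actual Project Quant model.

```python
# Hypothetical sketch of a patch-management cost metric.
# All phase names, hours, and the hourly rate are illustrative
# assumptions, not the project's actual model.

HOURLY_RATE = 75.0  # assumed fully loaded cost of one operations hour

# Example: hours spent per patch cycle, broken down by process phase
phase_hours = {
    "monitor_advisories": 2.0,
    "evaluate_and_prioritize": 3.5,
    "test": 6.0,
    "deploy": 4.0,
    "verify": 1.5,
}

def patch_cycle_cost(hours_by_phase, rate=HOURLY_RATE):
    """Total labor cost of one patch cycle, summed across all phases."""
    return sum(hours_by_phase.values()) * rate

total = patch_cycle_cost(phase_hours)
print(f"Cost per patch cycle: ${total:,.2f}")  # 17.0 hours * $75 = $1,275.00
```

Breaking the cost down by phase (rather than tracking a single lump sum) is what would let an organization see which step of its patch process is the expensive one to optimize.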

Let us know what you think, even if you think we’re just full of it…

(Oh, and we’re not totally thrilled with the project name, so please send us better ideas.)

—Rich


Comments:

By Wolfgang Kandek  on  04/15  at  11:14 PM

Rich,

sounds like a great project. I am looking forward to seeing the first material and will be very interested in contributing.

-
Wolfgang

By Chris Green  on  04/16  at  09:26 AM

This is a very good thing to be doing.  I struggle with several metrics-related issues when it comes to doing patch/vulnerability management in the higher-education realm.  We currently group vulnerabilities by affected area and track (issue discovered, issue last seen) dates from a couple different platforms.

Things I’ve not adequately dealt with:

- # of vulnerabilities versus # managed versus asset criticality versus compensating controls.
- # of vulnerabilities versus scanning capabilities. If I have local scanning credentials, things can look worse than for an area where I don’t have those capabilities. Normalizing this data gets very hairy.

By Sean Bodmer  on  03/24  at  07:20 AM

This sounds long overdue… With these metrics it will be easier to explain to leadership the costs of proactive security versus reactive security, which has been the de facto standard for years… Glad someone finally stood up to do something in this area!
