Adrian has been out sick with the flu all week. He claims it’s just the normal flu, but I swear he caught it from those bacon bits I saw him putting on his salad the other day. Either that, or he’s still recovering from last week’s Buffett outing. He also conveniently timed his recovery with his wife’s birthday, which I consider to be entirely too suspicious for mere coincidence.

While Adrian was out, we passed a couple of milestones with Project Quant. I think we've finally nailed a reasonable start on defining a patch management process, and I've drafted a sample of our first survey. We could use some feedback on both of these if you have the time. Next week will be dedicated to breaking out all the patch management phases and mapping out specific sub-processes. Once we have those, we can start defining the individual metrics. I've taken a preliminary look at the Center for Internet Security's Consensus Metrics, and I don't see any conflicts (or too much overlap), which is nice.

When we look at security metrics, we see that most fall into two broad categories. On one side are the fluffy (and thus crappy) risk/threat metrics we spend a lot of time debunking on this site. They are typically designed to feed into some sort of ROI model, and don't really have much to do with getting your job done. I'm not calling all risk/threat work crap, just the models that like to put a pretty summary number at the end, usually with a dollar sign, but without any strong mathematical basis.

On the other side are broad metrics like the Consensus Metrics, designed to give you a good snapshot of the overall management of your security program. These aren't bad; they're often quite useful when applied properly, and can give you a view of how you are doing at the macro level.

The one area where we haven't seen a lot of work in the security community is operational metrics. These are deep-dive, granular models that measure operational efficiency in specific areas to help improve the associated processes. That's what we're trying to do with Quant: take one area of security, and build out metrics at a detailed enough level that they don't just give you a high-level overview, but help identify specific bottlenecks and inefficiencies. These kinds of metrics are far too detailed to achieve the high-level goals of programs like the Consensus Metrics, but are far more effective at benchmarking and improving the processes they cover.

In my ideal world we would have a series of detailed metrics like Quant feeding into overview models like the Consensus Metrics. We'd have our broad program benchmarks, as well as detailed models for individual operational areas. My personal goal is to use Quant to really nail one area of operational efficiency, then grow out into neighboring processes, each with its own model, until we map out as many areas as possible. Pick a spot, perfect it, move on.
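To make the layered idea concrete, here is a rough sketch of granular operational metrics rolling up into a single program-level number. The metric names, targets, and scoring scheme are purely hypothetical illustrations, not anything from Quant or the Consensus Metrics.

```python
# Hypothetical sketch: granular patch management metrics (hours per phase)
# scored against targets, then rolled up into one program-level benchmark.
# All names, values, and weights here are illustrative only.

patch_metrics = {
    "hours_to_identify_patch": 4.0,
    "hours_to_test_patch": 36.0,
    "hours_to_deploy_patch": 72.0,
}

targets = {
    "hours_to_identify_patch": 8.0,
    "hours_to_test_patch": 24.0,
    "hours_to_deploy_patch": 48.0,
}

def rollup(metrics, targets):
    """Score each operational metric against its target (1.0 means at or
    better than target), then average the scores into one high-level number."""
    scores = {
        name: min(1.0, targets[name] / value)
        for name, value in metrics.items()
    }
    program_score = sum(scores.values()) / len(scores)
    return scores, program_score

scores, program_score = rollup(patch_metrics, targets)
# The per-metric scores expose the specific bottlenecks (testing and
# deployment here), while program_score feeds the macro-level view.
```

The point of the two-level output is exactly the split described above: the detailed scores are what you use to find and fix bottlenecks, while the rolled-up number is what feeds a broad program benchmark.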

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment was by Jim Heitela in response to Security Requirements for Electronic Medical Records:

Good suggestions. The other industry movement that really will amplify the need for healthcare organizations to get their security right is regional/national healthcare networks. A big portion of the healthcare IT $ in the Recovery Act are going towards establishing these networks, where the security of EPHI will only be as good as the weakest accessing node. Establishing adequate standards for partners in these networks will be pretty key. And, also thanks to changes that were started as a part of the Recovery Act, healthcare organizations are now being required to actually assess 3rd party risk for business associates, versus just getting them to sign a business associate agreement. Presumably this would be anyone in a RHIO/RHIN.