Adrian has been out sick with the flu all week. He claims it’s just the normal flu, but I swear he caught it from those bacon bits I saw him putting on his salad the other day. Either that, or he’s still recovering from last week’s Buffett outing. He also conveniently timed his recovery with his wife’s birthday, which I consider to be entirely too suspicious for mere coincidence.
While Adrian was out, we passed a couple milestones with Project Quant. I think we’ve finally nailed a reasonable start to defining a patch management process, and I’ve drafted up a sample of our first survey. We could use some feedback on both of these if you have the time. Next week will be dedicated to breaking out all the patch management phases and mapping out specific sub-processes. Once we have those, we can start defining the individual metrics. I’ve taken a preliminary look at the Center for Internet Security’s Consensus Metrics, and I don’t see any conflicts (or too much overlap), which is nice.
When we look at security metrics, we see that most fall into two broad categories. On one side are the fluffy (and thus crappy) risk/threat metrics we spend a lot of time debunking on this site. They are typically designed to feed into some sort of ROI model, and don’t really have much to do with getting your job done. I’m not calling all risk/threat work crap, just the models that like to put a pretty summary number at the end, usually with a dollar sign but without any strong mathematical basis.
On the other side are broad metrics like the Consensus Metrics, designed to give you a good snapshot of the overall management of your security program. These aren’t bad; they are often quite useful when applied properly, and can give you a view of how you are doing at the macro level.
The one area where we haven’t seen a lot of work in the security community is operational metrics. These are deep-dive, granular models that measure operational efficiency in specific areas to help improve the associated processes. That’s what we’re trying to do with Quant: take one area of security and build out metrics at a detailed enough level that they don’t just give you a high-level overview, but help identify specific bottlenecks and inefficiencies. These kinds of metrics are far too detailed to achieve the high-level goals of programs like the Consensus Metrics, but they are far more effective at benchmarking and improving the processes they cover.
In my ideal world, we would have a series of detailed metrics like Quant feeding into overview models like the Consensus Metrics. We’d have our broad program benchmarks, as well as detailed models for individual operational areas. My personal goal is to use Quant to really nail one area of operational efficiency, then grow out into neighboring processes, each with its own model, until we map out as many areas as possible. Pick a spot, perfect it, move on.
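To make that distinction concrete, here’s a toy sketch (in Python, with phase names and hours I made up purely for illustration, not the actual Quant phase breakdown) of the kind of granular measurement we’re after: time each phase of the patch cycle, and the bottleneck jumps right out, which a single program-level score will never show you.

```python
# Toy data: hours spent in each patch management phase for three recent patches.
# Phase names and numbers are purely illustrative, not the real Quant breakdown.
patch_cycles = [
    {"identify": 2, "evaluate": 6, "test": 30, "deploy": 10, "verify": 4},
    {"identify": 1, "evaluate": 8, "test": 44, "deploy": 12, "verify": 3},
    {"identify": 3, "evaluate": 5, "test": 38, "deploy": 9,  "verify": 5},
]

phases = patch_cycles[0].keys()
averages = dict((p, sum(c[p] for c in patch_cycles) / float(len(patch_cycles)))
                for p in phases)
total = sum(averages.values())

# Show each phase's average time and its share of the overall cycle, biggest first.
for phase, hours in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print("%-10s %5.1f hours  (%2.0f%% of cycle)" % (phase, hours, 100 * hours / total))
```

Trivial, yes, but once you have that breakdown per process you can benchmark it, compare it across teams, and roll it up into the kind of program-level views the Consensus Metrics provide.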
And now for the week in review:
Webcasts, Podcasts, Outside Writing, and Conferences
- Martin and I cover a diverse collection of stories in Episode 151 of the Network Security Podcast.
- I wrote up the OS X Java vulnerability for TidBITS.
- I was quoted at MacNewsWorld on the same issue.
- Another quote, this time in eWeek, on “data for ransom” schemes.
- Dark Reading covered Project Quant in its post on the Center for Internet Security’s Consensus Metrics.
Favorite Securosis Posts
- Rich: The Pragmatic Data (Information-Centric) Security Cycle. I’ve been doing a lot of thinking on more practical approaches to security in general, and this is one of the first outcomes.
- Adrian: I’ve been feeling foul all week, and thus am going with the lighter side of security – I Heart Creative Spam.
Favorite Outside Posts
- Adrian: Yes, Brownie himself is now considered a cybersecurity expert. Or not.
- Rich: Johnny Long, founder of Hackers for Charity, is taking a year off to help the impoverished in Africa. He’s quit his job, and no one is paying for this. We just made a donation, and you should consider giving if you can.
Top News and Posts
- Good details on the IIS WebDAV vulnerability by Thierry Zoller.
- Hoff on the cloud and the Google outage.
- Imperva points us to highlights of practical recommendations from the FBI and Secret Service on reducing financial cybercrime.
- Oops – the National Archives lost a drive with sensitive information from the Clinton administration. As usual, lax controls were the cause.
- Some solid advice on controlling yourself when you really want that tasty security job. You know, before you totally piss off the hiring manager.
- We bet you didn’t know that Google Chrome was susceptible to the exact same vulnerability as Safari in the Pwn2Own contest. That’s because they both use WebKit.
- Adobe launches a Reader and Acrobat security initiative. New incident response, patch cycles, and secure development efforts. This is hopefully Adobe’s equivalent of Microsoft’s Trustworthy Computing Initiative.
Blog Comment of the Week
This week’s best comment was by Jim Heitela in response to Security Requirements for Electronic Medical Records:
Good suggestions. The other industry movement that really will amplify the need for healthcare organizations to get their security right is regional/national healthcare networks. A big portion of the healthcare IT $ in the Recovery Act are going towards establishing these networks, where the security of EPHI will only be as good as the weakest accessing node. Establishing adequate standards for partners in these networks will be pretty key. And, also thanks to changes that were started as a part of the Recovery Act, healthcare organizations are now being required to actually assess 3rd party risk for business associates, versus just getting them to sign a business associate agreement. Presumably this would be anyone in a RHIO/RHIN.
One Reply to “Friday Summary – May 22, 2009”
‘Recovered’ might be optimistic, but my wife expects me to be upright for her birthday. I am getting a t-shirt that says “I Survived Swine Flu”. I owe it all to lemon juice and bourbon.
-Adrian