
Project Quant: Goals

In our last post we introduced the overall idea behind this project, and the Totally Transparent Research process we will follow. Now it’s time to describe the project in a little more detail and lay out our overall goals. As with everything else in this project, the goals aren’t just open for comment and debate; feedback, both positive and negative, is actively encouraged.

Objective: The objective of Project Quant is to develop a cost model for patch management that accurately reflects the financial and resource costs associated with evaluating and deploying software updates.

Additional Detail: As part of maintaining their technology infrastructure, organizations of all sizes deploy software updates and patches. The goal of this project is to provide a framework for evaluating the costs of patch management, while providing information to help optimize the associated processes. The model should apply to organizations of different sizes, circumstances, and industries. Since patch management processes vary throughout the industry, Project Quant will develop a generalized model that reflects best practices and can be adapted to different circumstances. The model will encompass the process from monitoring for updates through confirming complete rollout of the software updates, and should apply to both workstations and servers. The model should be unbiased and vendor-neutral.

Deliverables: The end deliverable will include a written report and a spreadsheet-based model. Additional written material and presentations may be developed to support the project goals.

Research Process: All materials will be made publicly available throughout the project, including internal communications (the Totally Transparent Research process). The model will be developed through a combination of primary research, surveys, focused interviews, and public/community participation. Survey results and interview summaries will be posted on the project site, but certain materials may be anonymized to respect the concerns of interview subjects. All interviewees and survey participants will be asked if they wish their responses to remain anonymous, and details will only be released with consent. Securosis and Microsoft may use their existing customers and contacts for focused interviews and surveys, but will also release public calls for participation to minimize bias due to participant selection.

Deadline: The project deliverables should be released in the June timeframe.

We’re thinking the model will start with monitoring for updates and move through evaluation, testing, and eventual rollout. It should include all different kinds of updates, reflect operational realities, and even include options for skipping patches or outsourcing. It should account for personnel/resource costs, downtime, and all the other minutiae we know affect patch management. We think we may end up having to define some roles, unless we can find something that’s somewhat standardized already out there.
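
As a very rough illustration of the shape we have in mind (the phase names, rates, and numbers below are placeholders we made up for this post, not the model itself), the per-patch cost might roll up something like this:

    # Rough sketch of a per-patch cost rollup. Phase names, rates, and hour
    # estimates are illustrative placeholders, not the actual model.

    PHASES = ["monitor", "evaluate", "test", "deploy", "confirm"]

    def patch_cost(hours_by_phase, hourly_rate, downtime_hours=0.0,
                   downtime_cost_per_hour=0.0):
        """Total cost of one patch cycle: labor across phases plus downtime."""
        labor = sum(hours_by_phase.get(p, 0.0) for p in PHASES) * hourly_rate
        return labor + downtime_hours * downtime_cost_per_hour

    # Example: a server patch that touches all five phases.
    print(patch_cost({"monitor": 0.5, "evaluate": 1.0, "test": 4.0,
                      "deploy": 2.0, "confirm": 1.0}, hourly_rate=75.0,
                     downtime_hours=0.5, downtime_cost_per_hour=1000.0))

The real model will obviously need far more inputs than this, but the basic idea is labor across defined phases plus the operational costs (like downtime) that each patch incurs.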

Our next step is to develop the macro version of the model, which will likely focus on identifying the patch management process and what’s included at each phase. To support this, we plan on conducting interviews and will release a call for participation. We’ll also post our proposed interview questions for feedback before we actually start talking with people. Then we’ll post the results with our first overview and seek public feedback.

So let us know what you think, and we should be back soon with the survey questions and our first general directions for the model. Keep in mind that since we’re working totally out in the open, most of what you see won’t be even close to polished and should always be considered work in progress.

—Rich


Comments:

By Adam Shostack  on  04/15  at  01:02 PM

Back in 2002, Steve Beattie, Seth Arnold, Crispin Cowan, Perry Wagle, Chris Wright, and I presented a paper, “Timing the Application of Security Patches for Optimal Uptime,” at the 16th USENIX Systems Administration Conference (LISA ’02).

The paper makes the point that patch management is a tradeoff between downtime due to attacks and downtime due to instability.  I’d encourage you to look at the paper as you build your models, and consider if our framework works for you.

http://www.homeport.org/~adam/time-to-patch-usenix-lisa02.pdf

(Total transparency: I work for Microsoft, but am not involved in this from a MS perspective.  Crispin also works for Microsoft, but wasn’t even involved in this blog comment.)



By Rich  on  04/15  at  01:25 PM

Thanks Adam- looks like that will be *extremely* helpful.
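
For anyone following along, here’s a toy sketch of the tradeoff the paper formalizes (my own made-up framing and numbers, not the paper’s actual model): waiting to patch reduces the risk of applying a bad patch, but accumulates exposure to attacks.

    import math

    def expected_downtime(patch_day, p_bad_patch, bad_patch_hours,
                          p_attack_per_day, attack_hours, recall_rate=0.2):
        """Expected downtime (hours) from patching patch_day days after release."""
        # Bad-patch risk decays as early adopters surface broken patches.
        instability = p_bad_patch * math.exp(-recall_rate * patch_day) * bad_patch_hours
        # Attack exposure accumulates while the system stays unpatched.
        attack = (1 - math.exp(-p_attack_per_day * patch_day)) * attack_hours
        return instability + attack

    # Scan a month of candidate patch days for the lowest expected downtime.
    best = min(range(31), key=lambda d: expected_downtime(
        d, p_bad_patch=0.3, bad_patch_hours=8.0,
        p_attack_per_day=0.005, attack_hours=24.0))
    print("lowest expected downtime at day", best)

With these invented parameters the optimum lands about a week after release; the interesting part is that the model produces an interior optimum at all, which is exactly the tradeoff the paper explores.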

By Keith Dugan  on  04/16  at  08:45 AM

Rich,

I manage the security patch process for my company, and have what I feel is a comprehensive, repeatable, and consistent process. But the process is not without challenges, especially in the Tier 1/business-critical server environments.

I would be willing to share with the team the framework of our process and the challenges we face in the evolution of our Security Update process in hopes that this project could help me and others that face the daily challenges/nightmares of deploying Security Updates.

By ds  on  04/16  at  01:15 PM

I don’t like posing this as a security metrics project, since patch management isn’t 100% about security; this is really an operational cost issue.

What I’m really looking forward to is how you solve the variability problem.  For example, consider how many free variables there are:

patching frequency,
software distribution / patch management system used to deliver them,
how close to 100% coverage an org wants to get,
how quickly they want to get to that point,
geographic distribution of company,
number of datacenters,
number of different methodologies (e.g., manually patch servers, automatically patch workstations),
quality and quantity of pre-patch testing,
does testing vary by vendor (e.g., we trust vendor x to make good patches, we don’t trust vendor y),
number of machines needing a patch (and what percent of the total that represents),
how much effort it takes to identify and target that percentage,
does a patch require a reboot (impacts potential lost productivity)

I’m sure there are many, many more…
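
Even a naive container for those inputs shows how fast they multiply. Every field name and default below is just my guess at what the model might need, purely for illustration:

    # Naive container for some of the free variables above; every field and
    # default is a guess, just to show how the inputs might be enumerated.
    from dataclasses import dataclass

    @dataclass
    class PatchEnvironment:
        patch_frequency_days: int = 30
        delivery_tool: str = "third-party"    # manual / autoupdate / third-party
        target_coverage: float = 0.98         # fraction of machines to reach
        coverage_deadline_days: int = 14
        datacenters: int = 2
        methodologies: int = 2                # e.g., manual servers, auto workstations
        test_depth_hours: float = 4.0         # pre-patch testing effort
        machines_affected: int = 500
        fleet_size: int = 2000
        requires_reboot: bool = True

    env = PatchEnvironment()
    print(f"{env.machines_affected / env.fleet_size:.0%} of fleet needs this patch")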

It’s sure to be a lot of work; I’m looking forward to following along. Thanks for giving us readers the chance to watch through the windows! (no pun intended)

By Jeff Jones  on  04/16  at  03:06 PM

With respect to the variability problem, I don’t think we can solve that easily.  However, what I do think we can do is start creating a framework of process functions and enumerating the likely or common variable values for that function - just as you did in your examples.

The delivery technology is a great example.  I think it will vary from “custom local manual process” to “built-in autoupdate technology” to “3rd-party SW distribution tool”.  However, the model can definitely include the concept/function of ‘deployment tools’ and describe many of these different options.

For any given company, the many options of the model should distill down to distinct values. E.g., company X uses Red Hat Network infrastructure for deploying their RH Linux server patches (or whatever). Smaller companies may simplify down to unique values, but I imagine that bigger orgs would have multiple completed instances of the model to represent some of the differences in the org that you list (e.g., do different geographic sites have independence/different processes?).

If we can get a good top-level functional diagram and map how those functions are related, it should allow us to spin out discussions that dig deeper into individual areas.
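
As a throwaway illustration of what I mean by distilling to distinct values (the option names and instances are invented, not a proposal), the ‘deployment tools’ function might encode like this:

    # Throwaway encoding of the 'deployment tools' function: a fixed set of
    # options that each completed instance of the model distills to one value.
    from enum import Enum

    class DeliveryTool(Enum):
        MANUAL = "custom local manual process"
        AUTOUPDATE = "built-in autoupdate technology"
        THIRD_PARTY = "3rd-party SW distribution tool"

    # A bigger org might complete the model once per independent site/process.
    instances = {
        "HQ datacenter (RH Linux servers)": DeliveryTool.THIRD_PARTY,
        "Branch offices (workstations)": DeliveryTool.AUTOUPDATE,
    }
    for scope, tool in instances.items():
        print(scope, "->", tool.value)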

Jeff

By Rich  on  04/16  at  06:49 PM

Keith,

That will be invaluable information. I’ve set up the forums, so let me know if you want to post it there, or you can email it over and I can put it up as a blog post. Assuming you are free to share, let me know what works for you.

By Rich  on  04/16  at  06:51 PM

ds - I just set up an Initial Thoughts topic in the forum; any chance you’d be interested in posting a longer version of your comment over there? Maybe fill out some more of the variables you think we need?

Thanks

-rich

By Keith Dugan  on  04/17  at  06:56 AM

For legal reasons I will need to keep my company name out of the postings, but will certainly provide all the data, process, and challenges. I initially worked with Microsoft to build the framework for our process, but there are still a number of challenges, with one of them being the metrics to measure the entire process.

By Andrew Yeomans  on  04/17  at  01:31 PM

- How easily can you identify that systems have been patched and the patch is functional?
- What access rights are needed to do this? (Admin credentials, unprivileged, no credentials)
- How do you identify patch requirements on systems you don’t manage yourself? (black box appliances, rogue devices, remote devices)
- How many separate patching systems are required? Windows desktops also need to consider add-ons such as Flash, QuickTime, Java, Acrobat Reader, etc., each of which may have its own process. Red Hat / Ubuntu may also have separate add-on products, though these often just need additional repositories added to the standard process.
- How many local agents are required to perform the patching?
- Stats on how much local testing of patches is required, by product? Is automatic patching the least risky option? For what proportion of my systems?
- How well does my patching process work for mobile systems? Ditto for verification of patching.
- Size of patches needing to be transferred.
- How does patching work for virtual system images? (All the above questions apply.)
