Thursday, June 11, 2009

Project Quant: Patch Evaluation Phase

By Rich

Okay, here’s my first stab at detailing out the Evaluation phase of the patch management cycle.

As with the Monitor for Advisories phase, I focused on the process itself and listed potential variables for each step. Some of the variables are things like “completeness of …”. While those don’t have a direct cost, I’m thinking they will add a cost factor that increases the time involved. For example, if a given asset type isn’t properly listed in the asset type list, that could increase the time to evaluate the patch by Y%. For this model I don’t expect to determine some hard constant percentage, but hopefully with the survey work we plan to continue we can at least provide some guidance.
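To make that multiplier idea concrete, here is a minimal sketch of how a completeness variable could inflate a step’s time cost. This is purely illustrative: the function name, the 25% maximum penalty, and the rates are my own placeholder assumptions, not values from the model.

```python
# Hypothetical sketch: a "completeness of ..." variable has no direct cost,
# but acts as a multiplier on the time (and therefore cost) of a step.
# All names and numbers here are placeholder assumptions.

def step_cost(base_hours, hourly_rate, completeness, max_penalty=0.25):
    """Cost of one process step.

    completeness: 0.0 (nothing documented) to 1.0 (fully documented).
    max_penalty:  extra time fraction added when completeness is 0
                  (the "Y%" the survey work would eventually calibrate).
    """
    time_factor = 1.0 + max_penalty * (1.0 - completeness)
    return base_hours * time_factor * hourly_rate

# Example: evaluating one patch takes 2 hours at $75/hour, but the asset
# type list is only 60% complete, so the step takes ~10% longer.
print(step_cost(base_hours=2, hourly_rate=75, completeness=0.6))  # 165.0
```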

As always, let me know what you think…

[Diagram: Patch Evaluation phase detailed process]

–Rich

Wednesday, June 10, 2009

Details: Monitor for Advisories

By Rich

Project Quant post here…

Below is my first pass (based on the work in the forums by Daniel) at the detailed process for the first phase in the Patch Management Cycle.

Daniel included variables, but I decided to stick to the process level, and we can roll out the detailed variables once we get some consensus.

Here’s my thinking:

  1. This phase should only cover the resources required to monitor for releases. Once that happens, we move on to the evaluation phase.
  2. It needs to reflect initial and ongoing costs to maintain asset type lists, as well as advisory source lists.
  3. I’ve tried my best to define the variables, which I know we will need to detail more once we start moving this into spreadsheet format.
  4. This is the “uber-model” and should include everything you could possibly do… clearly not all organizations will follow all steps for all assets.

This is merely a first pass, so let me know what you think.

[Diagram: Monitor for Advisories detailed process]
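For what it’s worth, here is a rough sketch of the kind of cost roll-up the list above implies: a one-time setup cost for building the asset type and advisory source lists, plus ongoing costs to maintain them and to do the actual monitoring. Every name and figure is an assumption of mine for illustration, not part of the model.

```python
# Rough sketch of the Monitor-for-Advisories roll-up: one-time setup plus
# ongoing maintenance of the asset type and advisory source lists, plus the
# recurring monitoring effort itself. Placeholder names and numbers only.

def monitoring_phase_cost(setup_hours, maintenance_hours_per_month,
                          monitoring_hours_per_month, months, hourly_rate):
    initial = setup_hours * hourly_rate
    ongoing = ((maintenance_hours_per_month + monitoring_hours_per_month)
               * months * hourly_rate)
    return initial + ongoing

# Example: 16 hours to build the lists, 2 hours/month to maintain them, and
# 8 hours/month spent watching advisory sources, over a year at $75/hour.
print(monitoring_phase_cost(16, 2, 8, 12, 75))  # 10200
```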

One thing I’m realizing is that since this is a cost model, it would be easy to misinterpret it as saying “doing nothing is really cheap”. I think it’s important to remember that as an operational efficiency model, measurements of the security impact of doing nothing are out of scope. I’m getting some ideas on how to bring that into scope a little more, but I think we need to stay away from getting dragged into all the risk/threat stuff.

As with all the Project Quant posts, you can comment here or in the forums…

–Rich

Thursday, May 21, 2009

Project Update: Survey and Cycle

By Rich

Just a quick note on what we’ve been up to:

  1. We are close to launching the initial survey. As I posted in the forums, I built it out in SurveyMonkey for review. Please let me know what you think, and then we can launch it. The goal right now is to capture what people are really doing at a high level. You can check it out here, and we will wipe any results people put in now before we go live.
  2. I think we’ve finally nailed the high-level process. It’s in the forums for discussion, and I’ve dropped the current image below. The biggest changes are adding shielding, fixing some of the definitions, and adding the sub-cycle.

The next step is to break out each phase of the cycle and start developing individual processes and metrics. To be honest, this is the hard part, and I’ll post each bit as we create it. Thanks to Daniel in the forums, we already have a good start for the Monitor phase.

[Diagram: Updated high-level patch management cycle]

–Rich

Wednesday, May 13, 2009

Draft Questions for Initial Survey

By Rich

One of our major milestones in the project is to perform an initial user survey to get a handle on how people are managing their patching process.

I just completed my first rough draft of some survey questions over in the forums. The main goal is to understand to what degree people have a formal process, and how their process is structured.

I consider this very rough and in definite need of some help.

Please pop over to this thread in the forums and let me know what you think. You can also leave comments here if you don’t want to register for the site/forums.

In particular I’m not sure I’ve actually captured the right set of questions, based on our priorities for the project (I know survey writing is practically an art form).

–Rich

Thursday, May 07, 2009

Updated Patch Management Cycle

By Rich

Based on feedback from the forums, I updated the patch management cycle. Please take a look and let me know what you think. Here’s the direct link to the update in the forums.

The main changes are swapping the evaluate/acquire phases, including both pre and post package creation testing, and creating a sub-cycle for deploying-confirming-cleaning up.

–Rich

Thursday, April 30, 2009

Project Quant: Patch Management Cycle

By Rich

Although we posted some of our initial thoughts, and have been getting some great feedback from everyone, Jeff and I realized that we haven’t yet defined a standard patch management cycle to start from. DS, Dutch, and a few others have started posting some metrics/variables, but we didn’t have a process to fit them into.

I’ve been researching other patch management cycles, and here’s my first stab at one for the project. You’ll notice it’s a little more granular than most of the others out there – I think we need to break out the phases in more detail, both to match the different processes used by different organizations and to give us cleaner buckets for our metrics.

[Diagram: Proposed patch management cycle]

Here’s a quick outline of the steps:

  1. Monitor for Release/Advisory: Anything associated with tracking patch releases, since all vendors follow different processes.
  2. Acquire: Get the patch.
  3. Evaluate: Initial evaluation of the patch. What’s it for? Is it security-sensitive? Do we use that software? Is the issue relevant in our environment? Are there workarounds or dependencies?
  4. Prioritize/Schedule: Prioritize based on the nature of the patch itself, and your infrastructure/assets. Then build out a deployment schedule, based on your prioritization.
  5. Test and Certify/Accredit: Perform any required testing, and certify the patch for release. This could include any C&A requirements for you government types, compliance requirements, or internal policy requirements.
  6. Create Deployment Package: Prepare the patch for deployment.
  7. Deploy.
  8. Confirm Deployment: Verify that patches were properly deployed. This might include use of configuration management or vulnerability assessment tools.
  9. Clean up: Clean up any bad deployments, remnants of the patch application procedure, or other associated cruft/detritus.
  10. Document and Update Configuration Standards: Document the patch deployment, which may be required for regulatory compliance, and update any associated configuration standards/guidelines/requirements.

This is a quick and dirty pass, meant to capture the macro-level steps in the process. I know not all organizations follow, or need to follow, a process like this, but it will help us organize our metrics.
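To show what I mean by cleaner buckets, here’s a minimal sketch of rolling up per-phase effort for a single patch event. The phase names follow the outline above; the hours and rate are made-up placeholders, not survey data.

```python
# Illustrative only: each phase of the cycle becomes a cost bucket, and the
# cost of one patch event is the sum of hours spent in each bucket times a
# loaded hourly rate. Hours below are made-up placeholders.

PHASES = [
    "monitor", "acquire", "evaluate", "prioritize_schedule",
    "test_certify", "create_package", "deploy", "confirm_deployment",
    "clean_up", "document_update_standards",
]

def patch_event_cost(hours_per_phase, hourly_rate):
    """hours_per_phase: dict mapping phase name to hours spent."""
    return sum(hours_per_phase.get(phase, 0) for phase in PHASES) * hourly_rate

example_hours = {"monitor": 1, "evaluate": 2, "test_certify": 4,
                 "create_package": 2, "deploy": 3, "confirm_deployment": 1}
print(patch_event_cost(example_hours, hourly_rate=75))  # 975
```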

Let me know what you think – I’m sure I’m missing something…

–Rich

Tuesday, April 21, 2009

Project Quant Town Hall at RSA

By Rich

Hey folks,

Just a quick note that we had a few people ask if we were going to hold a meeting on Project Quant out here at RSA.

We know it’s last minute, but if you are interested in hearing more about the project and providing some input, we’ve decided to hold a Town Hall meeting right after the Securosis Recovery Breakfast on Wednesday morning at Jillian’s. The breakfast runs until 11, and we’ll gather up all the Quant people right after that and find a quiet corner.

The WASC meetup is at 12, so if you plan things right you can probably hang out at Jillian’s all day and never head over to Moscone.

Feel free to drop a comment if you think you might show, but otherwise we’ll see you there…

–Rich

Thursday, April 16, 2009

Jeff Jones’ Initial Thoughts

By Rich

To get us started, Jeff sent over his initial thoughts. I think he’s done a good job of capturing a lot of the initial problem space we need to look at. Once I get caught up on all the administrative work of getting the project site up and running, I’ll go ahead and post my thoughts. We also encourage you to jump into the forums and start letting us know what you think needs to be included in the model.


Rich,

This email is to capture my initial thoughts on Project Quant content and structure. Let’s try to lock on terminology as we go. I only bring this up since I seem to keep using the same phrases to describe different components, and we need to have unique terms.

Jeff

Roles (possible roles, *please suggest better role names* ;-))

  - Monitor: Someone or a team responsible for keeping current on select web sites and mailing lists to keep abreast of disclosed vulnerabilities and/or risks to software in the org.
  - Risk/Security Assessment Team (possibly separate from the folks/teams that operationally deploy updates)
  - Desktop/Server management team
  - Network management team
  - Test team
  - Audit/compliance team

Maintenance/Independent Functions

  - Software subscription/maintenance fees
    - Cost: Annual, typically tied to deployed software. I think this is part of the ongoing cost.
      - Because of the different ways companies handle this, we need to think about if/how to include initial purchase cost and timeframes in order to get averages. Not that we need to decide it, but we should generally identify the different likely scenarios and give advice, e.g.:
        1. Purchase cost + annual maintenance fee.
        2. No purchase cost, subscription fee. (Red Hat)
        3. No purchase cost, no subscription fee. (Ubuntu – those that choose not to pay, but take the benefit of the updates)
    - Independent of patch events.
  - Information Services
    - Description: Some orgs may utilize information services (free or paid) to keep them informed on risk-related patching issues. The ISS service comes to mind, as do free lists like Bugtraq.
    - Cost: Zero to ?$$$?. I would guess that smaller companies use free services and larger companies tend to subscribe to professional services, though I’m sure there are a lot of exceptions.
    - Independent of patch events.

Event-Related Functions

  - Triage Assessment
    - Assume either a vulnerability is disclosed on Bugtraq or a vendor releases a security advisory (term? “Risk event”, something else? Risk event might be good because it makes me think of other types of events; I’m going to start a new list below of events that might trigger the team to do something.)
      - Thought: We should probably map out both scenarios, and the event flow should accommodate both, since both are realistic.
    - Description: When an event (see below) happens that potentially changes the software risk, this team/person assesses whether they are affected, and if so, to what extent and what actions are required. This is the triage step that comes before any deep technical analysis.
    - Assessment outcomes: (we can probably come up with a list of possible outcomes that would flow to other functions)
  - Analysis (not sure if this should be separate or part of assessment, so breaking it out for now)
    - Description: If assessment determines that action is required (e.g. the org is potentially affected), this step is to:
      - Technically analyze the patch to understand where it would be deployed, what it would impact, whether a reboot might be necessary, etc.
      - Explore non-patch mitigations that might be implemented effectively.
      - Review facts about known exploits to see if any decisions need to be revisited, priorities changed, etc.
  - Testing
    - Description: If a decision is made to roll out a patch, this would be the testing phase appropriate for the org and application in question.
    - Some orgs may have policies of “no testing” if, for example, they just roll out anything the vendor releases. Other companies may do extensive app compat testing with custom apps.
  - Mitigation
    - Description: Probably an optional step that may be taken as an alternative or interim solution.
  - Patch Deployment
    - This will vary a lot. I imagine several sub-scenarios for this, even for the same set of software.
    - Questions to consider:
      - Using vendor update infrastructure? Which one?
      - Using a patch management application? Which one?
      - Manual process?
      - Custom/home-grown process?
      - Discretionary process? (e.g. a university posts the patch on a server and tells students/faculty to download and apply it)
      - Is there enforcement? Something like NAP or NAC that will deny access if not policy compliant?
  - Support Calls
    - I’m not sure how to break this down, but I know certain types of updating policies cause more or fewer calls. This will be a good area to probe with orgs of different sizes for case studies, to determine what those factors might be in general.
  - Rollback (and redeployment?)
    - There are scenarios where rollback must be considered:
      - A flawed patch that causes unacceptable behavior.
      - 15% through deployment a new patch is released, so the team decides to combine the deployment packages and restart. (Would they roll back? Could they need to in some situations?)
      - An unflawed patch whose behavior changes interfere with business needs.
  - Periodic “Group” Updates
    - Not sure if this is needed, but it strikes me that companies might choose to (for example) ignore all Moderate and Low updates when they come out and just roll them out annually as an image update or something similar. If so, this would be largely independent of risk events, but relatively predictable for an organization based on its policy.

Events:

  - New vuln disclosed on Bugtraq
  - Vendor releases security advisory/patch for a previously undisclosed vuln
  - Sample exploit code released for a vuln
  - Malicious software in the wild
  - Vendor releases security advisory/patch for a previously disclosed (public) vuln

–Rich

Forums are Live

By Rich

We’ve already started getting some great feedback, so we’ve gone ahead and opened up the Project Quant forums.

One limitation of our forum software is that it doesn’t allow anonymous posting, so please feel free to use the comments here in the blog section if you don’t want to reveal who you are.

–Rich

Wednesday, April 15, 2009

Project Quant: Goals

By Rich

In our last post we introduced the overall idea behind this project and the Totally Transparent Research process we will follow. Now it’s time to describe the project in a little more detail and lay out our overall goals. As with everything else in this project, the goals aren’t just open for comment and debate; feedback, both positive and negative, is encouraged.

Objective: The objective of Project Quant is to develop a cost model for patch management response that accurately reflects the financial and resource costs associated with the process of evaluating and deploying software updates (patch management).

Additional Detail: As part of maintaining their technology infrastructure, organizations of all sizes deploy software updates and patches. The goal of this project is to provide a framework for evaluating the costs of patch management, while providing information to help optimize the associated processes. The model should apply to organizations of different sizes, circumstances, and industries. Since patch management processes vary throughout the industry, Project Quant will develop a generalized model that reflects best practices and can be adapted to different circumstances. The model will encompass the process from monitoring for updates through confirming complete rollout of the software updates, and should apply to both workstations and servers. The model should be unbiased and vendor-neutral.

Deliverables: The end deliverable will include a written report and a spreadsheet-based model. Additional written material and presentations may be developed to support the project goals.

Research Process: All materials will be made publicly available throughout the project, including internal communications (the Totally Transparent Research process). The model will be developed through a combination of primary research, surveys, focused interviews, and public/community participation. Survey results and interview summaries will be posted on the project site, but certain materials may be anonymized to respect the concerns of interview subjects. All interviewees and survey participants will be asked if they wish their responses to remain anonymous, and details will only be released with consent. Securosis and Microsoft may use their existing customers and contacts for focused interviews and surveys, but will also release public calls for participation to minimize bias due to participant selection.

Deadline: The project deliverables should be released in the June timeframe.

We’re thinking the model will start with monitoring for updates, then move through evaluation, testing, and eventual rollout. It should include all different kinds of updates, reflect operational realities, and even include options for skipping patches or outsourcing. It should account for personnel/resource costs, downtime, and all the other minutiae we know affect patch management. We think we may end up having to define some roles, unless we can find something that’s somewhat standardized already out there.
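As a hint of where the spreadsheet might go, here is one possible way to fold downtime in alongside labor. Strictly a sketch under my own assumptions; the model itself will decide how (or whether) to charge for downtime.

```python
# Sketch: per-patch cost = labor + downtime, with downtime charged at a
# business cost per system-hour that each organization would supply.
# Every name and number is an illustrative assumption.

def patch_cost(labor_hours, hourly_rate, systems_patched,
               downtime_hours_per_system, downtime_cost_per_system_hour):
    labor = labor_hours * hourly_rate
    downtime = (systems_patched * downtime_hours_per_system
                * downtime_cost_per_system_hour)
    return labor + downtime

# Example: 12 labor hours at $75/hour, 200 servers each down 15 minutes,
# with downtime valued at $40 per system-hour.
print(patch_cost(12, 75, 200, 0.25, 40))  # 2900.0
```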

Our next step is to develop the macro version of the model, which will likely be focused on identifying the patch management process and what’s included at each phase. To support this, we plan on conducting interviews, and will release a call for participation. We’ll also post our proposed interview questions for feedback before we actually start talking with people. Then we’ll post the results with our first overview, and seek public feedback.

So let us know what you think, and we should be back soon with the survey questions and our first general directions for the model. Keep in mind that since we’re working totally out in the open, most of what you see won’t be even close to polished and should always be considered work in progress.

–Rich

Monday, April 13, 2009

Introducing Project Quant

By Rich

Much to our own surprise, we’ve been doing a lot of work on security metrics over the past year. From our work with Mozilla, to the Business Justification for Data Security, we’ve found ourselves slicing and dicing the numbers and methodologies to develop tools that provide a little more insight into managing security operations, help communicate with the business side, and justify where and what we spend on security. Security, like any maturing industry, is more than just banging away on the latest technologies. We require methodologies to assist us in optimizing our programs, prioritizing our efforts, and making sound business (of security) decisions. And I don’t mean only justification metrics to communicate risk and value to get budgets, but internal, operational measurements that directly apply to daily security decisions.

That’s why I’m excited to announce we were approached by Jeff Jones at Microsoft to work with him on a new project around the metrics of patch management. We are handling this one very differently than our other projects, and it’s as much an experiment with a new research process as it is one of security metrics.

As you know, we are incredible sticklers about our objectivity and producing research that’s free of bias (well, except for our bias). For our other projects, even when they were sponsored by vendors, the sponsor wasn’t involved in the creation of the research at all. For this project, Jeff wanted to be involved, but also asked for an open, unbiased model that will be useful to the community at large (in other words, he didn’t ask for a sales tool). Rather than us developing something back at the metrics lab, Jeff asked us to lead an open community project with as much involvement from different corners of the industry as possible.

We feel this fits with our Totally Transparent Research process where all the research is developed out in the open, and everyone gets to contribute, comment on, and review the content during development. We feel this is the best way to reduce bias, and even if there is bias, at least there’s a paper trail. Yes, it’s risky for us to allow direct involvement of the sponsor, but we’re hoping that the process works as we think it will, which also happens to match Microsoft’s project goals.

In this post we’re going to describe the process, and in the next post we’ll detail the project goals. We’d like feedback on both the project process and goals, since that helps keep them straight. We’re totally serious – none of us wants a biased or narrowly useful result; we wouldn’t participate in this project if we didn’t feel we could provide something of value to the community that also fits with our objectives as independent analysts.

  1. We are establishing a project landing site here at Securosis which will contain all material and research as it is developed. Right now we have comments set up for feedback, and we should have that switched over to a forums system very soon. [DONE]
  2. Every piece of research will be posted for public comment. No comments will be filtered unless they are spam, totally off topic, or personal insults. On the off chance you don’t see your comment right after posting, it may have gotten stuck in our blog spam filters, so please email me directly to pull it out.
  3. Everyone is encouraged to comment and contribute – including competing vendors – and anonymous comments are supported. We only ask that if you are a vendor with skin in the game (a product related to patch management) that you identify yourself (we’ll call you out if we think you aren’t being open).
  4. All significant contributors will be acknowledged in the final report. The bad side is that we won’t be able to financially compensate you, and the project itself will retain ownership rights. Someday we’ll figure out a better way to handle that, and suggestions are appreciated.
  5. All material will be released under a Creative Commons license (TBD).
  6. Spreadsheets will be released in both Excel and open formats. Other written documents will be released as PDF (no, it’s not technically open, but if you have a real problem with PDF, email me).
  7. On the back end, we are tagging, archiving, and making public all our project-related emails. We won’t be recording phone calls, but will be releasing meeting notes.
  8. All materials will be consolidated on the project site, with major deliverables also posted to the Securosis blog.

In short, we are developing all research out in the open, soliciting community involvement at every stage, making all the materials public, acknowledging contributors, and eventually releasing the final results for free and public use. The end goal of the project is to deliver a metrics model for patch management response to help organizations assess their costs, optimize their process, and achieve their business goals.

Let us know what you think, even if you think we’re just full of it…

(Oh, and we’re not totally thrilled with the project name, so please send us better ideas)

–Rich