Friday Summary: November 5, 2010

November already. Time to clean up the house before seasonal guests arrive. Part of my list of tasks is throwing away magazines. Lots of magazines. For whatever perverse reason, I got free subscriptions to all sorts of security and technology magazines. CIO Insight. Baseline. CSO. Information Week. Dr. Dobbs. Computer XYZ and whatever else was available. They are sitting around unread, so it's time to get rid of them. While I was at it, I got rid of all the virtual subscriptions to electronic magazines as well. I still read Information Security Magazine, but I download that, and only because I know most of the people who write for it. For the first time since I entered the profession there will be no science, technology, or security magazines – paper or otherwise – coming to my door.

I'm sure most of you have already reached this conclusion, but the whole magazine format is obsolete for news. I kept them around just in case they covered trends I missed elsewhere. Well, that, and because they were handy bathroom reading – until the iPad. Skimming through a stack of them as I drop them into the recycling bin, I realize that fewer than one article per magazine would get my attention. When I did stop to read one, I found I had already read far better coverage online, at multiple sites. The magazine format does not work for news.

I am giving this more consideration than I normally would, because it's been the subject of many phone calls lately. Vendors ask, "Where do people go to find out about encryption? Where do people find information on secure software development? Will the media houses help us reach our audience?" Honestly, I don't have answers to those questions. I know where I go: my feed reader, Google, Twitter, and the people I work with. Between those four outlets I can find pretty much anything I need on security. Where other people go, I have no idea. Traditional media is dying.
Social media seems to change monthly, and the blogs, podcasts, and feeds that remain strong only do so by shaking up their presentations. Rich feels that people go to Twitter for their security information and advice. I can see that – certainly for simple questions, directions on where to look, or A/B product comparisons. And it's the perfect medium for speed reading your way through social commentary. For more technical stuff I have my doubts. I still hear more about people learning new things from blogs, conferences, training classes, white papers and – dare I say it? – books! The depth of the content remains inversely proportionate to the velocity of the medium.

Oh, and don't forget to check out the changes to the Securosis site and RSS feeds! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's Dark Reading post: Does Compliance Drive Patching?
  • Rich, Martin, and Zach on the Network Security Podcast, episode 219.

Favorite Securosis Posts

  • Rich: IBM Dances with Fortinet. Maybe. Mike reminds us why all the speculation about mergers and acquisitions only matters to investors, not security practitioners.
  • Mike Rothman: React Faster and Better: Response Infrastructure and Preparatory Steps. Rich nails it, describing the stuff and steps you need to be ready for incident response.
  • Adrian Lane: The Question of Agile's Success.

Other Securosis Posts

  • Storytellers.
  • Download the Securosis 2010 Data Security Survey Report (and Raw Data!)
  • Please Read: Major Change to the Securosis Feeds.
  • React Faster and Better: Before the Attack.
  • Incite 11/3/2010: 10 Years Gone.
  • Cool Sidejacking Security Scorecard (and a MobileMe Update).
  • White Paper Release: Monitoring up the Stack.
  • SQL Azure and 3 Pieces of Flair.

Favorite Outside Posts

  • Rich: PCI vs. Cloud = Standards vs. Innovation. Hoff has a great review of the PCI implications for cloud and virtualization. Guess what, folks – there aren't any shortcuts, and deploying PCI compliant applications and services on your own virtualization infrastructure will be tough, never mind on public cloud.
  • Adrian Lane: HTTP cookies, or how not to design protocols. Historic perspective on cookies and associated security issues. Chris' favorite too: an illuminating and thoroughly depressing examination of HTTP cookies, why they suck, and why they still suck.
  • Mike Rothman: Are You a Pirate? Arrington sums up the entrepreneur's mindset crisply and cleanly. Yes, I'm a pirate!
  • Gunnar Peterson: How to Make an American Job Before It's Too Late.
  • David Mortman: Biz of Software 2010, Women in Software & Frat House "Culture".
  • James Arlen: Friend of the Show Alex Hutton contributed to the ISO 27005 <=> FAIR mapping handbook.

Project Quant Posts

  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics – Device Health.
  • NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics – Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.
  • Network Security Ops Quant Metrics Model.
  • Network Security Operations Quant Report.
  • Understanding and Selecting a DLP Solution.
  • White Paper: Understanding and Selecting an Enterprise Firewall.
  • Understanding and Selecting a Tokenization Solution.

Top News and Posts

  • Gaping holes in mobile payments, via Threatpost.
  • Microsoft warns of 0-day attacks.
  • Serious bugs in Android kernel.
  • Indiana AG sues WellPoint over data breach.
  • Windows phone kill switch.
  • CSO Online's fairly complete List of Security Laws, Regulations and Guidelines.
  • SecTor 2010: Adventures in Vulnerability Hunting – Dave Lewis and Zach Lanier.
  • SecTor 2010: Stuxnet and SCADA systems: The wow factor – James Arlen.
  • RIAA ass-clowns at it again.
  • Facebook developers sell IDs.
  • Russian-Armenian botnet suspect raked in €100,000 a month.
  • FedRAMP Analysis. It sure looks like a desperate attempt to bypass security analysis in a headlong push for cheap cloud services.
  • Part 2 of JJ's guide to credit card regulations.
  • Dangers of the insider threat and key management. Included as humor, not news.
  • Software security courtesy of child labor.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Andre Gironda, in response to The Question of Agile's Success.


Security Metrics: Do Something

I was pleased to see the next version of the Center for Internet Security's Consensus Security Metrics earlier this week. Even after some groundbreaking work in this area on building metrics programs and visualizing the data, most practitioners still can't answer the simple question: "How good are you at security?" Of course that is a loaded question, because 'good' is a relative term. The real point is to figure out some way to measure improvement, at least operationally.

Given that we Securosis folks tend to be quant-heads, and do a ton of research defining very detailed process maps and metrics for certain security operations (Patch Management Quant and Network Security Ops Quant), we get it. In fact, I've even documented some thoughts on how to distinguish between metrics that are relevant to senior folks and those relevant to the people who need to manage (improve) operations. So the data is there, and I have yet to talk to a security professional who isn't interested in building a security metrics program. So why do so few of us actually do it? It's hard – that's why.

We also need to acknowledge that some folks don't want to know the answer. You see, as long as security is deemed necessary (and compliance mandates pretty well guarantee that) and senior folks don't demand quantitative accountability, most folks won't volunteer to provide it. I know, it's bass-ackward, but it's true. As long as a lot of folks can skate through, kind of just doing security stuff (and hoping not to get pwned too much), they will. So we have lots of work to do to make metrics easier and more useful to the practitioners out there.

From a disclosure standpoint, I was part of the original team at CIS that came up with the idea for the Consensus Metrics program and drove its initial development. Then I realized consensus metrics actually involve consensus, which is really hard for me. So I stepped back and let the folks with the patience to actually achieve consensus do their magic.
The first version of the Consensus Metrics hit about a year ago, and now CIS has updated them to version 1.1. This version adds a Quick Start Guide, and it's a big help. The full document is over 150 pages and a bit overwhelming; the Quick Start is less than 20 pages, and defines the key metrics as well as a balanced scorecard to get things going. The Balanced Scorecard involves 10 metrics, broken out across four areas:

  • Impact: Number of Incidents; Cost of Incidents
  • Performance by Function (Outcomes): Configuration Policy Compliance; Patch Policy Compliance; Percent of Systems with No Known Severe Vulnerabilities
  • Performance by Function (Scope): Configuration Management Coverage; Patch Management Coverage; Vulnerability Scanning
  • Financial Metrics: IT Security Spending as % of IT Budget; IT Security Budget Allocation

As you can see, this roughly equates security with vulnerability scanning, configuration, and patch management. Obviously that's a dramatic simplification, but it's somewhat plausible for the masses. At least there isn't a metric on AV coverage, right? The full set of metrics adds depth in the areas of incident management, change management, and application security. Truth be told, there are literally thousands of discrete data points you can collect (and we have defined many of them via our Quant research), but that doesn't mean you should. I believe the CIS Consensus Security Metrics represent an achievable data set to start collecting and analyzing.

One of the fundamental limitations right now is that there is no way to know how your security program and outcomes compare against other organizations of similar size and industry. You may share some anecdotes with your buddies over beers, but nothing close to a quantitative benchmark with a statistically significant data set is available. And we need this. I'm not the first to call for it either – the New School guys have been all over it for years.
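To make the scorecard concrete, here is a minimal sketch of how a couple of the coverage-style metrics might be computed from an asset inventory. The inventory structure and field names here are illustrative assumptions, not part of the CIS definitions.

```python
# Illustrative sketch: computing two CIS-style scorecard metrics from a
# hypothetical asset inventory. Field names are assumptions for this
# example, not CIS-defined schema.

def coverage(in_scope: int, total: int) -> float:
    """Percentage of systems covered by a process or meeting a condition."""
    if total == 0:
        return 0.0
    return 100.0 * in_scope / total

# Hypothetical inventory: one record per system.
systems = [
    {"name": "web01", "patch_managed": True,  "severe_vulns": 0},
    {"name": "db01",  "patch_managed": True,  "severe_vulns": 2},
    {"name": "dev03", "patch_managed": False, "severe_vulns": 1},
]

total = len(systems)
# Patch Management Coverage: share of systems under patch management.
patch_cov = coverage(sum(s["patch_managed"] for s in systems), total)
# Percent of Systems with No Known Severe Vulnerabilities.
clean_pct = coverage(sum(s["severe_vulns"] == 0 for s in systems), total)

print(f"Patch Management Coverage: {patch_cov:.1f}%")
print(f"Systems with no known severe vulnerabilities: {clean_pct:.1f}%")
```

The point isn't the arithmetic – it's that these metrics only require data most shops already have in their patch and vulnerability management tools, which is exactly why they make a reasonable starting point.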
But as Adam and Andrew point out, we security folks have a fundamental issue with information sharing that we'll need to overcome to ever make progress on this front. Sitting here focusing on what we don't have is the wrong thing to do. We need to focus on what we do have, and that's a decent set of metrics to start with. So download the Quick Start Guide and start collecting data. Obviously if you have automation driving some of these processes you can go deeper sooner – especially with vulnerability, patch, and configuration management.

The most important thing you can do is get started. I don't much care where you start – just that you start. Don't be scared of the data. Data will help you identify issues. It will help you pinpoint problems. And most importantly, data will help you substantiate that your efforts are having an impact. Although Col. Jessup may disagree (YouTube), I think you can handle the truth. And you'll need to, if we ever want to make this security stuff a real profession.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.