Research Revisited: 2006 Incites

All of us Securosis folks will be at the RSA Conference this week, so we figured we’d pre-load some old stuff to get a feel for how our research positions turned out. Mine is really old, digging back into the archives from when I had just started Security Incite. Each year I put together a set of Incites that reflected what I expected to happen that year. I basically copied the idea (and format) from my META Group days, where each year we obsessed over our 12 META Trends. The idea was to come up with a paragraph for each of our main coverage areas and provide some guidance. No percentages or anything like that. The innovation I introduced was to actually go back later in the year and assess how well the projections worked out. We never did that at META. But I figured it would be a hoot to look back at what I thought was going to be big in 2006, so here are the Incites and some more tactical predictions. Some stuff was good. Some stuff was, um, not so good. At least it should provide some laughs. And if you want to check out the grades I gave myself on each Incite a year later, check out my 2006 report card. I can tell you my predictions stunk very badly. You can also check out the 2007 report card while you’re at it, which will ensure you never ask me to prognosticate about anything…

2006 Incites and Predictions
(These originally appeared on the Security Incite blog, Jan 9, 2006.)

What are the Security Incites? Annually, Security Incite will publish a list of the key “trends” and expectations in the security business for the next year. Called “Security Incites” and written from the perspective of the end user (or security consumer), Incites provide direction on what to expect, assisting the decision-making process as budgets and technology adoption plans are finalized for the upcoming year. Each Incite provides a clear position and distills the impact on buying dynamics and architectural constructs. Incites also set the stage for Security Incite’s upcoming research agenda.

What’s the difference between a “Security Incite” and a “Prediction”? Predictions are things we expect to happen within the next 12 months, and tend to be more event-oriented. The Security Incites provide a broader perspective across the security domains and can take a view longer than 12 months.

1. No Mas Box (Less Boxes, More Functionality)
Users will increasingly revolt against adding yet another narrowly focused security appliance to their networks and actively examine new “simplification” architectures. New Unified Threat Management (UTM) products, using blade servers and virtualization technologies, appear in 2006, putting vendors that license key intellectual property at a disadvantage. Management of the integrated UTM environment will remain difficult through 2007.

2. Get the NAC!
The increasing number of ingress points into corporate networks (mobile, contractors, VPN) forces users to migrate to a virtual network infrastructure with a secure net and an unsecured net. Network Admission Control (NAC) architectures gain traction in 2006 to facilitate this architectural construct, but they do require homogeneity of equipment, pushing the pendulum away from best-of-breed providers.

3. Who are you?
Identity Management (IDM) breaks out in 2006, as ROI-driven password management and single sign-on (SSO) initiatives are deployed en masse. Smart users increasingly figure out that strong, centralized IDM provides “good enough” authentication and authorization for compliance purposes, accelerating market growth in 2H 2006. Yet identity federation continues to lag in a cloud of useless vendor bickering and standards immaturity until mid-2007. Token-based authentication finally hits the wall, as passwords remain good enough and no compelling alternative appears.

4. Stay Out of Jail
Compliance continues to generate tremendous hype, but largely remains a red herring throughout 2006. Smart users will use the compliance word to get funding for critical imperatives (perimeter redesign, identity management) and sufficiently document their processes to keep regulators happy. The not-so-smart users figure encryption is a panacea and buy some, ultimately realizing that making encryption work on a large scale hasn’t gotten any easier.

5. Losing The Religion
Everyone finally realizes in 2006 that regardless of technical approach (IDS vs. IPS vs. firewalls vs. anomaly detection), it’s all about detecting and blocking malware quickly and effectively. Users expect to see multiple techniques implemented, spurring another wave of consolidation as vendors look to bring complete enterprise-class UTM solutions to market.

6. Endpoint Hostile Takeover
Driven by the prevalence of unwanted applications, internal zombie outbreaks, and documented information leaks enabled by keyloggers and spyware, users will increasingly lock down endpoint devices, despite pushback from business users. Limitations of the Windows XP security model make lockdown difficult in 2006, but much easier when Microsoft’s Vista operating system is ready for deployment beginning in 2007.

7. Bad Content is Bad Content
Given “innovation” by spammers and fraudsters, keeping content filtering algorithms accurate and timely is proving very difficult for content-focused security vendors. In 2006, heuristics-based detection cocktails fall out of favor, pushing the pendulum back towards signatures that favor entrenched AV vendors. Users increasingly embrace “in the cloud” content filtering for e-mail, IM, and web traffic because it allows them to get rid of another box in the perimeter and stop worrying about exponentially increasing message volumes.

8. Security Management (oxy)Moron
Stand-alone security information management (SIM) plateaus in 2006, as consolidation continues and the need for large-scale systems integration puts acceptable “time to value” out of reach for all but the largest enterprises. Closed correlation systems increasingly take root as users swing towards homogeneity and ratchet back expectations on which devices really need to be integrated into the management system, while leveraging the reporting infrastructure for compliance purposes.

9. Services
Managed Security Services provide increasing value in terms of both operational capabilities and content filtering. Users realize that removing threats “in the cloud” provides better bang for the buck for mature technologies (firewalls, IPS, anti-spam, gateway AV, web filtering). The biggest challenge in 2006 will…


Research Revisited: The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

This post doesn’t hold up that well, but it goes back to 2006 and the first couple of weeks the site was up. And I think it is interesting to reflect on how my thinking has evolved, as well as the landscape around the analysis.

In 2006 the debate was all about full vs. responsible disclosure. While that still comes up from time to time, the debate has clearly shifted. Many bugs aren’t disclosed at all, now that there is a high-stakes market where researchers can support entire companies just discovering and selling bugs to governments and other bidders. The legal landscape and prosecutorial abuse of laws have pushed some researchers to keep things to themselves. The adoption of cloud services also changes things, requiring completely different risk assessment around bug discovery. Some of what I wrote below is still relevant, and perhaps I should have picked something different for my first flashback, but I like digging into the archives and reflecting on something I wrote back when I was still with Gartner, wasn’t even thinking about Securosis as more than a blog, and was in a very different place in my life (i.e., no kids). Also, this is a heck of an excuse to include a screenshot of what the site looked like back then.

The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

As a child, one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered due to extended childhood exposure to radon, the air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life. But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services, to astronauts, to the military, to professional photographers. As I aged and even joined some of these groups, I continued to relish the mechanical patois reserved for those earning expertise in a domain.

Lingo is often a compression of language; a tool for condensing vast knowledge or concepts into a sound byte easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also used as a tool of exclusion, or to mask complexity. The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist. Nowhere is this more apparent than in our discussions of “Disclosure”. A simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among normal citizens and the media, who wonder if it’s just a euphemism for online dating (now with photos!). Disclosure is a complex issue worthy of full treatment, but today I’m going to focus on just 3 dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump, that is…

“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released they are usually discovered by outsiders who either accidentally or purposely find the vulnerabilities. Keeping with our theme of “lingo”, they’re often described as “white hats”, “black hats”, and “agnostic transgender grey hats”. You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t recall the product, you can only get the part if you have the right support contract and enough bandwidth, you have to pay all the costs of the mechanic (unless you do it yourself), you bear all responsibility for fixing your car the right way, if you don’t fix it or fix it wrong you’re responsible for any children killed, and the car manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimensions where they may sell said product). It’s really all your fault, you know.

Conceptually, “disclosure” is the process of releasing information about the flaw. The theory is that consumers of the product have a right to know there’s a security problem, and with the right level of detail can protect themselves. With “full disclosure” all information is released, sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or they find someone exploiting the flaw in the wild. Of course, some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).

There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries, we seem to feel it’s wrong for a vendor to fix a flaw without making it public. Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.

So what are the dirty little secrets?

1. Full disclosure helps the bad guys
2. It’s about ego, control, and competition
3. We need the threat of full disclosure or vendors…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.