Securosis Research

We’re All Gonna Get Hacked

Kelly at Dark Reading posted an interesting article today, based on a survey done by BT around hacking and penetration testing. I tend to take most of the stats in there with a bit of skepticism (as I do any time a vendor publishes numbers that favor their products), but I totally agree with the first number:

“Call it realism, or call it pessimism, but most organizations today are resigned to getting hacked. In fact, a full 94 percent expect to suffer a successful breach in the next 12 months, according to a new study on ethical hacking to be released by British Telecom (BT) later this week.”

The other 6% are either banking on luck or deluding themselves.

You see, there’s really no difference between cybercrime and normal crime anymore. If you’ve ever been involved with physical security in an organization, you know that everyone suffers some level of losses. The job of corporate security and risk management is to keep those losses to an acceptable level, not to eliminate them. It’s called shrinkage, and it’s totally normal.

I have no doubt I’ll get hacked at some point, just as I’ve suffered various petty crimes over the years. My job is to prepare, make it tough on the bad guys, and minimize the damage to the best of my ability when something finally happens. As Rothman says, “REACT FASTER”, and as I like to say, “REACT FASTER AND BETTER”. Once you’ve accepted your death, it’s a lot easier to enjoy life.


The Network Security Podcast, Episode 149

It’s been a bit of a strange week on the security front, with good guys hacking a botnet, a major security vendor called on the carpet for some vulnerabilities, and yet another set of Adobe 0days. But being Cinco de Mayo, we can just margarita our worries away. In this episode we review some of the bigger stories of the week, and spend a smidge of time pimping for a (relatively) new site started by some of our security friends, and a new project Rich is involved with.

Network Security Podcast, Episode 149, May 5, 2009
Time: 34:08

Show Notes:
  • The Social Security Awards video is up!
  • Yet more Adobe zero day exploits. Now it’s just annoying.
  • McAfee afflicted with XSS and CSRF vulnerabilities.
  • Torpig botnet hijacked by researchers.
  • New School of Information Security blog launched.
  • Project Quant patch management project seeking feedback.

Tonight’s Music: Wound up Tight by Hal Newman & the Mystics of Time


Spam Levels and Anti-Spam SaaS

I was reading the Network World coverage last night of the McAfee Spam Report stating spam rates were down 20%. While McAfee’s numbers are probably accurate, my initial reaction was “Bull$#(&”, because I personally am not seeing a drop in spam. If the McAfee report, as well as Brian Krebs’ posts, show the totals are down, why am I getting a lot more spam, increasing weekly to the point where I am becoming actively annoyed again? I was wondering how much was due to the launch of the new Securosis web site, how much was the ‘cat and mouse’ cyclical changing of spam techniques, and how much was an anti-spam provider not keeping up.

I spent a couple of hours last night combing through Postini alerts, my internal junk folder, and the deleted spam that had made it to my inbox. What I found was a linear progression from the time we started with Postini until now, with increasing rates getting caught by my internal spam filter, and a corresponding linear increase getting into the inbox. Not sure why I allowed this to capture my efforts on Cinco de Mayo, especially considering I have developed a really good margarita recipe that deserved some focused appreciation, but hey, I have no life, and the article grabbed my interest enough to go exploring.

Anyway, I think that Postini is just falling behind the curve. We switched over in September of 2008. My email address was broadcast when I joined Rich last July, and I was surprised that there was not more spam. When we added the Postini service, no spam was getting through for a while, and every evening I would get my Postini status digest of the one or two spam messages it had intercepted. I still get these, and the digest always shows 1-2 emails captured. However, I am getting several dozen in my internal spam folder and another 15-20 in my inbox. And it is the old school blatant “Bank of Nigeria” and “Lottery Winner” stuff that is sneaking in. Even the halfway well-executed Citibank/Chase/BofA security alert phishing attempts are getting caught by my personal filters, so how in the world is this stuff getting through Postini? This is not the 97-99% blockage that I talked about in the past, and that customers have reported to me. I just did a survey 9 months ago and it may already be out of date.

It’s time to make a change. The beauty of spam filtering as SaaS is that we can change without pain. I am on the lookout for a 10 seat SaaS anti-spam plan. Got recommendations? I would love to hear them. Share your advice and I will share my margarita recipe.
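As a back-of-the-envelope illustration of the gap I’m describing, here is a quick sketch of the layered catch-rate math (the message counts are hypothetical, not pulled from my actual mailbox):

```python
# Rough effective catch-rate math for a layered spam setup (hypothetical counts).
def catch_rates(caught_at_gateway, caught_by_local_filter, reached_inbox):
    total = caught_at_gateway + caught_by_local_filter + reached_inbox
    gateway_rate = caught_at_gateway / total          # what the SaaS gateway stops
    combined_rate = (caught_at_gateway + caught_by_local_filter) / total  # stopped before the inbox
    return gateway_rate, combined_rate

# Example: the nightly digest claims a couple of catches, while dozens land downstream.
gateway, combined = catch_rates(caught_at_gateway=2,
                                caught_by_local_filter=36,
                                reached_inbox=18)
print(f"Gateway catch rate:  {gateway:.0%}")
print(f"Combined catch rate: {combined:.0%}")
```

Even with generous assumptions, a gateway claiming one or two catches a day while dozens land downstream is nowhere near the 97-99% range.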


There Are No Trusted Sites: Security Edition

If you’ve been following this series, we’ve highlighted some of the breaches of trusted sites that were, or could have been, used to attack visitors. There’s nothing like hitting a major media or financial site and using it to hack anyone who wanders by that day. This week we’re breaking it down security style, thanks to multiple vulnerabilities at McAfee.

McAfee suffered multiple XSS and CSRF vulnerabilities in different areas, including a simple CSRF in their vulnerability scanning service (ironic, eh?). If you don’t know, Cross Site Request Forgery allows an attacker to “influence” your session if you are logged into a service. If you are logged into your bank in one window, they can use malicious code from the evil site under their control to transfer funds and such.

I know a lot of exceptional security types over at McAfee, so I don’t want to slam them too hard. This shows that in any large organization, web application security is a tough issue. Hopefully they will respond publicly, openly, and aggressively, which is really the best approach when you’ve been exposed like this.

Just a friendly reminder that you can’t trust anyone or anything on the Internet. Except us, of course.
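To make the CSRF mechanics a bit more concrete, here is a minimal sketch of the standard synchronizer-token defense in Python (the session store and handler are hypothetical, not tied to any particular framework): the site binds a random token to the session and rejects any state-changing request that doesn’t echo it back, so a forged request from an evil site fails even though the browser dutifully sends the victim’s cookies.

```python
import hmac
import secrets

# Hypothetical in-memory session store, purely for illustration.
sessions = {}

def start_session(user):
    """Issue a session with a random CSRF token embedded in every legitimate form."""
    token = secrets.token_hex(16)
    sessions[user] = token
    return token

def handle_transfer(user, submitted_token, amount, destination):
    """Reject state-changing requests that don't carry the session's token."""
    expected = sessions.get(user, "")
    if not hmac.compare_digest(expected, submitted_token):
        return "rejected: missing or invalid CSRF token"
    return f"transferred {amount} to {destination}"

token = start_session("victim")
print(handle_transfer("victim", token, 100, "savings"))   # legitimate form submission
print(handle_transfer("victim", "", 100, "attacker"))     # forged cross-site request fails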


Comments on Oracle’s Acquisition of Sun

On Monday at the RSA conference I learned that Oracle is purchasing Sun Microsystems. I was so busy/exhausted from the conference that I forgot about it until this week. This is pretty exciting! Whether it’s really a good or a bad thing depends upon your perspective. Technology-wise it’s a good match, but the corporate cultures are very dissimilar. I have spoken with a few current Sun employees who are really worried about what life will be like at the Big-O. However, I heard very much the same concern from many PeopleSoft employees, and the catastrophic fallout anticipated as part of that merger never happened; with the current economic situation, it probably won’t happen this time either.

I also have to say this is a much better fit, with Oracle being the acquirer, than it would have been with IBM or HP. The product lines are more complementary than IBM’s or HP’s, and I suspect there will be fewer layoffs than if either of those companies had made the acquisition. Sun’s people may not like the culture, but I have been hearing complaints from current and ex-Sun employees for years that they were unable to win market share despite having really innovative technologies, and there will be a sense of pride in having the products you worked on effectively marketed and sold.

When I worked at Oracle way back when, it was amazing to watch the sales dynamic that was going on. If the customer was making a $20M purchase of hardware and software, let’s say $17M of that was for the hardware. However, the customer’s motivation for the purchase was that they needed a solid database platform. That meant the $3M Oracle purchase was what mattered to the customer, and how well Oracle performed on the hardware was the deciding factor in the purchase. This meant the smaller database software company held sway over the larger hardware vendors. For years Oracle used this incredible leverage over their hardware partners and ‘squeezed’ them on pricing. Now Oracle is the huge company with great margins, but the market dynamic is really changing, and commoditization is moving right up the stack and squeezing their core business as well. It’s not just about the database any longer. Look no further than Cisco getting into the server/switch business and offering a unique take on virtualization and provisioning.

Several people I spoke with at the RSA conference all said the same thing: Oracle needs to own more of the data center in the coming years if they want to continue their growth curve. I believe Mr. Ellison meant “We’ll engineer the Oracle database and Solaris operating system together. With Sun we can make all components of the IT stack integrated and work well.” quite literally, and it reflects Oracle’s long-term growth strategy. Bundling Solaris with whatever virtualization technologies are at their disposal, InfiniBand switches, and a full array of servers gives Oracle a chip-to-web-app presence in the data center that makes the LAMP stack look like a child’s toy.

From a security perspective, Oracle now has some really compelling technologies at their disposal. Trusted Solaris is the most secure general purpose OS in the world. Sun’s data encryption and authentication/key management may not be best of breed, but they are solid products that could generate considerable revenue in the hands of Oracle’s professional services arm. And while it is really difficult to secure a JVM properly, it can be done, and the beauty of the Java programming language is that it flat out has the best object model I have ever used. I can properly encapsulate and protect objects, and the language syntax is far easier to read and analyze for coding and security flaws than C++ or other commonly used environments. If Oracle decides to knit these components together within their Data Vault variant of the Oracle database, you will have all of the elements for a very secure development environment.

One of the rumors I have been hearing was that Oracle would kill off MySQL. This has been covered in some of the blogs as well. I personally think this is nonsense. MySQL is a very well-designed database. It is modular and can not only be tuned like an Oracle database, but is configured more like a Linux kernel to meet the user’s specific needs. MySQL has a rabid following and what I estimate at around 15 million installations around the world. When you couple this with the BEA pieces in place, and the Java programming language and associated tools/platforms Sun has, you have a really phenomenal web application development suite. Oracle no longer has to ‘compete’ with MySQL – now they have a real answer to PostgreSQL (no, Oracle Lite fans, that was not the answer) without undermining their core database business. What Oracle really needs to do is provide a PL/SQL parser/pre-processor for MySQL, giving developers not only the option to use existing SQL/PSM, but also the Oracle-specific procedural language most DBAs are familiar with. This would keep the existing MySQL users happy, and offer a migration path onto the core Oracle database platform should they outgrow MySQL’s capabilities.

Also keep in mind that Oracle purchased Innobase, whose InnoDB is not really a database, but rather an underlying storage engine commonly used by MySQL. One of the cool things about MySQL is that you can configure it with different storage backends, such as clustering or ISAM. So Oracle owns MySQL and one of the commonly used storage technologies for it, and that platform has strong user affinity – now they just need to find a way to leverage that and make money from it. Letting that community wither and die just does not make sense. To me this looks like a very complementary acquisition.
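As a quick illustration of that storage backend flexibility, here is a minimal sketch using a Python MySQL driver (the connection details are hypothetical, and it assumes the mysql-connector-python package is available); the engine is simply declared per table:

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

# Hypothetical connection details -- substitute your own server and credentials.
conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="demo")
cur = conn.cursor()

# The storage backend is a per-table declaration: transactional InnoDB here...
cur.execute("""
    CREATE TABLE orders (
        id    INT PRIMARY KEY AUTO_INCREMENT,
        total DECIMAL(10,2) NOT NULL
    ) ENGINE=InnoDB
""")

# ...and a different engine for a table with different needs.
cur.execute("""
    CREATE TABLE audit_log (
        logged_at DATETIME NOT NULL,
        message   TEXT
    ) ENGINE=MyISAM
""")

conn.commit()
cur.close()
conn.close()
```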


Innovation, the RSA Conference, and Leap Years

On Thursday at the RSA Conference, I had the opportunity to attend a lunch with the conference advisory board: Benjamin Jun of Cryptography Research, Tim Mather of RSA, Ari Juels of RSA Laboratories, and Asheem Chandna of Greylock Partners. It was an interesting event, and Alex Howard of TechTarget did a good job of covering the discussion in a recent article. As with many things associated with the RSA Conference, it took me a bit of time to digest and distill all the various bits of information crammed into my sleep-deprived brain. I find that these big events are an excellent opportunity to smash my consciousness with far more data than it can possibly process, and eventually a few trends emerge. No, not this year’s “hot technology”, but macro themes that seem to interweave the disparate corners of our practice and industry. It might run contrary to many of the articles I’ve read, or conversations I’ve had, but I think this year’s subtext was “innovation”. (And not because I presented on it with Hoff.)

Every year when I run into people on the show floor, the first question they tend to ask is “see anything new and interesting?” Finding something new I care about is pretty rare these days, for two reasons. First, if it’s in my coverage area I sure as heck had better know about it before RSA. Second, most of the advances we see these days are evolutionary, and earth-shattering new products are few and far between. That doesn’t mean I don’t think we’re innovating, but that innovation is more pervasive throughout the year and less tied to any single show floor. One really interesting bit that popped out (from Asheem) was that the Innovation Station had only 14 applicants last year, and over 50 this year. I think in these days of tight marketing budgets for startups, a floor booth is hard to justify, and perhaps some of the total crap was weeded out, but security startups are far from dead (just look at my Inbox).

But more interesting than innovation in startups is innovation from established players. For the first time in a very long time I’m seeing early tendrils of real innovation leaking from some of the big vendors again. We talked about it for a few minutes at the lunch, but it’s obvious that the security industry was able to coast for a few years on its core approaches. Customers were more focused on performance and throughput than new technologies, so there was little motivation for big innovation. The limited market demand pushed innovation into the realm of startups, where new technologies could incubate until the big companies would snatch them up. Our financial friends at Marker Advisors even talked about this trend in a recent guest post, and how “traditional” buying cycles are now disrupted by technology turnover and changing client requirements. It all ties in perfectly to Hoff’s Hamster Sine Wave of Pain.

On the other side, we’re seeing some of the most dramatic attack innovation since the discovery of the buffer overflow. And for the first time, these attacks are causing consistent, real, measurable, and widespread losses. We’ve seen major financial institutions breached, the plans for the Joint Strike Fighter stolen (‘leaked’ doesn’t nearly convey the seriousness), and malware hitting the major news outlets (with often crappy reporting). There is evidence that all aspects of our information society are deeply penetrated and fallible. Not that the world is coming to an end, but we can’t pretend we don’t have problems.

This combination of buying cycles, threat innovation, growing general awareness, and product and practice innovation creates what may be the most interesting time in history to work in security. We’ve never before had such a high profile, faced such daunting challenges, and seen such open opportunities. Merely building on what we’ve done before doesn’t have a chance of restoring the risk balance, and there’s never been better motivation for big financials, the government, and big manufacturing (you know, the guys with all the money) to invest in new approaches. I’d call it a “Perfect Storm” if that phrase weren’t banned by the Securosis Guide of Crappy Phrases, Marketing Hyperbole, and Silly, Meaningless Words (after “holistic” and before “synergy”).

Frankly, we don’t have any choice but to innovate. When market forces like this align, the outcome is inevitable. Tim Mather referred to the National Cyber Leap Year, a program by the government to engage industry and push for game-changing security advancements. Not that the Leap Year program itself will necessarily succeed, but there is clear recognition that innovation is essential to our survival. We can’t keep layering the same old crap onto hot newness and expect a good result. Those of you who hate change are going to be seriously unhappy. Those who revel in challenges are in for a wild ride. The good news is there’s no way we can lose – it isn’t like society will let itself break down completely and go all Road Warrior. Especially since Mel turned into an anti-Semitic whack job.

(Image courtesy www.pdrater.com)


How Do You Deploy Your Patches?

Last week I posted an outline for a patch management cycle to base Project Quant metrics on. Based on some feedback, I think we need to hear from those of you who actually do this for a living (you really don’t want to know the crappy process we used back in my sysadmin days). If you have a moment, please pop over to the forums and let us know what you are using for your process. (If you want to leave anonymous feedback, instead of the forums you can leave it as a comment on the main post; this is a weird limitation of our platform.) Thanks!


LogLogic Acquires Exaprotect

Another interesting news item during the RSA show that I am just getting time to comment on is LogLogic’s announcement that they have acquired Exaprotect. When LogLogic announced a partnership with Exaprotect a few months back, my initial reaction was “Who?” Actually, I had heard of the company, but knew very little about the technology. I had not heard any of the companies I speak with on a regular basis mention them, so I had not been paying very close attention to this small firm. When I went to Exaprotect’s website to see what products they offered, I really was unable to tell. It looked like a carbon copy of the LogLogic product benefits summary! It is amazingly difficult to understand what differentiates one product from another on corporate web sites when they are all attempting to cover the current market drivers, and do so at the expense of explaining what they actually do. The company is not very well known by those of you who do not follow this space closely, but they do offer a security event management product, along with a couple of other interesting pieces in the areas of configuration management and policy management.

The reason this acquisition is important is twofold. First, this is the removal of the last line of distinction between log management vendors and SEM vendors. ArcSight, LogLogic, eIQ Networks, Q1 Labs, LogRhythm, NitroSecurity, and so on are all covering log management and security analysis. Granted, the degree to which each vendor provides the respective capability varies, and each has its own strengths. All in all, these systems collect disparate events, analyze the events in relation to some policy, and provide storage and reporting. The difference was the type of events collected, the speed with which the analysis was conducted, and the audience for the results. These distinctions were usually split down the middle: either near-real-time security response, or forensic analysis and event correlation. What we will see in the coming quarters is adjustment of vendor architectures so these offerings can be efficiently merged into seamless products, continuing to provide evolutionary updates to near-real-time and forensic offerings, while looking for ways to differentiate from competitors.

The second reason is that it spotlights the technical and value path this market segment is (and needs to be) headed down. The tough question, now that the vendors collect just about every relevant piece of security & operational data available, is what do you do with that data? How do you differentiate yourself? How do you provide the customer more value? Sure, we are going to see new features appended to the core offerings, a la database protection, but the more important features/functions will have to do with configuration management, business process verification, and policy management/enforcement. Configuration management provides the vendors with a big missing piece of preventative control and baselining of systems that are critical for most compliance efforts. It’s not that difficult to implement, fits nicely within a log management architecture, and offers value to several buying centers. Policy management, provided the vendors actually can take a business policy and automatically map it to the underlying data streams available, will also provide a huge leap in value to customers and speak to non-technical audiences.

The final piece of the puzzle is a flexible analytics engine, so policy verification can be performed in an appropriate time-frame in the specific customer environment, in order to verify business continuity and efficacy. I use the word ‘verification’ because enforcement is not really the customer requirement, and more importantly blocking is not typically the appropriate way to remediate problems – the solution is often more complex. All three of these offerings show SEM moving up the stack and making sense of business processing and compliance in the business context. I look at the LogLogic acquisition as a step necessary to compete, not just in the basic SEM infrastructure of near-real-time event processing, but in all three of the evolutionary directions security event management is heading. That’s not an endorsement of the Exaprotect technology – I have not gotten my hands on it and could not tell you how well it works – but it does encapsulate the segment trends. I intend to delve into each of these trends in more depth.
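To ground the “collect disparate events and analyze them against a policy” description, here is a minimal near-real-time correlation sketch (the event format, rule, and thresholds are hypothetical, purely for illustration):

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Minimal correlation sketch: flag a source with 5+ failed logins inside 60 seconds.
WINDOW = timedelta(seconds=60)
THRESHOLD = 5
failures = defaultdict(deque)

def ingest(event):
    """event is a dict like {'time': datetime, 'type': 'auth_failure', 'src': '10.0.0.5'}."""
    if event["type"] != "auth_failure":
        return None
    q = failures[event["src"]]
    q.append(event["time"])
    # Drop anything outside the sliding window.
    while q and event["time"] - q[0] > WINDOW:
        q.popleft()
    if len(q) >= THRESHOLD:
        return f"ALERT: possible brute force from {event['src']}"
    return None

now = datetime(2009, 5, 5, 12, 0, 0)
for i in range(6):
    alert = ingest({"time": now + timedelta(seconds=i * 5),
                    "type": "auth_failure", "src": "10.0.0.5"})
    if alert:
        print(alert)
```

Real products do this across thousands of event types and policies at once; the point is simply that the same collected data feeds both the near-real-time rule and the forensic record.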


Friday Summary: May 1, 2009

Sometimes the most energizing thing you can do is absolutely nothing. Last week at RSA was absolutely insane, in a good way. It’s kind of like being a kid and going to summer camp. You get to see all the friends who live in other towns, you all go nuts for a week with minimal supervision, and then everyone staggers home all excited. Between the Recovery Breakfast, 4 official RSA panels, a Jericho panel, my 160+ slide Friday morning session with Chris Hoff, the nonstop speed-dating during the day, and parties at night, I should really be in much worse shape. But I found this year’s RSA to be incredibly motivating on multiple levels.

First, I think this is absolutely one of the best times to be in information security. Yes, major crap is hitting the fan all over the place, including massive national security, financial, and infrastructure breaches, but security is also hitting the front pages and reaching into the common consciousness. This is exactly the kind of environment true security professionals thrive on – with challenges and opportunities on all sides. As someone who loves the practice and theory of security, I find these challenges to be absolutely energizing, and I wouldn’t want to be doing anything else. Well, except for maybe being an astronaut.

Next, RSA was extremely motivating from a corporate standpoint. I won’t say much, but it validated what we’re trying to do, and how we are positioning ourselves.

Finally, it was a very motivating week on a personal level. I used to have friends at work, and acquaintances in the industry. But these days I find some of my closest friends are scattered throughout the world in different jobs. I realized I spend more time interacting with many of you than I do with my local ‘meatspace’ friends outside of the industry. I especially appreciated the group that took me out for my birthday on Monday night – it really eased the pain of spending yet another family event away from my wife and (new) daughter.

After RSA I took 4 days off, and the combination of intensity followed by relaxation was a major recharge, but it didn’t leave me much content for this week’s summary. Except: stay away from, like, every Adobe product on the planet, since they are all full of 0days.

One reminder – if you’d like to get our content via email instead of RSS, please head over and sign up for the Daily Digest (it goes out every night). We’re also thinking of creating a Friday Summary-only version, so let us know if that would be of interest. And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Martin and Rich on the weekly Network Security Podcast.
  • I did a series of three videos and an executive overview on DLP for Websense. It was kind of cool to go to a regular studio and have it professionally edited. The videos (all about 2 minutes long) and Executive Guide are designed to introduce technical or non-technical executives to DLP. It’s all objective stuff, and cut-down versions of our more extensive materials.
  • I show up in the Sydney Morning Herald, based on some TidBITS/Mac writing.
  • Speaking of TidBITS, I wrote up some thoughts on how to read Mac security articles.
  • I was quoted in a TechTarget article on cloud computing, based on my involvement in the Jericho panel.

Favorite Securosis Posts
  • Rich: The latest Project Quant post – we really need your feedback on the patch management cycle!
  • Adrian: Rich’s post on the Security Industry Anti-Disambiguation Movement. Having watched this first-hand at a couple of startups, I know how well the mere mention of a competing technology by one of the major competitors could halt your POC process in an instant.

Favorite Outside Posts
  • Adrian: My favorite external was Greg Young’s comment on Becoming the Threat… an excellent analysis of something we see in society, and certainly something that is a problem here in Phoenix. Oddly, this is something I do NOT see with most corporate IT. Why is that?
  • Rich: Chris Eng’s Decoding the Verizon DBIR 2009 Cover. Very cool.

Top News and Posts
  • Joint Strike Fighter plans nicked. Will someone in charge WAKE THE FUCK UP!
  • Good: Microsoft removes ‘AutoRun’ option for memory sticks. Bad: pushing 8 out through auto-update? What if I don’t want it?
  • Targeted worms and banking scams.
  • Adobe is having a seriously bad run. More 0days.
  • Interesting take on WAF+VA.
  • The Black Hat call for papers is extended.

Blog Comment of the Week
This week’s best comment was from Ant, in response to Rich’s post on the Security Industry Anti-Disambiguation Movement:

Well I mint not have chosen those terms, but I personally* fully endorse the sentiment! A different problem arises where a perfectly serviceable term is pressed into use in several different but not wholly dissimilar markets, leading to ambiguity and confusion – e.g., identity management, policy management. So… it’s not strictly anti-disambiguation, but it some vendors are guilty of disingenuously using a term which doesn’t apply to them in their market. – Ant
* i.e., this is not (necessar


Project Quant: Patch Management Cycle

While we don’t plan on posting every Project Quant update here on the main blog, we will be cross-posting some of the more significant project updates, as well as other content we feel is relevant to our broader readership. (For these posts we will turn off comments, to consolidate them all in the Project Quant area.) So here is our first pass at defining a patch management process for the project.

Although we posted some of our initial thoughts, and have been getting some great feedback from everyone, Jeff and I realized that we haven’t even defined a standard patch management cycle yet to start from. DS, Dutch, and a few others have started posting some metrics/variables, but we didn’t have a process to fit them into. I’ve been researching other patch management cycles, and here’s my first stab at one for the project. You’ll notice it’s a little more granular than most of the other ones out there – I think we need to break out phases in more detail, both to match the different processes used by different organizations, and to give us cleaner buckets for our metrics. Here’s a quick outline of the steps:

  • Monitor for Release/Advisory: Anything associated with tracking patch releases, since all vendors follow different processes.
  • Acquire: Get the patch.
  • Evaluate: Initial evaluation of the patch. What’s it for? Is it security-sensitive? Do we use that software? Is the issue relevant in our environment? Are there workarounds or dependencies?
  • Prioritize/Schedule: Prioritize based on the nature of the patch itself, and your infrastructure/assets. Then build out a deployment schedule, based on your prioritization.
  • Test and Certify/Accredit: Perform any required testing, and certify the patch for release. This could include any C&A requirements for you government types, compliance requirements, or internal policy requirements.
  • Create Deployment Package: Prepare the patch for deployment.
  • Deploy.
  • Confirm Deployment: Verify that patches were properly deployed. This might include use of configuration management or vulnerability assessment tools.
  • Clean up: Clean up any bad deployments, remnants of the patch application procedure, or other associated cruft/detritus.
  • Document and Update Configuration Standards: Document the patch deployment, which may be required for regulatory compliance, and update any associated configuration standards/guidelines/requirements.

This is a quick and dirty pass, meant to capture the macro-level steps in the process. I know not all organizations follow, or need to follow, a process like this, but it will help us organize our metrics. Let me know what you think – I’m sure I’m missing something…

To comment on this post, please see the original over in the Project Quant area.
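As a purely illustrative sketch of how those phases could become metrics buckets (the phase names follow the outline above; the patch name and cost fields are hypothetical placeholders, not proposed metrics), something like this is the shape I have in mind:

```python
from dataclasses import dataclass, field

# Illustrative metrics buckets for the patch management cycle outlined above.
PHASES = [
    "Monitor for Release/Advisory", "Acquire", "Evaluate", "Prioritize/Schedule",
    "Test and Certify/Accredit", "Create Deployment Package", "Deploy",
    "Confirm Deployment", "Clean up", "Document and Update Configuration Standards",
]

@dataclass
class PhaseMetrics:
    phase: str
    staff_hours: float = 0.0   # hypothetical placeholder metric
    tool_cost: float = 0.0     # hypothetical placeholder metric

@dataclass
class PatchCycle:
    patch_name: str
    phases: dict = field(default_factory=lambda: {p: PhaseMetrics(p) for p in PHASES})

    def total_hours(self) -> float:
        return sum(m.staff_hours for m in self.phases.values())

# Usage: record effort against each bucket as a patch moves through the cycle.
cycle = PatchCycle("example-vendor-patch")
cycle.phases["Evaluate"].staff_hours = 2.5
cycle.phases["Deploy"].staff_hours = 8.0
print(f"{cycle.patch_name}: {cycle.total_hours()} staff hours so far")
```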


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.