Securosis

Research

Orchestria Enters DLP Market- Underestimates Competition With Totally Inaccurate Marketing

Orchestria finally announced their first “true” general DLP product. For those of you who don’t know, Orchestria has danced around this space for a few years now. They started with a product narrowly focused on helping certain financial services firms, particularly broker/dealers, manage compliance issues around insider trading and privacy. Basically, you can think of it as a client-centric (with some network monitoring) DLP solution focused on one category of violations. It didn’t work well as a general DLP solution, but that wasn’t their market. Interestingly enough, it was based on Autonomy’s Aungate technology, but then Autonomy started pushing Aungate competitively and Orchestria had to do a little re-working (industry rumor stuff, this didn’t come through confidential channels). Autonomy has since bought Zantaz and is combining the products.

Anyway, back to Orchestria. Since I worked with them before and don’t know exactly how much is public about the new product, I can’t go into any details. What I’m comfortable saying is that it looks interesting, covers the bases to be considered more Content Monitoring and Filtering than just DLP, and I’ll withhold judgement until we see some deployments and competitive evaluations. But they might not get the chance if their sales guys are as poorly educated on the competition as the press release and their product site indicate. They claim to be the first “next generation” DLP solution, filling the gaps left uncovered by others. Let’s look at a few claims:

Unlike first-generation DLP software, Orchestria provides coverage across all points of risk within an enterprise, detects violations accurately with minimal false positives, and can proactively block true infractions.

Weird. That’s what customers tell me all the major DLP solutions do, especially the top five of Vontu, Reconnex, Websense, Vericept, and EMC/Tablus. My assessment is that every one of these products provides that, and some others, like Code Green Networks (http://www.codegreennetworks.com), a mid-sized play, also provide it.

Orchestria Multi-Layered Defense – Orchestria Multi-Layered Defense leverages multiple network, server, client, import and archive agents to ensure control across all forms of electronic data, including messages with encrypted and password-protected files, internal messages, disconnected laptops, files at rest, and mobile storage devices – most of which are ignored by first-generation solutions.

Totally untrue. All of the top five do all, or most, of that. Sometimes it takes third-party integration, so maybe that’s the grey area they’re taking advantage of.

Orchestria Full-Dimensional Analysis – Limited to content-focused inspection, first-generation platforms falsely flag numerous legitimate messages. These “false positives” create a significant review burden and prevent organizations from implementing controls that block or correct messages before they are sent or files before they are saved. Orchestria’s Full-Dimensional Analysis not only analyzes content, but also dynamically examines content-around-content, message context, the identity of sender and recipients, hierarchy, and user input. This approach can reduce false positives by more than 90% compared to first-generation solutions.

I’m unaware of any major DLP product that doesn’t use context as well as content. Actually, they sort of have to in order to work.
Orchestria Incident-Appropriate Action – Far beyond passive post-incident review employed by first-generation technologies, Orchestria proactively protects enterprises by matching responses specifically to the type and severity of the violation. In addition to providing the industry’s leading workflow-enabled review capability, Orchestria supports a variety of “before-the-send” actions including blocking, correcting, and quarantining. This solution also automatically classifies, routes, and stores sensitive messages and files to meet a variety of records management and legal mandates.

These are common features in any successful DLP product. I’ll give them a little credit: most of the other DLP tools don’t focus on compliance archiving, and require more manual tuning and technology integration to manage it. But workflow review? Hell, I’ve been working with all the major DLP vendors on this for years (at least the ones that didn’t come up with good systems on their own).

Dr. Sara Radicati of The Radicati Group said, “The key advantage with Orchestria DLP is responding to potential violations with automated incident-appropriate actions – from a routine warning, to forwarding to a supervisor, to nothing at all. As it all happens in real time, customers will know that their messages haven’t been banished to a backed-up review queue for hours or even days.”

This is why I’m very careful about the custom quotes I’ll do (none for years now, but never say never); they make you sound like a… well, I’d use the word we’re all thinking if it were a guy, but I’ll never say that about a woman who doesn’t print it on her business card. I hope I get to stay on my high horse indefinitely; now that I’m an independent consultant we’ll see how long I last.

“Orchestria’s DLP solution provides a new and different approach that fulfills all requirements for effective protection,” said Bo Manning, Orchestria’s president and chief executive officer. “It covers all points of risk within the enterprise, accurately distinguishes violations from false positives, and enables the right action, including proactive protection – all on the industry’s most flexible architecture.”

Bo, I think you’ve done some cool stuff, but you’re better off focusing on what you really bring that’s new to the market (and you do have a couple of things) than on exaggerated marketing that won’t stand up once someone glances at a competitor. Your website is even more full of omissions and errors than this release.

DLP is probably the ugliest market I covered as an analyst. It’s seriously rough and tumble, with over a dozen vendors fighting over what was only $50M last year, and will probably only be $100-120M this year. Unless by “first generation” you mean products from two years ago, you’re in for a surprise once you go into competitive evaluations.

Marketing aside, I think Orchestria will be one to watch, and a few competitive wins could open up some big opportunities. Right now the jury is out, and it’s clear whoever wrote their marketing materials needs to take a close look at the competition.


Microsoft Can’t Manage Third-Party Patches, Even Though It’s A Good Idea

Cutaway has a good post up today over at Security Ripcord. In it, he suggests that Microsoft should… well, I’ll let him say it:

Here is my solution: Microsoft needs to come up with a Central Update Console that software and driver developers can hook to configure automatic updates. They already provide this type of feature through the “Add/Remove Programs” console. Good developers utilize this to help users and administrators manage the software that is installed on their systems. How hard would it be to come up with a solution that other developers could hook to help with centralizing the management of updates and provide a significant positive impact on the overall security of every computer on the Interweb? Although the design, development, testing, implementation, and maintenance of this project would be challenging, I am willing to bet that this would be a small project in the grand scheme of Microsoft OS development. They don’t need to take every software vendor into consideration, they just need to come up with one method all of them could use.

This is something I’ve actually put some thought into (and not just because Cutaway and I talked about it a couple weeks ago), but I don’t think it can work. At least not today. Managing vulnerabilities and patches is a huge issue, with a moderately sized third-party market just to deal with it. Microsoft provides patches for their own software only (with a few exceptions, like a recent ATI driver update), not for non-Microsoft software, and I think that’s for two reasons that I don’t expect to change anytime soon.

Antitrust- there’s an entire market dedicated to vulnerability and patch management. MS can’t step in and include this in the OS, however useful an idea, without having to face antitrust accusations. Due to past mistakes, they are often restricted from including features other OS vendors don’t blink an eye at. Take a look at all the whining by Symantec and McAfee over PatchGuard.

Liability- it doesn’t matter how many warnings and disclaimers MS puts on the darn thing; the first time a bad third-party patch propagates through a Microsoft central patch console and blows up systems (which is inevitable), the world will cry havoc and let slip the dogs of alt.ms.sucks- and at least a few lawyers, wanting a piece of the MS pot of valuation.

On the enterprise side this isn’t as much of an issue, since most organizations don’t use the update function built into Windows (although they do use WSUS, Windows Server Update Services). Consumers, on the other hand, rely heavily on Microsoft for their updates, and some sort of central service for third-party patches could really help keep their systems current. Especially for device drivers; while applications can build in their own update functions and check whenever they’re used, device drivers represent a huge class of vulnerability that even most enterprises don’t pay enough attention to.

Also, as software gets more and more intermingled, the risk of relying on application launch to check for patches becomes a problem. Components can represent an exploit risk through a web browser or a virus, even if you haven’t launched the application in a long time. Today, vendors manage this by ignoring it, or by installing YAASTS (Yet Another Annoying System Tray Service) that runs constantly, draining your system resources.

Thus I think Cutaway’s idea of a central patch service could provide a lot of value and help improve security. No argument there.
But it represents a risk to Microsoft that I just don’t think the product managers, never mind the lawyers, will let them take.


Anyone Going To SANS Vegas Next Week?

I’m probably going to swing out to Vegas for a day or two, but haven’t figured out what days yet. If you’re going and want to meet up, drop me a line in the comments or at rmogull@securosis.com.


Avast! Ye Scurvy Dogs!

Yarr! Today be Talk Like A Pirate Day, and we’ll not be having no landlubber speak on this here vessel. So grab ye cutlass, man yer station, and PREPARE TO REPEL BOARDERS!!!


Yes, The World Has Changed And So Must We

Boy, Chris is all riled up over my criticism of Jericho. Let me put this bad boy to bed, at least from my side. Chris missed the point of my last post, and my editor tells me it might be because of how I wrote it. Thus I’ll be a little clearer in this one.

To quickly recap, it seems that they’re ruffled at Jericho’s suggestion that the way in which we’ve approached securing our assets isn’t working, and that instead of focusing on the symptoms by continuing to deploy boxes that don’t ultimately put us in the win column, we should solve the problem instead.

I’m not ruffled by that suggestion at all; I agree completely. I just think they communicate it very poorly.

The threats aren’t the same. The attackers aren’t the same. Networks aren’t the same. The tools, philosophy and techniques we use to secure them can’t afford to be, either.

Agree completely. Look at pretty much everything I’ve published over the past 5-6 years. As a matter of fact, I’m doing my best to contribute actionable advice, models, and frameworks to manage these problems. Heck, I barely even talk about “traditional” network security, since the world has moved on. Go back to my original posts: when I criticize Jericho it’s over how they communicate, and that they spend too much time stating the obvious, not that I disagree with our need to change.

Because we do need to change how we approach security. We don’t need to throw away everything we’ve done, but there’s a lot of new work we need to complete. Data security, application security, how we manage users, identity, and fundamentally how we define trust all need to evolve.

I’m with ya man, don’t put the wrong words in my mouth. I’m contributing in my own small way over here, and if you know people at Jericho I’m happy to work with them directly. If I can keep the irons in the fire I might have some new stuff to reveal this week. Peace out.


Understanding and Selecting a DLP Solution: Part 3, Data-In-Motion Technical Architecture

Welcome to Part 3 of our series on Data Loss Prevention/Content Monitoring and Filtering. You should go read Part 1 and Part 2 before digging into this one. In this episode we’re going to spend some time looking at the various architectures we typically see in DLP products. This is a bit of a tough one, since we tend to see a bunch of different technical approaches. There’s no way to cover all the options in a little old blog post, so I’ll focus on the big-picture things you should look for. To structure things a bit, we’ll look at DLP for data-in-motion (network monitoring/filtering), data-at-rest (storage), and data-in-use (endpoint). For space reasons, this post will focus on data-in-motion, and the next post will drill into at-rest and in-use.

Network Monitor

At the heart of most DLP solutions lies a passive network monitor. This is where DLP was born, and it’s where most of you will start your data protection adventure. The network monitoring component is typically deployed at or near the gateway on a SPAN port (or a similar tap). It performs full packet capture, session reconstruction, and content analysis in real time.

Performance numbers tend to be a little messy. First, on the client expectation side, everyone wants full gigabit Ethernet performance. That’s pretty unrealistic, since I doubt many of you fine readers are really running that high a level of communications traffic. Remember, you don’t use DLP to monitor your web applications, but to monitor employee communications. Realistically, we find that small enterprises run well under 50 Mbps of relevant traffic, medium enterprises run closer to 50-200 Mbps, and large enterprises around 300 Mbps (maybe as high as 500 in a few cases).

Because of the content analysis overhead, not every product runs full packet capture. You might have to choose between pre-filtering (and thus missing non-standard traffic) or buying more boxes and load balancing. Also, some products lock monitoring into pre-defined port and protocol combinations, rather than using service/channel identification based on packet content. Even if full application channel identification is included, you want to make sure it isn’t off by default; otherwise you might miss non-standard communications, such as tunneling over an unusual port.

Most network monitors are just dedicated servers with DLP software installed; a few vendors deploy as a true specialized appliance. While some products have their management, workflow, and reporting built into the network monitor, most offload this to a separate server or appliance. That’s where you’ll want the big hard drives to store policy violations, and this central management server should be able to handle distributed, hierarchical deployments.
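To make that channel-identification point a little more concrete, here’s a minimal sketch of classifying traffic by payload signatures instead of destination port, so something like HTTP tunneled over an odd port still gets flagged. This is my own toy example, not any product’s detection engine; it assumes Python with the scapy library installed and capture privileges on whatever interface mirrors your SPAN port, and the signature and port lists are deliberately simplistic.

```python
# Toy sketch of content-based channel identification (illustrative only; real
# DLP monitors use full protocol parsers and session reconstruction).
from scapy.all import IP, Raw, TCP, sniff

# Crude payload signatures, and the ports where we'd "expect" each protocol.
SIGNATURES = {
    "HTTP": (b"GET ", b"POST ", b"HTTP/1."),
    "SMTP": (b"EHLO", b"HELO", b"MAIL FROM:"),
    "FTP": (b"USER ", b"PASS "),
}
EXPECTED_PORTS = {"HTTP": {80, 8080}, "SMTP": {25, 587}, "FTP": {21}}


def classify(payload: bytes):
    """Guess the application protocol from the first bytes of the payload."""
    for proto, tokens in SIGNATURES.items():
        if any(payload.startswith(t) for t in tokens):
            return proto
    return None


def inspect(pkt):
    """Flag recognized protocols showing up on non-standard destination ports."""
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    proto = classify(bytes(pkt[Raw].load))
    if proto and pkt[TCP].dport not in EXPECTED_PORTS[proto]:
        print(f"{proto} on non-standard port {pkt[TCP].dport}: "
              f"{pkt[IP].src} -> {pkt[IP].dst}")


if __name__ == "__main__":
    # Requires capture privileges; store=False keeps memory flat on busy links.
    sniff(filter="tcp", prn=inspect, store=False)
```

A port-locked, pre-filtered product would only ever look at the expected ports, which is exactly the gap described above.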
Email

The next major component is email integration. Since email is store-and-forward, you can gain a lot of capabilities, like quarantine, encryption integration, and filtering, without the complexity of blocking synchronous traffic. Most products embed an MTA (Mail Transfer Agent), allowing you to just add the DLP product as another hop in the email chain. Quite a few also integrate directly with some of the major existing MTAs/email security solutions for better performance.

One weakness of this approach is that it doesn’t give you access to internal email. If you’re on an Exchange server, internal messages never make it through the MTA, since there’s no reason to send that traffic out. To monitor internal mail you’ll need direct Exchange/Notes integration, which is surprisingly rare in the market. We’re also talking true integration, not just scanning logs/libraries after the fact, which is what a few vendors consider internal mail support. Good email integration is absolutely critical if you ever want to do any filtering, as opposed to just monitoring. Actually, this is probably a good time to drill into filtering a bit…

Filtering/Blocking and Proxy Integration

Nearly anyone deploying a DLP solution will eventually want to start blocking traffic. There’s only so long you can take watching all your juicy sensitive data running to the nether regions of the Internet before you start taking some action. But blocking isn’t the easiest thing in the world, especially since we’re trying to allow good traffic, block only bad traffic, and make the decision using real-time content analysis.

Email, as we just mentioned, is pretty easy. It’s not really real-time, and it’s proxied by its very nature; adding one more analysis hop is a manageable problem in even the most complex environments. Outside of email, most of our communications traffic is synchronous- everything runs in real time. Thus if we want to filter it we either need to bridge the traffic, proxy it, or poison it from the outside.

With a bridge we just have a system with two network cards, and we perform content analysis in the middle. If we see something bad, the bridge closes the connection. Bridging isn’t the best approach for DLP, since it might not stop all the bad traffic before it leaks out. It’s like sitting in a doorway watching everything go past with a magnifying glass: by the time you get enough traffic to make an intelligent decision, you may have missed the really good (bad) stuff. Very few products take this approach, although it does have the advantage of being protocol agnostic.

Our next option is a proxy. A proxy is protocol/application specific and queues up traffic before passing it on, allowing for deeper analysis (get over it Hoff, I’m simplifying on purpose here). We mostly see gateway proxies for HTTP, FTP, and IM. Almost no DLP solutions include their own proxies; they tend to integrate with existing gateway/proxy vendors such as Blue Coat/Cisco/Websense instead. Integration is typically through the ICAP protocol, allowing the proxy to grab the traffic, send it to the DLP product for analysis, and cut communications if there’s a violation. This means you don’t have to add another piece of hardware in front of your network traffic, and the DLP vendors can avoid the difficulties of building dedicated network hardware for inline analysis. A couple of gateways, like Blue Coat and Palo Alto (I may be missing
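To make that proxy callout pattern a bit more concrete, here is a minimal sketch of the division of labor: the gateway hands a chunk of outbound content to the DLP engine and gets back an allow-or-block verdict. This is purely illustrative and not any vendor’s API; real deployments speak ICAP to a commercial proxy, and real content analysis goes far beyond a couple of regular expressions, but the flow is the same. The endpoint name and policy patterns below are my own inventions.

```python
# Toy stand-in for the proxy-to-DLP callout described above. A gateway would
# POST outbound content to /analyze and get back an allow/block verdict based
# on simple content analysis. Illustrative only; endpoint and policies are
# hypothetical, not any product's interface.
import json
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

# Trivial detectors; real products layer regexes, database fingerprints,
# partial-document matching, and contextual rules (sender, destination, channel).
POLICIES = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def analyze(content: str) -> dict:
    """Return a verdict plus the list of policies the content tripped."""
    hits = [name for name, rx in POLICIES.items() if rx.search(content)]
    return {"verdict": "block" if hits else "allow", "violations": hits}


class DLPHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/analyze":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        result = json.dumps(analyze(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)


if __name__ == "__main__":
    # The gateway/proxy would call this before releasing each outbound request.
    HTTPServer(("127.0.0.1", 1344), DLPHandler).serve_forever()
```

A gateway would call something like this synchronously for each outbound request and drop the connection on a “block” verdict; the email hop works the same way, except the MTA can also quarantine the message or hand it to an encryption gateway instead of just bouncing it.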


Jericho Needs Assistance Restating The Obvious

Sometimes it’s not even worth the effort. First Rothman, then Hoff decide to bring up our favorite red-headed stepchild (a term I use with fondness, since I have red hair and a stepfather), all based on an SC Magazine article. I suppose Jericho’s goals are admirable, but I can’t help but feel that they’re stating the blindingly obvious and doing a piss poor job of it.

For those of you not familiar with Jericho, take a quick gander over here. Basically, they’ve been advocating “de-perimeterization”, pushing people into new security architectures and dropping their firewalls (yes, they really said to trash the firewall, if you go back and look at some of their original press releases). These days they have a marginally better platform (speaking platform, not technology), and aren’t running around telling people to shut off firewalls quite as much. I’ll let them describe their position:

The group admits ‘deperimeterisation’ isn’t the most catchy phrase to explain multiple-level security, but Simmonds calls it an “overarching phrase” that “covers everything”. So what is it? According to the Jericho Forum, it is a concept that describes protecting an enterprise’s systems and data on multiple levels using a pick’n’mix of encryption, inherently secure computer protocols and data-level authentication. At the same time, it enables the free flow of secure data wherever and whenever it is needed, in whatever medium and between dissimilar organisations — such as banks and oil companies, for example. This kicks against the notion of security via a network boundary to the internet.

Or as Hoff restates: Your perimeter *is* full of holes so what we need to do is fix the problems, not the symptoms. That is the message.

Chris sometimes refers to a particular colleague of ours as Captain Obvious. I guess he didn’t want Richard to be lonely. Of course the perimeter is full of holes; I haven’t met a security professional who thinks otherwise. Of course our software generally sucks, and we need secure platforms and protocols. But come on guys, making up new terms and freaking out over firewalls isn’t doing you any good.

Anyone still think the network boundary is all you need? What? No hands? Just the “special” kid in back? Okay, good, we can move on now.

How about this: focus on one issue and stay on message. I formally submit “buy secure stuff” as a really good one to keep us busy for a while. You have some big companies on board and could use some serious pressure to kick those market forces into gear.


Repeat After Me: P2P Is For Stealing Music, Not Sharing Employee Records

Well, we finally know how Pfizer lost all those employee records. An employee installed P2P file sharing software on her laptop, and probably shared her entire drive. Oops. I bet I know one person who’s eating alone in the corporate lunchroom. We originally talked about this a few weeks ago. I’d like to remind people of my Top 5 Steps to Prevent Information Loss and Data Leaks; securing endpoints is number 5. I hate having to clamp down on employees with harsh policies, but limiting P2P on corporate systems is in the category of “reasonable things”. (Thanks to Alex Hutton for the pointer.)


What We Have Here Is A Failure To Communicate

Sigh. Again. More Jericho? Yep. Can’t let Hoff go without a retort, not after this. I’d like to quote my last post for a moment:

I suppose Jericho’s goals are admirable, but I can’t help but feel that they’re stating the blindingly obvious and doing a piss poor job of it. For those of you not familiar with Jericho, take a quick gander over here. Basically, they’ve been advocating “de-perimeterization”; pushing people into new security architectures and dropping their firewalls (yes, they really said to trash the firewall if you go back and look at some of their original press releases).

Now Hoff’s criticism of said post:

The Mogull decides that rather than contribute meaningful dialog to discuss the meat of the topic at hand, he would rather contribute to the FUD regarding the messaging of the Jericho Forum that I was actually trying to wade through. … I spent my time in my last post suggesting that the Jericho Forum’s message is NOT that one should toss away their firewall. I spent my time suggesting that rather than reacting to the oft-quoted and emotionally flammable marketing and messaging, folks should actually read their 10 Commandments as a framework.

Quick reminder that the platform really used to be about getting rid of the perimeter. I’m a huge data security wonk, and even I think we’ll always need a perimeter, while also building better controls into the data. If you want to look, this is one of their better early presentations. It’s not too bad. But I’m an open-minded guy, so I’ll drop the past and move into the present. Let’s look at the 10 commandments (Chris, I’m stealing your image to save typing time):

1. Agree. Security 101.
2. Agree, common sense.
3. Agree, seems obvious.
4. Agree, in an ideal world; we can get better and should strive towards it, but not rely on it.
5. Agree, any company with a laptop is implementing this already.
6. Agree, designed a model for this back in 2002 (I’m not sure I can share it, need to check with my former employer).
7. Agree, was part of that model, and we’re already seeing some of this today.
8. Agree, see federated identity. Nothing new.
9. Agree, this could be interesting but I think it needs a lot more development.
10. Agree, but again, pretty basic.
11. Agree, no one would disagree.

Chris, this messaging needs more refinement and a lot more meat. A lot of it isn’t revolutionary, yet much of the Jericho press coverage is sensationalistic and impedes their ability to get the message to the audience. They’ve built up so much baggage that they need to really work on the messaging. Quotes like this one don’t help the cause:

The group admits “deperimeterisation” isn’t the most catchy phrase to explain multiple-level security, but Simmonds calls it an “overarching phrase” that “covers everything”. So what is it? According to the Jericho Forum, it is a concept that describes protecting an enterprise’s systems and data on multiple levels using a pick’n’mix of encryption, inherently secure computer protocols and data-level authentication. At the same time, it enables the free flow of secure data wherever and whenever it is needed, in whatever medium and between dissimilar organisations — such as banks and oil companies, for example. This kicks against the notion of security via a network boundary to the internet.

You asked me to:

Repeat after me: THIS IS A FRAMEWORK and provides guidance and a rational, strategic approach to Enterprise Architecture and how security should be baked in.
Please read this without the FUDtastic taint:

It isn’t the FUD in the framework that’s the problem. It’s the FUD in the press quotes, and the lack of meat in the guiding principles (the commandments aren’t really a framework). I’m happy to retract my suggestion to focus on using market forces to pressure vendors. Better yet, I’m happy to contribute to the dialog. I’ve been doing it for years, and intend to keep doing it. Take a look at my Data Security Hierarchy (which is now dated; I’m working on a new framework which is much more specific). Also look at Dynamic Trust, if you can find it at Gartner (again, I can’t release material I don’t own).

… Spend a little time with Dr. John Meakin, Andrew Yeomans, Stephen Bonner, Nick Bleech, etc. and stop being so bloody American 😉 These guys practice what they preach and as I found out, have been for some time.

I’m happy to. I’m happy to spend as many hours as they’d like talking about specific models and frameworks for improving security and protecting data. You set up the meetings and I’ll be there. Data security is here today, but harder than it should be; with some big clients out there implementing the right models, we can make life easier for the rest of the world.

But I disagree that they’ve refined the messaging enough yet. Too much obviousness, not enough specifics to back the really cool ideas, and way too much FUD still in the press. That’s basic communications, and it needs work. I’m happy to help. You know where I am. Just shine your Stupendous Signal into the clouds and I’m on my way.


Network Security Podcast, Episode 77

Martin’s recruited me to co-host indefinitely, and I think we’re finally working out the kinks. This one is all over the map, but there were some interesting things to talk about.

Show Notes:

  • Loren’s review of last week’s podcast – We answered a question from Loren in the podcast, which Rich answered on the blog but I accidentally deleted. Honest.
  • Rich’s TD Ameritrade poll – What do you think the real culprit for the compromise was?
  • Analysing the TD Ameritrade Disclosure
  • TD Ameritrade’s 6 million customers hit with security breach
  • Introducing Security Mike’s Guide to Internet Security – Good news: Mike Rothman doesn’t look at all like the guy on the cover of this book. More good news: this is an idea I think will benefit a lot of people.
  • The Ghost in My FileVault
  • Tor madness reloaded
  • My interview with Shava Nerad, the Executive Director of the Tor Project – Anyone interested in hearing an update from Shava? I’m sure she’d be willing to come on the show again.
  • Tell-All PCs and Phones Transforming Divorce – Sorry if this is behind a NYT paywall, even though they’ve stated they’re going to tear it down soon.
  • Tonight’s music: Got to have a Job by Little Charlie and the Nightcats

Network Security Podcast, Episode 77


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.