Defining (Blog) Content Theft

My posts today on SecurityRatty inspired a bit more debate than I expected. A number of commenters asked: if someone still links back to my site, how can I consider it theft? What makes it different from other content aggregators?

This is actually a big problem on many of the sites where I contribute content. From TidBITS to industry news sites, skimmers scrape the content and often present it as their own. Some, like Ratty, aren’t as bad since they still link back. Others I never even see, since they skip the linking process. I’ve been in discussions with other bloggers, analysts, and journalists where we all struggle with this issue. The good news is that most of it is little more than an annoyance; my popularity is high enough now that people who search for my content will hit me on Google long before any of these other sites. But it’s still annoying.

Here’s my take on theft vs. legal use:

1. Per my Creative Commons license, I allow non-commercial use of my content if it’s attributed back to me. By “non-commercial” I mean you don’t directly profit from the content. A security vendor linking to my posts and commenting on them is totally fine, since they aren’t using the content directly to profit. Reposting every single post I put up, with full content (as Ratty does), and placing advertising around it, is a violation. I purposely don’t sell advertising on this site; the closest I come is something like the SANS affiliate program, a partner organization that I think offers value to my readers.
2. Thieves take entire posts (attributed or not) and do not contribute their own content. They leech off others. Even if someone produces a feed with my headlines, and maybe a couple-line summary, and then links into the original posts, I consider that legitimate.
3. Related to (2), search engines and feed aggregators are fine since they don’t repurpose the entire content. Technorati, Google, and others help people find my content, but they don’t host it. To get the full content people need to visit my site or subscribe to my feed. Yes, they sell advertising, but not around my full content, for which readers need to visit my site.
4. In some cases I may authorize a full representation of my content/feed, but it’s *my* decision. I do this with the Security Bloggers Network since it expands my reach, I have full access to readership statistics, and it’s content I like to be associated with.
5. Many people use large chunks of my content on their sites, but they attribute back and use my content as something to blog about, thus contributing to the collective dialog. Thieves just scrape, and don’t contribute.
6. Thieves keep taking content even when asked to cease and desist. I know two other bloggers who asked Ratty to drop them, and he didn’t. I know one who did get dropped on request, but I only found that out after I put up my post (and knew the other requests were ignored). I didn’t ask myself, based on the reports from others that were ignored.

Thus thieves violate content licenses, take full content rather than just snippets, ignore requests to stop, and don’t contribute to the community dialog. Attributed or not, it’s still theft (albeit slightly less evil than unattributed theft).

I’m not naive; I don’t expect the problem to ever go away. To be honest, if it does, it means my content is no longer of value. But that doesn’t mean I don’t reserve the right to protect my content when I can.
I’ve been posting nearly daily for two years, trying to put up a large volume of valuable content that helps people in their day-to-day jobs, not just comments on news stories. It’s one of the most difficult undertakings of my life, and even though I don’t directly generate revenue from advertising, I get both personal satisfaction and other business benefits from having readers on my site or reading my feed. To be blunt, my words feed my family. The content is free, but I own my words; they are not in the public domain.


Best Practices For Endpoint DLP: Part 1

As the first analyst to ever cover Data Loss Prevention, I’ve had a bit of a tumultuous relationship with endpoint DLP. Early on I tended to exclude endpoint-only solutions because they were more limited in functionality and couldn’t help at all with protecting against data loss from unmanaged systems. But even then I always said that, eventually, endpoint DLP would be a critical component of any DLP solution. When we’re looking at a problem like data loss, no individual point solution will give us everything we need.

Over the next few posts we’re going to dig into endpoint DLP. I’ll start by discussing how I define it, and why I don’t generally recommend stand-alone endpoint DLP. I’ll then talk about key features to look for, and finish with best practices for implementation. It won’t come as any surprise that these posts are building up into another one of my whitepapers; this is about as transparent a research process as I can think of. And speaking of transparency, like most of my other papers this one is sponsored, but the content is completely objective (sponsors can suggest a topic, if it’s objective, but they don’t have input on the content).

Definition

As always, we need to start with our definition of DLP/CMP: “Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis”. Endpoint DLP helps manage all three parts of this problem.

The first is protecting data at rest when it’s on the endpoint, or what we call content discovery (which I’ve written up in great detail). Our primary goal is keeping track of sensitive data as it proliferates out to laptops, desktops, and even portable media. The second part, and the most difficult problem in DLP, is protecting data in use. This is a catch-all term for DLP monitoring and protection of content as it’s used on a desktop: cut and paste, moving data in and out of applications, and even tying in with encryption and enterprise Document Rights Management (DRM). Finally, endpoint DLP provides data-in-motion protection for systems outside the purview of network DLP, such as a laptop out in the field.

Endpoint DLP is a little difficult to discuss, since it’s one of the fastest changing areas in a rapidly evolving space. I don’t believe any single product has every little piece of functionality I’m going to talk about, so (at least where functionality is concerned) this series will lay out all the recommended options, which you can then prioritize to meet your own needs.

Endpoint DLP Drivers

In the beginning of the DLP market we nearly always recommended organizations start with network DLP. A network tool allows you to protect both managed and unmanaged systems (like contractor laptops), and is typically easier to deploy in an enterprise (since you don’t have to muck with every desktop and server). It also has advantages in terms of the number and types of content protection policies you can deploy, how it integrates with email for workflow, and the scope of channels covered. During the DLP market’s first few years it was hard to even find a content-aware endpoint agent. But customer demand for endpoint DLP quickly grew, thanks to two major needs: content discovery on the endpoint, and the ability to prevent loss through USB storage devices. We continue to see basic USB blocking tools with absolutely no content awareness brand themselves as DLP.
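To make “content awareness” a bit more concrete, here’s a deliberately minimal sketch (my own illustration, not any vendor’s engine) of endpoint content discovery: walk part of a local drive and flag files that appear to contain credit card numbers. Real products use far deeper analysis, such as partial document matching and database fingerprinting, but the contrast with a dumb USB-blocking tool should be clear. The pattern, paths, and output are hypothetical examples.

```python
# Illustrative only: a toy content discovery scan for an endpoint.
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude credit-card-like match

def scan_directory(root):
    """Yield (path, hit_count) for files that look like they contain card numbers."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    hits = CARD_PATTERN.findall(f.read())
            except OSError:
                continue  # skip unreadable files rather than aborting the scan
            if hits:
                yield path, len(hits)

if __name__ == "__main__":
    for path, count in scan_directory(os.path.expanduser("~/Documents")):
        print(f"{path}: {count} possible card numbers")
```

A real endpoint agent would, of course, report results like these back to the central policy server rather than printing them locally.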
The first batches of endpoint DLP tools focused on exactly these problems: discovery, and content-aware portable media/USB device control. The next major driver for endpoint DLP is supporting network policies when a system is outside the corporate gateway. We live in an increasingly mobile workforce, where we need to support consistent policies no matter where someone is physically located or how they connect to the Internet. Finally, we see some demand for deeper integration of DLP with how a user interacts with their system. In part, this is to support more intensive policies to reduce malicious loss of data. You might, for example, disallow certain content from moving into certain applications, like encryption tools. Some of these same kinds of hooks are used to limit cut/paste, print screen, and fax, or to enable more advanced security like automatic encryption or application of DRM rights.

The Full Suite Advantage

As we’ve already hinted, there are some limitations to endpoint-only DLP solutions. The first is that they only protect managed systems where you can deploy an agent. If you’re worried about contractors on your network, or you want protection in case someone tries to use a server to send data outside the walls, you’re out of luck. Also, because some content analysis policies are processor and memory intensive, it is problematic to get them running on resource-constrained endpoints. Finally, there are many discovery situations where you don’t want to deploy a local endpoint agent for your content analysis, e.g. when performing discovery on a major SAN.

Thus my bias towards full-suite solutions. Network DLP reduces losses on the enterprise network from both managed and unmanaged systems, servers and workstations alike. Content discovery finds and protects stored data throughout the enterprise, while endpoint DLP protects systems that leave the network and reduces risks across vectors that circumvent the network. It’s the combination of all these layers that provides the best overall risk reduction. All of this is managed through a single policy, workflow, and administration server, rather than forcing you to create different policies for different channels and products, each with different capabilities, workflow, and management.

In our next post we’ll discuss the technology and major features to look for, followed by posts on best practices for implementation.
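As a quick postscript on that “single policy” point, here’s a purely illustrative sketch of the idea: one rule, defined once, referenced by every enforcement point. The field names and channel labels are my own invention, not any product’s actual policy format.

```python
# Hypothetical central DLP policy object, shared by all enforcement points.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DLPPolicy:
    name: str
    content_rule: str                 # a regex, fingerprint ID, etc.
    channels: List[str] = field(
        default_factory=lambda: ["network", "discovery", "endpoint"])
    action: str = "alert"             # alert, block, quarantine, encrypt...

pci_policy = DLPPolicy(
    name="PCI - credit card numbers",
    content_rule=r"\b(?:\d[ -]?){13,16}\b",
    action="block",
)

# The network gateway, discovery scanner, and endpoint agent all pull the same
# object from the management server, so workflow and reporting stay consistent.
```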


The Future Of Application And Database Security: Part 2, Browser To WAF/Gateway

Since Friday is usually “trash” day (when you dump articles you don’t expect anyone to read) I don’t usually post anything major. But thanks to some unexpected work that hit yesterday, I wasn’t able to get Part 2 of this series out when I wanted to. If you can tear yourself away from those LOLCatz long enough, we’re going to talk about web browsers, WAFs, and web application gateways. These are the first two components of Application and Database Monitoring and Protection (ADMP), which I define as:

Products that monitor all activity in a business application and database, identify and audit users and content, and, based on central policies, protect data based on content, context, and/or activity.

Browser Troubles

As we discussed in Part 1, one of the biggest problems in web application security is that the very model of web browsers and the World Wide Web is not conducive to current security needs. Browsers are the ultimate mashup tool, designed to take different bits from different places and seamlessly render them into a coherent whole. The first time I started serious web application programming (around 1995/96) this blew my mind; I was able to combine disparate systems in ways never before possible. And not only can we embed content within a browser, we can embed browsers within other content/applications. The main reason I, as a developer, converted from Netscape to IE was that Microsoft allowed IE to be embedded in other programs, which let us drop it into our thick VR application. Netscape was stand-alone only, seriously limiting its deployment potential.

This also makes life a royal pain on the security front, where we often need some level of isolation. Sure, we have the same-origin policy, but browsers and web programming have bloated well beyond what little security that provides. Same-origin isn’t worthless, and is still an important tool, but there are just too many ways around it, especially now that we all use tabbed browsers with a dozen windows open all the time. Browsers are also stateless by nature, no matter what AJAX trickery we use. XSS and CSRF, never mind some more sophisticated attacks, take full advantage of the weak browser/server trust models that result from these fundamental design issues. In short, we can’t trust the browser, the browser can’t trust the server, and individual windows/tabs/sessions in the browser can’t trust each other. Fun stuff!

WAF Troubles

I’ve talked about WAFs before, and their very model is also fundamentally flawed, at least as we use WAFs today. The goal of a WAF is, like a firewall, to drop known bad traffic or only allow known good traffic. We’re trying to shield our web applications from known vulnerabilities, just like we use a regular firewall to block ports, protocols, sources, and destinations. Actually, a WAF is closer to an IPS than to a stateful packet inspection firewall. But web apps are complex beasts; every single one is a custom application, with custom vulnerabilities. There’s no way a WAF can know all the ins and outs of the application behind it, even after it’s well tuned. WAFs also only protect against certain categories of attacks, mostly some XSS and SQL injection. They don’t handle logic flaws, CSRF, or even all XSS. I was talking yesterday with a reference customer for one of the major WAFs, and he had no trouble slicing through it during their evaluation using some standard techniques.
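A deliberately oversimplified example of why that is (a toy filter I made up, not how any shipping WAF works): signature matching catches the obvious injection string, but a business-logic attack sails through because the request is syntactically clean.

```python
# Toy "negative security" filter: block requests containing known-bad patterns.
import re

SQLI_SIGNATURES = [re.compile(p, re.I)
                   for p in (r"union\s+select", r"or\s+1\s*=\s*1", r"';--")]

def waf_allows(params):
    """Return False if any parameter value matches a known-bad signature."""
    return not any(sig.search(str(v))
                   for v in params.values() for sig in SQLI_SIGNATURES)

print(waf_allows({"user": "admin' OR 1=1--"}))        # False: classic injection is caught
print(waf_allows({"item": "1234", "price": "0.01"}))  # True: price tampering looks like normal traffic
```

Only something that understands the transaction itself (that the price of item 1234 should not be $0.01) has any chance of catching the second request.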
To combat this, we’re seeing some new approaches. F5 and WhiteHat have partnered to feed the WAF specific vulnerability information from the application vulnerability assessment. Imperva just announced a similar approach, with a bunch of different partners. These advances are great to see, but I think WAFs will also need to evolve in some different ways. I just don’t think the model of managing all this from the outside will work effectively enough.

Enter ADMP

The idea of ADMP is that we build a stack of interconnected security controls from the browser to the database. At all levels we both monitor activity and include enforcement controls. The goal is to start with browser session virtualization connected to a web application gateway/WAF. Then traffic hits the web server and web application server, both with internal instrumentation and anti-exploitation. Finally, transactions drop to the database, where they are again monitored and protected. All of the components for this model exist today, so it’s not science fiction. We have browser session virtualization, WAFs, SSL-VPNs (that will make sense in a minute), application security services and application activity monitoring, and database activity monitoring. In addition to the pure defensive elements, we’ll also tie in to the applications at the design and code level through security services for adaptive authentication, transaction authentication, and other shared services (happy, Dre? 🙂). The key is that this will all be managed through a central console via consistent policies.

In my mind, this is the only approach that makes sense. We need to understand the applications and the databases that back them. We have to do something at the browser level, since even proper parameterization and server-side validation can’t meet all our needs. We have to start looking at transactions, business context, and content, rather than just packets and individual requests. Point solutions at any particular layer have limited effectiveness, but if we stop looking at our web applications as pieces, and instead design security that addresses them as a whole, we’ll be in much better shape. Not that anything is perfect; we’re looking at risk reduction, not risk elimination. A web application isn’t just a web server, some J2EE code, or a database; it’s a collection of many elements working together to perform business transactions, and that’s how we need to look at them for effective security.

The Browser and Web Application Gateway

A little while back I wrote about the concept of browser session virtualization. To plagiarize myself and save a little writing time so I can get…


Don’t Use chmod To Block Mac OS X ARDAgent Vulnerability

Just a quick note: if you used chmod to change the permissions on ARDAgent to block the privilege escalation vulnerability being used by the new trojans, you should still go compress or remove it. Repairing permissions restores ARDAgent’s original permissions and opens the vulnerability again. I suppose you could also just make sure you never repair permissions, but it’s easiest to remove it. I’ve removed the chmod recommendation from the TidBITS article.
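If you want to script the cleanup, something along these lines works. This is a minimal sketch, run as root; the path is the usual ARDAgent location, so adjust it if your system differs, and keep the archive somewhere safe in case you need Remote Desktop later.

```python
# Minimal sketch: archive ARDAgent.app and delete the original so a later
# "Repair Permissions" can't quietly restore the setuid binary.
import os
import shutil

ARD = "/System/Library/CoreServices/RemoteManagement/ARDAgent.app"

if os.path.exists(ARD):
    shutil.make_archive(ARD, "zip",
                        root_dir=os.path.dirname(ARD),
                        base_dir=os.path.basename(ARD))  # creates ARDAgent.app.zip alongside it
    shutil.rmtree(ARD)                                   # remove the vulnerable bundle
    print("ARDAgent archived and removed")
else:
    print("ARDAgent not found; nothing to do")
```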


The Future Of Application And Database Security: Part 1, Setting The Stage

I’ve been spending the past few weeks wandering around the country for various shows, speaking to some of the best and brightest in the world of application and database security. Heck, I even hired one of them. During some of my presentations I laid out my vision for where I believe application (especially web application) and database security are headed. I’ve hinted at it here on the blog, discussing the concepts of ADMP, the information-centric security lifecycle, and DAM, but it’s long past time I detailed the big picture.

I’m not going to mess around and write these posts so they are accessible to the non-geeks out there. If you don’t know what secure SDLC, DAM, SSL-VPN, WAF, and connection pooling mean, this isn’t the series for you. That’s not an insult; it’s just that this would drag out to 20+ pages if I didn’t assume a technical audience.

Will all of this play out exactly as I describe? No way in hell. If everything I predict were 100% correct, I’d just be predicting common knowledge. I’m shooting for a base level of 80% accuracy, with hopes I’m closer to 90%. But rather than issuing some proclamation from the mount, I’ll detail why I think things are going where they are. You can make your own decisions as to my assumptions and the accuracy of the predictions that stem from them. Also, apologies to Dre’s friends and family. I know this will make his head explode, but that’s a cost I’m willing to pay. Special thanks to Chris Hoff and the work we’ve been doing on disruptive innovation, since that model drives most of what I’m about to describe. Finally, this is just my personal opinion as to where things will go. Adrian is also doing some research on the concept of ADMP, and may not agree with everything I say. Yes, we’re both Securosis, but when you’re predicting uncertain futures no one can speak with absolute authority. (And, as Hoff says, no one can tell you you’re wrong today.)

Forces and Assumptions

Based on the work I’ve been doing with Hoff, I’ve started to model future predictions by analyzing current trends and disruptive innovations: those innovations that force change, rather than ones that merely nudge us to steer slightly around some new curves. In the security world, these forces (disruptions) come from three angles: business innovation, threat innovation, and efficiency innovation. The businesses we support are innovating for competitive advantage, as are the bad guys; for both of them, it’s all about increasing the top line. The last category is more internal: efficiency innovation to increase the bottom line. Here’s how I see the forces we’re dealing with today, in no particular order:

  • Web browsers are inherently insecure. The very model of the World Wide Web is to pull different bits from different places and render them all in a single view through the browser. Images from over here, text from over there, and, using iframes, entire sites from yet someplace else. It’s a powerful tool, and I’m not criticizing this model; it just is what it is. From a security standpoint, though, it makes our life more than a little difficult. Even with a strictly enforced same-origin policy, it’s impossible to completely prevent cross-site issues, especially when people keep multiple sessions to multiple sites open all at the same time. That’s why we have XSS, CSRF, and related attacks. We are trying to build a trust model where one end can never be fully trusted.
  • We have a massive repository of insecure code that grows daily. I’m not placing the blame on bad programmers; many of the current vulnerabilities weren’t well understood when much of this code was written. Even today, some of these issues are complex and not always easy to remediate. We are also discovering new vulnerability classes on a regular basis, requiring review and remediation of existing code. We’re talking millions of applications, never mind many millions of lines of code. Even the coding frameworks and tools themselves have vulnerabilities, as we just saw with the latest Ruby issues.
  • The volume of sensitive data that’s accessible online grows daily. The Internet and web applications are powerful business tools. It only makes sense that we connect more of our business operations online, and thus more of our sensitive data and business operations are Internet accessible.
  • The bad guys know technology. Just as it took time for us to learn and adopt new technologies, the bad guys had to get up to speed. That window is closed, and we now face knowledgeable attackers.
  • The bad guys have an economic infrastructure. Not only can they steal things, but they have a market to convert the bits to bucks. Pure economics gives them viable business models that depend on generating losses for us. Bad guys attack us to steal our assets (information) or hijack them to use against others (e.g., to launch a big XSS attack). They also sometimes attack us just to destroy our assets, but not often (less economic incentive, even for DoS blackmail).
  • Current security tools are not oriented to the right attack vectors. Even WAFs offer limited effectiveness, since they are more tied to our network security models than to our data/information-centric models.
  • We do not have the resources to clean up all existing code, and we can’t guarantee future code, even using a secure SDLC, won’t be vulnerable. This is probably my most contentious assumption, but most of the clients I work with just don’t have the resources to completely clean what they do have, and even the best programmers will still make mistakes that slip through to production. Code scanning tools and vulnerability analysis tools can’t catch everything, and can’t eliminate all false positives. They’ll never catch logic flaws, and even if we had a perfect tool, the second a new vulnerability class appeared we’d have to go back and fix everything we’d built up to that point. We’re relying on more and more code and…


Network Security Podcast, Episode 109

This week, Martin and I are joined by Adam Shostack, bandleader of the Emergent Chaos Jazz Combo of the Blogosphere and co-author of The New School of Information Security. (And he sorta works for a big software company, but that’s not important right now). You can get the show notes and episode over at netsecpodcast.com. We spend a lot of time talking about statistics and the New School concepts. I’m a big fan of the book, and Adam and I share a lot of positions on where we are as an industry, and where we need to go.


Improving OS X Security

There’s been a bunch of news on the Mac security front in the past couple of weeks, from the Safari carpet bombing attack to a couple of trojans popping up. Over the weekend I submitted an email response to a press interview where I outlined my recommended improvements to OS X to keep Macs safer than Windows. On the technical side they included elements like completing the implementation of library randomization (ASLR), adding more stack protection to applications, enhancing and extending sandboxing to most major OS X applications, running fewer processes as root/system, and more extensive use of DEP. I’m not bothering to lay this out in any more depth, because Dino Dai Zovi did a much better job of describing them over on his blog. Dino’s one of the top Mac security researchers out there, so I highly suggest you read his post if you’re interested in OS X security.

There are a few additional things I’d like to see, outside of the OS-level changes:

  • A more deeply staffed Apple Security Response Center, with a public-facing side to better communicate security issues and engage the research community. Apple absolutely sucks at working with researchers and communicating on security issues. Improvements here will go a long way toward increasing confidence, managing security issues, and avoiding many of the kinds of flareups we’ve seen in the past few years.
  • Better policies on updating open source software included with OS X. In some cases we’ve seen vulnerabilities in OS X due to included open source software, like Samba and Apache, that are unpatched for MONTHS after they are publicly known. These are fully exploitable on Macs and other Apple products until Apple issues an update. I realize this is a very tough issue, because Apple needs to run through extensive evaluation and testing before releasing updates, but they can mitigate this timeline by engaging deeply with those various open source teams to reduce the window where users are exposed to the vulnerabilities.
  • An Apple CSO: someone who is both the internal leader and external face of Apple security. They need an evangelist with credibility in the security world (no, I’m not trolling for a job; I don’t want to move to California, even for that).
  • A secure development lifecycle for Apple products. The programmers there are amazing, but even great programmers need to follow secure coding practices that are enforced with tools and process.

I suspect we might see some of these technical issues fixed in Snow Leopard, but the process issues are just as important for building and maintaining a sustainable, secure platform.
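Of the technical items above, “running fewer processes as root/system” is the easiest to picture in code. This is not Apple’s implementation, just a generic sketch of the privilege-separation pattern any Unix daemon (OS X included) can follow: do the privileged setup first, then irrevocably drop to an unprivileged account before touching untrusted input.

```python
# Generic privilege-drop sketch for a Unix daemon (illustrative, not Apple's).
import os
import pwd

def drop_privileges(username="nobody"):
    """Permanently switch from root to an unprivileged account."""
    if os.getuid() != 0:
        return  # already unprivileged; nothing to do
    user = pwd.getpwnam(username)
    os.setgroups([])          # shed supplementary groups first
    os.setgid(user.pw_gid)    # group before user, or setgid will fail
    os.setuid(user.pw_uid)    # after this call, root cannot be regained

# ...privileged setup (e.g., binding a low port) happens here, as root...
drop_privileges()
# ...all parsing of network or user input happens here, without root...
```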


I’m With Ptacek- I Run My Mac As Admin

I’m still in New York for the FISD conference, listening to Team Cymru talk about the state of cybercrime as I wait for my turn at the podium (to talk about information-centric security and DLP). One problem with travel is keeping up with the news, so I pretty much missed the AppleScript vulnerability and now have to write it up for TidBITS on the plane before Monday.

I was reading Thomas Ptacek’s post on the vulnerability, and I think it’s time I joined Tom and came out of the closet. I run as admin on my Mac. All the time. And I’m not ashamed.

Why? As Ptacek said, even without root/admin there’s a ton of nasty things you can do on my system. In fact, you can pretty much get at anything I really worry about. I even once wrote some very basic AppleScript malware that ran on boot (after jailbreaking an improperly configured virtual machine). It didn’t need admin to work.

There. I feel better now. Glad to get that out there.

(If you’re going to criticize this, go read Tom’s post and talk to him first. He’s smarter than me, and not on an airplane.)


I’m Not The Only Blogger Here!

I’ve been absolutely flattered by some of the positive comments on our posts this week, especially the database posts. But as much as I enjoy the credit for someone else’s work, I’d like to remind everyone that I’m not the only blogger here at Securosis anymore. Adrian Lane, our new Senior Security Strategist, has been putting up all the meat this week. Once I get back from this conference I’ll increase the font size on the writer tagline for the blog so it’s more obvious. We also occasionally have contributions from David Mortman and Chris Pepper, both of whom wrote posts I got the credit for. These are all brilliant guys, and I’m honored they contribute here. They’re probably smarter than I am…

… oh. Never mind. I write it all.


Speaking in Seattle And New York This Week

It’s a good thing Adrian joined when he did, because I’m slammed with speaking events this week and he gets to mind the blog. Tomorrow I head up to Bellevue to speak at the Association for Enterprise Integration’s Enterprise Security Management event. This is a mixed audience of mostly defense contractors and NSA types. Bit of a different venue for me, but I love talking with .gov/.mil/.nothingtoseehere types.

Wednesday I shift over to the City (NYC) for the Financial Information Security Decisions conference put on by Information Security magazine. I’m presenting in the Data Security track, recording a session on virtualization security with Dino and Hoff, and squeezing in a few other things. I can’t speak for Dino, but Hoff and I are both battling travel-related colds, so the panel could end up as the Great Snot War of 2008.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.