
YouTube, Viacom, And Why You Should Fear Google More Than The Government

Reading Wired this morning (and a bunch of other blogs), I learned that a judge ordered Google/YouTube to turn over ALL records of who watched what on YouTube. To Viacom, of all organizations, as part of their lawsuit against Google for hosting copyrighted content. The data transferred over includes IP addresses and what was watched. Gee, think that might leak at some point? Ever watch YouTube porn from an IP address that can be tied to you? No porn? How about singing cats? Yeah, I thought so, you sick bastard.

But wait, what are the odds of tracing an IP address back to an individual? Really damn high if you use any other Google service that requires a login, since they basically never delete data. Even old emails can tie you back to an IP, never mind a plethora of other services. Ever comment on a blog?

The government has plenty of mechanisms to track our activity, but even with the recent erosion of limits on online monitoring, we still have a heck of a lot of rights and laws protecting us. Even the recent warrantless wiretapping issue doesn't let a government agency monitor totally domestic conversations without court approval.

But Google? (And other services.) There's no restriction on what they can track (short of reading emails, or listening in on VoIP calls). They keep more damn information on you than the government has the infrastructure to support. Searches, videos you've watched, emails, sites you visit, calendar entries, and more. Per their privacy policies some of this is deleted over time, but even if you put in a request to purge your data, it doesn't extend to tape archives. It's all there, waiting to be mined. Feedburner, Google Analytics. You name it.

Essentially none of this information is protected by law. Google can change their privacy policies at any time, or sell the content to anyone else. Think it's secure? Not really- I heard of multiple XSS 0days on Google services this week. I've seen some of their email responses to security researchers; needless to say, they really need a CSO.

I'm picking on Google here, but most online services collect all sorts of information, including Securosis. In some cases, it's hard not to collect it. For example, all comments on this blog come with an IP address. The problem isn't just that we collect all sorts of information, but that we have a capacity to correlate it that's never been seen before. Our laws aren't even close to addressing these privacy issues.

On that note, I'm disabling Google Analytics for the site (I still have server logs, but at least I have more control over those). I'd drop Feedburner, but that's a much more invasive process right now that would screw up the site badly. Glad I have fairly tame online habits, although I highly suspect my niece has watched more than a few singing cat videos on my laptop. It was her, I swear!


The Mozilla Metrics Project

Ryan Naraine just posted an article over at ZDNet about a project I'm extremely excited to be involved with. Just before RSA I was invited by Window Snyder over at Mozilla to work with them on a project to take a new look at software security metrics. Window has posted the details of the project over on the Mozilla security blog, and here's an excerpt:

"Mozilla has been working with security researcher and analyst Rich Mogull for a few months now on a project to develop a metrics model to measure the relative security of Firefox over time. We are trying to develop a model that goes beyond simple bug counts and more accurately reflects both the effectiveness of secure development efforts, and the relative risk to users over time. Our goal in this first phase of the project is to build a baseline model we can evolve over time as we learn what works, and what does not. We do not think any model can define an absolute level of security, so we decided to take the approach of tracking metrics over time so we can track relative improvements (or declines), and identify any problem spots. This information will support the development of Mozilla projects including future versions of Firefox.

…

Below is a summary of the project goals, and the xls of the model is posted at http://securosis.com/publications/MozillaProject2.xls. The same content as a set of .csvs is available here: http://securosis.com/publications/MozillaProject.zip

This is a preliminary version and we are currently looking for feedback. The final version will be a far more descriptive document, but for now we are using a spreadsheet to refine the approach. Feel free to download it, rip it apart, and post your comments. This is an open project and process. Eventually we will release this to the community at large with the hope that other organizations can adapt it to their own needs."

Although I love my job, it's not often I get to develop original research like this with an organization like Mozilla. We really think we have the opportunity to contribute to the security and development communities in an impactful way. If you'd like to contribute, please comment over at the Mozilla blog, or email me directly. I'd like to keep the conversation over there, rather than in comments here. This is just the spreadsheet version (and a csv version); the final product will be more of a research note, describing the metrics, process, and so on.

I'm totally psyched about this.
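The actual model is defined in the spreadsheet linked above, so purely as an illustration of the "relative metrics over time" idea (and not the Mozilla model itself), here is a tiny sketch of one such metric: average days from report to fix, grouped by the quarter a vulnerability was reported. The dates and the metric are invented for the example.

```python
from collections import defaultdict
from datetime import date

# Illustrative data only: (reported, fixed) dates for hypothetical vulnerabilities.
vulns = [
    (date(2008, 1, 10), date(2008, 2, 1)),
    (date(2008, 2, 5), date(2008, 2, 20)),
    (date(2008, 4, 2), date(2008, 5, 15)),
]

def days_of_risk_by_quarter(records):
    """Average time-to-fix, grouped by the quarter the vuln was reported."""
    buckets = defaultdict(list)
    for reported, fixed in records:
        quarter = (reported.year, (reported.month - 1) // 3 + 1)
        buckets[quarter].append((fixed - reported).days)
    return {q: sum(d) / len(d) for q, d in sorted(buckets.items())}

print(days_of_risk_by_quarter(vulns))
```

The point of a sketch like this is only that a trend line (improving or declining quarter over quarter) says more than a raw bug count; the real model tracks a much richer set of measures.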


SecurityRatty Is A Slimy, Content-Stealing Thief

Like most other security blogs in the world, my content is regularly abused by a particular site that just shovels out my posts as if they were its own. This is an experiment to see if they bother reading what they steal.


Best Practices For Endpoint DLP: Part 2

In Part 1 I talked about the definition of endpoint DLP, the business drivers, and how it integrates with full-suite solutions. Today (and over the next few days) we're going to start digging into the technology itself.

Base Agent Functions

There is massive variation in the capabilities of different endpoint agents. Even for a single function, there can be a dozen different approaches, all with varying degrees of success. Also, not all agents contain all features; in fact, most agents lack one or more major areas of functionality. Agents include four generic layers/features:

• Content Discovery: Scanning of stored content for policy violations.
• File System Protection: Monitoring and enforcement of file operations as they occur (as opposed to discovery, which scans content already written to media). Most often, this is used to prevent content from being written to portable media/USB. It's also where tools hook in for automatic encryption or application of DRM rights.
• Network Protection: Monitoring and enforcement of network operations. Provides protection similar to gateway DLP when a system is off the corporate network. Since most systems treat printing and faxing as a form of network traffic, this is where most print/fax protection can be enforced (the rest comes from special print/fax hooks).
• GUI/Kernel Protection: A more generic category to cover data in use scenarios, such as cut/paste, application restrictions, and print screen.

Between these four categories we cover most of the day to day operations a user might perform that place content at risk. They hit our primary drivers from the last post- protecting data from portable storage, protecting systems off the corporate network, and supporting discovery on the endpoint. Most of the tools on the market start with file and (then) networking features before moving on to some of the more complex GUI/kernel functions.

Agent Content Awareness

Even if you have an endpoint with a quad-core processor and 8 GB of RAM, the odds are you don't want to devote all of that horsepower to enforcing DLP. Content analysis can be resource intensive, depending on the types of policies you are trying to enforce. Also, different agents have different enforcement capabilities, which may or may not match up to their gateway counterparts. At a minimum, most endpoint tools support rules/regular expressions, some degree of partial document matching, and a whole lot of contextual analysis. Others support their entire repertoire of content analysis techniques, but you will likely have to tune policies to run on a more resource-constrained endpoint.

Some tools rely on the central management server for aspects of content analysis, to offload agent overhead. Rather than performing all analysis locally, they ship content back to the server, then act on any results. This obviously isn't ideal, since those policies can't be enforced when the endpoint is off the enterprise network, and it will suck up a fair bit of bandwidth. But it does allow enforcement of policies that are otherwise totally unrealistic on an endpoint, such as database fingerprinting of a large enterprise DB.

One emerging option is policies that adapt based on endpoint location. For example, when you're on the enterprise network most policies are enforced at the gateway. Once you access the Internet outside the corporate walls, a different set of policies is enforced.
For example, you might use database fingerprinting (exact database matching) of the customer DB at the gateway when the laptop is in the office or on a (non-split-tunneled) VPN, but drop to a rule/regex for Social Security Numbers (or account numbers) for mobile workers. Sure, you'll get more false positives, but you're still able to protect your sensitive information while meeting performance requirements.

Next up: more on the technology, followed by best practices for deployment and implementation.
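To make the location-adaptive example above a bit more concrete, here is a minimal, hypothetical sketch of how an agent might switch policies based on whether it can reach the corporate DLP gateway. The gateway address, the Social Security Number regex, and the response actions are all illustrative assumptions, not any particular product's behavior.

```python
import re
import socket

# Hypothetical corporate DLP gateway used to decide if we are "on network".
CORPORATE_GATEWAY = ("dlp-gateway.example.internal", 443)

# Lightweight rule/regex policy for mobile workers: flag Social Security Numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def on_corporate_network(timeout=2.0):
    """Assume we are on the corporate network if the gateway is reachable."""
    try:
        with socket.create_connection(CORPORATE_GATEWAY, timeout=timeout):
            return True
    except OSError:
        return False

def inspect_outbound(content: str) -> str:
    if on_corporate_network():
        # In the office or on VPN: defer to the gateway, which can run heavier
        # analysis such as exact database (fingerprint) matching.
        return "defer-to-gateway"
    # Off the corporate network: fall back to the lighter local regex policy.
    if SSN_PATTERN.search(content):
        return "block"  # or quarantine/alert, depending on policy
    return "allow"

if __name__ == "__main__":
    print(inspect_outbound("Customer SSN: 123-45-6789"))
```

The point isn't the specific checks; it's that the same policy set can degrade gracefully to lighter local analysis when the heavier gateway analysis isn't available.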


I Win

Guess they don’t bother to review the content they steal… Update- I think I’ll call this attack “Rat Phucking”.


Pre-Black Hat/DefCon SunSec And Inaugural Phoenix Security Slam

I’ve talked to some of the local crew, and we’ve decided to hold a special pre-BH/DefCon SunSec on July 31st (location TBD). We’re going to take a bit of a different approach with this one.

A while back, Vinnie, Andre, myself, and a couple of others sat around a table trying to think of how to jazz up SunSec a bit. As much as we enjoy hanging out and having beers, we recognize the Valley of the Sun is pretty darn big, and some of you need a little more than just alcohol to get you out of the house on a Wednesday or Thursday night. We came up with the idea of the Phoenix Security Slam (PiSS for short). We’ll move to a venue where we can get a little private space, bring a projector, and have a little presentation free-for-all. Anyone who presents is limited to 10 minutes, followed by Q&A. Fast, to the point, and anything goes.

For this first run we’ll be a little less formal. I’ll bring my DefCon content, and Vinnie has some other materials to preview. I may also have some other good info about what’s going down in Vegas the next week, and I’ll share what I can. We’ll limit any formal presentation time to an hour, and make sure the bar is open before I blather.

If you’re in Phoenix, let me know what you think. If you’re also presenting at BH/DC and want to preview your content, let me know. Also, we could use ideas for a location; some restaurant where we can take over a back room is ideal.


Defining (Blog) Content Theft

My posts today on SecurityRatty inspired a bit more debate than I expected. A number of commenters asked: if someone still links back to my site, how can I consider it theft? What makes it different from other content aggregators?

This is actually a big problem on many of the sites where I contribute content. From TidBITS to industry news sites, skimmers scrape the content and often present it as their own. Some, like Ratty, aren’t as bad since they still link back. Others I never even see, since they skip the linking process. I’ve been in discussions with other bloggers, analysts, and journalists where we all struggle with this issue. The good news is that most of it is little more than an annoyance; my popularity is high enough now that people who search for my content will hit me on Google long before any of these other sites. But it’s still annoying.

Here’s my take on theft vs. legal use:

1. Per my Creative Commons license, I allow non-commercial use of my content if it’s attributed back to me. By “non-commercial” I mean you don’t directly profit from the content. A security vendor linking to my posts and commenting on them is totally fine, since they aren’t using the content directly to profit. Reposting every single post I put up, with full content (as Ratty does), and placing advertising around it, is a violation. I purposely don’t sell advertising on this site- the closest I come is something like the SANS affiliate program, which is a partner organization that I think offers value to my readers.

2. Thieves take entire posts (attributed or not) and do not contribute their own content. They leech off others. Even if someone produces a feed with my headlines, and maybe a couple-line summary, and then links to the original posts, I consider that legitimate.

3. Related to (2), search engines and feed aggregators are fine since they don’t repurpose the entire content. Technorati, Google, and others help people find my content, but they don’t host it. To get the full content people need to visit my site, or subscribe to my feed. Yes, they sell advertising, but not on my full content, for which readers need to visit my site.

4. In some cases I may authorize a full representation of my content/feed, but it’s *my* decision. I do this with the Security Bloggers Network since it expands my reach, I have full access to readership statistics, and it’s content I like to be associated with.

5. Many people use large chunks of my content on their sites, but they attribute back and use my content as something to blog about, thus contributing to the collective dialog. Thieves just scrape, and don’t contribute.

6. Thieves steal content even when asked to cease and desist. I know 2 other bloggers who asked Ratty to drop them and he didn’t. I know one who did get dropped on request, but I only found that out after I put up my post (and knew the other requests were ignored). I didn’t ask myself, based on reports from others that were ignored.

Thus thieves violate content licenses, take full content and not just snippets, ignore requests to stop, and don’t contribute to the community dialog/discussion. Attributed or not, it’s still theft (albeit slightly less evil than unattributed theft). I’m not naive; I don’t expect the problem to ever go away. To be honest, if it does it means my content is no longer of value. But that doesn’t mean I don’t reserve the right to protect my content when I can.

I’ve been posting nearly daily for 2 years, trying to put up a large volume of valuable content that helps people in their day to day jobs, not just comments on news stories. It’s one of the most difficult undertakings of my life, and even though I don’t directly generate revenue from advertising, I get both personal satisfaction and other business benefits from having readers on my site or reading my feed. To be blunt, my words feed my family. The content is free, but I own my words – they are not in the public domain.


ATM PIN Thefts

The theft of Citibank ATM PINs is in the news again, as it appears that indictments have been handed down on the three suspects. This case will be interesting to watch, to see what the fallout will be. It is still not really clear if the PINs were leaked in transit, or if the clearing house servers were breached.

There are a couple of things about this story that I still find amusing. The first is that Fiserv, the company that operates the majority of the network, is pointing fingers at Cardtronics Inc. The quote by the Fiserv representative, “Fiserv is confident in the integrity and security of our system”, is great. They both manage elements of the ‘system’. When it comes down to it, this is like two parties standing in a puddle of gasoline, accusing each other of lighting a match. It won’t matter who is at fault when they both go up in flames. In the public mind, no one is going to care; they will be blamed equally, and quite possibly both go out of business if their security is shown to be grossly lacking.

My second thought on this subject was that once you breach the ‘system’, you have to get the money out. In this case, it has been reported that over $2M was ‘illegally gained’. If the average account is hacked for $200.00, we are talking about at least 10,000 separate ATM withdrawals. That is a lot of time spent at the 7-11! But seriously, that is a lot of time to spend making ATM withdrawals. I figure that the way they got caught is that the thief’s picture kept turning up on security cameras … otherwise this is a difficult crime to detect and catch.

I also got to thinking that the entire ATM authentication process is not much more than basic two-factor authentication combined with some simple behavioral checks at the back end. The security of these networks is really not all that advanced. Typically PIN codes are four digits in length, and it really does not make a lot of sense to use hash algorithms given the size of the PIN and the nature of the communications protocol. And while it requires some degree of technical skill, the card itself can be duplicated, making for a fairly weak two-factor system. Up until a couple of years ago, DES was still the typical encryption algorithm in use, and only parts of the overall transaction processing systems keep the data encrypted. Many of the ATMs are not on private networks, but utilize the public Internet and airwaves. Given the amount of money and the number of transactions that are processed around the world, it is really quite astonishing how well the system as a whole holds up.

Finally, while I have been known to bash Microsoft for various security miscues over the years, it seems somewhat specious to state “Hackers are targeting the ATM system’s infrastructure, which is increasingly built on Microsoft Corp.’s Windows operating system.” Of course they are targeting the infrastructure; that is the whole point of electronic fraud. They probably meant the back end processing infrastructure. And why mention Windows? Windows may make familiarity with the software easier, but this case does not show that any MS product was at fault for the breach. Throwing that into the story seems like an attempt to cast blame on MS software without any real evidence.
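For what it’s worth, the back-of-the-envelope math above looks like this (the $200 average withdrawal is the assumption from the post, not a reported figure):

```python
total_stolen = 2_000_000       # reported as over $2M 'illegally gained'
avg_withdrawal = 200           # assumed average take per account
print(total_stolen / avg_withdrawal)  # -> 10000.0 separate ATM withdrawals
```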


What’s My Motivation?

Or more appropriately, “Why are we talking about ADMP?” In his first post on the future of application and database security, Rich talked about Forces and Assumptions heading us down an evolutionary path towards ADMP. I want to offer a slightly different take on my motivation for, and belief in, this strategy.

One of the beautiful things about modern application development is our ability to cobble together small, simple pieces of code into a larger whole in order to accomplish some task. Not only do I get to leverage existing code, but I get to bundle it together in such a way that I alter the behavior depending upon my needs. With simple additions, extensions, and interfaces, I can make a body of code behave very differently depending upon how I organize and deploy the pieces. Further, I can bundle different application platforms together in a seamless manner to offer extraordinary services without a great deal of re-engineering. A loose confederation of applications cooperating to solve business problems is the typical implementation strategy today, and I think the security challenge needs to account for the model rather than the specific components within the model. Today, we secure components. We need to be able to ‘link up’ security in the same way that we do the application platforms (I would normally go off on an Information Centric Security rant here, but that is pure evangelism, and a topic for another day).

I have spent the last four years with a security vendor that provided assessment, monitoring, and auditing of databases, and databases specifically. Do enough research into security problems, customer needs, and general market trends, and you start to understand the limitations of securing just a single application in the chain of events. For example, I found that database security issues detected as part of an assessment scan may have specific relevance to the effectiveness of database monitoring. I believe web application security providers witness the same phenomenon with SQL injection, as they may lack some context for the attack, or at least the more subtle subversions of the system or exploitation of logic flaws in the database or database application. A specific configuration might be necessary for business continuity and processing, but could open an acknowledged security weakness that I would like to address with another tool, such as database monitoring.

That said, where I am going with this line of thought is not just the need for detective and preventative controls on a single application like a web server or database server, but rather the inter-application benefit of a more unified security model. There were many cases where I wanted to share some aspect of the database setup with the application or access control system that could make for a more compelling security offering (or vice versa, for that matter). It is hard to understand context when looking at security from a single point outside an application, or from the perspective of a single application component. I have said many times that the information we have at any single processing node is limited. Yes, my bias towards application level data collection vs. network level data collection is well documented, but I am advocating collection of data from multiple sources. A combination of monitoring of multiple information sources, coupled with a broad security and compliance policy set, would be very advantageous.

I do not believe this is simply a case of more (monitoring) being better, but of solving specific problems where it is most efficient to do so. There are certain attacks that are easier to address at the web application level, others best dealt with in the database, and others that should be intercepted at the network level. But the sharing of policies, policy enforcement, and suspect behaviors can be both more effective and more efficient.

Application and Database Monitoring and Protection is a concept that I have been considering, researching, and working towards for several years now. With my previous employer, this was a direction I wanted to take the product line, as well as some of the partner relationships, to make this happen across multiple security products. When Rich branded the concept with the “ADMP” moniker it just clicked with me for the reasons stated above, and I am glad he posted more on the subject last week. But I wanted to put a little more focus on the motivation for what he is describing and why it is important. This is one of the topics we will both be writing about more often in the weeks and months ahead.
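To make the inter-application idea a little more concrete, here is a purely hypothetical sketch of the kind of cross-source correlation described above: a web application monitor and a database activity monitor each raise an alert for the same session, and a shared policy flags the combination. The event fields, session IDs, and time window are all invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical event streams from a web application monitor and a
# database activity monitor, keyed by application session.
web_events = [
    {"session": "abc123", "time": datetime(2008, 7, 8, 10, 0, 5),
     "alert": "suspicious input in 'search' parameter"},
]
db_events = [
    {"session": "abc123", "time": datetime(2008, 7, 8, 10, 0, 6),
     "alert": "query returned unusually large result set"},
]

def correlate(web, db, window=timedelta(seconds=30)):
    """Flag sessions where both monitors alerted within a short time window."""
    flagged = []
    for w in web:
        for d in db:
            if w["session"] == d["session"] and abs(w["time"] - d["time"]) <= window:
                flagged.append((w["session"], w["alert"], d["alert"]))
    return flagged

print(correlate(web_events, db_events))
```

Neither alert on its own is conclusive, which is exactly the point: the shared policy has context that a single component lacks.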


Best Practices For Endpoint DLP: Part 1

As the first analyst to ever cover Data Loss Prevention, I’ve had a bit of a tumultuous relationship with endpoint DLP. Early on I tended to exclude endpoint-only solutions because they were more limited in functionality, and couldn’t help at all with protecting against data loss from unmanaged systems. But even then I always said that, eventually, endpoint DLP would be a critical component of any DLP solution. When we’re looking at a problem like data loss, no individual point solution will give us everything we need.

Over the next few posts we’re going to dig into endpoint DLP. I’ll start by discussing how I define it, and why I don’t generally recommend stand-alone endpoint DLP. I’ll talk about key features to look for, then focus on best practices for implementation. It won’t come as any surprise that these posts are building up into another one of my whitepapers. This is about as transparent a research process as I can think of. And speaking of transparency, like most of my other papers this one is sponsored, but the content is completely objective (sponsors can suggest a topic, if it’s objective, but they don’t have input on the content).

Definition

As always, we need to start with our definition for DLP/CMP: “Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis”. Endpoint DLP helps manage all three parts of this problem. The first is protecting data at rest when it’s on the endpoint, or what we call content discovery (which I wrote up in great detail). Our primary goal is keeping track of sensitive data as it proliferates out to laptops, desktops, and even portable media. The second part, and the most difficult problem in DLP, is protecting data in use. This is a catch-all term we use to describe DLP monitoring and protection of content as it’s used on a desktop: cut and paste, moving data in and out of applications, and even tying in with encryption and enterprise Document Rights Management (DRM). Finally, endpoint DLP provides data in motion protection for systems outside the purview of network DLP, such as a laptop out in the field.

Endpoint DLP is a little difficult to discuss, since it’s one of the fastest changing areas in a rapidly evolving space. I don’t believe any single product has every little piece of functionality I’m going to talk about, so (at least where functionality is concerned) this series will lay out all the recommended options, which you can then prioritize to meet your own needs.

Endpoint DLP Drivers

In the beginning of the DLP market we nearly always recommended organizations start with network DLP. A network tool allows you to protect both managed and unmanaged systems (like contractor laptops), and is typically easier to deploy in an enterprise (since you don’t have to muck with every desktop and server). It also has advantages in terms of the number and types of content protection policies you can deploy, how it integrates with email for workflow, and the scope of channels covered. During the DLP market’s first few years, it was hard to even find a content-aware endpoint agent. But customer demand for endpoint DLP quickly grew thanks to two major needs: content discovery on the endpoint, and the ability to prevent loss through USB storage devices. We continue to see basic USB blocking tools with absolutely no content awareness brand themselves as DLP.

The first batches of endpoint DLP tools focused on exactly these problems: discovery and content-aware portable media/USB device control. The next major driver for endpoint DLP is supporting network policies when a system is outside the corporate gateway. We all live in an increasingly mobile workforce, where we need to support consistent policies no matter where someone is physically located or how they connect to the Internet. Finally, we see some demand for deeper integration of DLP with how a user interacts with their system. In part, this is to support more intensive policies to reduce malicious loss of data. You might, for example, disallow certain content from moving into certain applications, like encryption tools. Some of these same kinds of hooks are used to limit cut/paste, print screen, and fax, or to enable more advanced security like automatic encryption or application of DRM rights.

The Full Suite Advantage

As we’ve already hinted, there are some limitations to endpoint-only DLP solutions. The first is that they only protect managed systems where you can deploy an agent. If you’re worried about contractors on your network, or you want protection in case someone tries to use a server to send data outside the walls, you’re out of luck. Also, because some content analysis policies are processor and memory intensive, it is problematic to get them running on resource-constrained endpoints. Finally, there are many discovery situations where you don’t want to deploy a local endpoint agent for your content analysis, e.g. when performing discovery on a major SAN.

Thus my bias towards full-suite solutions. Network DLP reduces losses on the enterprise network from both managed and unmanaged systems, servers and workstations alike. Content discovery finds and protects stored data throughout the enterprise, while endpoint DLP protects systems that leave the network and reduces risks across vectors that circumvent the network. It’s the combination of all these layers that provides the best overall risk reduction. All of this is managed through a single policy, workflow, and administration server, rather than forcing you to create different policies for different channels and products, with different capabilities, workflow, and management.

In our next post we’ll discuss the technology and major features to look for, followed by posts on best practices for implementation.
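As a rough illustration of the “content discovery” piece of the definition above, here is a minimal sketch of scanning local files against a simple rule/regex policy. Real endpoint agents use far richer analysis (partial document matching, fingerprinting, contextual rules); the pattern and starting directory here are illustrative assumptions, not any product’s implementation.

```python
import os
import re
from pathlib import Path

# Illustrative rule/regex policy: credit-card-like digit runs.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def discover(root: str):
    """Walk a directory tree and report files containing policy matches."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        if CARD_PATTERN.search(text):
            findings.append(str(path))
    return findings

if __name__ == "__main__":
    for hit in discover(os.path.expanduser("~/Documents")):
        print("possible sensitive content:", hit)
```

Even this toy version hints at why discovery is resource intensive on an endpoint: every file has to be read and analyzed, which is part of the argument for offloading heavier policies to a gateway or server.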


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.