
TD Ameritrade: Making Life Harder For Themselves

Sheesh… just when you think they’re over the hump, more details leak on the TD Ameritrade breach and they aren’t looking quite so competent anymore. Network World has a good article up summarizing the latest developments. A few tidbits stand out:

"The Ameritrade spokeswoman says the company believes no Social Security numbers have been taken because the only known illicit activity traceable to the breaches is spam, not identity theft."

There’s a word for statements like this… bullshit! Just because they haven’t traced any identity theft or other fraud to the SSNs in their database doesn’t mean the numbers aren’t sitting on some bad guy’s hard drive someplace. If they determined that SSNs are not at risk because the specific malicious software involved was analyzed and limited itself to email, that’s one thing. But saying “nothing bad has happened so far, so nothing bad will ever happen” is stupid.

Folks, time for a reminder. This is all Crisis Communications 101- as history has shown, the best way to defend your reputation in a major incident is to admit the failing, spare nothing to protect your customers, and act as openly and honestly as possible. Otherwise we wouldn’t have seen a bottle of Tylenol on a store shelf since the 1980s.

And this:

"The company says it will sign its customers up for the service on an exception basis - meaning they don’t automatically get it - but it doesn’t advertise this option in any of the literature it has put out concerning the data compromise."

is not putting your customers first. The rest of us should learn from this; TD Ameritrade is now suffering more negative publicity than if they had come clean from the start. I’ve moved our little poll on this to the sidebar, and will post the results on Monday. I’m starting to think it might be something other than SQL injection…


Ahhh. Marketing Desperation.

You can always smell desperation. It has a certain… quality that gently wafts into the nasal cavity, tickling those very nerves that are too oft neglected in our sanitary society. You know, the same ones that pick up the odor of sewer crap. What’s odd is that this smell is extruded not only by the truly desperate, but by those whose self-esteem is so battered that they crave every bit of validation they can beg off the nearest passerby. It’s a strange dichotomy.

When evaluating vendors and their chances of success, desperation isn’t always a clear indicator of their future. On one side we have the truly desperate, such as the vendor I worked with who was quietly shopping around for a buyer and could only provide small school districts as references. On the other side was the bully: the successful startup that reveled in sending photos of their gear replacing a competitor’s on a rack. They eventually got bought for big money, but I suspect that marketing manager is blowing it all at a strip joint, dropping twenties in the hopes the dancer will make eye contact and call him darling before strutting off stage with enough cash to solve the crisis in Darfur.

Today, thanks to Alan Shimel, we see probably the most amusing act of desperation I’ve ever witnessed. One of his competitors, ForeScout, bought the Google AdWords for Alan’s name. Now, every time you search on Alan, the first thing that comes up is:

"Replacing Safe Access? www.forescout.com/counteract Get CounterACT – Clientless Network Access Control from ForeScout"

This is the security marketing equivalent of political push polling, but probably a lot less effective. Okay, it’s amusing, but ForeScout has probably given Alan one of the best sales tools he ever asked for. How hard do you think it will be for him to use this to his advantage in a competitive situation?

Here’s a note to you marketing folks- take a look at sources like this Security Catalyst Forums thread on vendors. Acting like a used car salesman is a surefire way to alienate a prospect. I hear this time after time. Yes, as Rothman tells us, certain heavy-handed tactics work, but if it smells like crap on a simple Google search, odds are the customer will figure out it’s crap. (For the record, I know nothing about ForeScout and haven’t ever worked with them; the product might be great for all I know.)


Raytheon/Oakley, Probably A Good Fit

Fresh off today’s Daily Incite I saw that Raytheon acquired Oakley Networks. Oakley is a bit of a strange bird- it’s not really DLP, but they have some interesting monitoring technology that’s well suited for certain environments- especially the federal sector that Raytheon plays in so strongly. Oakley started with an endpoint monitoring tool that’s like keystroke capture on steroids (and centrally manageable), and then bought a network tool vendor for monitoring acceptable use on the wire. It doesn’t have the advanced content awareness of DLP, nor some of the integration required for the filtering and discovery sides, but that’s not really what it’s used for. DLP records only on violations; Oakley is better described as “user activity forensics” (it’s more than that, but that’s the closest bucket).

I don’t expect to see Oakley/Raytheon break into the general enterprise market anytime soon, and I hope this ends the confusion of people lumping them into DLP (a lot of that’s their own fault from some decisions they’ve since moved past), but I think the Raytheon acquisition is reasonable and appropriate, and should be successful because of the federal focus. I don’t get to say that often about buyouts. Good luck to Tom and the guys…


Orchestria Enters DLP Market- Underestimates Competition With Totally Inaccurate Marketing

Orchestria finally announced their first “true” general DLP product. For those of you who don’t know, Orchestria has danced around this space for a few years now. They started with a product narrowly focused on helping certain financial services firms, particularly broker/dealers, manage compliance issues around insider trading and privacy. Basically you can think of it as a client-centric (with some network monitoring) DLP solution focused on one category of violations. It didn’t work well as a general DLP solution, but that wasn’t their market. Interestingly enough it was based on Autonomy’s Aungate technology, but then Autonomy started pushing Aungate competitively and Orchestria had to do a little re-working (industry rumor stuff, this didn’t come through confidential channels). Autonomy has since bought Zantaz and is combining the products.

Anyway, back to Orchestria. Since I worked with them before and don’t know exactly how much is public about the new product I can’t go into any details. What I’m comfortable saying is that it looks interesting, covers the bases to be considered more Content Monitoring and Filtering than just DLP, and I’ll withhold judgment until we see some deployments and competitive evaluations. But they might not get the chance if their sales guys are as poorly educated on the competition as the press release and their product site indicate. They claim to be the first “next generation” DLP solution, filling the gaps uncovered by others. Let’s look at a few:

"Unlike first-generation DLP software, Orchestria provides coverage across all points of risk within an enterprise, detects violations accurately with minimal false positives, and can proactively block true infractions."

Weird. That’s what customers tell me all the major DLP solutions do. Especially the top five of Vontu, Reconnex, Websense, Vericept, and EMC/Tablus. My assessment is that every one of these products provides that, and some others, like Code Green (a mid-sized play), also provide it.

"Orchestria Multi-Layered Defense – Orchestria Multi-Layered Defense leverages multiple network, server, client, import and archive agents to ensure control across all forms of electronic data, including messages with encrypted and password-protected files, internal messages, disconnected laptops, files at rest, and mobile storage devices – most of which are ignored by first-generation solutions."

Totally untrue. All the top five do all, or most, of that. Sometimes it takes third-party integration, so maybe that’s the grey area they’re taking advantage of.

"Orchestria Full-Dimensional Analysis – Limited to content-focused inspection, first-generation platforms falsely flag numerous legitimate messages. These ‘false positives’ create a significant review burden and prevent organizations from implementing controls that block or correct messages before they are sent or files before they are saved. Orchestria’s Full-Dimensional Analysis not only analyzes content, but also dynamically examines content-around-content, message context, the identity of sender and recipients, hierarchy, and user input. This approach can reduce false positives by more than 90% compared to first-generation solutions."

I’m unaware of any major DLP product that doesn’t use context as well as content. Actually, they sort of have to in order to work.
"Orchestria Incident-Appropriate Action – Far beyond passive post-incident review employed by first-generation technologies, Orchestria proactively protects enterprises by matching responses specifically to the type and severity of the violation. In addition to providing the industry’s leading workflow-enabled review capability, Orchestria supports a variety of ‘before-the-send’ actions including blocking, correcting, and quarantining. This solution also automatically classifies, routes, and stores sensitive messages and files to meet a variety of records management and legal mandates."

Common features in any successful DLP product. I’ll give them a little credit: most of the other DLP tools don’t focus on compliance archiving, and require more manual tuning and technology integration to manage that. But workflow review? Hell, I’ve been working with all the major DLP vendors for years on this (at least the ones that didn’t come up with good systems on their own).

Dr. Sara Radicati of The Radicati Group said, "The key advantage with Orchestria DLP is responding to potential violations with automated incident-appropriate actions – from a routine warning, to forwarding to a supervisor, to nothing at all. As it all happens in real time, customers will know that their messages haven’t been banished to a backed-up review queue for hours or even days."

This is why I’m very careful about the custom quotes I’ll do (none for years now, but never say never); they make you sound like a… well, I’d use the word we’re all thinking if it were a guy, but I’ll never say that about a woman who doesn’t print it on her business card. I hope I get to stay on my high horse indefinitely; now that I’m an independent consultant we’ll see how long I last.

"Orchestria’s DLP solution provides a new and different approach that fulfills all requirements for effective protection," said Bo Manning, Orchestria’s president and chief executive officer. "It covers all points of risk within the enterprise, accurately distinguishes violations from false positives, and enables the right action, including proactive protection – all on the industry’s most flexible architecture."

Bo, I think you’ve done some cool stuff, but you’re better off focusing on what you really bring that’s new to the market (and you do have a couple things) than exaggerated marketing that won’t stand up once someone glances at a competitor. Your website is even more full of omissions and errors than this release.

DLP is probably the ugliest market I covered as an analyst. It’s seriously rough and tumble, with over a dozen vendors fighting over what was only $50M last year and will probably only be $100-120M this year. Unless by “first gen” you mean products from two years ago, you’re in for a surprise once you go into competitive evaluations. Marketing aside, I think Orchestria will be one to watch, and a few competitive wins could open up some big opportunities. Right now the jury is out, and it’s clear whoever wrote their marketing materials needs to take a close look at the competition.
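Since "content versus context" comes up in every one of these marketing fights, here is a minimal sketch of what combining the two actually means in practice: a content pattern plus sender, recipient, and channel attributes driving the verdict. Everything in it (the domains, the actions, the pattern itself) is illustrative only; any shipping product does something far richer.

```python
import re

# Toy content rule: an SSN-like pattern. Real engines use validated
# identifiers, document fingerprints, and statistical analysis.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

INTERNAL_DOMAINS = {"example.com"}           # hypothetical
TRUSTED_PARTNERS = {"payroll-partner.com"}   # hypothetical

def evaluate(text, sender, recipients, channel):
    """Combine content (is there an SSN?) with context (who is sending
    it where, over what channel?) to pick a response."""
    if not SSN.search(text):
        return "allow"
    if sender.split("@")[-1].lower() not in INTERNAL_DOMAINS:
        return "quarantine"  # our data relayed by an outside sender
    external = [r for r in recipients
                if r.split("@")[-1].lower() not in INTERNAL_DOMAINS]
    if not external:
        return "log"         # internal-only: record it, don't block
    if all(r.split("@")[-1].lower() in TRUSTED_PARTNERS for r in external):
        return "encrypt"     # approved destination, but force encryption
    if channel == "webmail":
        return "block"       # personal webmail plus SSNs: stop it cold
    return "quarantine"      # anything else goes to a reviewer

print(evaluate("SSN 123-45-6789", "hr@example.com",
               ["rep@payroll-partner.com"], "smtp"))   # -> encrypt
```

The point of the rebuttal above stands: every serious product already layers this kind of context on top of content analysis; the difference is how deep each dimension goes.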


Microsoft Can’t Manage Third-Party Patches, Even Though It’s A Good Idea

Cutaway has a good post up today over at Security Ripcord. In it, he suggests that Microsoft should… well, I’ll let him say it:

"Here is my solution: Microsoft needs to come up with a Central Update Console that software and driver developers can hook to configure automatic updates. They already provide this type of feature through the ‘Add/Remove Programs’ console. Good developers utilize this to help users and administrators manage the software that is installed on their systems. How hard would it be to come up with a solution that other developers could hook to help with centralizing the management of updates and provide a significant positive impact on the overall security of every computer on the Interweb? Although the design, development, testing, implementation, and maintenance of this project would be challenging, I am willing to bet that this would be a small project in the grand scheme of Microsoft OS development. They don’t need to take every software vendor into consideration, they just need to come up with one method all of them could use."

This is something I’ve actually put some thought into (and not just because Cutaway and I talked about it a couple weeks ago), but I don’t think it can work. At least not today. Managing vulnerabilities and patches is a huge issue, with a moderately sized third-party market just to deal with it. Microsoft provides patches only for its own software (with a few exceptions, like a recent ATI driver update); it doesn’t provide patches for non-Microsoft software. I think this is for two reasons that I don’t expect to change anytime soon:

  • Antitrust- there’s an entire market dedicated to vulnerability and patch management. MS can’t step in and include this in the OS, however useful an idea, without facing antitrust accusations. Due to past mistakes, they are often restricted from including features other OS vendors don’t blink an eye at. Take a look at all the whining by Symantec and McAfee over PatchGuard.
  • Liability- it doesn’t matter how many warnings and disclaimers MS puts on the darn thing; the first time a bad third-party patch propagates through a Microsoft central patch console and blows up systems (which is inevitable), the world will cry havoc and let slip the dogs of alt.ms.sucks- and at least a few lawyers, wanting a piece of the MS pot of valuation.

On the enterprise side this isn’t as much of an issue, since most organizations don’t use the update function built into Windows (although they do use WSUS, Windows Server Update Services). Consumers, on the other hand, rely heavily on Microsoft for their updates, and some sort of central service for third-party patches could really help keep their systems current. Especially for device drivers; while applications can build in their own update functions and check whenever they’re used, device drivers represent a huge class of vulnerability that even most enterprises don’t pay enough attention to. Also, as software gets more and more intermingled, relying on application launch to check for patches becomes a problem. Components can represent an exploit risk through a web browser or a virus, even if you haven’t launched the application in a long time. Today, vendors manage this by ignoring it or installing YAASTS (Yet Another Annoying System Tray Service) that runs constantly, draining your system resources.

Thus I think Cutaway’s idea of a central patch service could provide a lot of value and help improve security. No argument there.
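Purely as a thought experiment (none of this is a real Microsoft or vendor API; every name below is made up), the core of such a console really is small: a vendor-registered manifest plus a version and integrity check before anything gets handed to an installer service.

```python
import hashlib

# Hypothetical record a third-party vendor would register with the
# central update console. Not a real API - just the shape of the idea.
driver_manifest = {
    "product": "ExampleAudioDriver",   # made-up product
    "installed_version": "2.1.0",
    "publisher": "Example Corp",
}

def evaluate_feed(manifest, feed, package_bytes):
    """One pass of the hypothetical console: compare versions and verify
    the package hash before handing it off for installation. A real
    design would also verify the publisher's signature and support rollback."""
    if feed["latest_version"] == manifest["installed_version"]:
        return None  # already current
    if hashlib.sha256(package_bytes).hexdigest() != feed["package_sha256"]:
        raise ValueError("package hash mismatch - refusing to install")
    return feed["latest_version"]

# Simulated vendor feed and package so the sketch runs end to end.
package = b"fake driver bits"
feed = {"latest_version": "2.2.0",
        "package_sha256": hashlib.sha256(package).hexdigest()}
print(evaluate_feed(driver_manifest, feed, package))  # -> 2.2.0
```

The mechanics are the easy part; the problem is who owns the consequences when a vendor’s feed ships something broken.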
But it represents a risk to Microsoft that I just don’t think the product managers, never mind the lawyers, will let them take.


Anyone Going To SANS Vegas Next Week?

I’m probably going to swing out to Vegas for a day or two, but haven’t figured out what days yet. If you’re going and want to meet up, drop me a line in the comments or at rmogull@securosis.com.


Avast! Ye Scurvy Dogs!

Yarr! Today be Talk Like A Pirate Day, and we’ll not be having no landlubber speak on this here vessel. So grab ye cutlass, man yer station, and PREPARE TO REPEL BOARDERS!!!


Yes, The World Has Changed And So Must We

Boy, Chris is all riled up over my criticism of Jericho. Let me put this bad boy to bed, at least from my side. Chris missed the point of my last post, and my editor tells me it might be because of how I wrote it. Thus I’ll be a little clearer in this one.

To quickly recap, it seems that they’re ruffled at Jericho’s suggestion that the way in which we’ve approached securing our assets isn’t working, and that instead of focusing on the symptoms by continuing to deploy boxes that don’t ultimately put us in the win column, we should solve the problem instead. I’m not ruffled by that suggestion at all; I agree completely. I just think they communicate it very poorly.

"The threats aren’t the same. The attackers aren’t the same. Networks aren’t the same. The tools, philosophy and techniques we use to secure them can’t afford to be, either."

Agree completely. Look at pretty much everything I’ve ever published over the past 5-6 years. As a matter of fact, I’m doing my best to contribute actionable advice, models, and frameworks to manage these problems. Heck, I barely even talk about “traditional” network security since the world has moved on. Go back to my original posts- when I criticize Jericho it’s over how they communicate and that they spend too much time stating the obvious, not that I disagree with our need to change.

Because we do need to change how we approach security. We don’t need to throw away everything we’ve done, but there’s a lot of new work we need to complete. Data security, application security, how we manage users, identity, and fundamentally how we define trust all need to evolve.

I’m with ya man, don’t put the wrong words in my mouth. I’m contributing in my own small way over here, and if you know people at Jericho I’m happy to work with them directly. If I can keep the irons in the fire I might have some new stuff to reveal this week. Peace out.


Understanding and Selecting a DLP Solution: Part 3, Data-In-Motion Technical Architecture

Welcome to part 3 of our series on Data Loss Prevention/Content Monitoring and Filtering. You should go read Part 1 and Part 2 before digging into this one. In this episode we’re going to spend some time looking at the various architectures we typically see in DLP products. This is a bit of a tough one, since we tend to see a bunch of different technical approaches. There’s no way to cover all the options in a little old blog post, so I’ll focus on the big picture things you should look for. To structure things a bit, we’ll look at DLP for data-in-motion (network monitoring/filtering), data-at-rest (storage), and data-in-use (endpoint). For space reasons, this post will focus on data-in-motion, and the next post will drill into at-rest and in-use.

Network Monitor

At the heart of most DLP solutions lies a passive network monitor. This is where DLP was born, and is where most of you will start your data protection adventure. The network monitoring component is typically deployed at or near the gateway on a SPAN port (or a similar tap). It performs full packet capture, session reconstruction, and content analysis in real time.

Performance numbers tend to be a little messy. First, on the client expectation side, everyone wants full gigabit Ethernet performance. That’s pretty unrealistic, since I doubt many of you fine readers are really running that high a level of communications traffic. Remember, you don’t use DLP to monitor your web applications, but to monitor employee communications. Realistically we find that small enterprises run well less than 50 MB/s of relevant traffic, medium enterprises run closer to 50-200 MB/s, and large enterprises around 300 MB/s (maybe as high as 500 in a few cases). Because of the content analysis overhead, not every product runs full packet capture. You might have to choose between pre-filtering (and thus missing non-standard traffic) or buying more boxes and load balancing. Also, some products lock monitoring into pre-defined port and protocol combinations, rather than using service/channel identification based on packet content. Even if full application channel identification is included, you want to make sure it’s not off by default. Otherwise, you might miss non-standard communications such as tunneling over an unusual port.

Most of the network monitors are just dedicated servers with DLP software installed. A few vendors deploy as a true specialized appliance. While some products have their management, workflow, and reporting built into the network monitor, most offload this to a separate server or appliance. This is where you’ll want the big hard drives to store policy violations, and this central management server should be able to handle distributed hierarchical deployments.

Email

The next major component is email integration. Since email is store and forward, you can gain a lot of capabilities, like quarantine, encryption integration, and filtering, without the same complexity as blocking synchronous traffic. Most products embed an MTA (Mail Transport Agent) into the product, allowing you to just add it as another hop in the email chain. Quite a few also integrate with some of the major existing MTAs/email security solutions directly for better performance.
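Conceptually, each message passing through that extra MTA hop gets parsed and checked against policy before it’s relayed onward. Here’s a minimal sketch of that per-message decision (Python; the pattern, domains, and actions are all illustrative, not any vendor’s actual engine):

```python
import re
from email import message_from_string

CARD = re.compile(r"\b(?:\d[ -]?){15}\d\b")  # toy rule: 16-digit card-like numbers

def mta_hop_decision(raw_message: str) -> str:
    """Decide what the DLP mail hop should do with one message:
    relay it, hand it to an encryption gateway, or quarantine it."""
    msg = message_from_string(raw_message)
    text = "\n".join(part.get_payload()
                     for part in msg.walk()
                     if part.get_content_type() == "text/plain")

    if not CARD.search(text):
        return "relay"
    recipients = (msg.get("To") or "").lower()
    if recipients.endswith("@partner.example.com"):  # hypothetical allowed destination
        return "encrypt"
    return "quarantine"  # hold it for the reviewer workflow

raw = ("From: ap@example.com\nTo: someone@gmail.com\n"
       "Subject: card\n\n4111 1111 1111 1111\n")
print(mta_hop_decision(raw))  # -> quarantine
```

Real products obviously do far more (attachment cracking, fingerprinting, policy workflow); the point is just that store-and-forward email gives you a natural place to make that call before a message ever leaves.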
One weakness of this approach is that it doesn’t give you access to internal email. If you’re on an Exchange server, internal messages never make it through the MTA, since there’s no reason to send that traffic out. To monitor internal mail you’ll need direct Exchange/Notes integration, which is surprisingly rare in the market. We’re also talking true integration, not just scanning logs/libraries after the fact, which is what a few consider internal mail support. Good email integration is absolutely critical if you ever want to do any filtering, as opposed to just monitoring. Actually, this is probably a good time to drill into filtering a bit…

Filtering/Blocking and Proxy Integration

Nearly anyone deploying a DLP solution will eventually want to start blocking traffic. There’s only so long you can take watching all your juicy sensitive data running to the nether regions of the Internet before you start taking some action. But blocking isn’t the easiest thing in the world, especially since we’re trying to allow good traffic, block only bad traffic, and make the decision using real-time content analysis.

Email, as we just mentioned, is pretty easy. It’s not really real-time and is proxied by its very nature. Adding one more analysis hop is a manageable problem in even the most complex environments. Outside of email, most of our communications traffic is synchronous- everything runs in real time. Thus if we want to filter it we either need to bridge the traffic, proxy it, or poison it from the outside.

With a bridge we just have a system with two network cards, and we perform content analysis in the middle. If we see something bad, the bridge closes the connection. Bridging isn’t the best approach for DLP, since it might not stop all the bad traffic before it leaks out. It’s like sitting in a doorway watching everything go past with a magnifying glass- by the time you get enough traffic to make an intelligent decision, you may have missed the really good (bad) stuff. Very few products take this approach, although it does have the advantage of being protocol agnostic.

Our next option is a proxy. A proxy is protocol/application specific and queues up traffic before passing it on, allowing for deeper analysis (get over it Hoff, I’m simplifying on purpose here). We mostly see gateway proxies for HTTP, FTP, and IM. Almost no DLP solutions include their own proxies; they tend to integrate with existing gateway/proxy vendors such as Blue Coat/Cisco/Websense instead. Integration is typically through the ICAP protocol, allowing the proxy to grab the traffic, send it to the DLP product for analysis, and cut communications if there’s a violation. This means you don’t have to add another piece of hardware in front of your network traffic, and the DLP vendors can avoid the difficulties of building dedicated network hardware for inline analysis. A couple of gateways, like Blue Coat and Palo Alto (I may be missing
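To show the shape of that proxy round trip, here is a bare-bones stand-in for the analysis side (plain HTTP instead of real ICAP, a toy pattern, and hypothetical port and field names): the proxy posts the outbound content plus a little context, and enforces whatever verdict comes back.

```python
import json
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy content rule

class AnalysisHandler(BaseHTTPRequestHandler):
    """Stand-in for the DLP analysis service a web/FTP/IM proxy consults.
    Real integrations speak ICAP (RFC 3507); the flow is the same."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # The proxy sends the outbound payload plus context it already knows.
        verdict = "block" if SSN.search(request.get("body", "")) else "allow"
        reply = json.dumps({"verdict": verdict}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # The proxy POSTs each outbound request body here, then blocks or
    # allows the user's connection based on the verdict.
    HTTPServer(("127.0.0.1", 8080), AnalysisHandler).serve_forever()
```

The design point is the one in the post: the proxy holds the session while analysis happens, which is what makes clean blocking possible for synchronous protocols.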


Jericho Needs Assistance Restating The Obvious

Sometimes it’s not even worth the effort. First Rothman, then Hoff decide to bring up our favorite red-headed stepchild (a term I use with fondness, since I have red hair and a stepfather); all based on an SC Magazine article. I suppose Jericho’s goals are admirable, but I can’t help but feel that they’re stating the blindingly obvious and doing a piss-poor job of it.

For those of you not familiar with Jericho, take a quick gander over here. Basically, they’ve been advocating “de-perimeterization”: pushing people into new security architectures and dropping their firewalls (yes, they really said to trash the firewall if you go back and look at some of their original press releases). These days they have a marginally better platform (speaking platform, not technology), and aren’t running around telling people to shut off firewalls quite as much. I’ll let them describe their position:

"The group admits ‘deperimeterisation’ isn’t the most catchy phrase to explain multiple-level security, but Simmonds calls it an ‘overarching phrase’ that ‘covers everything’. So what is it? According to the Jericho Forum, it is a concept that describes protecting an enterprise’s systems and data on multiple levels using a pick’n’mix of encryption, inherently secure computer protocols and data-level authentication. At the same time, it enables the free flow of secure data wherever and whenever it is needed, in whatever medium and between dissimilar organisations — such as banks and oil companies, for example. This kicks against the notion of security via a network boundary to the internet."

Or as Hoff restates: "Your perimeter *is* full of holes so what we need to do is fix the problems, not the symptoms." That is the message.

Chris sometimes refers to a particular colleague of ours as Captain Obvious. I guess he didn’t want Richard to be lonely. Of course the perimeter is full of holes; I haven’t met a security professional who thinks otherwise. Of course our software generally sucks and we need secure platforms and protocols. But come on guys, making up new terms and freaking out over firewalls isn’t doing you any good. Anyone still think the network boundary is all you need? What? No hands? Just the “special” kid in back? Okay, good, we can move on now.

How about this- focus on one issue and stay on message. I formally submit “buy secure stuff” as a really good one to keep us busy for a while. You have some big companies on board and could use some serious pressure to kick those market forces into gear.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.