Incite 4/10/2013: 103

My paternal grandmother passed away last week at 103. No, that is not a typo. One hundred and three. Ciento tres for you Spanish speakers out there. She would have been 104 in June. That’s a long time. To give you some perspective, per the infoplease site, William Taft was president in 1909. Robert Peary and Matthew Henson reached the North Pole that year. And the big news in the medical community was finding a cure for syphilis. I’m sure that caused much rejoicing around the world. I guess before 1909 you could actually have gone blind, though my folks somehow forgot to tell me about the cure…

My Grandma Hilda was interesting, although I didn’t know her very well. She moved with my grandfather to Florida when I was 5. I’d see them for the occasional winter break trip to North Miami Beach, and they’d come north for some holidays. But they weren’t phone people and long distance calls were pretty expensive back then, so it wasn’t like we’d just chat on the phone. Our kids have it better – they can text, FaceTime, and email their grandparents and cousins. I didn’t have that option.

She grew up in Baltimore, and the way she met my grandfather was a great story. She was actually on a date with his brother Sam, but my grandfather had a car, so he drove Sam to Baltimore for the date. Evidently my grandfather liked her, because when his brother went to get a pack of smokes, my grandfather took off and stood in on the date. I doubt they called it a ‘CB’ like my buddies would today, but they were married for almost 65 years, so it worked out.

She couldn’t have been more different from my grandfather. The cantor who presided over the memorial service called the two of them Yin and Yang. But it was really more like the tortoise and the hare. My Grandpa Harry was fast and explosive. He’s been gone for 16 years but we still talk about his tantrums. He talked fast. He walked fast. He did everything fast and had little tolerance for folks who didn’t keep up. Whereas my grandmother was slow and calm. In the face of a Mt. Vesuvius explosion from Harry, she just wouldn’t be bothered. No matter what happened she was calm. She’d make some snide comment and get back to whatever she was doing. She was the only one who could put him in his place. And she did. It was amazing to see.

And when I say slow, I mean sloooooow. She wasn’t in a rush to do anything, not that I can remember anyway. She got there when she got there. She didn’t drive, so if she couldn’t get a ride or didn’t want to take the bus she wouldn’t go. One winter, when we were young, my grandparents took my brother and me to Walt Disney World. They had just opened EPCOT (yes, I’m dating myself), and I distinctly remember following my grandfather and visiting each ‘country’ in the park. We probably made 4 or 5 loops around the park, and every hour or so we’d pass my grandmother strolling along at her own pace, taking in the sights, not a care in the world. He got to the finish line first, and she took her time to get there. 103 years, to be exact.

On an interesting side note, my paternal great-grandfather (Hilda’s dad) also lasted 103 years. Seriously. So we’re running a pool on my father’s side of the family on who in each generation will make it to 103. I’m tempted to make a run for it. Why not? I’ve always said I want to stick around long enough to have my kids change my diapers, just to return the favor. And evidently I have the genetics to do it.
Though if I do want to stick around that long I’ll need to learn to slow down and be calm, like my grandmother. Living until 103 isn’t for folks in a rush. –Mike

Photo credits: 168/365 – President Taft Faces the Future originally uploaded by davidd

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Understanding Identity Management for Cloud Services: Buyers Guide; Architecture and Design; Integration

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management
  • Defending Against Denial of Service Attacks

Incite 4 U

What’s $300K between friends? Very interesting research by our friend Wendy Nather of 451 Group (highlighted by Shimmy at NetworkWorld) on what to buy if you start a security program in a green field. Yeah, I know there are no green fields, but Wendy determined that a 1,000-person company would need to spend $300,000-$400,000 for a bare-bones security capability. If they wanted a little more they would at least double the cost. She tends to see 1 security person per 500 employees. This isn’t a low-cost scenario, is it? And it doesn’t really help a company sell more stuff, does it? Sure you can spin your wheels talking about enabling this or that, but security remains a significant cost center. But at least the stuff you buy stops the attackers, right? (No, not really.) So there’s that… – MR

Two of these are nothing alike: You know big data is a threat to traditional big iron when entrenched providers start marketing off its coattails, as Alex Gorbachev attempts to do by comparing required IT management skills for Hadoop and Exadata. One of the many problems with this article is that the basic premise is not true: big data is not “a pre-integrated, engineered system with built-in management and automation


Should the Red (Team) be dead?

I like to see stuff that challenges common wisdom. The inimitable professor Gene Spafford of Purdue goes far against the grain in calling out the excitement of hacking competitions and red teams as counterproductive to training the next generation of security folks. Gene starts with an analogy for how security folks would deal with a bunch of barns on fire:

We’re going to have a contest to find who can pass this pail of water the quickest. Yes, it is a small, leaky pail, but we have a lot of them, so that is what we’re going to use in the contest. The winners get to be closest to the flames and have a name tag that says “fire prevention specialist.”

He goes through another couple analogies to make the same point: security folks seem to be holding competitions to show proficiency in stopping yesterday’s problems, while not spending enough time thinking about how to solve the root cause of the security issues: poor systems design.

First, in every case, a mix of short-sighted and ultimately stupid solutions are being undertaken. In each, there are large-scale efforts to address pressing problems that largely ignore fundamental, systemic weaknesses. Second, there are a set of efforts putatively being made to increase the population of experts, but only with those who know how to address a current, limited problem set. Fancy titles, certificates, and seminars are used to promote these technicians. Meanwhile, longer-term expertise and solutions are being ignored because of the perceived urgency of the immediate problems and a lack of understanding of cost and risk. Third, longer-term disaster is clearly coming in each case because of secondary problems and growth of the current threats.

That’s uplifting, right? He does highlight a number of potential solutions, or at least things we should focus on to a greater degree, including:

  • Nationally, we are investing heavily in training and recruiting “cyber warriors” but pitifully little towards security engineers, forensic responders, and more. It is an investment in technicians, not in educated expertise.
  • We have a marketplace where we continue to buy poorly-constructed products then pay huge amounts for add-on security and managing response; meanwhile, we have knowledgeable users complaining that they can’t afford the up-front cost required to replace shoddy infrastructure with more robust items.
  • Rather than listen to experts, we let business and military interests drive the dialog.
  • We have well-meaning people who somehow think that “contests” are useful in resolving part of the problem.

And to put a bow on the issues with contests:

Competitions require rapid response instead of careful design and deep thought – if anything, they discourage people who exhibit slow, considerate thinking – discourage them from the contests, and possibly from considering the field itself. If what is being promoted are competitions for the fastest hack on a WIntel platform, how is that going to encourage deep thinkers interested in architecture, algorithms, operating systems, cryptology, or more?

But there’s more…

So, the next time you hear some official talk about the need for “cyber warriors” or promoting some new “capture the flag” competition, ask yourself if you want to live in a world where the barns are always catching fire, the cars are always breaking down, nearly everyone eats fast food, and the major focus of “authorities” is attracting more young people to minimally skilled positions that perpetuate that situation…until everything falls apart.

The next time you hear about some large government grant that happens to be within 100 miles of the granting agency’s headquarters, or corporate support for a program of which the CEO is an alumnus but there is no history of excellence in the field, ask yourself why their support is skewed towards building more hot dog stands.

I think Gene brings up a number of good points in a very clear manner. I can see the other side of the equation as well, given that red team exercises are fun and give folks a feel for what it’s like to be under fire. But clearly there is a need for both quick-twitch security folks (who can respond quickly under fire) and architects who can think deeply about difficult problems.


Security FUD hits investors

We’ve talked a bit about the need to “be careful what we wish for,” in terms of making security a higher profile issue with senior management. Well, it’s no longer just vendors throwing FUD balloons that can splat at any time. I was perusing the Seeking Alpha investor site over the weekend when I found an article called Pandemic Cyber Security Failures Open An Historic Opportunity For Investors. Yes, I threw up a bit in my mouth when I read that headline. The first sentence doesn’t help:

Cyber Security failures in the Western World have reached a pandemic stage.

Oy. Then the author goes on to quote lots of different sources designed to scare the crap out of the uneducated. It’s awesome. Then he talks a bit about the reality of current defenses:

From my discussions with top security professionals at leading security organizations, including Big 4 consulting and assurance companies, software such as Antivirus and Intrusion Detection and Prevention (IDS/IPS) are currently only marginally effective at catching security threats.

Ugh. But it gets better. Of course when you throw this much FUD you need to have solutions, right? The partnership between VMWare and Cisco is going to integrate network defenses into the virtual computing used in cloud deployments, didn’t you know? That will definitely help address the pandemic. And get this beauty about HP’s innovation in the space:

In addition, HP (HPQ) has developed software to link operational system logs with security event logging, enabling network operations and security to unite in common defense of corporate networks. Eliminating functional silos in network operations and security means more coordinated and efficient defenses against attackers.

Hello! 2004 called and they want their functional silos back. This is when you really wish the uneducated wouldn’t do a few minutes of research and then think they understand security. I don’t feel bad that professional investors may see (and even act on) this kind of crap. But I do worry about unsuspecting individual investors who are most vulnerable to this drivel. Now please excuse me while I take some deep, cleansing breaths…


IaaS Encryption: Protecting Volume Storage

Now that we have covered all the pesky background information, we can start delving into the best ways to actually protect data.

Securing the Storage Infrastructure and Management Plane

Your first step is to lock down the management plane and the infrastructure of your cloud storage. Encryption can compensate for many configuration errors and defend against many management plane attacks, but that doesn’t mean you can afford to skip the basics. Also, depending on which encryption architecture you select, a poorly-secured cloud deployment could obviate all those nice crypto benefits by giving away too much access to portions of your encryption implementation. We are focused on data protection so we don’t have space to cover all the ins and outs of management plane security, but here are some data-specific pieces to be aware of:

  • Limit administrative access: Even if you trust all your developers and administrators completely, all it takes is one vulnerability on one workstation to compromise everything you have in the cloud. Use access controls and tiered accounts to limit administrative access, as you do for most other systems. For example, restrict snapshot privileges to a few designated accounts, and then restrict those accounts from otherwise managing instances. Integrate all this into your privileged user management.
  • Compartmentalize: You know where flat networks get you, and the same goes for flat clouds. Except that here we aren’t talking about having everything on one network, but about segregation at the management plane level. Group systems and servers, and limit cloud-level access to those resources. So an admin account for development systems shouldn’t also be able to spin up or terminate instances in the production accounting systems.
  • Lock down the storage architecture: Remember, all clouds still run on physical systems. If you are running a private cloud, make sure you keep everything up to date and configured securely.
  • Audit: Keep audit logs, if your platform or provider supports them, of management-plane activities including starting instances, creating snapshots, and altering security groups.
  • Secure snapshot repositories: Snapshots normally end up in object storage, so follow all the object storage rules we will offer later to keep them safe. In private clouds, snapshot storage should be separate from the object storage used to support users and applications.
  • Alerts: For highly sensitive applications, and depending on your cloud platform, you may be able to generate alerts when snapshots are created, new instances are launched from particular instances, etc. This isn’t typically available out of the box but shouldn’t be hard to script (see the sketch below), and may be provided by an intermediary cloud broker service or platform if you use one.

There is a whole lot more to locking down a management plane, but focusing on limiting admin access, segregating your environment at the cloud level with groups and good account privileges, and locking down the back-end storage architecture together makes a great start.
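To make the scripting point concrete, here is a minimal sketch of what a snapshot alert could look like. It assumes AWS EC2 and the boto3 SDK purely for illustration (the post itself is platform-neutral), and the one-hour lookback window and stdout "alert" are placeholders for whatever monitoring hook you actually use.

```python
# Minimal sketch (assumption: AWS/EC2 via boto3; adapt for your platform).
# Flags EBS snapshots created in the last hour so they can be reviewed or alerted on.
from datetime import datetime, timedelta, timezone

import boto3

LOOKBACK = timedelta(hours=1)  # hypothetical alerting window


def recent_snapshots(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - LOOKBACK
    resp = ec2.describe_snapshots(OwnerIds=["self"])  # only snapshots owned by this account
    return [s for s in resp["Snapshots"] if s["StartTime"] >= cutoff]


if __name__ == "__main__":
    for snap in recent_snapshots():
        # In practice you would push this into your SIEM, ticketing, or chat alerting
        # rather than printing to stdout.
        print(f"ALERT: new snapshot {snap['SnapshotId']} of volume {snap['VolumeId']} "
              f"started {snap['StartTime'].isoformat()}")
```

Run on a schedule (cron, or whatever your broker platform offers), this gives you the "someone just snapshotted a sensitive volume" signal the Alerts bullet describes.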
Encrypting Volumes

As a reminder, volume encryption protects from the following risks:

  • Protects volumes from snapshot cloning/exposure
  • Protects volumes from being explored by the cloud provider, including cloud administrators
  • Protects volumes from being exposed by physical drive loss (more for compliance than a real-world security issue)

IaaS volumes can be encrypted three ways:

  • Instance-managed encryption: The encryption engine runs within the instance, and the key is stored in the volume but protected by a passphrase or keypair.
  • Externally managed encryption: The encryption engine runs in the instance, but keys are managed externally and issued to instances on request.
  • Proxy encryption: In this model you connect the volume to a special instance or appliance/software, and then connect the application instance to the encryption instance. The proxy handles all crypto operations, and may keep keys either onboard or external.

We will dig into these scenarios next week.
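As a rough illustration of the externally managed model only, here is a sketch of an instance fetching its volume key from an external key manager at boot and handing it to dm-crypt/LUKS. The key-server URL, credential, and device names are all hypothetical, and a real deployment would use mutual authentication and proper secrets handling rather than a hard-coded token.

```python
# Sketch of "externally managed" volume encryption: the instance requests its key
# from an external key manager and feeds it to cryptsetup, so the key never rests
# on the volume itself. All endpoints, tokens, and device names are placeholders.
import subprocess

import requests

KEY_SERVER = "https://keys.example.internal/v1/volume-key"   # hypothetical endpoint
INSTANCE_TOKEN = "replace-with-instance-credential"          # e.g. issued at launch
RAW_DEVICE = "/dev/xvdf"                                     # the encrypted volume
MAPPED_NAME = "secure_data"                                  # becomes /dev/mapper/secure_data


def fetch_volume_key() -> bytes:
    # Key material lives only in memory on this instance.
    resp = requests.get(KEY_SERVER,
                        headers={"Authorization": f"Bearer {INSTANCE_TOKEN}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.content


def unlock_volume(key: bytes) -> None:
    # Pass the key to cryptsetup on stdin so it is never written to disk.
    subprocess.run(
        ["cryptsetup", "luksOpen", RAW_DEVICE, MAPPED_NAME, "--key-file=-"],
        input=key, check=True)


if __name__ == "__main__":
    unlock_volume(fetch_volume_key())
```

The design point this illustrates is the separation of duties: whoever can snapshot or attach the volume still cannot read it without also compromising the key manager or a running instance that holds the key.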


Friday Summary, Gattaca Edition: April 5, 2013

Hi folks, Dave Lewis here, and it is my turn to pull the summary together this week. I’m glad for the opportunity. So, a random thought: I have made a lot of mistakes in my career and will more than likely make many more. I frequently refer to this as my well-honed ability to fall on spears. The point? Simple. This is a learning opportunity that people seldom appreciate. Much like toddlers, we learn to walk by mastering the fine art of the faceplant. We learn in rather short order that we really don’t care for the experience of falling on our faces, and soon that behavior is corrected (for most, at least). So why, pray tell, do we continue to suffer massive data breaches? Not a week goes by without some major corporation or government body announcing that they have lost a USB drive or had a laptop stolen. Have we not learned yet that “face + floor = pain” is not an equation worthy of an infinite loop? Just my musing for this week. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted by The Macalope.
  • Adrian’s DR paper: Security Implications Of Big Data.
  • Rich quoted on Watering Hole Attacks.
  • Adrian’s DR post: Database Security Operations.
  • Mike’s DR post: You’re A Piece Of Conference Meat. snort

Favorite Securosis Posts

  • Rich: 1 in 6 Amazon Web Services users can’t read. This seriously tweaked me. And don’t give me guff for picking my own post – no one else posted this week. You’d think with 3 full timers and 6 contributors, someone else…
  • Adrian Lane: Proposed California Data Law Will Affect Security… but it will take quite a while before companies take it seriously.
  • David Mortman: Flash! And it’s gone…
  • Dave Lewis: Defending Cloud Data: How IaaS Storage Works.

Other Securosis Posts

  • Cybersh** just got real.
  • Proposed California Data Law Will Affect Security.
  • Brian Krebs outs possible Flashback malware author.
  • Appetite for Destruction.
  • Get Ready for Phone Security and Regulations.
  • IaaS Encryption: Understanding Encryption Systems.
  • An article so bad, I have to trash it.

Favorite Outside Posts

  • Rich: Activists on Front Lines Bringing Computer Security to Oppressed People. Lives really are at stake for these people. Mike Mimoso is doing a great job with this coverage.
  • Adrian Lane: IT for Oppression. And I just thought this was IT culture.
  • Dave Lewis: Googlers exultant over launch of Blink browser engine. Google rolls their own browser engine. This should be interesting.
  • Dave Mortman: Building Technical Literacy in Business Teams.
  • James Arlen: Delivering message w/ impact && Announcing our ‘Reverse Job Fair’. This should be a brilliant workshop.

Top News and Posts

  • New PoS malware. That’s “point of sale”, not the other thing. Sometimes.
  • How to Dress Like a Cyber Warrior OR Looking Like a Tier-Zero Hero. This amused me far more than it should have.
  • Bill would allow bosses to seek Facebook passwords. …and then Amendment aimed at workers’ passwords pulled.
  • Apple’s iMessage encryption trips up feds’ surveillance. Because encryption is haaaard. (h/t James Arlen).
  • Aaron Swartz’s Prosecutors Were Threatened and Hacked, DOJ Says. I’ll just bite my tongue.
  • Honeypot Stings Attackers With Counterattacks.
  • Top 10 Web Hacks 2012.
  • FBI Pursuing Real-Time Gmail Spying Powers as “Top Priority” for 2013.
  • Attempted child abduction thwarted when girl asks stranger for code word. This article caught my eye for the brilliant simplicity of keeping your kids safe.

Blog Comment of the Week

This week’s best comment goes to Nate, in response to 1 in 6 Amazon Web Services Users Can’t Read.
I’d go out on a limb and wager a good portion of those open buckets were set up by non-IT groups who used Amazon as an end around governance and process. I’d also wager a fair number just used one of the available tools to manage their S3 because they don’t really understand the technology, and that tool set the bucket to public unbeknownst to them. That means even if they received and read the email above, they probably didn’t understand it. Is that Amazon’s fault? Absolutely not. It does highlight the issue of kicking governance down the road to IT rather than dealing with it at a business level so it can be easily avoided, or focusing governance only on dollars so small opex spends fly under the radar. Unless business leaders start caring about governance and process a whole awful lot, nothing is going to get better, it’s not. Sorry, the kids have been watching the Lorax movie non stop lately.


Cybersh** just got real

Huawei not expecting growth in US this year due to national security concerns (The Verge). U.S. to scrutinize IT system purchases with ties to China (PC World):

U.S. authorities will vet all IT system purchases made from the Commerce and Justice Departments, NASA, and the National Science Foundation, for possible security risks, according to section 516 of the new law. “Cyber-espionage or sabotage” risks will be taken into account, along with the IT system being “produced, manufactured, or assembled” by companies that are owned, directed or funded by the Chinese government.

This is how you fight asymmetric espionage. Expect the consequences to continue until the attacks taper off to an acceptable level (yes, there is an acceptable level).


Proposed California Data Law *Will* Affect Security

Threatpost reports that California is considering a law requiring companies to show consumers what data is collected on them.

Known as the “Right to Know Act of 2013,” AB 1291 was amended this week to boost its chances of success after being introduced in February by state Assembly member Bonnie Lowenthal. If passed, it would require any business that retains customer data to give a copy of that information, including who it has been shared with, for the past year upon request. It applies to companies that are both on- and offline.

The claim is that it doesn’t add data protection requirements, but it does. Here is how:

  • You will need mechanisms to securely share the data with customers. This will likely be the same as what healthcare and financial institutions do today (generally email encryption).
  • You will need better auditing of who data is shared with.
  • Depending on interpretation of the law, you might need better auditing of how it is used internally. Right now this doesn’t seem to be a requirement – I am just paranoid from experience.

What to do? For now? Nothing. Remember the Compliance Lifecycle. Laws are proposed, then passed, then responsibility is assigned to an enforcement body, then they interpret the law, then they start enforcement, then we play the compensating controls game, then the courts weigh in, and life goes on. Vendors will likely throw AB 1291 into every presentation deck they can find, but there is plenty of time to see how this will play out. But if this goes through, there will definitely be implications for security practitioners.
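To make the auditing point concrete, here is a minimal sketch of the kind of disclosure log a company might start keeping so a "past 12 months" report could be produced on request. Nothing here comes from AB 1291 itself; the field names, CSV storage, and helper functions are all illustrative assumptions.

```python
# Illustrative only: a minimal append-only record of third-party data disclosures,
# so a per-customer, trailing-12-month report could be assembled on request.
# Field names and the CSV backing store are assumptions, not anything AB 1291 prescribes.
import csv
import datetime

LOG_PATH = "disclosure_log.csv"  # hypothetical; a real system would use a proper datastore


def record_disclosure(customer_id: str, categories: list, recipient: str) -> None:
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.utcnow().isoformat(),
            customer_id,
            ";".join(categories),   # e.g. "email;purchase_history"
            recipient,              # the third party the data was shared with
        ])


def disclosures_for(customer_id: str, days: int = 365) -> list:
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=days)
    with open(LOG_PATH, newline="") as f:
        return [row for row in csv.reader(f)
                if row[1] == customer_id
                and datetime.datetime.fromisoformat(row[0]) >= cutoff]
```

The point is less the code than the process: if sharing events aren't captured at the moment they happen, there is nothing to report when the request arrives.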


Brian Krebs outs possible Flashback malware author

Brian Krebs thinks he may have identified the author of the Flashback Mac malware that caused so much trouble last year. Brian is careful with accusations but displays his full investigative reporting chops as he lays out the case:

Mavook asks the other member to get him an invitation to Darkode, and Mavook is instructed to come up with a brief bio stating his accomplishments, and to select a nickname to use on the forum if he’s invited. Mavook replies that the Darkode nick should not be easily tied back to his BlackSEO persona, and suggests the nickname “Macbook.” He also states that he is the “Creator of Flashback botnet for Macs,” and that he specializes in “finding exploits and creating bots.”

Brian has started to expose more detailed information from his access to parts of the cybercrime underground, and it’s damn compelling to read.


Appetite for Destruction

We (Rich and Gal) were chatting last week about the destructive malware attacks in South Korea. One popular theory is that patch management systems were compromised and used to spread malware to affected targets, which deleted Master Boot Records and started wiping drives (including network-connected drives), even on Linux. There was a lot of justified hubbub over the source of the attacks, but what really interested us is their nature, and the implications for our defenses.

Think about it for a moment. For at least the past 10 years our security has skewed towards preventing data breaches. Before that, going back to Code Red and Melissa, our security was oriented toward preventing mass destructive attacks. (Before that it was all Orange Book, all the time, and we won’t go there). Clearly these attacks have different implications. Preventing mass destruction focuses on firewalls (and other networking gear, for segmentation, not that everyone does a great job with it), anti-malware, and patching (yes, we recognize the irony of patch management being the vector). Preventing breaches is about detection, response, encryption, and egress filtering.

The South Korean attack? Targeted destruction. And it wasn’t the first. We believe Stratfor had a ton of data destroyed. Stuxnet (yes, Stuxnet) was a fire-and-forget munition. But, for the most part, even Anonymous limits their destructive activities to DDoS and the occasional opportunistic target. Targeted destruction isn’t a new game, but it’s one we haven’t played much. Take Rich’s Data Breach Triangle concept, or Lockheed’s Cyber Kill Chain. You have three components to a successful attack – a way in, a way out, and something to steal. But for targeted destruction all you need is a way in and something to wreck. Technically, if you use some fire-and-forget malware (single-use or worm), you don’t even need to interact with anything behind the target’s walls. No one was sitting at a Metasploit console on the other side of the Witty Worm.

So what can we do? We definitely don’t have all the answers on this one – targeted destructive attacks, especially of the fire-and-forget variety, are hard as hell to stop. But a few things come to mind:

  • We cannot rely on response after the malware is triggered, so we need better segregation and containment. Note that we are skipping traditional defense advice because at this point we assume something will get past your perimeter blocking. Rich has started using the term “hypersegregation” to reflect the increasingly granular isolation we can perform, even down to the application level in some cases, without material management overhead increasing (read more).
  • As you move more into cloud and disk-based backups, you might want to ensure you still keep some offline backups of the really important stuff. We don’t care whether it’s disk or tape, but at some point the really critical stuff needs to be offline somewhere.
  • Once again, incident response is huge. But in this case you need to emphasize the containment side of response more than investigation. On the upside these attacks are rarely quiet once they trigger. On the downside they can be quite stealthy, even if they ping the outside world for commands.

But there is one point in your favor. Targeted destruction as an endgame is relatively self-limiting. There’s a reason it isn’t the dominant attack type, and while we expect to see more of it moving forward, it isn’t about to be something most of us face on a daily basis.

Also, because malware is the main mechanism, all our anti-exploitation work will continue to pay off, making these attacks more and more expensive for attackers. Well, assuming you get the hell off Windows XP.


Get Ready for Phone Security and Regulations

Emergency services providers and others are being hit with telephone-based denial of service attacks. Nasty stuff, powered by IP-based phone systems. This relates to SWATting (which is what hit Brian Krebs). It has become trivial to use computers to make and spoof phone calls. This is the sort of thing that could lead to new regulations. It is already against the law, but these incidents may lead to rules tightening how companies connect to the phone system. Which probably isn’t great for innovation, and might not work anyway.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.