Securosis

Research

Everything Old Is New Again In The Fog Of The Cloud

Look, I understand
Too little, too late
I realize there are things you say and do
You can never take back
But what would you be if you didn’t even try
You have to try
So after a lot of thought
I’d like to reconsider
Please
If it’s not too late
Make it a… cheeseburger

-“Here I Am” by Lyle Lovett

Sometimes I get a little too smart for my own good and think I can pull a fast one on you fair readers. Sure enough, Hoff called me out over my Cloud Computing Macro Layers post from yesterday. And I quote:

I’d say that’s a reasonable assertion and a valid set of additional “layers.” They’re also not especially unique, and as such, I think Rich is himself a little disoriented by the fog of the cloud because, as you’ll read, the same could be said of any networked technology. The reason we start with the network and usually find ourselves back where we started in this discussion is because the other stuff Rich mentions is just too damned hard, costs too much, is difficult to sell, isn’t standardized, is generally platform dependent, and is really intrusive. See this post (Security Will Not End Up In the Network) as an example. Need proof of how good ideas like this get mangled? How about Web 2.0 or SOA, which is, for lack of a better description, exactly what Rich described in his model above; loosely coupled functional components of a modular architecture.

My response is… well… duh. I mean we’re just talking distributed applications, which we, of course, have yet to really get right (although multiplayer games and gambling are close). But I take a little umbrage at Chris’s assumptions. I’m not proposing some kind of new, modular structure a la SOA; I’m talking more about application logic and basic security than anything else, not bolt-on tools. Because the truth is, it will be impossible to add these things on after the fact; it hasn’t worked well for network security, and it sure as heck won’t work well for application security.
These aren’t add-on products, they are design principles. They aren’t all new, but as everyone jumps off the cliff into the cloud they are worth repeating and putting into context for the fog/cloud environment. Thus some new descriptions for the layers. Since it’s Friday and all I can think about is the Stone Brewery Epic Vertical Ale sitting in my fridge, this won’t be in any great depth:

Network: Traditional network security, and the stuff Hoff and others have been talking about. We’ll have some new advances to deal with the network aspects of remote and distributed applications, such as Chris’ dream of IF-MAP, but we’re still just talking about securing the tubes.

Service: Locking down the Internet-exposed APIs. We have a fair bit of experience with this, and have learned a lot of lessons over the past few years with work in SOA- SOAP, CORBA, DCOM, RPC, and so on. We face three main issues here. First, not everyone has learned those lessons, and we see tons of flaws in implementations and even fundamental design. Second, many of the designers/programmers building these cloud services don’t have a clue or a sense of history, and thus don’t know those lessons. And finally, most of these cloud services build their own kinds of APIs from scratch anyway, so everything is custom, and full of custom vulnerabilities- from simple parsing errors, to bad parameterization, to logic flaws. Oh, and lest we forget, plenty of these services are just web applications with AJAX and such that don’t even realize they are exposing APIs. Fun stuff I refer to as “job security”.

User: This is an area I intend to talk about in much greater depth later on.
Basically, right now we rely on static authentication (a single set of credentials to provide access), and I think we need to move more towards adaptive authentication (where we provide an authentication rating based on how strongly we trust that user at that time in that situation, and can thus adjust the kinds of allowed transactions). This actually exists today- for example, my bank uses a username/password to let me in, but then requires an additional credential for transactions vs. basic access.

Transaction: As with user, this is an area we’ve underexplored in traditional applications, but I think it will be incredibly valuable in cloud services. We’d build something called adaptive authorization into our applications and enforce more controls around approving transactions. For example, if a user with a low authentication rating tries to transfer a large sum out of their bank account, a text message with a code will be sent to their cell phone. If they have a higher authentication rating, the transaction value at which that back channel is required goes up. We build policies on a transaction basis, linking in environmental, user, and situational measurements to approve or deny transactions. This is program logic, not something you can add on.

Data: All the information-centric stuff we expend endless words on in this blog.

Thus this is nearly all about design, with a smidge of framework and shared security services support we can build into common environments (e.g. an adaptive authentication engine or encryption services in J2EE). No loosely coupled functional components, just a simple focus on proper application design with awareness of the distributed environment. But as Chris also says:

It should be noted, however, that there is a ton of work, solutions and initiatives that exist and continue to be worked on these topics, it’s just not a priority as is evidenced by how people exercise their wallets.

Exactly.
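To make the idea concrete, here is a minimal sketch of what adaptive authentication plus adaptive authorization could look like as application logic. Every signal name, weight, and dollar threshold below is an invented assumption for illustration, not a reference design or any bank’s real policy:

```python
# Hypothetical sketch of adaptive authentication/authorization.
# Signal names, weights, and thresholds are illustrative assumptions.

def auth_rating(signals):
    """Score how strongly we trust this session right now (0.0 to 1.0)."""
    score = 0.0
    if signals.get("password_ok"):
        score += 0.4
    if signals.get("known_device"):
        score += 0.3
    if signals.get("usual_geography"):
        score += 0.2
    if signals.get("second_factor_ok"):
        score += 0.1
    return score

def authorize_transfer(amount, signals):
    """Adaptive authorization: the higher the session's rating, the larger
    the transfer allowed before an out-of-band code is demanded."""
    rating = auth_rating(signals)
    # Back-channel threshold scales with trust: $100 at zero trust,
    # up to $1000 for a fully trusted session (assumed values).
    threshold = 100 * (1 + 9 * rating)
    if amount <= threshold:
        return "approved"
    return "send_sms_code"   # step-up authentication required
```

A password-only session trying to move $500 would trip the back channel, while a session on a known device in the usual geography could move the same amount without it. The point is that the decision is per-transaction program logic, not a perimeter control.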
Most of what we write about cloud computing security will be ignored… but what would we be if we didn’t even try? You have to try.


Friday Summary

I have to say, Moscow was definitely one of the more interesting, and difficult, places I’ve traveled to. The city wasn’t what I expected at all- everywhere you look there’s a park or a big green swath down major streets. The metro was the cleanest, most fascinating of any city (sorry, NY). I never waited more than 45 seconds for a car, and many of the stations are full of beautiful Soviet-era artwork. In other ways it was more like traveling to Japan- a different alphabet, the obvious recognition of being an outsider, and English (or any Western European language) is tough to find outside the major tourist areas. Eating was sometimes difficult as we’d hunt for someplace with an English menu or pictures. But the churches, historical sites, and museums were simply amazing. We did have one amusing (after the fact) incident. I was out there for the DLP Russia conference, at a Holiday Inn outside of Moscow proper. We requested a non-smoking room, which wasn’t a problem. Of course we’re in a country where the average three-year-old smokes, so we expected a little bleed-over. What we didn’t expect was the Philip Morris conference being held at the same hotel. So much for our non-smoking room, and don’t get me started on the smoking-only restaurant. Then there was my feeble attempt to order room service that led to the room service guy coming to our room, me pointing at things on the menu, and him bringing something entirely different. Oh well, it was a good trip anyway. Now on to the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences: I spoke at a DLP Executive Forum in Dallas (a Symantec event). Over at TidBITS I explain how my iPhone rescued me from a travel disaster. Although I wasn’t on the Network Security Podcast this week, Martin interviewed Homeland Security Secretary Chertoff.

Favorite Securosis Posts: Rich: I finally get back to discussing database encryption in Data Encryption, Option 1- Media Protection.
(Adrian’s follow-up is also great stuff). Adrian: Examining how hackers work and think as a model for approaching Data Discovery & Classification.

Favorite Outside Posts: Adrian: Nothing has been more fascinating this week than watching the spam stories on Brian Krebs’ blog. Rich: Kees Leune offers up advice for budding security professionals.

Top News: One small step at the ISP, one giant leap for the sanity of mankind. 8e6 and Mail Marshal merge. AVG flags a Windows DLL as a virus, scrambles to fix the false positive. Jeremiah Grossman on how browser security evolves. Apple updates Safari. Oh yeah, and Google Chrome and Firefox also issued updates this week. Google also fixes a critical XSS vulnerability in its login page. Microsoft patches a seven-year-old SMB flaw, which leads Chris Wysopal to talk about researcher credit. Researchers hijack the Storm worm. I think it’s this kind of offensive computing we’ll need to manage cybercrime- you can’t win on defense alone.

Blog Comment of the Week: Ted on my Two Kinds of Security Threats post: You get more of what you measure. It’s pretty easy to measure noisy threats, but hard to measure quiet ones. Fundamentally this keeps quiet threats as a “Fear” sell, and nobody likes those.


Brian Krebs: Ultimate Spam Filter

First he exposes the Russian Business Network and forces them to go underground; now he nearly single-handedly stops two-thirds of spam. Most tech journalists, myself included, merely comment on the latest product drivel or market trends. Brian is one of the only investigative journalists actually looking at the roots of cybercrime and fraud. On Tuesday, he contacted the major ISPs hosting McColo- a notorious hosting service whose clients include a long roster of cybercriminals. At least one of those ISPs pulled the plug, and until McColo’s clients are able to relocate we should enjoy some relative quiet. Congrats Brian- awesome work. This also shows the only way we can solve some of these problems: through proactive investigation and offense. We can’t possibly win if all we do is run around and try to block things on our side.


1 In 4 DNS Servers Still Vulnerable? More Like 4 in 4

I was reading this article over at NetworkWorld today on a study by a commercial DNS vendor that concluded 1 in 4 DNS servers is still vulnerable to the big Kaminsky vulnerability. The problem is, the number is more like 4 in 4. The new attack method that Dan discovered is only slowed by the updates everyone installed; it isn’t stopped. Now instead of taking seconds to minutes to compromise a DNS server, it can take hours. Thus if you don’t put compensating security controls in place, and you’re a target worth hitting, the attacker will still succeed. This is a case where IDS is your friend- you need to be watching for the DNS traffic floods that indicate you are under attack. There are also commercial DNS solutions you can use with active protections, but for some weird reason I hate the idea of paying for something that’s free, reliable, and widely available. On that note, I’m going to go listen to my XM Radio. The irony is not lost on me.
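As a rough illustration of the kind of watching I mean, a detector might count suspicious DNS responses (say, ones whose transaction IDs don’t match an outstanding query) per source over a sliding window, and alert when the rate looks like a brute-force attempt rather than background noise. The window size and threshold below are invented for illustration; a real IDS signature would be tuned to your own traffic:

```python
# Illustrative sketch only: flag a possible Kaminsky-style brute force
# by rate-limiting mismatched DNS responses per source address.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
ALERT_THRESHOLD = 100  # mismatched responses per window (assumed value)

class DnsFloodDetector:
    def __init__(self):
        # source address -> timestamps of recent mismatched responses
        self.events = defaultdict(deque)

    def mismatched_response(self, source, now):
        """Record one mismatched response; return True when the source
        crosses the alert threshold within the sliding window."""
        q = self.events[source]
        q.append(now)
        # Drop events that have aged out of the window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) >= ALERT_THRESHOLD  # True -> raise an IDS alert
```

A slow trickle of mismatches (normal retransmits and timeouts) never trips the threshold, while the sustained flood a multi-hour guessing attack requires does.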


Cloud Security Macro Layers

There’s been a lot of discussion on cloud computing in the blogosphere and general press lately, and although I’ll probably hate myself for it, it’s time to jump in beyond some sophomoric (albeit really funny) humor. Chris Hoff inspired this with his post on TCG IF-MAP, a framework/standard for exchanging network security objects and events. Its roots are in NAC, although as Alan Shimel informs us there’s been very little adoption. Since cloud computing is a crappy marketing term that can mean pretty much whatever you want, I won’t dig into the various permutations in this post. For the purposes of this post I’ll be focusing on distributed services (e.g. grid computing), online services, and SaaS. I won’t be referring to “in the cloud” filtering and other network-only services. Chris’s posting, and most of the ones I’ve seen out there, are heavily focused on network security concepts as they relate to the cloud. But if we look at cloud computing from a macro level, there are additional layers that are just as critical (in no particular order):

Network: The usual network security controls.

Service: Security around the exposed APIs and services.

User: Authentication- which in the cloud world will need to move to more adaptive authentication, rather than our current static username/password model.

Transaction: Security controls around individual transactions- via transaction authentication, adaptive authorization, or other approaches.

Data: Information-centric security controls for cloud-based data. How’s that for buzzword bingo? Okay, this actually includes security controls over the back end data, distributed data, and any content exchanged with the user.

Down the road we’ll dig into these in more detail, but anytime we start distributing services and functionality over an open public network with no inherent security controls, we need to focus on the design issues and reduce design flaws as early as possible.
We can’t just look at this as a network problem- our authentication, authorization, information, and service (layer 7) controls are likely even more important. This gets me thinking it’s time to write a new framework… not that anyone will adopt it.


Database Encryption- Option 1, Media Protection

I do believe I am officially setting a personal best for the most extended blog series. Way back in February, before my shoulder surgery, I started a series on database encryption. I not only don’t expect you to remember this, but I’d be seriously concerned about your mental well-being if you did. In that first post I described the two categories of database encryption- media protection, and separation of duties. Today we’re going to talk more about media encryption, and the advantages of combining it with database activity monitoring. When encrypting a database for media protection, our goal is to protect the data from physical loss or theft (including some forms of virtual theft). This won’t protect sensitive content in the database if someone has access to the DB, but it will protect the information in storage and archive, and may offer real-time protection from theft of the database files. The advantage of encryption for media protection is that it is far easier to implement than encryption for separation of duties, which involves mucking with the internal database structures. The disadvantage is that it provides no internal database controls, and thus isn’t ideal for things like credit card numbers where you need to restrict even an administrator’s ability to see them. Database encryption for media protection is performed using the following techniques/technologies:

Media encryption: This includes full drive encryption or SAN encryption; the entire storage media is encrypted, and thus the database files are protected. Depending on the method used and the specifics of your environment, this may or may not provide protection for the data as it moves to other data stores, including archival (tape) storage. For example, depending on your backup agent, you may be backing up the unencrypted files, or the encrypted storage blocks. This is best suited for high-performance databases where the primary concern is physical loss of the media (e.g.
a database on a managed SAN where the service provider handles failed drives potentially containing sensitive data). Any media encryption product supports this option.

External File/Folder Encryption: The database files are encrypted using an external (third-party) file/folder encryption tool. Assuming the encryption is configured properly, this protects the database files from unauthorized access on the server, and those files are typically still protected as they are backed up, copied, or moved. Keys should be stored off the server, with no access provided to local accounts, which will offer protection should the server become compromised and rooted by an external attacker. Some file encryption tools, such as Vormetric or BitArmor, can also restrict access to the protected files based on application. Thus only the database processes can access the file, and even if an attacker compromises the database’s user account, they will only be able to access the decrypted data through the database itself. File/folder encryption of the database files is a good option as long as performance is acceptable and keys can be managed externally. Any file/folder encryption tool supports this option (including Microsoft EFS), but performance needs to be tested since there is wide variation among the different tools. Remember that any replication or distribution of data handled from within the database won’t be protected unless you also encrypt those destinations.

Native Database Object Encryption: Most current database management system versions, such as Oracle, Microsoft SQL Server, and IBM DB2, include capabilities to encrypt either internal database objects (tables and other structures) or the data stores (files). This is managed from within the database, and keys are typically stored internally. This is overall a good option in many scenarios, as long as performance meets requirements.
Depending on the platform, you may be able to offload key management to an external key management solution. The disadvantage is that it is specific to each database platform, and isn’t even always available. The decision on which option to choose depends on your performance requirements, threat model, existing architecture, and security requirements. Unless you have a high-performance system that exceeds the capabilities of file/folder encryption, I recommend you look there first. If you are managing heterogeneous databases, you will likely look at a third-party product over native encryption. In both cases, it’s very important to use external key management and not allow access by any local accounts. The security of database encryption for media protection is greatly enhanced when combined with database activity monitoring. In this scenario, the database content is protected from loss via encryption, and internal data is protected against abuse by database activity monitoring. I’ve heard of this combination being used as a compensating control for PCI- the database files are encrypted to prevent loss, while database activity monitoring is used to track all access to credit card numbers and generate alerts for unapproved access, such as a DBA running a SELECT query.
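A minimal sketch of the monitoring half of that combination: a rule that alerts on any query touching the card-number table from anything other than an approved application path. The table, role, and application names are hypothetical, invented purely for illustration; a real DAM product would key off parsed SQL and session metadata:

```python
# Hypothetical database activity monitoring rule, used as a compensating
# control alongside media encryption. All names below are invented.

SENSITIVE_TABLES = {"credit_cards"}
APPROVED_APPS = {"billing_service"}   # application paths allowed to read

def check_query(user_role, application, tables_touched):
    """Return an alert string for unapproved sensitive access, or None."""
    touched = SENSITIVE_TABLES & set(tables_touched)
    if not touched:
        return None        # query never touched sensitive data
    if application in APPROVED_APPS:
        return None        # normal application access, no alert
    # e.g. a DBA running an ad hoc SELECT against the card table
    return f"ALERT: {user_role} via {application} read {sorted(touched)}"
```

The encryption keeps the files safe if the media walks off; this kind of rule is what catches the credentialed insider the encryption can’t.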


The Two Kinds Of Security Threats, And How They Affect Your Life

When we talk about security threats we tend to break them down into all sorts of geeky categories. Sometimes we use high-level terms like client-side, targeted attack, or web application vulnerability. Other times we dig in and talk about XSS, memory corruption, and so on. You’ll notice we tend to mix in vulnerabilities when we talk about threats, but when we do that, hopefully in our heads we’re following the proper taxonomy and actually thinking about that vulnerability being exploited, which is closer to a threat. Anyway, none of that matters. In security there are only two kinds of threats that affect us:

Noisy threats that break things people care about.

Quiet threats everyone besides security geeks ignores, because they don’t screw up anyone’s ability to get their job done or browse ESPN during lunch.

We get money for noisy threats, and get called paranoid freaks for trying to prevent quiet threats (which can still lose our organizations a boatload of money, but don’t interfere with the married CEO’s ability to flirt with the new girl in marketing over email). Compliance, spam, AV, and old-school network attacks are noisy threats. Data breaches (unless you get caught), web app attacks, virtualization security, and most internal stuff are quiet threats. Don’t believe me? Slice up your budget and see how much you spend preventing noisy vs. quiet threats. It’s often our own little version of security theater. And if you really want to understand a vertical market, one of the best things you can do is break out noisy vs. quiet for that market, and you’ll know what you’ll get money for. The problem is, noisy vs. quiet may bear little to no relationship to your actual risk and losses, but that’s just human nature.


How the Death of Privacy and the Long Archive May Forever Alter Politics

Way back in November of 2006 I wrote a post on the impact of our electronic personas on the political process. I was thinking about re-writing the post, but after reviewing it realized the situation is exactly the same two years later… if not a bit worse. As a generation raised on MySpace, FaceBook, and other social media starts becoming the candidates, rather than the electorate, I think we will see profound changes in our political process. I’m off in Moscow so I’m pre-posting this, and for all I know the election is in the midst of a complete meltdown right now. Here again, for your reading pleasure, is the text of the original post:

As the silly season comes to a close with today’s election (at least for, like, a week or so) there’s a change to the political process I’ve been thinking about a lot. And it’s not e-voting, election fraud, or other issues we’ve occasionally discussed. On this site (and others) we’ve discussed the ongoing erosion of personal privacy. More of our personal information is publicly available, or stored in private databases unlocked with a $-shaped key, than society has ever experienced before. This combines with a phenomenon I call “The Long Archive”- where every piece of data, of value or not, is essentially stored for eternity (unless, of course, you’re in a disaster recovery situation). Archived web pages, blog posts, emails, newsgroup posts, MySpace profiles, FaceBook pages, school papers, phone calls, calendar entries, credit card purchases, Amazon orders, Google searches, and … Think about it. If only 2% of our online lives actually survives indefinitely, the mass of data is astounding. What does this have to do with politics? The current election climate could be described as mass-media shit-slinging. Our current crop of elected officials, of either party, survives mostly on their ability to find crap on their opponents while hiding their own stinkers.
Historically, positive electioneering is a relative rarity in the American political system. We, as a voting public, seem to desire pristine Ken dolls we can relate to over issues-focused candidates. No, not all the time, but often enough that negative campaigning shows real returns. But the next generation of politicians is growing up online, with their entire lives stored on hard drives. From school papers, to medical records, to personal communications, to web activity, chat logs (kept by a “trusted” friend), and personal blogs filled with previously private musings. It’s all there. And no one knows for how long; not really. No one knows what will survive, what will fade, but all of it has the potential to be available for future opponent research. I’m a bit older, but there’s still an incredible archive of information out there on me, including some old newsgroup posts I’m not all that proud of (nothing crazy, but I am a bit of a geek). Maybe even remnants of ugly breakups with ex-girlfriends, or rants never meant for public daylight. Never mind my financial records (missed taxes one year, but did make up for it) and such. In short, there’s no way I could run for any significant office without an incredibly thick skin. Anyone who started high school after, say, 1997 is probably in an even more compromising position. Anyone in the MySpace/FaceBook groups is even worse off. With so much information, on so many people, there’s no way it won’t change politics. I see three main options: We continue to look for “clean” candidates- thus those with limited to no online records. Only those who have disengaged from modern society, and are thus probably not fit for public leadership, will run for public office. The “Barbie and Ken” option. We, as society, accept that everyone has skeletons, everyone makes mistakes, and begin to judge candidates on their progression through those mistakes or ability to spin them in the media of the day.
We still judge on personality over issues. The “Oprah/Dr. Phil” option. We focus on candidates’ articulation of the issues, and place less of an emphasis on a perfect past or personality. The “Issues-oriented” option. We weigh all the crap on two big scales. Whoever comes out slightly lighter, perhaps with a sprinkling of issues, wins. The “Scales of Shit” option. Realistically we’ll see a combination of all the above, but my biggest concern is how this will affect the quality of candidates. We, as a society, already complain about a lack of good options. We’re limited to those with either a drive for power, or a desire for public good, so strong that they’re willing to peel open their lives in a public vivisection every election cycle. When every purchase you’ve ever made, email, IM or SMS, blog post, blog comment, social bookmark, WhateverSpace page, public record, and medical record becomes open season, who will be willing to undergo such embarrassing scrutiny? Will anyone run for office for anything other than raw greed? Or will we, as a society, change the standards by which we judge our elected officials? I don’t know. But I do know society, and politics, will experience a painful transition as we truly enter the information society.


Friday Summary: Happy Halloween!

Man, I love Halloween; it is the ultimate hacker holiday. When else do we have an excuse to build home animatronics, scare the pants off people, and pretend to be someone else (outside of a penetration test)? Last year I built something I called “The Hanging Man” using a microcontroller, some windshield wiper motors, wireless sensors, my (basic) home automation system, and streaming audio. When trick-or-treaters walked up to the house it would trigger a sensor, black out the front of the house, spotlight a hooded pirate hanging from a gallows, push out some audio of a screaming guy, drop him 15 feet so he was right over the visitors, and then slowly hoist him back up for the next group. This year Adrian and I were pretty slammed, so I not only didn’t build anything new, I barely managed to pull the old stuff out. Heck, both of us have big parties, but due to overlapping travel we can’t even make it to each other’s events. But next year… next year I have plans. Diabolical plans… It was a relatively quiet week on the security front, with no major disasters or announcements. On the election front we’re already hearing reports of various voting machine failures, and some states are looking at pulling them altogether. Personally, I stick with mail-in ballots. This year election day will be a bit surreal since I’ll be in Moscow for a speaking engagement, and likely won’t stay up to see who won (or whose lawyers start attacking first). While I’m in Moscow, Adrian will be speaking on the Information Centric Security Lifecycle in Chicago for the Information Security Magazine/TechTarget Information Security Decisions conference. I’m a bit sad I won’t be up there to see everyone, but it was impossible to turn down a trip to Moscow. So don’t forget to vote, please don’t hack the vote, and hopefully I won’t be kidnapped by the Russian Mafia next week… Webcasts, Podcasts, and Conferences: The Network Security Podcast, Episode 125.
David Mortman joins us to talk about his new gig at Debix and a recent study they released on identity theft and children. I posted a pre-release draft of my next Dark Reading column, The Security Pro’s Guide to Thriving in a Down Economy, up on the Hackers for Charity Informer site. This is a subscription site many of us are supporting with exclusive and early content to help generate funds for HFC. And by posting, I helped feed a child in an underdeveloped country for a month… Favorite Securosis Posts: Rich: The Five Stages of Cloud Computing Grief. Seriously, this cloud stuff is getting over the top. Adrian: Seems that the people behind Arizona Proposition 200 should be hauled in front of the FTC for misleading advertising; this is the most grotesque example I have seen on a state ballot measure. Favorite Outside Posts: Adrian: The Hoff has been on a roll lately, but the post that caught my attention was his discussion of the security and compliance shell game of avoidance through SaaS and ‘Cloud’ services. I mean, it doesn’t count if my sensitive data is in the cloud, right? Rich: Martin asks a simple and profound question: What the hell are you doing with those credit card numbers in the first place?!? (He used nicer words, but you get the point). Top News: What a shock- there’s a worm taking advantage of last week’s RPC flaw in Microsoft Windows. ICANN is going after a fraud-supporting domain name registrar in Estonia. Heck, I think we should go after criminal hosts more often. Maryland and Virginia are dropping electronic voting and going back to paper. Amrit on the 10th anniversary of the Digital Millennium Copyright Act. The DMCA has done more to stifle our rights than to actually protect content. On the positive side, the DMCA has actually somewhat helped website operators and hosts by offering some protection when they host infringing materials, since they have to respond to takedown notices, but aren’t otherwise penalized.
A Facebook worm uses Google to get around Facebook security. Most of these sites are a mess because preventing user-generated content from abusing other users is a very hard problem. Even when they bother to try. More voting machine idiocy. And here. Look folks, it isn’t like we don’t know how to manage these things. Walk into any casino and you’ll see highly secure interactive systems. Can you imagine how much fun Vegas would be if they treated the slots like we treat voting machines? Blog Comment of the Week: Dryden on The Five Stages of Cloud Computing Grief: My version: Denial: We can’t secure the cloud. Anger: Why the f&*k is my CIO telling me to secure the cloud? Bargaining: Can you please just tell me how you think we can secure the cloud? Depression: They’re deploying the cloud. Acceptance: We can’t secure the cloud. Disclaimer: “Cloud” can be replaced with virtually (pun intended) any technology. See you all in 2 weeks…


Thriving In An Economic Crisis- And Supporting Hackers For Charity

I was pretty honored a couple months ago when Johnny Long asked me to participate in a new project for Hackers for Charity called The HFC Security Informer. Johnny is a seriously cool guy who founded Hackers for Charity, which provides a mix of services and financial support in underdeveloped countries. I think most geeks who aren’t running evil botnets have a bit of altruism in them, and HFC is a great way we can use our technical backgrounds (and swag) to help out the rougher parts of the world. HFC runs with basically no funding, giving everything right to its target communities. To better support operations as it grows, Johnny created the HFC Informer- a subscription site with all sorts of behind-the-scenes content you can’t get anywhere else. This includes pre-release book chapters, discounts on books, exclusive content, and pre-release papers and posts from some of the top names in security… and the occasional lowly analyst. And every time someone contributes content, cash is donated to feed a child for a month. Yesterday I posted a pre-release (and pre-edited) version of my next Dark Reading column, The Security Pro’s Guide To Thriving In A Down Economy. Please check it out, along with other great content like Rsnake’s Clickjacking paper, and consider supporting HFC. Securosis is a firm believer in the project and we’re hoping to release more content on the HFC Informer, including some of our more in-depth whitepapers.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3-day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts, and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.