
Comments on Database Media Protection

Rich posted an article on database and media encryption (aka data at rest) earlier this week, discussing the major alternatives for keeping database media safe. Prior to posting it, he asked me to preview the contents for accuracy, which I did, and I think Rich covers the major textbook approaches one needs to consider. I want to add a little color to this discussion in terms of threat models and motivation- why these options should be considered- as well as some additional practical considerations in the use and selection of encryption for data at rest.

Media Encryption: Typically the motivation behind media encryption is to prevent people from stealing sensitive information in the event that the media is lost or stolen and falls into the wrong hands. If your backup tape falls off the back of a truck, for example, it cannot be read and the information sold. But there are a couple of other reasons as well. Tampering with snapshots or media is another problem encryption helps address, as both media and file encryption resist tampering- media encryption for long-term storage, and file/folder encryption for short-term snapshots. If a malicious insider can alter the most recent backups and then force some form of failure in the system, the altered data would be restored; as it becomes the master record of events, catching and discovering the attack would be very difficult. Encrypted backups with proper separation of duties make this at least difficult, and hopefully impossible (a small sketch of tamper-evident backup encryption appears below). In a similar scenario, if someone were to gain access to the backups, or to the appliance that performs encryption and key management, they could mount a type of denial of service attack- perhaps to erase some piece of history recorded in the database, or as leverage to blackmail a company. Regardless of encryption method, redundancy in key management, encryption software/hardware, and backups becomes necessary; otherwise you have simply swapped one security threat for another.

External File or Folder Encryption: If you rely on smaller regional database servers, in bank branch offices for example, theft of the physical device is something to consider. In secured data centers, or where large hardware is used, the odds of this happening are slim; in the case of servers sitting in a rack in an office closet, it is not so rare. This stuff is not stored in a vault, and much in the same way file and folder encryption helps with stolen laptops, it can also help if someone walks off with a server. How and where to store keys in this type of environment needs to be considered as well, for both operational consistency and security.

Native Database Object Encryption: This is becoming the most common method for encrypting database data, and while it might sound obvious to some, there are historical reasons why this trend is only recently becoming the standard. The recent popularity is because database encryption now tends to work seamlessly with most existing application processes, and it usually performs quite well, thanks to optimizations by the database vendors. As it becomes more common, the attacks against it will also become more common. Native database encryption helps address a couple of specific issues. The first is that archived data is already in an encrypted state, so backups are protected against privacy and tampering problems. Second, encryption helps enforce separation of duties, provided that the access controls, group privileges, and roles are properly set up.
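To make the tamper-resistance point concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package is available; its Fernet construction combines AES encryption with an HMAC, so any modification of the archived file is detected at restore time. The file names and key handling are purely illustrative- in practice the key lives in a key manager controlled by someone other than the backup operators.

```python
# Sketch: tamper-evident backup encryption. Fernet = AES + HMAC, so a
# modified backup fails verification instead of silently restoring.
from cryptography.fernet import Fernet, InvalidToken

def protect_backup(plain_path, enc_path, key):
    """Encrypt and authenticate a backup file before archiving it."""
    f = Fernet(key)
    with open(plain_path, "rb") as src:
        token = f.encrypt(src.read())
    with open(enc_path, "wb") as dst:
        dst.write(token)

def restore_backup(enc_path, key):
    """Decrypt a backup; fail loudly if the archive was altered."""
    f = Fernet(key)
    with open(enc_path, "rb") as src:
        try:
            return f.decrypt(src.read())
        except InvalidToken:
            raise RuntimeError("backup failed integrity check - possible tampering")

# key = Fernet.generate_key()  # generated and held by the key manager,
# protect_backup("orders.bak", "orders.bak.enc", key)  # not by the DBA
```

The separation of duties is the point: the backup operator can run the encryption, but never holds the key alone, so neither role can silently alter and restore history.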
However, there are a number of more subtle attacks on the database that need to be considered. How objects and structures are named, how they are used, and other unencrypted columns can all ‘leak’ information. While the primary sensitive data is encrypted, if the structures or naming conventions are poorly designed, or compound columns are used, information may be unintentionally available by inference. Also, stored procedures or service accounts that have permission to examine these encrypted columns can be used to bypass authorization checks and access the data, so both direct and indirect access rights need to be reviewed periodically. In some rare cases I have seen read & write access to encrypted columns left open, because the admin felt that if the data was encrypted it was safe- but overwriting the column with zeros proved otherwise. Finally, some of the memory-scanning database monitoring technologies have access to cached data in its unencrypted state, so make sure caching does not leave a hole you thought was closed.

Hopefully this went a little beyond specific tools and their usage, and provided some food for thought. You will need to consider how encryption alters your disaster recovery strategy, both for the backups themselves and for the encryption software/hardware and key management infrastructure. It affects the whole data ecosystem, so you need to consider every aspect of the data lifecycle it touches. Some of you may consider the threats I raised above far-fetched, but if you think like a criminal, or watch the news often enough, you will see examples of these sorts of attacks from time to time.


Healthcare In The Cloud

Google and Medicare of Arizona are launching a cooperative program, teaming up to put patient health care records onto Google servers so they can be shared with doctors, labs, and pharmacies. Arizona seniors will be pioneers in a Medicare program that encourages patients to store their medical histories on Google or other commercial web sites, as part of a government effort to streamline and improve health care. The federal agency that oversees Medicare selected Arizona and Utah for a pilot program that invites patients to store their health records on the Internet with Google or one of three other vendors.

From Google’s & Medicare’s standpoint, this seems like a great way to reduce risk and liability while creating new revenue models. Google will be able to charge for some add-on advertisement services, and possibly sell data for BI as well. It appears that to use the service you need to provide some consent, but it is unclear from the wording of the privacy policy whether that means the data can be used or shared with third parties by default; time will tell. It does appear that Google does not assume HIPAA obligations, because they are not a health care provider. And because of the voluntary nature of the program, it would be hard to get any satisfaction should the data be leaked and damages result. The same may be true for Medicare: if they are not storing the patient data, there is a grey area regarding the applicability of measures like CA SB 1386 and HIPAA as well. As Medicare is not outsourcing record storage, unlike other SaaS offerings, they may be able to shrug off the regulatory burden.

Is it just me, or does this kind of look like Facebook for medical records?


1 In 4 DNS Servers Still Vulnerable? More Like 4 in 4

I was reading this article over at NetworkWorld today on a study by a commercial DNS vendor, which concluded that 1 in 4 DNS servers is still vulnerable to the big Kaminsky vulnerability. The problem is, the number is more like 4 in 4. The new attack method Dan discovered is only slowed by the updates everyone installed; it isn’t stopped. Instead of taking seconds to minutes to compromise a DNS server, it now takes hours. Thus if you don’t put compensating security in place, and you’re a target worth hitting, the attacker will still succeed. This is a case where IDS is your friend- you need to be watching for the DNS traffic floods that indicate you are under attack (a rough sketch of the idea follows below). There are also commercial DNS solutions you can use with active protections, but for some weird reason I hate the idea of paying for something that’s free, reliable, and widely available. On that note, I’m going to go listen to my XM Radio. The irony is not lost on me.
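For the monitoring piece, here is a rough sketch of the kind of detection I mean, written in Python with the scapy packet library. The threshold is an arbitrary placeholder, and a real deployment would watch at a tap or span port in front of the resolver- treat this as an illustration, not a production IDS.

```python
# Sketch: count inbound DNS responses per second; a sustained flood of
# unsolicited responses is consistent with a Kaminsky-style poisoning attempt.
import time
from collections import defaultdict
from scapy.all import sniff, DNS, IP

THRESHOLD = 500                # responses/sec; tune for your environment
counts = defaultdict(int)      # second -> DNS responses observed

def watch(pkt):
    if pkt.haslayer(DNS) and pkt[DNS].qr == 1:   # qr=1 marks a response
        now = int(time.time())
        counts[now] += 1
        if counts[now] == THRESHOLD:
            print(f"ALERT: DNS response flood, last source {pkt[IP].src}")

sniff(filter="udp src port 53", prn=watch, store=0)
```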


Cloud Security Macro Layers

There’s been a lot of discussion of cloud computing in the blogosphere and general press lately, and although I’ll probably hate myself for it, it’s time to jump in beyond some sophomoric (albeit really funny) humor. Chris Hoff inspired this with his post on TCG IF-MAP, a framework/standard for exchanging network security objects and events. Its roots are in NAC, although as Alan Shimel informs us there’s been very little adoption. Since cloud computing is a crappy marketing term that can mean pretty much whatever you want, I won’t dig into the various permutations in this post. For our purposes I’ll be focusing on distributed services (e.g., grid computing), online services, and SaaS; I won’t be referring to in-the-cloud filtering and other network-only services. Chris’s post, and most of the others I’ve seen out there, are heavily focused on network security concepts as they relate to the cloud. But if we look at cloud computing from a macro level, there are additional layers which are just as critical (in no particular order):

  • Network: The usual network security controls.
  • Service: Security around the exposed APIs and services.
  • User: Authentication- which in the cloud world will need to move to more adaptive authentication, rather than our current username/password static model.
  • Transaction: Security controls around individual transactions- via transaction authentication, adaptive authorization, or other approaches (a toy sketch follows at the end of this post).
  • Data: Information-centric security controls for cloud-based data. How’s that for buzzword bingo? Okay, this actually includes security controls over the back-end data, distributed data, and any content exchanged with the user.

Down the road we’ll dig into these in more detail, but any time we start distributing services and functionality over an open public network with no inherent security controls, we need to focus on the design issues and reduce design flaws as early as possible. We can’t just look at this as a network problem- our authentication, authorization, information, and service (layer 7) controls are likely even more important. This gets me thinking it’s time to write a new framework… not that anyone will adopt it.
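Since transaction security is probably the least familiar layer on that list, here is a toy sketch of what per-transaction authentication can look like: each request is individually signed and time-bounded, so authorization can be evaluated per action rather than only at login. Everything here- the field names, the token scheme- is illustrative, not a reference design.

```python
# Sketch: per-transaction authentication via an HMAC over the request
# fields plus a timestamp, so each action is independently verifiable.
import hashlib
import hmac
import time

def sign_transaction(secret: bytes, user: str, action: str, amount: str) -> dict:
    ts = str(int(time.time()))
    msg = "|".join([user, action, amount, ts]).encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"user": user, "action": action, "amount": amount, "ts": ts, "sig": sig}

def verify_transaction(secret: bytes, txn: dict, max_age: int = 300) -> bool:
    msg = "|".join([txn["user"], txn["action"], txn["amount"], txn["ts"]]).encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    fresh = time.time() - int(txn["ts"]) < max_age   # rejects stale/replayed requests
    return fresh and hmac.compare_digest(expected, txn["sig"])

# txn = sign_transaction(b"shared-secret", "alice", "transfer", "500.00")
# assert verify_transaction(b"shared-secret", txn)
```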


Data Discovery & Classification

I was reading the RSA report on the Torpig/Sinowal trojan while stuck at the airport for several hours last Thursday. During my many hours of free time I overheard an IT executive discussing with his peers the difficulties of implementing data discovery and classification. I did not catch the name of the company, and probably would not pass it along even if I had, but the tired and whiny rant about their associated failures was not unique. Perhaps I was a bit testy about having to sit in an airport lobby for eight hours, but all I could think was “What is wrong with you? If hackers can navigate your data center, why can’t you?” That’s where the RSA report gelled my thoughts on the subject. If a small group, quite literally a handful of hackers, can use Torpig & BlaBla to steal hundreds of thousands of credit card numbers, steal accounts and passwords, and install malicious software at multiple company sites- all without being provided credentials, access rights, or a map of your IT infrastructure- why can’t your company classify its own data and intellectual property assets? You would think that a company, given a modest amount of resources, could discover, classify, and categorize its own data. I mean, if you paid someone full time to do it, don’t you think you could get the job done? Some of the irritating points they raised:

“Data in motion made it difficult to track”: So what- the hacker tools are kept running and never stop scanning. Nor did they give up on the first try; rather they periodically modified their code to adapt for location and type of data, and they were persistent. You should be too.

“Difficulty to classify the data” and “Can’t find stuff you know is there”: So what- hire better programmers. Pressure vendors for better tools. Can’t afford expensive software? There is open source code out there to start with; hackers can do it, so can you. There are at least a dozen programmatic ways to analyze data, through content or even context, and probably even more ways to traverse/crawl/inspect systems (a crude example follows at the end of this post). If the applications your company uses can find it, so can you.

“Size of the project is difficult to manage”: So what- divide and conquer. Take a specific set of data you are worried about and start there. Compliance group breathing down your neck to meet XYZ regulation? Pick one category (customer accounts, credit card data, source code, whatever). Tune your tools and policies (you did not really think you were going to get perfection out of the box, did you?), address the problem, and move on. If you are starting with an ISACA or COBIT framework and trying to map a comprehensive strategy, stop making the problem more complex than it is. Hackers went for low hanging fruit; you should too.

“The results are not accurate”: So what- you’re not going to be 100% right all the time. The hackers aren’t either. Either accept 95-99% accuracy, or try something different. Or maybe your policy is out of line with reality and needs to be reconsidered.

“Expensive” and “Takes too much in the way of resources”: No chance! If hackers can run malware for 18 months at TJX and related stores UNDETECTED, then the methods used are not resource hogs, nor did they invest that much money in the tools. Sometimes you just got to stop whinin’ and git ‘er done!
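To put a little weight behind the “open source code out there” claim, here is a crude sketch of content discovery in Python: walk a file tree, regex for candidate card numbers, and apply the Luhn checksum to cut false positives. Real discovery tools add context analysis, database crawling, and far better pattern libraries; the point is only that the bar to getting started is low. The paths are illustrative.

```python
# Sketch: crude credit card discovery - regex candidates plus a Luhn
# checksum filter, walking a directory tree of text files.
import os
import re

CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")   # 13-16 digits, optional separators

def luhn_ok(digits: str) -> bool:
    """Standard Luhn check: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_tree(root: str):
    """Yield (path, line number) for every plausible card number found."""
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        for match in CARD_RE.finditer(line):
                            if luhn_ok(re.sub(r"[ -]", "", match.group())):
                                yield path, lineno
            except OSError:
                continue   # unreadable file; move on, like the hackers do

# for hit in scan_tree("/data/shares"):
#     print(hit)
```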


Database Encryption- Option 1, Media Protection

I do believe I am officially setting a personal best for the most extended blog series. Way back in February, before my shoulder surgery, I started a series on database encryption. I not only don’t expect you to remember this, but I’d be seriously concerned about your mental well-being if you did. In that first post I described the two categories of database encryption- media protection and separation of duties. Today we’re going to talk more about media encryption, and the advantages of combining it with database activity monitoring.

When encrypting a database for media protection, our goal is to protect the data from physical loss or theft (including some forms of virtual theft). This won’t protect sensitive content in the database if someone has access to the DB, but it will protect the information in storage and archive, and may offer real-time protection from theft of the database files. The advantage of encryption for media protection is that it is far easier to implement than encryption for separation of duties, which involves mucking with the internal database structures. The disadvantage is that it provides no internal database controls, and thus isn’t ideal for things like credit card numbers, where you need to restrict even an administrator’s ability to see them. Database encryption for media protection is performed using the following techniques/technologies:

Media encryption: This includes full drive encryption or SAN encryption; the entire storage media is encrypted, and thus the database files are protected. Depending on the method used and the specifics of your environment, this may or may not provide protection for the data as it moves to other data stores, including archival (tape) storage. For example, depending on your backup agent, you may be backing up the unencrypted files, or the encrypted storage blocks. This is best suited for high-performance databases where the primary concern is physical loss of the media (e.g., a database on a managed SAN where the service provider handles failed drives potentially containing sensitive data). Any media encryption product supports this option.

External File/Folder Encryption: The database files are encrypted using an external (third party) file/folder encryption tool. Assuming the encryption is configured properly, this protects the database files from unauthorized access on the server, and those files are typically still protected as they are backed up, copied, or moved. Keys should be stored off the server, with no access provided to local accounts, which offers protection should the server be compromised and rooted by an external attacker (a short sketch of this principle follows below). Some file encryption tools, such as Vormetric or BitArmor, can also restrict access to the protected files based on application: only the database processes can access the files, so even if an attacker compromises the database’s user account, they will only be able to access the decrypted data through the database itself. File/folder encryption of the database files is a good option as long as performance is acceptable and keys can be managed externally. Any file/folder encryption tool supports this option (including Microsoft EFS), but performance needs to be tested, since there is wide variation among the different tools. Remember that any replication or distribution of data handled from within the database won’t be protected unless you also encrypt those destinations.
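Here is a minimal sketch of the “keys off the server” principle: the data key is fetched from a remote key manager at startup and held only in memory, so a stolen disk image, or a rooted local account, yields neither the key nor the credential used to obtain it. The key server URL, path, and token scheme are all hypothetical- the commercial tools and EFS each handle this differently.

```python
# Sketch: fetch the file-encryption data key from an external key
# manager at startup; nothing key-related is ever written to local disk.
import os
import urllib.request

def fetch_data_key(key_server: str, auth_token: str) -> bytes:
    req = urllib.request.Request(
        f"{key_server}/keys/dbfiles",                     # hypothetical endpoint
        headers={"Authorization": f"Bearer {auth_token}"},
    )
    with urllib.request.urlopen(req) as resp:             # HTTPS, in practice
        return resp.read()

# The token is injected into the environment at deploy time, so local
# accounts and copied disk images contain neither key nor credential.
key = fetch_data_key("https://keys.example.internal", os.environ["KMS_TOKEN"])
```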
Native Database Object Encryption: Most current database management systems, such as Oracle, Microsoft SQL Server, and IBM DB2, include capabilities to encrypt either internal database objects (tables and other structures) or the data stores (files). This is managed from within the database, and keys are typically stored internally. Overall this is a good option in many scenarios, as long as performance meets requirements. Depending on the platform, you may be able to offload key management to an external key management solution. The disadvantage is that it is specific to each database platform, and isn’t always even available.

The decision on which option to choose depends on your performance requirements, threat model, existing architecture, and security requirements. Unless you have a high-performance system that exceeds the capabilities of file/folder encryption, I recommend you look there first. If you are managing heterogeneous databases, you will likely look at a third-party product over native encryption. In both cases, it’s very important to use external key management and not allow access by any local accounts.

The security of database encryption for media protection is greatly enhanced when combined with database activity monitoring. In this scenario the database content is protected from loss via encryption, and the internal data is protected against abuse by database activity monitoring. I’ve heard of this combination being used as a compensating control for PCI- the database files are encrypted to prevent loss, while database activity monitoring tracks all access to credit card numbers and generates alerts for unapproved access, such as a DBA running a SELECT query.
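To make that compensating control concrete, here is a toy version of the kind of rule a database activity monitoring product evaluates: flag privileged accounts reading the card number column. The account names, column name, and event format are all made up; real DAM products capture statements from network taps, agents, or audit streams rather than a function call.

```python
# Sketch: a single DAM-style rule - alert when a privileged account
# issues a SELECT touching the (hypothetical) card_number column.
import re
from typing import Optional

DBA_ACCOUNTS = {"sys", "sa", "dbadmin"}
CARD_SELECT = re.compile(r"\bselect\b.*\bcard_number\b", re.IGNORECASE | re.DOTALL)

def check_event(user: str, sql: str) -> Optional[str]:
    if user.lower() in DBA_ACCOUNTS and CARD_SELECT.search(sql):
        return f"ALERT: privileged account '{user}' read card data: {sql[:80]}"
    return None   # normal application access generates no alert

print(check_event("sa", "SELECT card_number, expiry FROM payments"))
```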


The Two Kinds Of Security Threats, And How They Affect Your Life

When we talk about security threats we tend to break them down into all sorts of geeky categories. Sometimes we use high-level terms like client-side, targeted attack, or web application vulnerability. Other times we dig in and talk about XSS, memory corruption, and so on. You’ll notice we tend to mix in vulnerabilities when we talk about threats, but when we do that, hopefully in our heads we’re following the proper taxonomy and actually thinking about that vulnerability being exploited, which is closer to a threat. Anyway, none of that matters. In security there are only two kinds of threats that affect us:

  • Noisy threats that break things people care about.
  • Quiet threats everyone besides security geeks ignores, because they don’t screw up anyone’s ability to get their job done or browse ESPN during lunch.

We get money for noisy threats, and get called paranoid freaks for trying to prevent quiet threats (which can still lose our organizations a boatload of money, but don’t interfere with the married CEO’s ability to flirt with the new girl in marketing over email). Compliance, spam, AV, and old-school network attacks are noisy threats. Data breaches (unless you get caught), web app attacks, virtualization security, and most internal stuff are quiet threats. Don’t believe me? Slice up your budget and see how much you spend preventing noisy vs. quiet threats. It’s often our own little version of security theater. And if you really want to understand a vertical market, one of the best things you can do is break out noisy vs. quiet for that market, and you’ll know what you’ll get money for. The problem is, noisy vs. quiet may bear little to no relationship to your actual risk and losses- but that’s just human nature.


Friday Summary – Post-Election

I was in Chicago this week for the TechTarget ISD event, giving a presentation on Information-Centric Security. Like most of the people who flew in from other parts of the country for this event, we were so focused on the election, and on getting out to vote before we flew in, that we completely missed the fact that Obama would be speaking about a mile from the Hyatt Regency at McCormick Place. Most of us simply forgot that this was Obama’s home, and that Grant Park would be the likely place for any speeches. Dave Mortman was kind enough to show Adam Dodge, Andy the IT Guy, and myself around, and take us to dinner at the Russian Tea Room off Adams Street. When we were done with dinner around 8:30, we wandered over to Michigan Avenue for some people watching. The crowd was just starting to build, with thousands of people walking down the street to the entrances of Grant Park. While it was early, there was no doubt about the outcome for the attendees at this point. The most amazing thing was the sense of energy and genuine elation in the crowd as they walked down the street- not the wild frenzy you get in New York when the Yankees win a pennant, but more a feeling of relief and joy than anything else. We booked it back to the Hyatt and watched the election results, and Obama’s subsequent speech, on television before calling it a night. I am very glad that I got a chance to be there, albeit on the periphery, as the mood & energy of that crowd was something I have never experienced before. All in all a very nice trip, and hats off to the TechTarget team for putting on such a well run, professional event. The only downsides of the whole week were my Southwest flight having to make an unscheduled stop due to running out of fuel (!!!!), and having to present opposite Captain Virtualization on Wednesday morning.

Oh, one minor point of interest. While I was at the Hyatt, I noticed that the hotel has a new revenue model: the Tel-evator. This is a little marketing television they are putting into the elevators to deliver targeted advertising to conference attendees while they ride to and from their rooms. Being a security guy, as well as someone who gets annoyed at marketing messages constantly shoved at me, I was thinking “how could I hack this?” In the one-minute ride I got as far as determining that these little devices are nothing more than laptops running Windows Vista, with content pushed over an 802.11 wireless connection, before I had to go to the conference. That night when returning to my room, I saw that someone else had the same idea: the Tel-evator was now at an MS-DOS prompt, running a script, before rebooting into Vista. Beaten to the punch, I guess. If you were the one who hacked the system, shoot me an email and let me know what you found.

It was a relatively quiet week on the security front, with no major disasters or announcements. And from what I hear, Comrade Mogull is alive and well.

Webcasts, Podcasts, and Conferences: Rich is giving a presentation at a conference in Moscow this week. I was in Chicago at the TechTarget ISD event giving a presentation on the Information-Centric Security Lifecycle.

Favorite Securosis Posts: Rich: Somehow he manages to work scales, skeletons, and Barbie into this discussion of the effect of publicly available personal information, in the future of privacy and politics post. Adrian: It’s long, but there is plenty of food for thought in the DAM: Event Collection Options post.
Favorite Outside Posts: Adrian: The original reports of Mike Rothman being MIA, and the subsequent rumored sighting of him at a Jimmy Buffett concert walking around in a giant foam parrot suit, are unfounded. Eyewitness accounts place him in Chicago this week at the ISD conference. While the parrot suit might have been more flattering than the eIQ polo shirt he was seen wearing, Mike’s health and well-being are no longer in doubt. Rich: No reason to dance around it; browser security is pretty bad. Jeremiah discusses a rational and pragmatic approach to addressing browser security issues from both the outside and the inside.

Top News: Obviously the big news this week is Obama winning the election, and his process of filling out his staff and devising an economic plan to turn the economy around. The economy is still plunging, and despite falling oil prices, stocks continue to fall. Craigslist comes of age. An interesting piece on Express Scripts receiving an extortion threat from unknown parties who breached their database. Just ran across this article on CNET about WPA being cracked; don’t know if it’s legit yet. [Pepper adds: Check out Glenn Fleishman’s analysis at Ars Technica]

Blog Comment of the Week: Tod on the Felon Database post: “You know that Felonspy.com is a joke, right? More specifically, it’s almost certainly political satire of the sex offender databases.” I do now, Tod- I do now.


“Felon” Database

Most of you probably have a friend like mine, someone who forwards every joke, video, and picture they find amusing to their entire friends list. They’re sometimes humorous, so I still look through all of the emails. Buried in the daily offering was a link for a site called FelonSpy that I found somewhat fascinating. It was kind of like a reality TV show: insipid, but just different enough that I had to check it out. First thing I have to mention is that the data is bogus. Click the ‘Search’ button a few times in a row with the same address and you will see that the graphs are random. I have felons appearing and then disappearing on raw BLM land down the road from me. And if you change the address often enough, you will see the same names and crimes appear over and over in different states. Whatever the real story is, the site’s explanation is bull$!^#, and makes me believe the entire site is bogus.

Still, if the data were real, do you think this would be a valuable tool? Would it help you with safety and security? Being someone who had a recent event change my approach to personal safety, this sort of thing is on my mind. Part of me thinks that this type of education helps people plan ahead and react to threats around them. But once it became obvious the data was bogus, I started thinking about the people I know in my area who have criminal backgrounds- and made the startling discovery that half of them are some of the nicest and most trustworthy people I know in the area! Some I don’t trust, but most I do, which is a slightly better percentage than when I meet random strangers in public. It seems to me this type of technology blindly creates a virtual scarlet letter of sorts, and is an unreliable indicator of good or bad. It probably does not help anyone be more secure- instead it lists events that feed paranoia and fear, while remaining inadequate for making any sort of valid assessment.


How the Death of Privacy and the Long Archive May Forever Alter Politics

Way back in November of 2006 I wrote a post on the impact of our electronic personas on the political process. I was thinking about rewriting the post, but after reviewing it I realized the situation is exactly the same two years later… if not a bit worse. As a generation raised on MySpace, FaceBook, and other social media starts becoming the candidates, rather than the electorate, I think we will see profound changes in our political process. I’m off in Moscow so I’m pre-posting this, and for all I know the election is in the midst of a complete meltdown right now. Here again, for your reading pleasure, is the text of the original post:

As the silly season comes to a close with today’s election (at least for, like, a week or so) there’s a change to the political process I’ve been thinking about a lot. And it’s not e-voting, election fraud, or the other issues we’ve occasionally discussed. On this site (and others) we’ve discussed the ongoing erosion of personal privacy. More of our personal information is publicly available, or stored in private databases unlocked with a $-shaped key, than society has ever experienced before. This combines with a phenomenon I call “The Long Archive”- where every piece of data, of value or not, is essentially stored for eternity (unless, of course, you’re in a disaster recovery situation). Archived web pages, blog posts, emails, newsgroup posts, MySpace profiles, FaceBook pages, school papers, phone calls, calendar entries, credit card purchases, Amazon orders, Google searches, and … Think about it. If only 2% of our online lives actually survives indefinitely, the mass of data is astounding.

What does this have to do with politics? The current election climate could be described as mass-media shit-slinging. Our current crop of elected officials, of either party, survives mostly on their ability to find crap on their opponents while hiding their own stinkers. Historically, positive electioneering is a relative rarity in the American political system. We, as a voting public, seem to prefer pristine Ken dolls we can relate to over issues-focused candidates. No, not all the time, but often enough that negative campaigning shows real returns. But the next generation of politicians is growing up online, with their entire lives stored on hard drives. From school papers, to medical records, to personal communications, to web activity, chat logs (kept by a “trusted” friend), and personal blogs filled with previously private musings- it’s all there. And no one knows for how long; not really. No one knows what will survive and what will fade, but all of it has the potential to be available for future opposition research. I’m a bit older, but there’s still an incredible archive of information out there on me, including some old newsgroup posts I’m not all that proud of (nothing crazy, but I am a bit of a geek). Maybe even remnants of ugly breakups with ex-girlfriends, or rants never meant for public daylight. Never mind my financial records (missed taxes one year, but did make up for it) and such. In short, there’s no way I could run for any significant office without an incredibly thick skin. Anyone who started high school after, say, 1997 is probably in an even more compromising position, and anyone in the MySpace/FaceBook groups is worse off still. With so much information, on so many people, there’s no way it won’t change politics. I see four main options:

  • We continue to look for “clean” candidates- thus those with limited to no online records. Only those who have disengaged from modern society, and are thus probably not fit for public leadership, will run for public office. The “Barbie and Ken” option.
  • We, as a society, accept that everyone has skeletons and everyone makes mistakes, and begin to judge candidates on their progression through those mistakes, or their ability to spin them in the media of the day. We still judge on personality over issues. The “Oprah/Dr. Phil” option.
  • We focus on candidates’ articulation of the issues, and place less of an emphasis on a perfect past or personality. The “issues-oriented” option.
  • We weigh all the crap on two big scales; whoever comes out slightly lighter, perhaps with a sprinkling of issues, wins. The “Scales of Shit” option.

Realistically we’ll see a combination of all the above, but my biggest concern is how this will affect the quality of candidates. We, as a society, already complain about a lack of good options. We’re limited to those with either a drive for power, or a desire for public good, so strong that they’re willing to peel open their lives in a public vivisection every election cycle. When every purchase you’ve ever made, every email, IM, or SMS, every blog post, blog comment, social bookmark, WhateverSpace page, public record, and medical record becomes open season, who will be willing to undergo such embarrassing scrutiny? Will anyone run for office for anything other than raw greed? Or will we, as a society, change the standards by which we judge our elected officials? I don’t know. But I do know society, and politics, will experience a painful transition as we truly enter the information society.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be unbiased and valuable to the user community, supplementing industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.