Security Requirements for Electronic Medical Records

Although security is my chosen profession, I've been working in and around the healthcare industry for literally my entire life. My mother was (is) a nurse and I grew up in and around hospitals. I later became an EMT, then a paramedic, and still work in emergency services on the side. Heck, even my wife works in a hospital, and one of my first security gigs was analyzing a medical benefits system, while another was as a contract CTO for an early-stage startup in electronic medical records/transcription.

The value of moving to consistent electronic medical records is nearly incalculable. You would probably be shocked if you saw how we perform medical studies and analyze real-world medical treatments and outcomes. It's so bass-ackwards, considering all the tech tools available today, that the only excuse is insanity or hubris. I mean, there are approved drugs used in Advanced Cardiac Life Support whose medical benefits aren't even close to proven. Sometimes it's almost as much guesswork as trying to come up with a security ROI. There's literally a category of drugs that's pretty much, "well, as long as they are really dead this probably won't hurt, but it probably won't help either".

With good electronic medical records, accessible on a national scale, we'll gain an incredible ability to analyze symptoms, illnesses, treatments, and outcomes on a massive scale. It's called evidence-based medicine, and despite what a certain political party is claiming, it has nothing to do with the government telling doctors what to do. Unless said doctors are idiots who prefer not to make decisions based on science – not that your doctor would ever do that.

The problem is that while most of us personally don't have any interest in the x-rays of whatever object happened to embed itself in your posterior when you slipped and fell on it in the bathroom, odds are someone wouldn't mind uploading it… somewhere. Never mind insurance companies, potential employers, or that hot chick in the bar you've convinced those are just "love bumps", and you were born with them.

Securing electronic medical records is a nasty problem for a few reasons:

• They need to be accessible by any authorized medical provider in a clinical setting… quickly and easily, even when you aren't able to manually authorize that particular provider (like me when I roll up in an ambulance).
• To be useful on a personal level, they need to be complete, portable, and standardized.
• To be useful on a national level, they need to be complete, standardized, and accessible, yet anonymized.

While delving into specific technologies is beyond the scope of this post, there are specific security requirements we need to build into records systems to protect patient privacy, while enabling all the advantages of moving off paper. Keep in mind these recommendations are specific to electronic medical records (EMR) systems, also called CPR (Computerized Patient Records) systems – the systems with access to the main patient record, not every piece of IT that merely touches a record.

• Secure Authentication: You might call this one a no-brainer, but despite HIPAA we still see rampant reuse of credentials, and weak credentials, in many different medical settings. This is often for legitimate reasons, since many EMR systems are programmed like crap and are hard to use in clinical settings. That said, we have options that work, and any time a patient record is viewed (as opposed to adding information like test results or images) we need stronger authentication tied to a specific, vetted individual.
• Secure Storage: We're tired of losing healthcare records on lost hard drives or via hacking compromises of the server. Make it stop. Please. (Read all our other data security posts for some ideas.)
• Robust Logging and Activity Monitoring: When records are accessed, a full record of who did what, and when, needs to be kept. Some systems on the market do this, but not all of them. Also, these monitoring controls are easily bypassed by direct database access, which is rampant in the healthcare industry. These guys run massive amounts of shitty applications and rely heavily on vendor support, with big contracts and direct database access. That might be okay for certain systems, but not for the EMR.
• Anomaly Detection: Unusual records access shouldn't just be recorded, but must generate a security alert (which is generally a manual review process today). An example alert might be when someone in radiology views a record, but no radiological order was recorded, or that individual wasn't assigned to the case. (A sketch of such a rule appears at the end of this post.)
• Secure Exchange: I doubt our records will reside on a magical RFID implanted in our chests (since arms are easy to lose, in my experience) so we always have them with us. They will reside in a series of systems, which hopefully don't involve Google. Our healthcare providers will exchange this information, and it's possible no complete master record will exist unless some additional service is set up. That's okay, since we'll have collections of fairly complete records, with the closest thing to a master record likely (and somewhat unfortunately) managed by our insurance company. While we have some consistent formats for exchanging this data (HL7), there isn't any secure exchange mechanism. We'll need some form of encryption/DRM… preferably a national/industry standard.
• De-Identification: Once we collect national records (or use the data for other kinds of evidence-based studies), the data needs to be de-identified. This isn't just masking a name and SSN, since other information could easily enable inference attacks. But at a certain point, we may de-identify data so much that it blocks inference attacks, yet ruins the value of the data. It's a tough balance, which may result in tiers of data, depending on the situation. (A toy example of the tradeoff also appears at the end of this post.)

In terms of direct advice to those of you in healthcare: when evaluating an EMR system, I recommend you focus on evaluating authentication, secure storage, logging/monitoring, and anomaly detection/alerting first. Secure exchange and de-identification come into play when you start looking at sharing information.
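To make the radiology example concrete, here is a minimal sketch of such a detection rule. This is purely illustrative – the event structure, field names, and data sources are hypothetical stand-ins for an EMR's audit log, order system, and case assignments – but it shows why anomaly detection leans on the logging requirement above.

    # Hypothetical sketch: flag record views with no corresponding order
    # and no case assignment. Structures and names are illustrative only,
    # not from any real EMR product.
    from dataclasses import dataclass

    @dataclass
    class AccessEvent:
        user_id: str
        department: str
        patient_id: str
        action: str  # e.g., "view", "add_result"

    def is_anomalous(event, radiology_orders, case_assignments):
        """Alert on a radiology record view with no order and no assignment."""
        if event.department != "radiology" or event.action != "view":
            return False
        has_order = event.patient_id in radiology_orders
        assigned = (event.user_id, event.patient_id) in case_assignments
        return not (has_order or assigned)

    # Example: a radiology tech views a patient with no radiological order
    event = AccessEvent("tech42", "radiology", "patient-001", "view")
    print(is_anomalous(event, radiology_orders=set(), case_assignments=set()))  # True -> alert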
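And here is a toy sketch of the de-identification balance, under the same caveat that every field name is made up. Direct identifiers are dropped, the patient ID becomes a salted one-way token (so records can still be linked within a study), and quasi-identifiers like ZIP code and age are generalized to blunt inference attacks. Generalize further and inference gets harder, but the research value drops with it.

    # Illustrative only: de-identification is more than dropping name/SSN.
    # This strips direct identifiers and generalizes quasi-identifiers
    # (ZIP, age) that could enable inference attacks.
    import hashlib

    def de_identify(record, salt):
        out = dict(record)
        # Remove direct identifiers entirely
        for field in ("name", "ssn", "address"):
            out.pop(field, None)
        # Replace the patient ID with a salted one-way token for within-study linkage
        token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
        out["patient_token"] = token[:16]
        del out["patient_id"]
        # Generalize quasi-identifiers: 3-digit ZIP, 10-year age bands
        out["zip"] = record["zip"][:3] + "XX"
        decade = (record["age"] // 10) * 10
        out["age_band"] = f"{decade}-{decade + 9}"
        del out["age"]
        return out

    record = {"patient_id": "p-1001", "name": "Jane Doe", "ssn": "123-45-6789",
              "address": "1 Example St", "zip": "85004", "age": 47, "diagnosis": "J45.40"}
    print(de_identify(record, salt="per-study-secret"))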


Securing Cloud Data with Virtual Private Storage

For a couple of weeks I've had a tickler on my to-do list to write up the concept of virtual private storage, since everyone seems all fascinated with virtualization and clouds these days. Lucky for me, Hoff unintentionally gave me a kick in the ass with his post today on EMC's ATMOS. Not that he mentioned me personally, but I've had "baby brain" for a couple of months now and sometimes need a little external motivation to write something up. (I've learned that "baby brain" isn't some sort of lovely obsession with your child, but a deep-seated combination of sleep deprivation and continuous distraction.)

Virtual Private Storage is a term/concept I started using about six years ago to describe the application of encryption to protect private data in shared storage. It's a really friggin' simple concept many of you either already know, or will instantly understand. I didn't invent the architecture or application but, as foolish analysts are prone to do, coined the term to help describe how it worked. (Note that since then I've seen the term used in other contexts, so I'll be specific about my meaning.) Since then, shared storage has become "the cloud": internal shared storage is an "internal private cloud", while outsourced storage is some variant of "external cloud", which may be public or private. See how much simpler things get over time?

The concept of Virtual Private Storage is pretty simple, and I like the name since it ties in well with Virtual Private Networks, which are well understood and part of our common lexicon. With a VPN we secure private communications over a public network by encrypting and encapsulating packets. The keys aren't ever stored in the packets, but on the end nodes. With Virtual Private Storage we follow the same concept, but with stored data: we encrypt the data before it's placed into the shared repository, and only those who are authorized for access have the keys. The original idea was that if you had a shared SAN, you could buy a SAN encryption appliance and install it on your side of the connection, protecting all your data before it hits storage. You manage the keys and access, and not even the SAN administrator can peek inside your files. In some cases you can set it up so remote admins can still see and interact with the files, but not see the content (encrypt the file contents, but not the metadata).

A SaaS provider that assigns you an encryption key for your data, then manages that key, is not providing Virtual Private Storage. In VPS, only the external end-nodes which access the data hold the keys. To be more specific: as with a VPN, it's only private if only you hold your own keys. It isn't something that's applicable in all cloud manifestations, but it works well conceptually for shared storage (including cloud applications where you've separated the data storage from the application layer).

In terms of implementation there are a number of options, depending on exactly what you're storing. We've seen practical examples at the block level (e.g., a bunch of online backup solutions), inline appliances (a weak market now, but they do work well), software (file/folder), and the application level. Again, this is a pretty obvious application, but I like the term because it gets us thinking about properly encrypting our data in shared environments, and it ties well with another core technology we all use and love. And since it's Monday and I can't help myself, here's the obligatory double-entendre analogy.
If you decide to… "share your keys" at some sort of… "key party", with a… "partner", the… "sanctity" of your relationship can't be guaranteed and your data is "open".
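For the literal-minded, here's a minimal sketch of the VPS idea in Python, using the pyca/cryptography library. The upload step is a stub and the filename handling is made up; the point is simply that encryption happens on your side of the connection, and the key never leaves your end node.

    # Minimal sketch: encrypt before the data hits shared storage, so the
    # provider never sees plaintext or keys. Upload is left as a stub.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # stays on the end node, never uploaded
    aesgcm = AESGCM(key)

    def encrypt_for_shared_storage(plaintext: bytes, filename: str):
        nonce = os.urandom(12)
        # Bind the filename as associated data: metadata stays visible to
        # the storage admin, but the contents are opaque.
        ciphertext = aesgcm.encrypt(nonce, plaintext, filename.encode())
        return nonce, ciphertext

    nonce, blob = encrypt_for_shared_storage(b"private notes...", "notes.txt")
    # upload("notes.txt", nonce + blob)  # provider stores only ciphertext
    print(aesgcm.decrypt(nonce, blob, b"notes.txt"))  # only key holders can read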


Friday Summary – May 15, 2009

Securosis is a funny company. We have very different work objectives and time requirements compared to, say, a software company, and the work we do as analysts is way different than an IT admin or security job. We don't punch the clock, and we don't have bosses or corporate politics to worry about. We don't have a 'commute' per se, either, so all of the changes since I left my last company and joined have been for the better, and did not take long to adapt to.

Another oddity I recently learned is that our vacation days are allocated in a very unusual way: it turns out our holiday calendar is completely variable. Yes, it is based upon important external events, but only ones of quasi-religious significance. Last week I learned that all Star Trek premiere days are holidays, with a day off to 'clear your mind' and be ready to enjoy yourself. This week I learned we get half days off the afternoon of a Jimmy Buffett concert, and most of the day off following a Jimmy Buffett concert. You see the wisdom in this policy the morning after the show.

Last night Rich, his extended family, and I went to Cricket Pavilion for Buffett's only Phoenix show. I won't say how many of us actually packed into that tiny motor home for the trip down, in case someone from the rental company reads the blog, but let's say that on a hot summer afternoon it was a very cozy trip. And with something like 24 beers on ice per person, we were well prepared. This was my first Buffett concert and I really enjoyed it! We ended up going in late, so we were a long way from the stage, but that did not stop anyone from having a good time. I will be marking next year's holiday calendar when I learn his next local tour dates. As this is a Securosis holiday, today's summary will be a short one. And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
• Martin and Rich hit a major milestone with the 150th Network Security Podcast, which also hit the 500,000 download mark! Congratulations guys!

Favorite Securosis Posts
• Rich: Adrian's post, Open Invitation to the University of California at Berkeley IT Dept.
• Adrian: Rich's post on The Data Breach Triangle. Data security is not always about preventing the attack.

Favorite Outside Posts
• Adrian: Even though it came out last week, I just ran across Glenn Fleishman's post on securing home networks.
• Rich: The PaulDotCom post on SQL injection with sqlmap.

Top News and Posts
• The cost of patching.
• Adobe Reader JavaScript Vulnerability at CERT.
• DoD Official Charged With Handing Over Classified Data To China.
• Security updates by Apple.
• Nearly half of IT security budgets deemed insufficient. Only half? Really?
• Did you see that Obama spoke at the ASU graduation ceremony? Did you see that the opening act was Alice Cooper? Rock on!

Blog Comment of the Week
This week's best comment was from Martin McKeay in response to The Data Breach Triangle:

Perhaps 'access' would be a better term to use than 'exploit'. A malicious outsider needs an exploit to access the data, whereas a malicious insider usually has access to the data to begin with. You need the loot, a way to get the loot, and a way to escape with the loot when you've got it. Is there any such thing as a 'crime triangle'? I'm going to have to give this a bit more thought; I believe you have the right idea, but I think this somehow defines the data breach elements too narrowly. I haven't figured out exactly what leads me in that direction yet, but it will come to me.


Open Invitation to the University of California at Berkeley IT Dept.

You probably heard the news last week that hackers infiltrated restricted computer databases at Cal Berkeley, and that the personal information of 160,000 current and former students and alumni "may" have been stolen. The University says Social Security numbers, health insurance information, and non-treatment medical records dating back to 1999 were taken. Within that data set were 97,000 Social Security numbers, from both Berkeley and Mills College students who were eligible for medical treatment. I am going to make an educated guess that this was a database either for or located at Cowell Hospital, but there are very few other details available. That is not unusual in data breach cases (annoying, but understandable), and it is the reason I do not post comments on most data breaches.

This one is different. This is an offer to help UC Berkeley with their data security challenge. As a security professional and Berkeley alumnus, I want to offer my services to assist with security and product strategy to ensure this does not happen again. Free of charge. I am willing to help. This is a service Securosis provides: free strategic consultation for end users. Within reason, of course. So I am extending an open offer of assistance to the University.

In 2008, when I was still with my previous employer, we had a couple of meetings with IT staff members at UC Berkeley to discuss some of their security challenges and to see if our products were of interest to them. As most initial conversations go, we covered as much background about the environment and goals as we could. While the people we were speaking with were smart and highly educated, the questions they asked and the order of their priorities suggested they were naive about security. I do not want to provide too many details, out of respect for confidentiality, but the types of products they were reviewing were ones I would have assumed were already in place, and I expected their policies and procedures to be more evolved. I can even hear Adam Dodge in the back of my head saying "Well … education is a lot different than the private sector." He's right, and I get that, but for an organization that already had a data breach through a lost laptop in March 2005, I expected they would have gotten ahead of the curve. The liability here goes all the way up to the UC Regents, and this is a problem that needs to be addressed.

My goal is not to insult the IT staff at UC Berkeley. Just look at the Privacy Rights web site, or the Open Security Foundation, and you will see they are no better and no worse than any other university in the country. What pisses me off is that my alma mater, one of the best computer schools in the world, is below average in data security! Come on!!! This is Berkeley we are talking about. UCLA, OK, I could understand that. But Berkeley? They should be leading the nation in IT security, not serving as the new poster child for university data breaches. Berkeley has among its student body some of the smartest people in computer science, who gather there from all over the world to learn. When I was there, if you wanted to know the inner details of the UNIX kernel, say at 2:30 in the morning, there was someone in the lab who could answer your question. Want to know the smallest details of network architecture? The 'finger' daemon could point you to the guys who had all the answers. You might need to pull them away from Larn for a couple of minutes, but they knew scary levels of detail on every piece of software and hardware on the campus.

It is no different today, and they are clearly not leveraging the talent they have effectively. So go ahead. Ask for help. The university needs assistance with strategy and product suitability analysis, Securosis can help, and we will do it for free. Now I am going to have the Cal fight song in my head for the rest of the day.


Database Encryption: Option 2, Enforcing Separation of Duties

This is the next installment in what is now officially the longest running blog series in Securosis history: Database Encryption. In case you have forgotten, Rich provided the Introduction and the first section on Media Protection, and I covered the threat analysis portion, to help you determine which threats to consider when developing a database encryption strategy. You may want to peek back at those posts as a refresher if this subject interests you, as we like to use our own terminology. It's for clarity, not because we're arrogant. Really!

For what we are calling "database media protection", as described in Part 1, we covered the automatic encryption of data files or database objects through native encryption built into the database engine. Most of the major relational database platforms provide this option, which can be "seamlessly" deployed without modification to the applications and infrastructure that use the database. This is a very effective way to prevent recovery of data stored on lost or stolen media. And it is handy when you have renegade IT personnel who hate managing separate encryption solutions. Simple. Effective. Invisible. And only a moderate performance penalty. What more could you want?

If you have to meet compliance requirements, probably a lot more. For example:

• You need to secure credit card data within the database to comply with the PCI Data Security Standard.
• You are unable to catalog all of the applications that use sensitive data stored in your database, so you want to stop data leakage at the source.
• Your DBAs want to be 'helpful', but their ad-hoc adjustments break the accounting system.
• Your quality assurance team exports production data into unsecured test systems.
• Medical records need to be kept private.

While database media protection is effective for data at rest, it does not help enforce proper data usage. Requirements to prevent misuse by credentialed users or compromised user accounts, or to enforce separation of duties, are outside the scope of basic database encryption. For these reasons and many others, you decide you need to protect the data within the database through more granular forms of database encryption: table, column, or row level security. This is where the fun starts!

Encrypting for separation of duties is far more complex than encrypting for media protection; it involves protecting data from legitimate database users, requiring more changes to the database itself. It's still native database encryption, but this simple conceptual change creates exceptional implementation issues. It will be harder to configure, your performance will suffer, and you will break your applications along the way. Following our earlier analogy, this is where we transition from hanging picture hooks to a full home remodeling project. In this section we will examine how to employ granular encryption to support separation of duties within the database itself, and the problems this addresses. Then we will delve into the problems you will run into and what you need to consider before taking the plunge.

Before we jump in, note that each of these options is commonly referred to as a 'level' of encryption; this does not mean they offer more or less security, but rather identifies where encryption is applied within the database storage hierarchy (element, row, column, table, tablespace, database, etc.). There are three major encryption options that support separation of duties within the database.
Not every database vendor supports all of these options, but generally at least two of the three are available, and that is enough to accomplish the goals above. The common options are:

• Column Level Encryption: As the name suggests, column level encryption applies to all data in a single, specific column in a table. This column is encrypted using a single key that supports one or more database users. Subsequent queries to examine or modify encrypted columns must possess the correct database privileges, but must additionally provide credentials to access the encryption/decryption key. This could be as simple as passing a different user ID and password to the key manager, or as sophisticated as a full cryptographic certificate exchange, depending upon the implementation. By instructing the database to encrypt all data stored in a column, you focus on the specific data that needs to be protected. Column level encryption is the popular choice for compliance with PCI-DSS, restricting access to a very small group. The downside is that the column is encrypted as a whole, so every select requires the entire column to be decrypted, and every modification requires the entire column to be re-encrypted. This is the most commonly available option across relational database platforms, but it has the poorest performance. (A rough sketch of the concept follows this list.)
• Table / Tablespace Encryption: Table level encryption is where the entire contents of a table, or group of tables, are encrypted as one element. Much like full database encryption, this method protects all the data within the table, and is a good option when more than one column in the table contains sensitive information. While it does not offer fine-grained access control to specific data elements, it is a more efficient option than column encryption when multiple columns contain sensitive data, and requires fewer application and query modifications. Examples of when to use this technique include personally identifiable information grouped together – like medical records or financial transactions – and this is an appropriate approach for HIPAA compliance. Performance is manageable, and best when the sensitive tables can be fully segregated into their own tablespace or database.
• Field/Cell/Row Level Encryption, Label Security: Row level encryption is where a single row in a table is encrypted, and field or cell level encryption is where individual data elements within a database table are encrypted. These offer very fine-grained control over data access, but can be a management and performance nightmare. Depending upon the implementation, there might be one key used for all elements or a key for each row. The performance penalty is a sharp limitation, especially when selecting or modifying multiple rows. More commonly, separation of duties is supported by label security.
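To make the column-level idea concrete, here is a rough sketch using application-side encryption over SQLite. This is not how any vendor's native feature works internally – a real deployment would use the engine's built-in column encryption with an external key manager – but it demonstrates the separation of duties: database privileges alone return only ciphertext, while reading the protected column also requires the key.

    # Rough sketch: the key is held by the application/key manager, not the
    # DBA, so querying the table without it yields only ciphertext.
    import sqlite3
    from cryptography.fernet import Fernet

    card_key = Fernet(Fernet.generate_key())  # managed outside the database

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, card_number BLOB, amount REAL)")
    db.execute("INSERT INTO payments (card_number, amount) VALUES (?, ?)",
               (card_key.encrypt(b"4111111111111111"), 19.99))

    # DBA's view: database privileges alone return ciphertext
    print(db.execute("SELECT card_number FROM payments").fetchone()[0])

    # Authorized application: has both database privileges and the key
    row = db.execute("SELECT card_number FROM payments").fetchone()
    print(card_key.decrypt(row[0]))  # b'4111111111111111'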


The Network Security Podcast Hits Episode 150 and 500K Downloads

I first got to know Martin McKeay back when I started blogging. The Network Security Blog was one of the first blogs I found, and Martin and I got to know each other thanks to blogging. Eventually, we started the Security Blogger's Meetup together. After I left Gartner, Martin invited me to join him as a guest host on the Network Security Podcast, and it eventually turned into a permanent position. I've really enjoyed both podcasting, and getting to know Martin better as we moved from acquaintances to friends.

Last night was fairly monumental for the show and for Martin. We recorded episode 150, and a few hours later hit 500,000 total downloads. No, we didn't do anything special (since we're both too busy), but I think it's pretty cool that some security guy with a computer and a microphone could eventually reach tens of thousands of individuals, with hundreds of hours of recordings, based on nothing more than a little internal motivation. Congratulations Martin, and thanks for letting me participate.

Now on to the show: This is one of those good news/bad news weeks. On the bad side, Rich messed up and now has to retake an EMT refresher course, despite almost 20 years of experience. Yes, it's important, but boy does it hurt to lose two full weekends learning things you already know. On the upside this is, as you probably noticed from the title of the post, episode 150! No, we aren't doing a 12-hour podcast like Paul and Larry (of PaulDotCom Security Weekly) did, but we do have the usual collection of interesting security stories.

Network Security Podcast, Episode 150, May 12, 2009
Time: 38:18

Show Notes
• UC Berkeley loses 160K healthcare records.
• Most people think they will be hacked. Duh.
• Heartland spends $12.6M on breach response, with possibly half going to MasterCard fines.
• Rich debuts the Data Breach Triangle, which Martin improves.

Tonight's Music: Neko Case with People Got a Lotta Nerve. Who knew Neko Case had a podsafe MP3 available?


Project Quant: Draft Survey Questions

Hey folks,

While we aren't posting everything related to Project Quant here on the site, I will be putting up some major milestones. One of the biggies is developing a survey to gain a better understanding of how organizations manage their patching processes. I just completed my first rough draft of some survey questions over in the forums. The main goal is to understand to what degree people have a formal process, and how their processes are structured.

I consider this very rough and in definite need of some help. Please pop over to this thread in the forums and let me know what you think. In particular, I'm not sure I've actually captured the right set of questions, based on our priorities for the project (I know survey writing is practically an art form).

Once we lock it down we will use a variety of mechanisms to get the survey out there, and will follow it up with some focused interviews.


The Data Breach Triangle

I'd like to say I first became familiar with fire science back when I was in the Boulder County Fire Academy, but it really all started back in the Boy Scouts. One of the first things you learn when you're tasked with starting, or stopping, fires is something known as the fire triangle. Fire is a pretty fascinating process when you dig into it. It demonstrates many of the characteristics of life (consumption, reproduction, waste production, movement), but is just a nifty chemical reaction that's all sorts of fun when you're a kid with white gas and a lighter (sorry Mom). The fire triangle is a simple model used to describe the elements required for fire to exist: heat, fuel, and oxygen. Take away any of the three, and fire can't exist. (In recent years the triangle was updated to a tetrahedron, but since that would ruin my point, I'm ignoring it.) In wildland fires we create backburns to remove fuel, in structure fires we use water to remove heat, and with fuel fires we use chemical agents to remove oxygen.

With all the recent breaches, I came up with the idea of a Data Breach Triangle to help prioritize security controls. The idea is that, just like fire, a breach needs three elements. Remove any of them and the breach is prevented. It consists of:

• Data: The equivalent of fuel – information to steal or misuse.
• Exploit: The combination of a vulnerability and/or an exploit path that allows an attacker unapproved access to the data.
• Egress: A path for the data to leave the organization. It could be digital, such as a network egress, or physical, such as portable storage or a stolen hard drive.

Our security controls should map to the triangle, and technically only one side needs to be broken to prevent a breach. For example, encryption or data masking removes the data (depending a lot on the encryption implementation). Patch management and proactive controls prevent exploits. Egress filtering or portable device control prevents egress. This assumes, of course, that these controls actually work – which we all know isn't always the case. When evaluating data security I like to look for the triangle: will the controls in question really prevent the breach? That's why, for example, I'm a huge fan of DLP content discovery for data cleansing – you get to ignore a whole big chunk of expensive security controls if there's no data to steal. For high-value networks, egress filtering is a key control if you can't remove the data or absolutely prevent exploits (exploits being the toughest part of the triangle to manage). The nice bit is that exploit management is usually our main focus, but breaking the other two sides is often cheaper and easier. (A toy sketch of the model follows.)
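And because I can't resist, here's a toy sketch of the triangle as a prioritization check. Nothing here comes from a real tool; it just encodes the model's one rule: a breach is only possible while all three sides are intact.

    # Toy model of the Data Breach Triangle: break any one side and the
    # breach is prevented, just like removing a side of the fire triangle.
    SIDES = ("data", "exploit", "egress")

    def breach_possible(broken_sides: set) -> bool:
        """A breach requires all three sides to remain standing."""
        return not any(side in broken_sides for side in SIDES)

    # DLP content discovery removed the data from this system entirely
    print(breach_possible({"data"}))  # False: nothing left to steal
    # Controls exist, but none fully breaks a side
    print(breach_possible(set()))     # True: all three sides intact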


Consumer Protection and Software

CNET reported last week that the European Commission is proposing that consumer protection laws be applied to software. Specifically mentioning anti-virus and video game software, commissioners Viviane Reding and Meglena Kuneva have proposed that EU consumer protections for physical products be extended to software, in an effort to protect customers, implying that consumers would use more and buy more if the software were better. The proposal calls for "extending the principles of consumer protection rules to cover licensing agreements of products like software downloaded for virus protection, games, or other licensed content," according to the commissioners' agenda. "Licensing should guarantee consumers the same basic rights as when they purchase a good: the right to get a product that works with fair commercial conditions."

In reality, I am guessing some politician took notice that few in the voting public are for crappy software. Or perhaps they took notice that anti-virus software does not really stop malware, spyware, phishing, and viruses as advertised? Or perhaps they still harbor resentment over "ET: The Game"? Who knows.

I had to laugh at Business Software Alliance Director Francisco Mingorance's comment that "Digital content is not a tangible good and should not be subject to the same liability as toasters." He's right. If your toaster is mis-wired it could kill you. Or if you used it in the bathtub, for that matter. If people are not happy with a $45.00 piece of software, and no one died from its use, do you think anyone is going to prosecute? Sure, Alvin & the Chipmunks really sucked; caveat emptor!

Even if you should find a zealous prosecutor, if something goes wrong with the software, who will get the blame? The vendor, for producing the code? The customer, for the way they deployed, configured, and modified it? How would this work on an application stack, or in one of the cloud models? What if the software was fully functional to its point-in-time specification, but changes in the surrounding environment created a vulnerable condition? If anti-virus stops one virus but not another, should it be deemed defective? There is not enough time, money, or interest to address these questions, so the legislative effort is meaningless.

I appreciate the EC's frustration and admire them for wanting to do something about software quality and 'efficacy', but the proposal is not viable. Granted, there are a few software developers who take enough pride in their craft to build the best possible software, but most companies will continue to sell us the crappiest product we will still buy. The only people who will benefit are the lawyers who will be needed to protect their clients from liability; if you think EULAs are bad now, you have seen nothing yet! Do not be surprised if you see the software quality bandwagon rumble through Washington D.C. as well, but it will not make security software better, because you cannot effectively legislate software quality. Meaningful change will come when customers vote with their dollars.


Data Harvesting and Privacy

Someone has finally captured my vision of what a data-centric society without privacy rights looks like. This video is really funny … and scary. Law enforcement and drug companies have been doing this for years, and even if it is not public knowledge, many insurance companies are doing it as well. Orwell had no idea how deep the rabbit hole goes.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.