Securosis

Research

Cloud Data Security: Store (Rough Cut)

In our last post in this series, we covered the cloud implications of the Create phase of the Data Security Cycle. In this post we’re going to move on to the Store phase. Please remember that we are only covering technologies at a high level in this series on the cycle; we will run a second series on detailed technical implementations of data security in the cloud a little later.

Definition

Store is defined as the act of committing digital data to structured or unstructured storage (database vs. files). Here we map the classification and rights to security controls, including access controls, encryption, and rights management. I include certain database and application controls, such as labeling, in rights management – not just DRM. Controls at this stage also apply to managing content in storage repositories (cloud or traditional), such as using content discovery to ensure that data is in approved/appropriate repositories.

Steps and Controls

Control | Structured/Application | Unstructured
Access Controls | DBMS Access Controls; Administrator Separation of Duties | File System Access Controls; Application/Document Management System Access Controls
Encryption | Field Level Encryption; Application Level Encryption; Transparent Database Encryption | Media Encryption; File/Folder Encryption; Virtual Private Storage; Distributed Encryption
Rights Management | Application Logic; Tagging/Labeling | Tagging/Labeling; Enterprise DRM
Content Discovery | Cloud-Provided Database Discovery Tool; Database Discovery/DAM; DLP/CMP Discovery | Cloud-Provided Content Discovery; DLP/CMP Content Discovery

Access Controls

One of the most fundamental data security technologies, built into every file and database management system, and one of the most poorly used. In cloud computing environments there are two layers of access controls to manage – those presented by the cloud service, and the underlying access controls used by the cloud provider for their infrastructure.
It’s important to understand the relationship between the two when evaluating overall security – in some cases the underlying infrastructure may be more secure (no direct back-end access), whereas in others the controls may be weaker (a database with multi-tenant connection pooling).

DBMS Access Controls: Access controls within a database management system (cloud or traditional), including proper use of views vs. direct table access. Use of these controls is often complicated by connection pooling, which tends to anonymize the user between the application and the database. A database/DBMS hosted in the cloud will likely use the normal access controls of the DBMS (e.g., hosted Oracle or MySQL). A cloud-based database such as Amazon’s SimpleDB or Google’s BigTable comes with its own access controls. Depending on your security requirements, it may be important to understand how the cloud-based DB stores information, so you can evaluate potential back-end security issues.

Administrator Separation of Duties: Newer technologies implemented in databases to limit database administrator access. On Oracle this is called Database Vault, and on IBM DB2 I believe you use the Security Administrator role and Label Based Access Controls. When evaluating the security of a cloud offering, understand the capabilities to limit both front- and back-end administrator access. Many cloud services support administrator roles for clients, allowing you to define administrative roles for your own staff. Some providers also implement technology controls to restrict their own back-end administrators, such as isolating their database access. You should ask your cloud provider for documentation on what controls they place on their own administrators (and super-admins), and what data those administrators can potentially access.

File System Access Controls: Normal file access controls, applied at the file or repository level.
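The views-vs.-direct-table-access point is easy to see in miniature. Here’s a minimal sketch using Python’s built-in sqlite3 (the table and column names are invented for illustration); in a real DBMS you would grant the application role SELECT on the view rather than the base table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card_number TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', '4111111111111111')")

# Instead of allowing direct table access, expose a view that masks
# the sensitive column. The application role only ever queries the view.
conn.execute("""
    CREATE VIEW customers_masked AS
    SELECT name, '****' || substr(card_number, -4) AS card_number
    FROM customers
""")

row = conn.execute("SELECT * FROM customers_masked").fetchone()
print(row)  # ('Alice', '****1111')
```

Connection pooling complicates exactly this picture: if every application session connects as the same pooled database user, the DBMS can no longer distinguish which end user is behind the query, which is why these controls are so often underused.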
Again, it’s important to understand the differences between the file access controls presented to you by the cloud service and the access control implementation on the back end. There is an incredible variety of options across cloud providers, even within a single SPI tier – many of them completely proprietary to a specific provider. For the purposes of this model, we only include access controls for cloud-based file storage (IaaS) and the back-end access controls used by the cloud provider. Due to the increased abstraction, everything else falls into the Application and Document Management System category.

Application and Document Management System Access Controls: This category includes any access control restrictions implemented above the file or DBMS storage layers. In non-cloud environments this includes access controls in tools like SharePoint or Documentum. In the cloud, this category includes any content restrictions managed through the cloud application or service, abstracted from the back-end content storage. These are the access controls for any services that allow you to manage files, documents, and other ‘unstructured’ content. The back-end storage can consist of anything from a relational database to flat files to traditional storage, and should be evaluated separately. When designing or evaluating access controls, you are concerned first with what’s available to control your own user/staff access, and then with the back end, to understand who at your cloud provider can see what information. Don’t assume that the back end is necessarily less secure – some providers use techniques like bit splitting (combined with encryption) to ensure no single administrator can see your content at the file level, with strong separation of duties to protect data at the application layer.

Encryption

The most overhyped technology for protecting data, but still one of the most important.
Encryption is far from a panacea for all your cloud data security issues, but when used properly and in combination with other controls, it provides effective security. In cloud implementations, encryption may help compensate for issues related to multi-tenancy, public clouds, and remote/external hosting.

Application-Level Encryption: Collected data is encrypted by the application before being sent into a database or file system for storage. For cloud-based applications (e.g., public or private SaaS) this is usually the recommended option, because it protects the data from the user all the way down to storage. For added security, the encryption functions and keys can be separated from the application itself, which also limits the access of application administrators to sensitive data.

Field-Level Encryption: The database management system encrypts fields within a database, normally at the column level. In cloud implementations you will generally want to encrypt data at the application
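The application-level pattern can be sketched in a few lines: the application encrypts before anything reaches storage, so the storage layer only ever sees ciphertext. The toy keystream cipher below stands in for a real cipher such as AES – it illustrates where encryption happens, not how to do it securely:

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy keystream cipher standing in for a real cipher (use AES or
    # similar in practice). Shown only to illustrate the pattern.
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(key + nonce).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + nonce + stream).digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    return nonce + ct

def toy_decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    stream = hashlib.sha256(key + nonce).digest()
    while len(stream) < len(ct):
        stream += hashlib.sha256(key + nonce + stream).digest()
    return bytes(c ^ s for c, s in zip(ct, stream))

# The application encrypts *before* handing data to storage; the
# cloud database or file store only ever receives ciphertext.
key = secrets.token_bytes(32)      # ideally held outside the application
stored = toy_encrypt(key, b"4111111111111111")
print(toy_decrypt(key, stored))    # b'4111111111111111'
```

Keeping the key outside the application (as the post suggests) is what limits application administrators’ access to the sensitive data: they can see the stored blobs, but not the key needed to reverse them.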


XML Security Overview

As part of the interview process for our intern program, we asked candidates to prepare a couple slides and write a short blog post on a technical subject. Rich and I debated different subjects for the candidates to research and report on, but we both chose “XML Security”. It is a very broad subject that gave the candidates some latitude, and there was not too much research out there to read up on. It also happened to be a subject that neither Rich nor I had researched prior to the interviews. We did not want to bring biases to the subject, and we wanted to focus on presentation rather than content, to see where the candidates led us. This was not to be a full-blown research effort where we expected the candidate to take a month to dig into the subject, but rather a cursory effort to identify the highlights. We figured it would take between 2 and 10 hours, depending upon the candidate’s background.

Listening to the presentations by the candidates was fun, as we had no idea what they would focus on or what viewpoint they would present. Each brought a different vision of what constituted XML security. Some focused on one aspect of the problem space, such as web security. Some provided an academic overview of XML issues, while others offered depth on seemingly random aspects. All of the presentations were different from each other, and far different from what I would have created, plus some of their statements were counter to my understanding of XML security issues. But the quality of the research was not really what was important – rather how the candidates approached the task and communicated their findings.

I cannot share those presentations with you, but I found the subject interesting enough that I thought Securosis should provide some coverage on this topic, so I decided to go through the process myself. The slide deck is in the research library for you to check out if it interests you. It’s not comprehensive, but I think it covers the basics.
I probably would come in second if I had been part of the competition, so it’s lucky I have already been vetted. Per my Friday Summary comment, I may learn more from this process than the interns!


New Definition: Vendor Myopia

Vendor Myopia (ven.dor my.o.pi.a) n. Inability to perceive competitive objects clearly. Abnormality in judgment resulting from drinking one’s own kool-aid. Suspect reasoning due to lack of broader perspective or omission of external facts. Distant objects may appear blurred due to strong focus on one’s own widget. Perception that a new color and font define a new market. Symptoms may also include the sensation of being alone in a crowded space, or feelings of product-induced euphoria.

Happy Monday!


There Are No Trusted Sites: New York Times Edition

Continuing our seemingly endless series on “trusted” sites that are compromised and then used to attack visitors, this week’s parasitic host is the venerable New York Times. It seems the Times was compromised via their advertising system (a common theme in these attacks) and was serving up scareware over the weekend (for more on scareware, and how to clean it, see Dancho Danchev’s recent article at the Zero Day blog). I recently had to clean up some scareware myself on my in-laws’ computer, but fortunately they didn’t actually pay for anything.

Here are some of our previous entries in this series:

  • BusinessWeek
  • AMEX
  • Paris Hilton
  • McAfee

Don’t worry, there are plenty more out there – these are just a few that struck our fancy.


Google and Micropayment

For a security blog this is a little off topic, so I recommend you stop reading if you consider my fascination with payment processing tiresome. Do any of you remember Project Xanadu? It was a precursor to the World Wide Web, envisioned as a way to share documents and research. As I understand it, the project died from trying to realize too many good ideas at once, and collapsed under the weight of its expectations. One of the ideas that came out of this project was the concept of micro-payments. I have spoken with team members from this project during its various phases, and been told that a micro-payment engine was being designed during the mid-90s to accommodate content providers who demanded they be paid to make their research available. I never did review the code released in 1998, so this is pure hearsay, or urban legend, or whatever you want to call it.

Still, when word got out that we were working on a micro-payment engine at Transactor in 1997, there were warnings that people would not pay for content. In fact, the lesson seemed to be that much of the success of the web was due to the vast green fields of free information and community participation without cost. A lot has changed, but I still get that nagging feeling when I read about how Google’s proposed Micropayment System is going to help save publishers.

Personally, I don’t think it will work. Not for the publishers. Not when the competitors give quality information away for free. Not when most users are reluctant to even register, much less pay. But if a micropayment engine provides Google greater access to unique content, especially as it relates to newspapers, they win regardless. It becomes like Gmail in reverse. And on the flip side it extends the reach of their technology, establishing a financial relationship with everyday web users. Even if they don’t make a dime from sales commissions, it’s a brilliant idea because it promotes their existing business model.
I told them as much in 2005, when I went through the second most bizarre interview process of my career. They have been playing footsie with this product idea for a long time, and I have not figured out why they have been so slow to get a ‘beta’ product out there. There is room for competition and innovation in payment processing, but I remain convinced that micropayment has limited use cases, and news feeds are not one of them.


Friday Summary – September 11, 2009

We announced the launch of the Contributing Analyst and Intern program earlier this week, with David Mortman and David Meier filling these respective roles. I think the very first Securosis blog comment I read was from Windexh8r (Meier), and Chris Hoff introduced me to David Mortman a couple years ago at RSA, so I am fortunately familiar with both our new team members. We are lucky to have people with such solid backgrounds wanting to join our open source research firm. Rich and I put up a blog post a few weeks ago and said, “Hey, want to learn how to be an analyst?” Far more people signed up than we expected, and the quality and depth of security experience of our applicants shocked us. That, and why they want to be analysts.

I never considered being an analyst at any point in my career prior to joining Securosis. There were periods where I was not quite sure which path I would take in my line of work, so I experimented with several roles during my career (CTO, CIO, VP, Architect). It was a classic case of “the grass is always greener” – I was always looking for a different challenge, and never quite satisfied. But here it is, some 15 months after joining Rich, and I am enjoying the role of analyst. To tell you the truth, I am not really sure what the role is exactly, but I am having fun. This is not exactly a traditional analysis and research firm, so if you asked me the question “What does an analyst do?”, my answer would be very different than you’d get from an analyst at one of the big firms.

A couple weeks ago when Rich and I decided to start the contributing analyst and intern positions, we understood we would have to train others to do what we do. Rich and I kind of share a vision for what we want to do, so there’s not a lot of discussion. Now we have to articulate and exemplify what we do for others. It dawned on me that I have been learning from Rich by watching.
I had the research side down cold before I joined, but being on the receiving end of the briefings provides a stark contrast between vendor and analyst. I have been part of a few hundred press & analyst meetings over the years, and I understood my role as CTO was to describe what was new, why it mattered, and how it made customers happy. I never considered what it took to be on the other side of the table. To be harsh about it, I assumed most of the press and analysts were neither technical nor fully versed in customer issues – because they had never been in the trenches, they lacked the perspective needed to help either vendors or customers in a meaningful way. They could sniff out newsworthy items, but not why they mattered to the buyers.

Working with Rich dispelled this myth. The depth and breadth of information we have access to is staggering. Plus Rich as an analyst possesses both the technical proficiency and the same drive (passion) to learn which good software developers and security researchers possess. Grasping the technology, product, and market, then communicating how the three relate, is a big part of what we do. And perhaps most importantly, he has the stomach to tell people the truth that their baby is ugly. Anyway, this phase of Securosis development is going to be good for me, and I will probably end up learning as much or more than our new team members. I look forward to the new dimension David and David will bring.

And with that, here is the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich was quoted in SC Magazine on Trustwave’s acquisition of DLP vendor Vericept.
  • Rich spoke last week at the Phoenix OWASP chapter.

Favorite Securosis Posts

  • Rich: My first rough cut post on data security in the cloud. I had another halfway finished, before our blog software ate it. I got bit in the aaS by our SaaS.
  • Adrian: I have been wanting to talk about Format and Datatype Preserving Encryption for the last three months and finally got the chance to finish the research.

Other Securosis Posts

  • Say Hello to the New (Old) Guys
  • Data Protection Decisions Seminar in DC next week!
  • Critical MS Vulnerabilities – September 2009
  • Cloud Data Security Cycle: Create (Rough Cut)

Project Quant Posts

  • Project Quant Survey Results and Analysis
  • Raw Project Quant Survey Results

Favorite Outside Posts

  • Adrian: Bruce Schneier’s post on File Deletion highlights the issues around data retention in Cloud/SaaS environments.
  • Rich: Amrit Williams and Peter Kyper on the state of the security industry.

Top News and Posts

  • Critical Microsoft Vulnerabilities grab the headlines this week.
  • Ryan Naraine’s update on one of the vulnerabilities.
  • Some defenses for the TCP DoS vulnerabilities, posted at Dark Reading.
  • Ignoring the article’s hype angle, cross-VM hacking is interesting research, even if unrealistic.
  • Government to accept Yahoo, Google, and PayPal credentials.
  • Holy hackers, Batman, it’s full of holes. You know, holey.
  • Nice post on Ars Technica on anonymization and data obfuscation.
  • Trustwave acquires Vericept.
  • iPhone 3.1 anti-phishing seems to be working (or not) oddly.
  • Firefox will now check your Flash version, which is pretty darn awesome and should be in every browser.
  • Court allows woman to sue bank after her account is leeched. Expect to see more of this, since this sort of crime is dramatically increasing.
  • Ever travel? Check out everything the TSA stores about you.

Blog Comment of the Week

This week’s best comment comes from pktsniffer in response to Format and Datatype Preserving Encryption:

Your right on the money. We had Voltage in recently to give us their encryption pitch. It was the ease of deployment using FFSEM that they were ‘selling’. I too have concerns regarding the integrity of the encryption but from an ease


Data Protection Decisions Seminar in DC next week!

Rich and I are going to be at TechTarget’s Washington DC Data Protection Decisions Seminar on September 15th. We will be presenting on the following subjects:

  • Pragmatic Data Security
  • Database Activity Monitoring
  • Understanding and Selecting a DLP Solution
  • Data Encryption

It is being held at the Sheraton National in Arlington. If you are interested in attending, there is more information on the TechTarget site. Heck, I even think you earn CPE credits for listening. While it’s going to be a brief stay for both of us, let us know if you’re in town so we can catch up.


Say Hello to the New (Old) Guys

A little over a month ago we decided to try opening up an intern and Contributing Analyst program. Somewhat to our surprise, we ended up with a bunch of competitive submissions, and we’ve been spending the past few weeks performing interviews and running candidates through the wringer. We got all mean and even made them present some research on a nebulous topic, just to see what they’d come up with. It was a really tough decision, but we decided to go with one intern and one Contributing Analyst.

David Meier, better known to most of you as Windexh8r, starts today as the very first Securosis intern. Dave was a very early commenter on the blog, has an excellent IT background, and helped us create the ipfw firewall rule set that’s been somewhat popular. He blogs over at Security Stallions, and we’re pretty darn excited he decided to join us. He’s definitely a no-BS kind of guy who loves poking holes in things and looking for unique angles of analysis. We’re going to start hazing him as soon as he sends the last paperwork over (with that liability waiver). We’re hoping he’s not really as good as we think, or we’ll have to promote him and find another intern to beat.

David Mortman, the CSO-in-Residence of Echelon One and a past contributor to this blog, is joining us as our first Contributing Analyst. David’s been a friend for years now, and we even split a room at DefCon. Since I owed David a serious favor after he covered the blog for me while I was out last year for my shoulder surgery, he was sort of a shoo-in for the position. He has an impressive track record in the industry, and we are extremely lucky to have him. You might also know David as the man behind the DefCon Security Jam, and he’s a heck of a bread baker (and cooker of other things, but I’ve only ever tried his bread).

Dave and David (yeah, we know) can be reached at dmeier@securosis.com and dmortman@securosis.com (and all their other email/Twitter/etc. addresses).
You’ll start seeing them blogging and participating in research over the next few weeks. We’ve gone ahead and updated their bios on our About page, and listed any conflicts of interest there. (Interns and Contributing Analysts are included under our existing NDAs and confidentiality agreements, but will be restricted from activities, materials, and coverage of areas where they have conflicts of interest.)


Format and Datatype Preserving Encryption

That ‘pop’ you heard was my head exploding after trying to come to terms with this proof of why Format Preserving Encryption (FPE) variants are no less secure than AES. I admitted defeat many years ago as a cryptanalyst because, quite frankly, my math skills are nowhere near good enough, so I must rely on the experts in this field to validate the claim. Still, I am interested in FPE because it was touted as a way to save all sorts of time and money with database encryption: unlike other ciphers, if you encrypt a small number, you get a small number or hex value back. This means you do not need to alter the database to handle some big honkin’ string of ciphertext. While I am not able to tell you whether this type of technology really provides ‘strong’ cryptography, I can tell you about some of the use cases, how you might derive value, and things to consider if you investigate the technology. And as I am getting close to finalizing the database encryption paper, I wanted to post this information before closing that document for review.

FPE is also called Datatype Preserving Encryption (DPE) and Feistel Finite Set Encryption Mode (FFSEM), amongst other names. Technically there are many labels to describe subtle variations in the methods employed, but in general these encryption variants attempt to retain the same size, and in some cases data type, as the original data being encrypted. For example, encrypt ‘408-555-1212’ and you might get back ‘192807373261’ or ‘a+3BEJbeKL7C’. The motivation is to provide encrypted data without the need to change all of the systems that use that data: database structure, queries, and all the application logic. The business justification for this type of encryption is a little foggy.
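To make the “same size and type in, same size and type out” idea concrete, here is a toy Feistel construction over digit strings, loosely in the spirit of FFSEM. It is emphatically not secure or standards-compliant – the round count and round function are invented for illustration – but it shows how a cipher can emit ciphertext with the same length and character set as the plaintext:

```python
import hashlib
import hmac

ROUNDS = 10  # arbitrary for this toy; real designs justify their round count

def _round_fn(key: bytes, rnd: int, value: int, width: int) -> int:
    # Pseudorandom function for one Feistel round, reduced mod 10^width
    # so the result stays in the digit domain.
    msg = rnd.to_bytes(1, "big") + str(value).zfill(width).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str) -> str:
    # Balanced Feistel over an even-length digit string.
    half = len(digits) // 2
    L, R = int(digits[:half]), int(digits[half:])
    for rnd in range(ROUNDS):
        L, R = R, (L + _round_fn(key, rnd, R, half)) % (10 ** half)
    return str(L).zfill(half) + str(R).zfill(half)

def fpe_decrypt(key: bytes, digits: str) -> str:
    half = len(digits) // 2
    L, R = int(digits[:half]), int(digits[half:])
    for rnd in reversed(range(ROUNDS)):
        L, R = (R - _round_fn(key, rnd, L, half)) % (10 ** half), L
    return str(L).zfill(half) + str(R).zfill(half)

key = b"demo-key-not-for-production"
ct = fpe_encrypt(key, "4085551212")
print(ct)  # ten digits out for ten digits in
assert fpe_decrypt(key, ct) == "4085551212"
```

Because the ciphertext is still a ten-digit string, an existing column definition, query set, and downstream application can consume it unchanged – which is the entire pitch.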
The commonly cited reasons you would consider FPE/DPE are: a) changing the database or other storage structures is impossibly complex or cost prohibitive, or b) changing the applications that process sensitive data would be impossibly complex or cost prohibitive. Therefore you need a way to protect the data without requiring these changes. The cost you are looking to avoid is changing your database and application code, but on closer inspection this savings may be illusory. Changing the database structure is, for most, a simple alter table command, along with changes to a few dozen queries and some data cleanup, and you are done. For most firms that’s not so dire. And regardless of which form of encryption you choose, you will need to alter application code somewhere. The question becomes whether an FPE solution will allow you to minimize application changes as well. If the database changes are minimal and FPE requires the same application changes as non-FPE encryption, there is not a strong financial incentive to adopt.

You also need to consider tokenization, wherein you remove the sensitive data completely – for example by replacing credit card numbers with tokens which each represent a single CC#. As the token can be of an arbitrary size and value to fit in with the data types you already use, it has most of the same benefits as FPE in terms of data storage. Most companies would rather get rid of the data entirely if they can, which is why many firms we speak with are seriously investigating, or already plan to adopt, tokenization. It costs about the same, and there is less risk if credit cards are removed entirely.

Two vendors currently offer products in this area: Voltage and Protegrity (there may be more, but I am only aware of these two). Each offers several different variations, but for the business use cases we are talking about they are essentially equivalent.
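The tokenization alternative described above can be sketched in a few lines – a hypothetical token vault that swaps a card number for a random token of the same shape, keeping the real value in a protected lookup:

```python
import secrets

class TokenVault:
    """Toy token vault: replaces a card number with a random token of
    the same length and character set, and keeps the real value in a
    protected lookup table (in practice, a hardened, audited service)."""

    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        # Same size and type as the original, so existing schemas and
        # downstream systems need no changes -- the FPE-like benefit.
        token = "".join(secrets.choice("0123456789") for _ in pan)
        while token in self._token_to_pan:
            token = "".join(secrets.choice("0123456789") for _ in pan)
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
assert len(t) == 16 and t.isdigit()
assert vault.detokenize(t) == "4111111111111111"
```

The key difference from FPE is that the token has no mathematical relationship to the card number at all – only the vault can map one to the other, so systems holding only tokens hold nothing sensitive.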
In the use case above, I stressed data storage as the most frequently cited reason to use this technology. Now I want to talk about another real-life use case, focused on moving data, that is a little more interesting and appropriate. You may remember a few months ago when Heartland and Voltage produced a joint press release regarding deployment of Voltage products for end-to-end encryption. What I understand is that the Voltage technology being deployed is an FPE variant, not one of the standard implementations of AES. Sathvik Krishnamurthy, president and chief executive officer of Voltage, said “With Heartland E3, merchants will be able to significantly reduce their PCI audit scope and compliance costs, and because data is not flowing in the clear, they will be able to dramatically reduce their risks of data breaches.”

The reason I think this is interesting, and why I was reviewing the proof above, is that this method of encryption is not on the PCI’s list of approved ‘strong’ cryptography ciphers. I understand that NIST is considering the suitability of the AES variant FFSEM (pdf) as well as DTP (pdf) encryption, but they are not approved at this time. And Voltage submitted FFSEM, not FPE. Not only was I a little upset at letting myself be fooled into thinking that Heartland’s breach was accomplished through the same method as Hannaford’s – which we now know is false – but also for taking the above quote at face value. I do not believe that the network outside of Heartland comes under the purview of the PCI audit, nor would the FPE technology be approved if it did. It’s hard to imagine this would greatly reduce their PCI audit costs unless their existing systems left the data open to most internal applications and needed a radical overhaul. That said, the model Voltage is prescribing appears to be ideally suited for this technology: moving sensitive data securely across multi-system environments without changing every node.
For data encryption to address end-to-end issues in Hannaford and similar types of breach responses, FPE would allow all of the existing nodes to continue to function along the chain, passing encrypted data from POS to payment processor. It does not require


Cloud Data Security Cycle: Create (Rough Cut)

Last week I started talking about data security in the cloud, and I referred back to our Data Security Lifecycle from 2007. Over the next couple of weeks I’m going to walk through the cycle and adapt the controls for cloud computing. After that, I will dig in deep on implementation options for each of the potential controls. I’m hoping this will give you a combination of practical advice you can implement today, along with a taste of potential options that may develop down the road.

We do face a bit of a chicken-and-egg problem with this series, since some of the technical details of controls implementation won’t make sense without the cycle, but the cycle won’t make sense without the details of the controls. I decided to start with the cycle, and will pepper in specific examples where I can to help it make sense. Hopefully it will all come together at the end. In this post we’re going to cover the Create phase:

Definition

Create is defined as generation of new digital content, either structured or unstructured, or significant modification of existing content. In this phase we classify the information and determine appropriate rights. This phase consists of two steps – Classify and Assign Rights.

Steps and Controls

Control | Structured/Application | Unstructured
Classify | Application Logic; Tag/Labeling | Tag/Labeling
Assign Rights | Label Security | Enterprise DRM

Classify

Classification at the time of creation is currently either a manual process (most unstructured data), or handled through application logic. Although the potential exists for automated tools to assist with classification, most cloud and non-cloud environments today classify manually for unstructured or directly-entered database data, while application data is automatically classified by business logic. Bear in mind that these are controls applied at the time of creation; additional controls such as access control and encryption are managed in the Store phase.
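Classification through application logic can be as simple as tagging records at creation time. A hypothetical sketch (the field names, patterns, and tag values are invented for illustration; a real implementation would add checks like Luhn validation):

```python
import re

# Looks like a 15- or 16-digit card number once separators are removed.
CARD_PATTERN = re.compile(r"^\d{15,16}$")

def classify(record: dict) -> dict:
    """Tag a record at creation time based on field content and names."""
    tags = set()
    for field, value in record.items():
        if isinstance(value, str) and CARD_PATTERN.match(value.replace("-", "")):
            tags.add("pci")
        if field in ("ssn", "salary"):
            tags.add("confidential")
    record["_tags"] = sorted(tags)  # the labels follow the data into storage
    return record

rec = classify({"name": "Alice", "card": "4111-1111-1111-1111"})
print(rec["_tags"])  # ['pci']
```

The point of the sketch is the timing: the tag is attached when the content is created, so the Store phase controls (access restrictions, encryption) have something to key off.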
There are two potential controls:

Application Logic: Data is classified based on business logic in the application. For example, credit card numbers are classified as such based on field definitions and program logic. Generally this logic is based on where data is entered, or via automated analysis (keyword or content analysis).

Tagging/Labeling: The user manually applies tags or labels at the time of creation – e.g., manually tagging via drop-down lists or open fields, manual keyword entry, suggestion-assisted tagging, and so on.

Assign Rights

This is the process of converting the classification into rights applied to the data. Not all data necessarily has rights applied, in which case security is provided through additional controls during later phases of the cycle. (Technically rights are always applied, but in many cases they are so broad as to be effectively non-existent.) These are rights that follow the data, as opposed to access controls or encryption which, although they protect the data, are decoupled from its creation. There are two potential technical controls here:

Label Security: A feature of some database management systems and applications that adds a label to a data element – such as a database row, column, or table, or file metadata – classifying the content in that object. The DBMS or application can then implement access and logical controls based on the data label. Labels may be applied at the application layer, but only count as assigning rights if they also follow the data into storage.

Enterprise Digital Rights Management (EDRM): Content is encrypted, and access and use rights are controlled by metadata embedded with the content. The EDRM market has been somewhat self-limiting due to the complexity of enterprise integration and of assigning and managing rights.

Cloud SPI Tier Implications

Software as a Service (SaaS)

Classification and rights assignment are completely controlled by the application logic implemented by your SaaS provider.
Typically we see Application Logic, since that’s a fundamental feature of any application – SaaS or otherwise. When evaluating your SaaS provider you should ask how they classify sensitive information and later apply security controls, or whether all data is lumped together into a single monolithic database (or flat files) without additional labels or security controls to prevent leakage to administrators, attackers, or other SaaS customers. In some cases, various labeling technologies may be available. You will, again, need to work with your potential SaaS provider to determine if these labels are used only for searching/sorting data, or if they also assist in the application of security controls.

Platform as a Service (PaaS)

Implementation in a PaaS environment depends completely on the available APIs and development environment. As with internal applications, you will maintain responsibility for how classification and rights assignment are managed. When designing your PaaS-based application, identify potential labeling/classification APIs you can integrate into program logic. You will need to work with your PaaS provider to understand how they can implement security controls at both the application and storage layers – for example, it’s important to know if and how data is labeled in storage, and if this can be used to restrict access or usage (business logic).

Infrastructure as a Service (IaaS)

Classification and rights assignments depend completely on what is available from your IaaS provider. Here are some specific examples:

Cloud-based database: Work with your provider to determine if data labels are available, and with what granularity. If they aren’t provided, you can still implement them as a manual addition (e.g., a row field or segregated tables), but understand that the DBMS will not enforce the rights automatically, and you will need to program management into your application.

Cloud-based storage: Determine what metadata is available.
Many cloud storage providers don't modify files, so anything you define in an internal storage environment should work in the cloud. The limitation is that the cloud provider won't be able to tie access or other security controls to the label, which is sometimes an option with document management systems. Enterprise DRM, for example, should work fine with any cloud storage provider.

This should give you a good idea of how to manage classification and rights assignment across the cloud service models.
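Circling back to the IaaS database case: implementing labels as a manual row field, enforced by your own application because the cloud DBMS won't enforce them, might look like the following sketch. The sensitivity labels and clearance ranks are illustrative assumptions, and SQLite stands in for whatever cloud database you actually use:

```python
import sqlite3

# Illustrative clearance model; these labels and ranks are our
# assumptions, not any provider's convention.
CLEARANCE = {"public": 0, "internal": 1, "confidential": 2}

def fetch_rows(conn, user_clearance):
    # The DBMS does not enforce the label, so application logic filters
    # rows to those at or below the caller's clearance level.
    allowed = [label for label, rank in CLEARANCE.items()
               if rank <= CLEARANCE[user_clearance]]
    placeholders = ",".join("?" * len(allowed))
    return conn.execute(
        "SELECT data FROM records WHERE sensitivity IN (%s)" % placeholders,
        allowed,
    ).fetchall()

# Manual label: a plain 'sensitivity' column added alongside the data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (data TEXT, sensitivity TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)", [
    ("press release", "public"),
    ("org chart", "internal"),
    ("card numbers", "confidential"),
])
```

The design choice to note: every code path that reads `records` must go through a gate like `fetch_rows`, because nothing at the storage layer stops a direct query from returning confidential rows.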

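For unstructured cloud storage, where the provider stores your files untouched but won't enforce labels, one minimal pattern is to carry the classification as a metadata sidecar and enforce it in your own application. The `.meta.json` naming and the `classification` key are our assumptions, not a provider convention; a local directory stands in for a cloud bucket:

```python
import json
import tempfile
from pathlib import Path

def put_labeled(base, name, content, classification):
    # Write the object itself, unmodified, plus a sidecar carrying the label.
    (base / name).write_bytes(content)
    (base / (name + ".meta.json")).write_text(
        json.dumps({"classification": classification}))

def get_label(base, name):
    # The storage layer never reads this label; enforcement is up to
    # the calling application.
    meta = json.loads((base / (name + ".meta.json")).read_text())
    return meta["classification"]

# Local stand-in for a cloud storage bucket.
bucket = Path(tempfile.mkdtemp())
put_labeled(bucket, "q3-report.txt", b"quarterly numbers", "confidential")
```

Because the label travels next to the file rather than inside it, it survives a copy to any provider that stores objects verbatim, which is exactly the property the article describes.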