Securosis

Research

Proposed Internet Wiretapping Law Fundamentally Incompatible with Security

It’s been a while since I waded in on one of these government-related privacy thingies, but a report this morning from the New York Times reveals yet another profound and fundamental misunderstanding of how technology and security function. The executive branch is currently crafting a legislative proposal to require Internet-based communications providers to support wiretap capabilities in their products. I support law enforcement’s ability to perform lawful intercepts (with proper court orders), but requirements to alter these technologies to make interception easier will have unintended consequences on both the technical and international political levels. According to the article, the proposal has three likely requirements:

  • Communications services that encrypt messages must have a way to unscramble them.
  • Foreign providers that do business inside the United States must establish a domestic office capable of performing intercepts.
  • Developers of software that enables peer-to-peer communication must redesign their services to allow interception.

Here’s why those are all bad ideas:

To allow a communications service to decrypt messages, it will need an alternative decryption key (a master key). This means that anyone with access to that key has access to the communications. No matter how well the system is architected, this creates a single point of security failure within organizations and companies that don’t have the best security track record to begin with. That’s not FUD – it’s hard technical reality.

Requiring foreign providers to maintain interception offices in the US is more a political issue than a technical one, because once we require it, foreign governments will reciprocate and require the same of US providers. Want to create a new Internet communications startup? Better hope you get millions in funding before it becomes popular enough for people in other countries to use it.
And that you never need to correspond with a foreigner whose government is interested in their actions.

There are only three ways to enable interception in peer-to-peer systems: network mirroring, full redirection, or local mirroring with remote retrieval. Either you copy all communications to a central monitoring console (which either the provider or law enforcement could run), route all traffic through a central server, or log everything on the local system and provide law enforcement a means of retrieving it. Each option creates new opportunities for security failures, and each is likely to be detectable with some fairly basic techniques – thus creating the Internet equivalent of strange clicks on the phone lines, never mind killing the bad guys’ bandwidth caps.

Finally, policymakers need to keep in mind that once these capabilities are required, they are available to any foreign government – including all those pesky oppressive ones that don’t otherwise have the ability to compel US companies to change their products.

Certain law enforcement officials are positioning this as restoring their existing legal capability for intercept. But that statement isn’t completely correct – what they are seeking isn’t a restoration of the capability to intercept, but the creation of easier methods of intercept through back doors hard-coded into every communications system deployed on the Internet in the US. (I’d call it One-Click Intercept, but I think Amazon has a patent on that.) I don’t have a problem with law enforcement sniffing bad guys with a valid court order. But I have a serious problem with the fundamental security of my business tools being deliberately compromised to make their jobs easier.

The last quote in the article really makes the case: “No one should be promising their customers that they will thumb their nose at a U.S. court order,” Ms. Caproni said. “They can promise strong encryption. They just need to figure out how they can provide us plain text.”

Yeah.
That’ll work.
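The master-key problem can be sketched in a few lines. In this toy model (the names and the XOR “cipher” are purely illustrative – a real system would use proper asymmetric key wrapping), the session key protecting each message is wrapped once for the recipient and once for the provider’s escrow key, so anyone who obtains that single escrow key can read every message:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad style "cipher" standing in for real encryption.
    return bytes(a ^ b for a, b in zip(data, key))

def send(message: bytes, recipient_key: bytes, escrow_key: bytes) -> dict:
    # A fresh session key protects each message...
    session_key = secrets.token_bytes(len(message))
    return {
        "ciphertext": xor(message, session_key),
        # ...and is wrapped once for the recipient...
        "for_recipient": xor(session_key, recipient_key),
        # ...and once for the provider's master escrow key.
        "for_escrow": xor(session_key, escrow_key),
    }

def escrow_read(envelope: dict, escrow_key: bytes) -> bytes:
    # Whoever holds the single escrow key can read ANY message,
    # without ever touching a recipient's key.
    session_key = xor(envelope["for_escrow"], escrow_key)
    return xor(envelope["ciphertext"], session_key)

msg = b"wire $1M to account 42"
recipient_key = secrets.token_bytes(len(msg))
escrow_key = secrets.token_bytes(len(msg))
envelope = send(msg, recipient_key, escrow_key)
```

The point is structural, not cryptographic: the escrow key’s compromise surface is the entire communication stream, which is exactly the single point of failure described above.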

Attend the Securosis/SearchSecurity Data Security Event on Oct 26

We may not run our own events, but we managed to trick the folks at Information Security Magazine/SearchSecurity into letting us take over the content at the Insider Data Threats seminar in San Francisco. The reason this is so cool is that it allowed us to plan out an entire day of data-protection goodness, with a series of interlocked presentations that build directly on each other. Instead of a random collection from different presenters on different topics, all our sessions build together to provide deep, actionable advice. And did I mention it’s free? Mike Rothman and I will be delivering all the content, and here’s the day’s structure:

  • Involuntary Case Studies in Data Security: We dig into the headlines and show you how real breaches happen, using real names.
  • Introduction to Pragmatic Data Security: This session lays the foundation for the rest of the day by introducing the Pragmatic Data Security process and the major management and technology components you’ll use to protect your organization’s information.
  • Network and Endpoint Security for Data Protection: We’ll focus on the top recommendations for using network and endpoint security to secure the data, not just… um… networks and endpoints.
  • Quick Wins with Data Loss Prevention, Encryption, and Tokenization: This session shows the best ways to derive immediate value from three of the hottest data protection technologies out there.
  • Building Your Data Security Program: In our penultimate session we tie all the pieces together and show you how to take a programmatic approach, rather than merely buying and implementing a bunch of disconnected pieces of technology.
  • Stump the Analysts: We’ll close the day with a free-for-all battle royale, otherwise known as “an extended Q&A session”.

There’s no charge for the event if you qualify to attend – only a couple of short sponsor sessions and a sponsor area. Our sessions target the management level, but in some places we will dig deep into key technology issues.
Overall this is a bit of an experiment for both us and SearchSecurity, so please sign up and we’ll see you in SF!

Government Pipe Dreams

General Keith Alexander heads the U.S. Cyber Command and is the Director of the NSA. In prepared testimony today he said the government should set up a secure zone for itself and critical infrastructure, walled off from the rest of the Internet.

“You could come up with what I would call a secure zone, a protected zone, that you want government and critical infrastructure to work in that part,” Alexander said. “At some point it’s going to be on the table. The question is how are we going to do it.”

Alexander said setting up such a network would be technically straightforward, but difficult to sell to the businesses involved. Explaining the measure to the public would also be a challenge, he added.

I don’t think explaining it to the public would be too tough, but practically speaking this one is a non-starter. Even if you build it, it will only be marginally more secure than the current Internet. Here’s why:

The U.S. government currently runs its own private networks for managing classified information. For information of a certain classification, the networks and systems involved are completely segregated from the Internet. No playing Farmville on a SIPRnet-connected system. Extending this to the private sector is essentially a non-starter, at least without heavy regulation and a ton of cash.

Most of our critical infrastructure, such as power generation/transmission and financial services, also used to be on private networks. But – often against the advice of us security folks – due to various business pressures they’ve connected these to Internet-facing systems and created a heck of a mess. When you are allowed to check your email on the same system you use to control electricity, it’s hard not to get hacked. When you put Internet-facing web applications on top of back-end financial servers, it’s hard to keep the bad guys from stealing your cash.
Backing out of our current situation could probably only happen with onerous legislation and government funding. And even then, training the workforces of those organizations not to screw it up and reconnect everything to the Internet again would probably be an even tougher job. Gotta check that Facebook and email at work.

If they pull it off, more power to them. From a security perspective isolating the network could reduce some of our risk, but I can’t really imagine the disaster we’d have to experience before we could align public and private interests behind such a monumental change.

New Paper (+ Webcast): Understanding and Selecting a Tokenization Solution

Around the beginning of the year Adrian and I released our big database encryption paper: Understanding and Selecting a Database Encryption or Tokenization Solution. We realized pretty quickly there was no way we could do justice to tokenization in that paper, so we are now excited to release Understanding and Selecting a Tokenization Solution. In this paper we dig in and cover all the major ins and outs of tokenization: how it works, why you might want to use it, architectural and integration options, and key selection criteria. We also include descriptions of three major use cases… with pretty architectural diagrams.

This was a fun project – the more we dug in, the more we learned about the inner workings of these systems and how they affect customers. We were shocked that such a seemingly simple technology requires all sorts of design tradeoffs, and by the different approaches taken by each vendor.

In support of this paper we are also giving a webcast with the sponsor/licensee, RSA. The webcast is September 28th at 1pm ET, and you can register. The content was developed independently of sponsorship, using our Totally Transparent Research process. You can download the PDF directly here, and the paper is also available (without registration) from RSA. Since they were nice enough to help feed my kid without mucking with the content, please pay them a visit to learn more about their offerings.
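To make the core idea concrete, here is a minimal sketch of the vault model most tokenization systems share (the class and field names are my own illustration, not any vendor’s design): a random, format-preserving token replaces the card number everywhere downstream, while the real value lives only in the vault:

```python
import secrets

class TokenVault:
    """Toy token vault: swaps a card number (PAN) for a random,
    format-preserving token and keeps the mapping server-side."""

    def __init__(self):
        self._pan_to_token = {}
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Same PAN always yields the same token, so downstream systems
        # (analytics, loyalty, etc.) can still join on it.
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        while True:
            # Preserve the last four digits so receipts and customer
            # service still work; randomize the rest.
            token = ("".join(str(secrets.randbelow(10))
                             for _ in range(len(pan) - 4)) + pan[-4:])
            if token != pan and token not in self._token_to_pan:
                break
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the (heavily protected) vault can reverse the mapping.
        return self._token_to_pan[token]
```

The design tradeoffs the paper covers start exactly here – for example, preserving the last four digits keeps receipts working, but also leaks a little information.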

FireStarter: It’s Time to Talk about APT

There’s a lot of hype in the press (and vendor pitches) about APT – the Advanced Persistent Threat. Very little of it is informed, and many parties within the security industry are quickly trying to co-opt the term in order to advance various personal and corporate agendas. In the process they’ve bent, manipulated, and largely tarnished what had been a specific description of a class of attacker.

I’ve generally tried to limit how much I talk about it – mostly restricting myself to the occasional Summary/Incite comment, or this post when APT first hit the hype stage, and a short post with some high level controls. I self-censor because I recognize that the information I have on APT all comes either second-hand, or from sources who are severely restricted in what they can share with me. Why? Because I don’t have a security clearance.

There are groups, primarily within the government and its contractors, with extensive knowledge of APT methods and activities. A lot of it is within the DoD, but also with some law enforcement agencies. These guys seem to know exactly what’s going on, including many of the businesses within private industry being attacked, the technical exploit details, what information is being stolen, and how it’s exfiltrated from organizations. All of which seems to be classified.

I’ve had two calls over the last couple weeks that illustrate this. In the first, a large organization was asking me for advice on some data protection technologies. Within about 2 minutes I said, “if you are responding to APT we need to move the conversation in X direction”. Which is exactly where we went, and without going into details they were essentially told they’d been compromised and received a list, from “law enforcement”, of what they needed to protect. The second conversation was with someone involved in APT analysis informing me of a new technique that technically wasn’t classified… yet.
Needless to say, the information wasn’t being shared outside the classified community (e.g., not even with the product vendors involved), and even the bit shared with me was extremely generic.

So we have a situation where many of the targets of these attacks (private enterprises) are not provided detailed information by those with the most knowledge of the attack actors, techniques, and incidents. This is an untenable situation – further, the fundamental failure to share information increases the risk to every organization without sufficient clearances to work directly with classified material. I’ve been told that in some cases larger organizations do get a little information pertinent to them, but the majority of activity is still classified and therefore not accessible to the organizations that need it. While it’s reasonable to keep details of specific attacks against targets quiet, we need much more public discussion of the attack techniques and possible defenses. Where’s all the “public/private” partnership goodwill we always hear about in political speeches and watered-down policy and strategy documents?

From what I can tell there are only two well-informed sources saying anything about APT – Mandiant (who investigates and responds to many incidents, and I believe still has clearances), and Richard Bejtlich (who, you will notice, tends to mostly restrict himself to comments on others’ posts, probably due to his own corporate/government restrictions).

This secrecy isn’t good for the industry, and in the end it isn’t good for the government. It doesn’t allow the targets (many of you) to make informed risk decisions, because you don’t have the full picture of what’s really happening. I have some ideas on how those in the know can better share information with those who need to know, but for this FireStarter I’d like to get your opinions.
Keep in mind that we should try to focus on practical suggestions that account for the nuances of the defense/intelligence culture, while staying realistic about its restrictions. As much as I’d like the feds to go all New School and make breach details and APT techniques public, I suspect something more moderate – perhaps covering generic attack methods and potential defenses – is more viable.

But make no mistake – as much hype as there is around APT, there are real attacks occurring daily, against targets I’ve been told “would surprise you”. And as much as I wish I knew more, the truth is that those of you working for potential targets need the information, not just some blowhard analysts.

UPDATE: Richard Bejtlich also highly recommends Mike Cloppert as a good source on this topic.

Friday Summary: September 17, 2010

Reality has a funny way of intruding into the best laid plans. Some of you might have noticed I haven’t been writing much for the past couple weeks, and have been pretty much ignoring Twitter and the rest of the social media world. It seems my wife had a baby, and since this isn’t my personal blog anymore I was able to take some time off and focus on the family. Needless to say, my “paternity leave” didn’t last nearly as long as I planned, thanks to the work piling up. And it explains why this may be showing up in your inbox on Saturday, for those of you getting the email version.

Which brings me to my next point, one we could use a little feedback on. If you look at the blog this week we hit about 20 posts… many of them in-depth research to support our various projects. I’m starting to wonder if we are overwhelming people a little? As the blogging community has declined, we spend less time on informal commentary and inter-blog discussions, and more time just banging out research. As a ‘research’ company it isn’t like we won’t publish the harder stuff, but I want to make sure we aren’t losing people in the process, like that boring professor everyone really respects, but who has to slam a book on the desk at the end of class to let everyone know they can go.

Finally, this week it was cool to ship out the iPad to the winning participant in the 2010 Data Security Survey. When I contacted him he asked, “Is this some phishing trick?”, but I managed to still get his mailing address and phone number after a few emails. Which is cool, because now I have a new bank account with better credit, and it looks like his is good enough for the mortgage application. (But seriously, he wanted one and didn’t have one, and it was really nice to send it to someone who appreciated it.)

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • SIEM In The Spotlight with ArcSight Acquisition.
  • HP to buy ArcSight for $1.5 billion.
Favorite Securosis Posts

  • Adrian Lane: NSO Quant: Monitor Metrics – Analyze. Definitely correlate. Ten minutes to Wapner.
  • Mike Rothman: Monitoring up the Stack: Introduction. We’re starting another research project, pushing forward on our Monitor Everything philosophy. Keep an eye on this one – it’s going to be great.
  • Rich: HP Sets its Arcsights on Security. Mike’s analysis of the HP/ArcSight deal, which tells you whether and why this matters.

Other Securosis Posts

  • The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls.
  • Incite 9/15/2010: Up, down, up, down, Repeat.
  • FireStarter: Automating Secure Software Development.
  • Understanding and Selecting an Enterprise Firewall: Deployment Considerations; Management; Advanced Features, Part 1; Advanced Features, Part 2; To UTM or Not to UTM?
  • DLP Selection Process: Step 1; Defining the Content; Protection Requirements; Infrastructure Integration Requirements.

Favorite Outside Posts

  • Pepper: DRG SSH Username and Password Authentication Tag Clouds. Nice rendering of human nature (you can call it laziness or stupidity, as you prefer).
  • Adrian Lane: Gift Card FAIL. Gift cards seem designed to be scammed. Does the bank ever lose, or only merchants? Something to think about.
  • Mike Rothman: Evil WiFi – Captive Portal Edition. Ax0n provides very detailed instructions on building your own Evil WiFi kit. For research purposes, of course…
  • David Mortman: Security Planning – who watches the watchers?. It’s almost but not quite Banksy.
  • Rich: Want to know if your app (especially Adobe Reader) is using unsafe functions? Errata has an app for that.

Project Quant Posts

  • NSO Quant: Monitor Metrics – Validate and Escalate.
  • NSO Quant: Monitor Metrics – Analyze.
  • NSO Quant: Monitor Metrics – Collect and Store.
  • NSO Quant: Monitor Metrics – Define Policies.
  • NSO Quant: Monitor Metrics – Enumerate and Scope.

Research Reports and Presentations

  • Security + Agile = FAIL Presentation.
  • Data Encryption 101: A Pragmatic Approach to PCI.
  • White Paper: Understanding and Selecting SIEM/Log Management.
  • White Paper: Endpoint Security Fundamentals.

Top News and Posts

  • Flash Flaw Puts Android at Risk.
  • Web Hacking Incident Database updated.
  • HDCP Encryption Supposedly Hacked. It’s not like you can’t reverse engineer the set top box, but the details on this will be interesting.
  • Another Adobe Flash zero day under attack.
  • Old-school worm making the rounds. How nostalgic.
  • Martin: What skillz should a geek kid learn?

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Troy, in response to Tokenization Will Become the Dominant Payment Transaction Architecture.

Interesting discussion. As I read the article I was also interested in the ways in which a token could be used as a ‘proxy’ for the PAN in such a system – the necessity of having the actual card number for the initial purchase seems to assuage most of that concern. Another aspect of this method that I have not seen mentioned here: if the Tokens in fact conform to the format of true PANs, won’t a DLP scan for content recognition typically ‘discover’ the Tokens as potential PANs? How would the implementing organization reliably prove the distinction, or would they simply rest on the assumption that as a matter of design any data lying around that looks like a credit card number must be a Token? I’m not sure that would cut the mustard with a PCI auditor. Seems like this could be a bit of a sticky wicket still?

Troy – in this case you would use database fingerprinting/exact data matching to only look for credit card numbers in your database, or to exclude the tokens. Great question!
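The exact data matching approach in that answer can be sketched in a few lines. In this toy version (the function names are my own; real DLP products work from hashed extracts of the actual database), the scanner only alerts on numbers whose hashes appear in a fingerprint set built from the real PANs, so format-preserving tokens pass clean:

```python
import hashlib
import re

def fingerprint(values):
    # Build the "exact data match" set from a database extract of real
    # PANs. Hashing means the DLP system never stores raw card numbers.
    return {hashlib.sha256(v.encode()).hexdigest() for v in values}

def scan(text, pan_fingerprints):
    # Find everything that LOOKS like a PAN, then alert only on values
    # fingerprinted from the real database - tokens won't match.
    candidates = re.findall(r"\b\d{16}\b", text)
    return [c for c in candidates
            if hashlib.sha256(c.encode()).hexdigest() in pan_fingerprints]

real_pans = fingerprint(["4111111111111111"])
hits = scan("real: 4111111111111111 token: 9872345678901111", real_pans)
```

Here `hits` contains only the fingerprinted PAN; the token, despite having a valid PAN format, is ignored.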

DLP Selection: Infrastructure Integration Requirements

In our last post we detailed content protection requirements, so now it’s time to close out our discussion of technical requirements with infrastructure integration. To work properly, all DLP tools need some degree of integration with your existing infrastructure. The most common integration points are:

  • Directory servers: to determine users and build user, role, and business unit policies. At minimum, you need to know who to investigate when you receive an alert.
  • DHCP servers: so you can correlate IP addresses with users. You don’t need this if all you are looking at is email or endpoints, but for any other network monitoring it’s critical.
  • SMTP gateway: this can be as simple as adding your DLP tool as another hop in the MTA chain, but could also be more involved.
  • Perimeter router/firewall: for passive network monitoring you need someplace to position the DLP sensor – typically a SPAN or mirror port, as we discussed earlier.
  • Web gateway: your gateway will probably integrate with your DLP system if you want to filter web traffic with DLP policies. If you want to monitor SSL traffic (you do!), you’ll need to integrate with something capable of serving as a reverse proxy (man in the middle).
  • Storage platforms: to install client software that integrates with your storage repositories, rather than relying purely on remote network/file share scanning.
  • Endpoint platforms: these must be compatible with the endpoint DLP agent. You may also want to use an existing software distribution tool to deploy it.

I don’t mean to make this sound overly complex – many DLP deployments only integrate with a few of these infrastructure components, or the functionality is included within the DLP product. Integration might be as simple as dropping a DLP server on a SPAN port, pointing it at your directory server, and adding it into the email MTA chain. But for developing requirements, it’s better to over-plan than to miss a crucial piece that blocks expansion later.
Finally, if you plan on deploying any database- or document-based policies, fill out the storage section of the table. Even if you don’t plan to scan your storage repositories, you’ll be using them to build partial document matching and database fingerprinting policies.
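To show why the directory and DHCP integrations matter, here is a minimal sketch of how a network DLP alert gets attributed to a person (the log fields, hostnames, and addresses are illustrative assumptions, not any product’s schema): the DHCP lease log maps the IP to a machine at the time of the alert, and the directory maps the machine to its owner.

```python
from datetime import datetime

# Hypothetical DHCP lease log (IP <-> machine over time) and directory
# mapping (machine -> owner).
dhcp_leases = [
    {"ip": "10.0.5.23", "host": "WS-ALICE",
     "start": datetime(2010, 9, 17, 8, 0), "end": datetime(2010, 9, 17, 16, 0)},
    {"ip": "10.0.5.23", "host": "WS-BOB",
     "start": datetime(2010, 9, 17, 16, 30), "end": datetime(2010, 9, 17, 23, 0)},
]
directory = {"WS-ALICE": "alice@example.com", "WS-BOB": "bob@example.com"}

def user_for_alert(ip: str, when: datetime):
    # Walk the lease log to find which machine held the IP at alert time,
    # then look its owner up in the directory.
    for lease in dhcp_leases:
        if lease["ip"] == ip and lease["start"] <= when <= lease["end"]:
            return directory.get(lease["host"])
    return None  # Can't attribute the alert - exactly why this feed matters.
```

Note that the same IP resolves to different users at different times, which is why a plain IP-to-user table is not enough for network monitoring.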

The Securosis 2010 Data Security Survey Report Rates the Top 5 Data Security Controls

Over the summer we ran what turned out to be a pretty darn big data security survey. Our primary goal was to assess which data security controls people find most effective, and to get a better understanding of how they are using the controls, what’s driving adoption, and a bit about what kinds of incidents they are experiencing. The response was overwhelming – we had over 1,100 people participate from across the IT spectrum. The responses were almost evenly split between security and regular IT folks, which helps reduce some of the response bias.

I try to be self-critical, and there were definitely some mistakes in how we designed the survey (although the design process was open to the public and available for review before we launched, so I get to blame all of you a bit too, for letting me screw up). But despite those flaws I think we still obtained some great data – especially on what controls people consider effective (and not), and how you are using them.

Due to an error on my part we can’t release the full report here at Securosis for 30 days, but it is available from our sponsor, Imperva, who is also re-posting the survey so those of you who haven’t taken it yet can run through the questions and compare yourselves to the rest of the responses. We will also be releasing the full (anonymized) raw data so you can perform your own analysis. Everything is free under a Creative Commons license. I apologize for not being able to release the report immediately as usual – it was a mistake on my part and won’t happen again.

Key Findings

  • We received over 1,100 responses with a completion rate of over 70%, representing all major vertical markets and company sizes.
  • On average, most data security controls are in at least some stage of deployment in 50% of responding organizations. Deployed controls tend to have been in use for 2 years or more.
  • Most responding organizations still rely heavily on “traditional” security controls such as system hardening, email filtering, access management, and network segregation to protect data.
  • When deployed, 40-50% of participants rate most data security controls as completely eliminating or significantly reducing security incident occurrence. The same controls rated slightly lower for reducing incident severity (when incidents occur), and still lower for reducing compliance costs.
  • 88% of survey participants must meet at least 1 regulatory or contractual compliance requirement, with many needing to comply with multiple regulations. Despite this, “to improve security” is the most cited primary driver for deploying data security controls, followed by direct compliance requirements and audit deficiencies.
  • 46% of participants reported about the same number of security incidents in the most recent 12 months compared to the previous 12, with 27% reporting fewer incidents, and only 12% reporting a relative increase.
  • Organizations are most likely to deploy USB/portable media encryption and device control or data loss prevention in the next 12 months.
  • Email filtering is the single most commonly used control, and the one cited as least effective.

Our overall conclusion is that even accounting for potential response bias, data security has transitioned past the early adopters and significantly penetrated the early mainstream of the security industry.

Top Rated Controls (Perceived Effectiveness)

  • The 5 top rated controls for reducing the number of incidents are network data loss prevention, full drive encryption, web application firewalls, server/endpoint hardening, and endpoint data loss prevention.
  • The 5 top rated controls for reducing incident severity are network data loss prevention, full drive encryption, endpoint data loss prevention, email filtering, and USB/portable media encryption and device control. (Web application firewalls nearly tied, and almost made the top 5.)
  • The 5 top rated controls for reducing compliance costs are network data loss prevention, endpoint data loss prevention, storage data loss prevention, full drive encryption, and USB/portable media encryption and device control. These were very closely followed by network segregation and access management.

We’ll be blogging more findings throughout the week, and please visit Imperva to get your own copy of the full analysis.

DLP Selection Process: Protection Requirements

Now that you’ve figured out what information you want to protect, it’s time to figure out how to protect it. In this step we’ll define your high-level monitoring and enforcement requirements.

Determine Monitoring/Alerting Requirements

Start by figuring out where you want to monitor your information: which network channels, storage platforms, and endpoint functions. Your high-level options are:

  • Network: Email, Webmail, HTTP/FTP, HTTPS, IM/Messaging, Generic TCP/IP
  • Storage: File Shares, Document Management Systems, Databases
  • Endpoint: Local Storage, Portable Storage, Network Communications, Cut/Paste, Print/Fax, Screenshots, Application Control

You might have some additional requirements, but these are the most common ones we encounter.

Determine Enforcement Requirements

As we’ve discussed in other posts, most DLP tools include various enforcement actions, which tend to vary by channel/platform. The most basic enforcement option is “Block” – the activity is stopped when a policy violation is detected. For example, an email will be filtered, a file won’t be transferred to a USB drive, or an HTTP request will fail. But most products also include other options, such as:

  • Encrypt: Encrypt the file or email before allowing it to be sent/stored.
  • Quarantine: Move the email or file into a quarantine queue for approval.
  • Shadow: Allow a file to be moved to USB storage, but send a protected copy to the DLP server for later analysis.
  • Justify: Warn the user that the action may violate policy, and require them to enter a business justification to store with the incident alert on the DLP server.
  • Change rights: Add or change Digital Rights Management on the file.
  • Change permissions: Alter the file permissions.

Map Content Analysis Techniques to Monitoring/Protection Requirements

DLP products vary in which policies they can enforce on which locations, channels, and platforms.
Most often we see limitations on the types or sizes of policies that can be enforced on an endpoint, which change as the endpoint moves onto or off the corporate network, because some policies require communication with the central DLP server. For the final step in this part of the process, list your content analysis requirements for each monitoring/protection requirement you just defined. These tables translate directly into the RFP requirements at the core of most DLP projects: what you want to protect, where you need to protect it, and how.
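The mapping from channel and policy to enforcement action is essentially a lookup table, and writing it out explicitly is a useful exercise before drafting the RFP. A minimal sketch (the channel, policy, and action names here are illustrative, not drawn from any product):

```python
# Hypothetical policy table mapping (channel, policy) to one of the
# enforcement actions described above.
POLICIES = {
    ("email", "pci"): "quarantine",      # hold for approval
    ("usb", "pci"): "shadow",            # allow, but keep a copy for analysis
    ("email", "engineering_plans"): "encrypt",
    ("usb", "engineering_plans"): "justify",
}

def enforce(channel: str, violation: str) -> str:
    # "Block" is the most basic action, so it makes a sensible default
    # when no more specific policy matches.
    return POLICIES.get((channel, violation), "block")
```

Filling in every cell of this table for your own channels and content categories surfaces exactly the per-channel limitations mentioned above.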

DLP Selection Process: Defining the Content

In our last post we kicked off the DLP selection process by putting the team together. Once you have them in place, it’s time to figure out which information you want to protect. This is extremely important, as it defines which content analysis techniques you require, which are at the core of DLP functionality. This multistep process starts with figuring out your data priorities and ends with your content analysis requirements.

Stack rank your data protection priorities

The first step is to list out which major categories of data/content/information you want to protect. While it’s important to be specific enough for planning purposes, it’s okay to stay fairly high-level. Definitions such as “PCI data”, “engineering plans”, and “customer lists” are good. Overly general categories like “corporate sensitive data” and “classified material” are insufficient – too generic, and they cannot be mapped to specific data types. This list must be prioritized; one good way of developing the ranking is to pull the business unit representatives together and force them to sort and agree on the priorities, rather than having someone who isn’t directly responsible (such as IT or security) determine the ranking.

Define the data type

For each category of content listed in the first step, define the data type, so you can map it to your content analysis requirements:

  • Structured or patterned data is content – like credit card numbers, Social Security numbers, and account numbers – that follows a defined pattern we can test against.
  • Known text is unstructured content, typically found in documents, where we know the source and want to protect that specific information. Examples are engineering plans, source code, corporate financials, and customer lists.
  • Images and binaries are non-text files such as music, video, photos, and compiled application code.
Conceptual text is information that doesn’t come from an authoritative source like a document repository but may contain certain keywords, phrases, or language patterns. This is pretty broad but some examples are insider trading, job seeking, and sexual harassment. Match data types to required content analysis techniques Using the flowchart below, determine required content analysis techniques based on data types and other environmental factors, such as the existence of authoritative sources. This chart doesn’t account for every possibility but is a good starting point and should define the high-level requirements for a majority of situations. Determine additional requirements Depending on the content analysis technique there may be additional requirements, such as support for specific database platforms and document management systems. If you are considering database fingerprinting, also determine whether you can work against live data in a production system, or will rely on data extracts (database dumps to reduce performance overhead on the production system). Define rollout phases While we haven’t yet defined formal project phases, you should have an idea early on whether a data protection requirement is immediate or something you can roll out later in the project. One reason for including this is that many DLP projects are initiated based on some sort of breach or compliance deficiency relating to only a single data type. This could lead to selecting a product based only on that requirement, which might entail problematic limitations down the road as you expand your deployment to protect other kinds of content. Share:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.