
If Not Java, What?

You have probably noticed some security issues with Java lately. Some vendors – including Apple – are blocking Java in order to close known and unforeseen security problems. And some claim that open source Java frameworks pose a business risk. But through this latest flame war, I have not seen an answer to the basic question: If not Java, what? If you're going to get Java out of the enterprise to address a security risk and replace it with something else, what would you select? Do we really have evidence that platforms like Ruby, JSON, or Node.js are more secure? Clojure and Scala rely on a JVM and the same frameworks as Java, so they cannot be more secure than the shared JVM. And remember, Java also does a few things very well, which is why it has become so popular over the last 15 years. It has a very good object model. Cross-platform compatibility. Easy-to-learn syntax. Extensible. Tons of tools. Easy integration. All reasons why I think we have proven C++ and C# are not replacements. I really don't have an answer for this question. But I think I can say that, in the same way we can't go back and rewrite all insecure code because there is not enough time or money to do it, we are not going to throw Java out because it's insecure. We can decide to block it from the browser, but that does not address the myriad ways Java is used in the enterprise. In fact I don't even see an alternative that would enable us to begin migrating off it.


A New Kind of Commodity Hardware

I was driving down the road the other day when I passed what I thought was a shipping container on the back of an 18-wheel truck. When I noticed data and power ports on the side, I realized it was a giant data center processing module. Supercomputing on wheels. Four trucks with two modules per truck, rolling down the highway. Inside reside thousands of stripped-down motherboards stacked with tons of memory, packed side by side. Some of these are even designed to be filled with dielectric fluid to keep them cool. If you have not seen these things up close and personal, check out the latest article on Microsoft's new data center: "When Microsoft wants to quickly ramp up a new data center, it can move dirt, pour a foundation, and build one of the most boring buildings you've ever seen. Or it can load up a few of its custom-designed data center modules onto a truck and drop them on the site." One of the key concepts behind big data is the realization that sometimes it's cheaper to move computing to the data than to move data to the processors. That way you use whatever computing power is logically nearby. There is a similar trend with data centers – in this case physically adjusting the data center's location to your needs. Raw processing power. Modular. Mobile. If a data center site gets flooded by a hurricane, you back up the truck, plug in a generator, and you're back online. It can be much easier for enterprises to buy a crate of computing than to provision a traditional data center.


Twitter Hacked

Twitter announced this evening that some 250,000 user accounts were compromised:

This week, we detected unusual access patterns that led to us identifying unauthorized access attempts to Twitter user data. We discovered one live attack and were able to shut it down in process moments later. However, our investigation has thus far indicated that the attackers may have had access to limited user information – usernames, email addresses, session tokens and encrypted/salted versions of passwords – for approximately 250,000 users.

Passwords and session tokens were reset to contain the problem. It is likely that personal information, including direct messages, was exposed. The post asks users to use strong passwords of at least 10 characters, and requests that they disable Java in the browser – which together provide a pretty fair indication of how the attacks were conducted. Disable Java in the browser – where have you heard that before? We will update this post as we learn more.

Update by Rich: Adrian and I both posted this within minutes, so here is my comment. Also from the post:

This attack was not the work of amateurs, and we do not believe it was an isolated incident. The attackers were extremely sophisticated, and we believe other companies and organizations have also been recently similarly attacked. For that reason we felt that it was important to publicize this attack while we still gather information, and we are helping government and federal law enforcement in their effort to find and prosecute these attackers to make the Internet safer for all users.

Twitter has a hell of a good security team with some serious firepower, including Charlie Miller.
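Twitter's mention of "encrypted/salted" password storage is worth unpacking for anyone who hasn't built this. Below is a minimal sketch of salted, iterated password hashing using only the Python standard library (PBKDF2). It is a generic illustration of why salting and stretching limit the damage from a stolen password table – it is not a description of Twitter's actual scheme, and the iteration count is an arbitrary example.

```python
# Minimal sketch of salted, iterated password hashing (PBKDF2 from the
# Python standard library). NOT Twitter's actual scheme -- just an
# illustration of why "salted" storage limits the damage from a breach.
import os
import hashlib
import hmac

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage; the plaintext is never stored."""
    salt = os.urandom(16)                      # unique random salt per user
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, dk

def verify_password(password: str, salt: bytes, stored_dk: bytes,
                    iterations: int = 100_000) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, stored_dk)  # constant-time comparison

salt, dk = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, dk)
assert not verify_password("password123", salt, dk)
```

Because each user gets a random salt, identical passwords produce different stored values, and an attacker has to grind through the full iteration count separately for every account in the stolen table.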


Understanding IAM for Cloud Services: Use Cases

This post delves into why companies are looking at new Identity and Access Management technologies for cloud deployments. Cloud computing poses subtly different challenges and requires rethinking IAM deployments. The following use cases are the principal motivators cited by organizations moving existing applications to the cloud – both internal and external deployments – along with how they integrate with third party cloud services. IAM architecture often feels pretty abstract; describing traits is a bit like postulating how many angels can dance on the head of a pin, or whether light behaves more like a particle or a wave. And then there are standards – lots and lots of standards. But use cases are concrete – they show the catalyst, the activity, and the value to the enterprise and the user. Companies should start their decision process with use cases and then look for identity technologies and standards, rather than the other way around. To help understand why cloud computing requires companies to rethink their Identity and Access Management strategies, we will provide a handful of use cases that illustrate common problems. They embody the catalysts for altering IAM deployment structure, and the need for new protocols to propagate user privileges and establish identity in distributed environments. Before we get to the use cases themselves, let's look at the types of actors IAM introduces. There can be many different roles in a cloud IAM system, but the following are part of most deployments:

* Identity Provider: Consulted at runtime, the IdP is an authoritative source of information about users. This is often Active Directory or an LDAP server, which in turn provides tokens to represent user identities. Cloud computing architectures often include more than one IdP.
* Relying Party: An RP is an application that relies on an Identity Provider to establish identity. The relying party validates that the provided token is genuine and came from the identity provider, then uses it to assert the user's identity.
* Attribute Provider: An AP either has access to or directly stores the fine-grained attributes that define user capabilities. Permissions may be role-based, attribute-based, or both. The value proposition is that attribute providers enable dynamic, data-driven access control. This information is critical – it defines application behavior and gates user access to functions and data. How it provides attribute information, and how it integrates with the application, varies greatly.
* Authoritative Source: This is the authority on identity and provisioning settings – typically the HR system that stores master identity records, used as the source of truth for account status. This system has rights to add, edit, and disable accounts in other systems, typically via a provisioning system. For legal and compliance requirements, these systems keep detailed transaction logs.
* Policy Decision Point: The PDP handles authorization decisions by mapping each access request to a policy. This may be performed in application code or as a separately configured policy.

There may be other IAM system roles in your deployment, but these are the core set for cloud IAM. The location of each of these services varies, as does whether each role is supplied by the cloud provider, the enterprise, or both – but these roles factor into every cloud deployment.
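To make the Identity Provider / Relying Party split concrete, here is a deliberately simplified sketch. Real deployments use SAML or OAuth assertions rather than this ad hoc token format, and the shared key, claim names, and five-minute expiry below are hypothetical choices for illustration only.

```python
# Toy illustration of the Identity Provider / Relying Party division of
# labor: the IdP mints a signed assertion about a user, and the RP verifies
# the signature and expiry before trusting it. Real systems use SAML/OAuth.
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key-exchanged-out-of-band"   # hypothetical trust anchor

def idp_issue_assertion(username: str, roles: list[str]) -> str:
    """Identity Provider: mint a signed, expiring identity assertion."""
    body = json.dumps({"sub": username, "roles": roles,
                       "exp": time.time() + 300}).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return base64.b64encode(body).decode() + "." + base64.b64encode(sig).decode()

def rp_validate_assertion(token: str) -> dict:
    """Relying Party: verify signature and expiry before trusting the claims."""
    body_b64, sig_b64 = token.split(".")
    body = base64.b64decode(body_b64)
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("signature mismatch - not from a trusted IdP")
    claims = json.loads(body)
    if claims["exp"] < time.time():
        raise ValueError("assertion expired")
    return claims

token = idp_issue_assertion("alice", ["analyst"])
print(rp_validate_assertion(token))   # {'sub': 'alice', 'roles': ['analyst'], ...}
```

The point is the separation of duties: only the IdP mints assertions, and the RP trusts nothing it cannot verify against the IdP's key.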
Most cloud deployments address some combination of three IAM use cases.

Use Cases

Single Sign On

Single sign-on is the single greatest motivation for companies to look at new IAM technologies to support cloud computing. And for good reason – during our careers in security we have experienced few occasions when people have been glad to see security features introduced. Single Sign On (SSO) is one happy exception to this rule, because it makes every user's life easier. Supply your password once, and you automagically get access to every site you use during the course of the day. Adding many new cloud applications (Salesforce, Amazon AWS, and Dropbox, to name a few) only makes SSO more desirable. Most security does not scale well, but SSO was built to scale. Behind the scenes, SSO offers other more subtle advantages for security and operations. SSO, through management of user identity (the Identity Provider), provides a central location for policies and control. The user store behaves as the authoritative source for identity information, and by extending this capability to the cloud – through APIs, tokens, and third party services – the security team need not worry about discrepancies between internal and cloud accounts. The Identity Provider effectively acts as the source of truth for cloud apps. But while we have mastered this capability with traditional in-house IT services, extending SSO to the cloud presents new challenges. There are many flavors of SSO for the cloud, some based on immature and evolving standards, while other popular interfaces are proprietary and vendor-specific. Worse, the means by which identity is 'consumed' vary, with some services 'pulling' identity directly from other IT systems, while others require you to 'push' information to them. Finally, the protocols used to accomplish these tasks vary as well: SAML, OAuth, OAuth 2.0, vendor APIs, and so on. Fortunately SAML is the agreed-upon standard, used in most cases, but it is a complex protocol with many different options and deployment variations. Another challenge for cloud SSO is the security of the identity tokens themselves. As tokens become more than just simple session cookies for web apps, and embody user capabilities for potentially dozens of applications, they become more attractive targets. An attacker with an SSO token gains all the user rights conveyed by the token – which might provide access to dozens of cloud applications. This would be less of an issue if all the aforementioned protocols adequately protected tokens communicated across the Internet, but some do not. So SSO tokens should always be protected by TLS/SSL on the wire, and thought should be given to how applications access and store them. SSO makes life easier for users and administrators, but for developers it is only a partial solution. The sign-on


Universal Plug and Play Vulnerable to Remote Code Injection

Rapid7 has announced that the UPnP (Universal Plug and Play) service is vulnerable to remote code injection. Because this code is deployed in millions of devices – that's the 'Universal' part – a freakishly large number of people are vulnerable to this simple attack. From The H Security:

During an IP scan of all possible IPv4 addresses, Rapid7, the security firm that is known for the Metasploit attack framework, has discovered 40 to 50 million network devices that can potentially be compromised remotely with a single data packet. The company says that remote attackers can potentially inject code into these devices, and that this may, for example, enable them to gain unauthorised access to a user's local network. All kinds of network-enabled devices including routers, IP cameras, NAS devices, printers, TV sets and media servers are affected. They all have several things in common: they support the Universal Plug and Play network protocol, respond to UPnP requests from the internet, and use a vulnerable UPnP library to do so.

Rapid7 is offering users a free scanning tool to identify vulnerable devices, but the real question is "How can I protect myself?" The CERT advisory tells users to block "untrusted hosts from access to port 1900/UDP" – provided they know how to do that, the devices are behind a firewall, and disabling the port does not break legitimate apps. Honestly, there is not a lot to go on right now, so we will update this post if we come across more actionable advice.
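If you want to see what answers UPnP discovery on a network you own, the probe below is a minimal sketch using Python's standard socket library. It sends the standard SSDP M-SEARCH request to the multicast discovery address and prints whatever responds on 1900/UDP. It is not Rapid7's scanner and does not test for the specific library flaws – it only shows which devices speak UPnP discovery at all.

```python
# Minimal SSDP (UPnP discovery) probe for your own LAN: sends an M-SEARCH
# to the standard multicast address and prints responders on 1900/UDP.
# This only enumerates devices that answer UPnP discovery -- it does not
# test for the reported library flaws. Run it only on networks you own.
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(2048)
        # Print the responder's address and the first response header line.
        print(addr[0], data.decode(errors="replace").splitlines()[0])
except socket.timeout:
    pass   # no more responses within the timeout
finally:
    sock.close()
```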


Friday Summary: January 25, 2013

Will Hadoop be to NoSQL what Red Hat is to Linux? Will it become better known for commercial flavors than for the open source core? Lately I have been noticing similarities between the two life cycles, with the embrace of packaged variants. What I notice is this: In 1994 I replaced an unreliable BSD distribution with a Slackware distribution of Linux – itself a UNIX-like OS. Suddenly "this old PC" was not only reliable, it felt 5x faster than it did running from the Windows partition. Slackware Linux was a great product limited to the realm of uber-geeks – you needed to assemble and compile before you could use it. But you could customize it any way you wanted, and it put a truly powerful OS on the desktop – free. Then Linux started to go a bit mainstream, because it allowed us to cost-effectively run applications that previously required a substantial investment and very particular hardware. Caldera was a big deal for a while because they produced a 'corporate' flavor. Some companies noticed Linux was a powerful platform and embraced it; others viewed it – along with most open source – as a security threat. But its flexibility and ability to deliver a server-quality OS on commodity hardware were too compelling to ignore. Then we got 'professional' distributions, tools, and services, and adoption rates really started to take off. But while the free and open nature of the platform still roots the movement, it started to feel like you needed a commercial version for support and tools. These days few people grab different pieces and assemble their own custom Linux distributions. I think Big Data is already moving from the fully open source "piece it together yourself" model into complete productized versions. If that's true I expect the 125+ varieties of NoSQL to consolidate, dropping many of the esoteric distributions and likely boiling the market down to a few main players within the next few years – and eventually the Big Data equivalent of a LAMP stack. After that the NoSQL growth curve will be about standardized versions of Hadoop. The question is whether it will look more like Red Hat or Ubuntu. This really has nothing to do with security, but I thought there were too many similarities to ignore. -Adrian

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

* Milestone: Episode 300 of the NetSec podcast.
* Mike quoted by Reuters on Cisco's network security competitiveness.
* Mike quoted in the Merc about Cisco's network security (missed) opportunity.

Favorite Securosis Posts

* Mike Rothman: Don't respond to a breach like this. Small minds make poor decisions. And everyone else should continue to do the right thing, even if small minds can't understand it and take action against it.
* Adrian Lane: Emotional Whiplash. Mike nailed it. And I only saw the first and fourth quarters!

Favorite Outside Posts

* Adrian Lane: "Cyber" Insurance and an Opportunity. Fascinating.
* Mike Rothman: XSS, password flaws found in popular ESPN app. Man, this sucks. Any big sports fan uses the ESPN app. Good thing it doesn't store anything sensitive, because I can't live without my scores and NFL news.

Recent Research Papers

* Building an Early Warning System.
* Implementing and Managing Patch and Configuration Management.
* Defending Against Denial of Service (DoS) Attacks.
* Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
* Tokenization vs. Encryption: Options for Compliance.
* Pragmatic Key Management for Data Encryption.
* The Endpoint Security Management Buyer's Guide.
* Pragmatic WAF Management: Giving Web Apps a Fighting Chance.

Top News and Posts

* Aaron Swartz's death
* Backdoors Found in Barracuda Networks Gear
* Google Tells Cops to Get Warrants for User E-Mail, Cloud Data
* Twitter flaw allowed third party apps to access direct messages

Blog Comment of the Week

This week's best comment goes to -ds, in response to It's just Dropbox. What's the risk?.

If we make security break users, we make users break security. This is such a basic principle. I'm tired of being in an industry where my peers would rather have the illusion of control then actual, effective, risk proportinate security. We have so many pretenders and unfortunately many of them are loud voices and dominate the coversation to the extent that newly minted security practicioners think they are the ideal. Next one of them that says "we do X because it is a best practice" is getting a wedgie.


HIPAA Omnibus, Meet Indifference

Do you want to know what you will be reading about in the coming weeks? HIPAA. The Department of Health and Human Services has updated the HIPAA requirements. The 563-page package of regulations includes:

* Extensive modifications to the HIPAA privacy, security, and enforcement rules, including security and privacy requirements for business associates and their subcontractors.
* A final version of the HIPAA breach notification rule, which clarifies when a breach must be reported to authorities.
* Dramatic changes to marketing and fundraising requirements.
* Modifications to the Genetic Information Nondiscrimination Act (GINA), which prohibit health plans from disclosing genetic information for underwriting purposes.

With topics such as breach notification and marketing constraints, it's the page-turner you'd imagine it to be: hundreds of pages of distilled public comments and final rulings. Even if you're like me, and have an interest in these esoteric topics, they are just words on a page. Does this change anything? Probably not. We have been hearing about the serious nature of HIPAA and HITECH for about a decade, without meaningful changes to data privacy or security for health-related information. While there is a renewed focus on discouraging healthcare firms from marketing protected health data, or selling patient data to third-party marketing firms, there is little to promote proactive changes to data security or privacy. HIPAA will remain a "topic of interest" but see little action until we see serious fines or someone goes to jail. Expect lots of media coverage and very little action.


Understanding IAM for Cloud Services: Integration

“The Cloud” is a term so overused and so often misapplied that it has become meaningless without context. This series will discuss identity and access management as it pertains to the three major cloud service models: Infrastructure, Platform, and Software as a Service. Each of these models (IaaS, PaaS, and SaaS) presents its own challenges for IAM, because each model promotes different approaches and each vendor offers their own unique flavor. The cloud service model effectively acts as a set of constraints which the IAM architect must factor into their architecture.

With the SaaS model most enterprises look to federated identity, meaning the enterprise uses federation capabilities to gate access to cloud applications while keeping account provisioning under internal control. This approach is simpler, and offers better security policy control, than the primary alternative: replicating accounts into the SaaS provider – copying big chunks of user directories into the cloud. A middle road has emerged where account management is handled by an Identity as a Service cloud provider; we will discuss this later in the series. For IaaS, identity federation is an option as well, but the need is not as great because you manage everything above the infrastructure level. Infrastructure providers have some built-in capabilities for identity management, but since you control most of the infrastructure, extending your current capabilities into the cloud is a more natural progression. IaaS vendors such as Amazon AWS have offered limited support for federation over the years, but the quality and depth of that functionality shows the service providers largely expect customers to handle this themselves. PaaS, as usual, is somewhere in between. Most PaaS providers offer more robust capabilities, so federation is a first-class choice on most major platforms today.

Cloud Identity Deployment Models

IAM deployments for the cloud generally use some combination of the following approaches:

* Store Accounts in the Cloud: This is exactly what it sounds like: copy your existing accounts into the cloud environment. You effectively replicate or synchronize enterprise user accounts into the cloud provider's environment. It is conceptually very simple, and as most IT departments are already familiar with directory services, it's the easiest way to get started in the cloud. It is also a model that creates security problems, because you replicate – and potentially expose – sensitive information including keys, passwords, and other access control data. Remember, "The Cloud" is inherently a multi-tenant environment, shared with other customers and administered by people not on your staff. Role changes, as well as account removal or disabling, can lag unacceptably before an internal change takes effect at the cloud provider.
* Federation: The next option is federation, where user identities are managed locally but identity assertions can be validated through tokens presented to the cloud service interface. The enterprise retains local control and management, typically via Active Directory. This removes the issue of secret data (such as passwords) being stored in the cloud. Federation lets the enterprise leverage existing IDM processes, possibly across multiple systems, which simplifies management and provisioning.
* IDMaaS: Identity Management as a Service is an emerging architecture to watch – effectively a hybrid of the first two approaches. A separate cloud is run for identity management, usually by a different cloud service provider, which links directly to internal on-premise systems for policy decision support. One major advantage of this approach is that the IDMaaS provider then links you to various cloud services, handling their technical idiosyncrasies behind the scenes. The IDMaaS provider effectively glues everything together, providing identity federation and authorization support.

IAM Service Delivery

Most cloud deployments today work toward leveraging federated identity, but this is where the commonality stops. The way identity is used, and the types of IAM services available, cover a wide range of possibilities.

* Authentication: A deceptively simple concept which is devilishly hard in practice. As IT managers and programmers we understand in-house authentication – but what about cloud apps, middleware, and proxies? What about privileged users? Where and how is the session controlled? And how are authentication events audited? Authentication is one area most enterprises think they have nailed, but the edges and external interfaces add complexity and raise questions which are not yet fully addressed.
* SSO: Single sign-on is table stakes for any cloud provider, but the way it is delivered varies between providers and service models. SSO is a subset of federated identity, and provides the seamless integration users demand. Enterprises should seek cloud providers with open, standards-based SSO to reduce complex and costly integration work.
* Standards: First, the good news: there are many identity standards to choose from, and each excels at one aspect of identity propagation. The bad news is there are many identity standards to choose among. Choose wisely – cloud IAM architecture should be standards-based, but the standards should not drive the cloud IAM architecture.
* Federation: Federation of identity is another sine qua non for SaaS, and a basic capability of most cloud service providers. Key success factors include integrating the federation servers with both the cloud consumer and the cloud provider.
* Authorization: Identity defines who is who; authorization then defines what those users can do. Traditionally users are assigned roles which define what they can do and access. This makes administration easy, as a single change can update many users with the same role. But roles are not good enough any more; apps need attributes – granular characteristics – provided by an authoritative source. The question is: where are they? Cloud side, enterprise side, or both? Policy-based authorization via XACML is a steadily growing trend. Today XACML is more likely to factor into PaaS and IaaS deployments, but it is likely to become the de facto authorization standard in the future. (A minimal sketch of an attribute-based decision follows after this list.)
* Provisioning: Account and policy lifecycle management – including user account creation, propagation, and maintenance – is primarily an enterprise function. Cloud apps are just another endpoint for the provisioning system to talk to. SCIM is a proposed standard for provisioning, but some people use SAML beyond
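Picking up the Authorization item above: the sketch below shows the shape of an attribute-based decision made by a Policy Decision Point. Real XACML policies are expressed in XML and evaluated by a policy engine; this is only the decision logic, and the roles, departments, and resources are hypothetical examples.

```python
# Sketch of a Policy Decision Point making an attribute-based call:
# subject attributes + resource + action -> Permit/Deny. Real XACML
# policies are XML and far richer; this shows only the decision shape.
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict     # attributes supplied by the Attribute Provider
    resource: str
    action: str

def policy_decision(req: Request) -> str:
    """Hypothetical policy: analysts may read reports; only managers in the
    finance department may approve payments; everything else is denied."""
    if req.resource == "report" and req.action == "read":
        return "Permit" if "analyst" in req.subject.get("roles", []) else "Deny"
    if req.resource == "payment" and req.action == "approve":
        ok = ("manager" in req.subject.get("roles", [])
              and req.subject.get("department") == "finance")
        return "Permit" if ok else "Deny"
    return "Deny"   # default-deny

print(policy_decision(Request({"roles": ["analyst"]}, "report", "read")))   # Permit
print(policy_decision(Request({"roles": ["manager"], "department": "it"},
                              "payment", "approve")))                       # Deny
```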


Does Big Data Advance Security Analytics?

If you follow the security press, you know many predict that big data will transform information security. RSA recently released a security brief on security analytics with big data that mirrors that coverage. Depending on your perspective, security analytics with big data may be the concept that has us leveraging big data clusters for actionable intelligence in the coming years. Or, if you talk to SIEM vendors who already run on top of NoSQL repositories, the future has been here for five years. You may go with "none of the above". To me it is simply a good idea that has yet to be fully implemented – currently just something we talk about in the security echo chamber. But that did not stop me from enjoying the paper. And I don't say that about most vendor-led research; most of it makes me angry, to the point where I avoid writing about it so I don't say really nasty things in public. But I want to make a couple comments on the assumptions here – specifically, "Big data's new role in security comes at a time when organizations confront unprecedented risk arising from two conditions:", which implies a connection between those security concerns and the need for big data analytics. I think that link is tenuous, and it serves their premise poorly. The dissolving perimeter has little or nothing to do with security analytics with big data. The "dissolving perimeter" became a topic for discussion because third-party cloud services, combined with mobile devices, have destroyed the security value of the corporate IT 'perimeter'. The 'edge' of the network now has so many holes that it no longer forms a discernible boundary between inside and outside. We do, however – given the number of servers, services, and mobile computing platforms, all programmed to deliver event data – get a wealth of constantly generated information. Cheap computing resources, coupled with nearly free analytics tools, make storage and processing of this data newly feasible. And do you think we have more sophisticated adversaries? APT is one argument for this idea, but I tend to think we have more determined adversaries. Given the increasing complexity of IT systems, there seems to be plenty of "low-hanging fruit" – accessible security vulnerabilities for attackers to take advantage of. We have evidence that some security measures are really working – Jeremiah Grossman discussed how this is shifting attacker tactics. Many attacks are not so sophisticated, but are still hard to detect. I think the link between big data and attackers appears when you couple the complexity of IT environments with the staggering volume of data: it becomes very difficult to find the proverbial needle in the haystack. The good news is that this is exactly the type of outlier big data can detect – provided it's programmed to do so. But ultimately I agree with their assertions, albeit for slightly different reasons. I have every confidence that big data holds promise for security intelligence, both because I have witnessed attacker behavior captured in event data just waiting to be pulled out, and because I have also seen miraculous ideas sprout from people just playing around with database queries. In the same way hackers stumble on vulnerabilities while playing with protocols, engineers stumble on interesting data just by asking the same question (query) different ways. The data holds promise. The mining of that data – and all the work required to write MapReduce scripts that locate actionable intelligence – is not yet here.
It will take years of dedicated work, and it will take script development for different data types across different NoSQL varieties. Finally, I like the helpful graphic differentiating passive vs. active inputs. I also really like Amit Yoran's commentary; he is dead on target. The need to aggregate, normalize, and correlate in advance can go away when you move to big data repositories. It's ironic, but you can get better intelligence faster when you do not pre-process the data. It may smell a bit like forecasts and new year's predictions, but the paper is worth a read.
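As one concrete example of the kind of MapReduce scripting mentioned above, here is a minimal pair of Hadoop Streaming style scripts that count failed logins per source address and flag crude outliers. The log format, field positions, and threshold are hypothetical – the point is how little code stands between raw event data and a first-pass indicator, and how much domain-specific scripting is still left to do for each data type.

```python
# mapper.py -- Hadoop Streaming style mapper: reads raw events on stdin and
# emits "source_ip<TAB>1" for each failed login. Assumes a hypothetical
# tab-separated log where field 3 is the source IP and field 5 is the
# authentication result; adjust to your own event data.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) > 5 and fields[5] == "AUTH_FAIL":
        print(f"{fields[3]}\t1")
```

```python
# reducer.py -- sums failures per source IP and flags crude outliers.
# Streaming delivers all values for a key contiguously, so we just detect
# key changes and flush the running count.
import sys

THRESHOLD = 500          # hypothetical "worth a human look" cutoff
current_ip, count = None, 0

def flush(ip, n):
    if ip is not None and n >= THRESHOLD:
        print(f"{ip}\t{n}\tSUSPECT")

for line in sys.stdin:
    ip, value = line.rstrip("\n").split("\t")
    if ip != current_ip:
        flush(current_ip, count)
        current_ip, count = ip, 0
    count += int(value)
flush(current_ip, count)
```

These would normally be driven by the Hadoop Streaming jar (roughly `-mapper mapper.py -reducer reducer.py` plus input and output paths), or tested locally with `cat events.tsv | python mapper.py | sort | python reducer.py`.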


Bolting on Security—at Scale

GigaOm offers a fascinating glimpse into Netflix's EC2 architecture in Netflix shows off how it does Hadoop in the cloud: "Hadoop is more than a platform on which data scientists and business analysts can do their work. Aside from their 500-plus-nod[sic] cluster of Elastic MapReduce instances, there's another equally sized cluster for extract-transform-load (ETL) workloads – essentially, taking data from other sources and making it easy to analyze within Hadoop. Netflix also deploys various "development" clusters as needed, presumably for ad hoc experimental jobs." The big data users I have spoken with about data security agreed that data masking at that scale is infeasible. Given the rate of data insertion (also called 'velocity'), masking sensitive data before loading it into a cluster would require "an entire ETL cluster to front the Hadoop cluster". But apparently it is doable, and Netflix did just that – fronted its analytics cluster with a data transformation cluster, all within EC2. 500 nodes massaging data for another 500 nodes. While Netflix's ETL cluster is not used for masking, note that it is about the same size as the analysis cluster. It is this one-to-one mapping that I often worry about with security. Ask yourself, "Do we need another whole cluster for masking?" No? Then what about NoSQL activity monitoring? What about IAM, application monitoring, and every other security task? Do you start to see the problem with bolting on security? Logging and auditing are embeddable – most everything else is not. When the Cloud Security Alliance advised reinvesting some cloud savings back into security, I don't think this is quite what they had in mind.
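For a sense of what a masking stage in front of an analytics cluster does per record, here is a small sketch of deterministic masking: replacing direct identifiers with keyed tokens before data is loaded. This is not Netflix's pipeline; the column names and key handling are hypothetical, and at that scale this logic would run as a distributed job rather than a single filter script.

```python
# Sketch of the per-record work a masking/ETL stage does before data lands
# in the analytics cluster: replace direct identifiers with keyed,
# deterministic tokens so records still join without exposing raw values.
# Column names and the masking key are hypothetical illustration choices.
import csv
import hashlib
import hmac
import sys

MASKING_KEY = b"rotate-me-and-keep-me-out-of-the-cluster"
SENSITIVE = {"customer_id", "email"}          # columns to tokenize

def tokenize(value: str) -> str:
    # Deterministic keyed hash: the same input always yields the same token,
    # so joins and group-bys still work on the masked data.
    return hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

reader = csv.DictReader(sys.stdin)
writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
writer.writeheader()
for row in reader:
    for col in SENSITIVE & set(row):
        row[col] = tokenize(row[col])
    writer.writerow(row)
```

Deterministic tokens preserve joins and aggregations on the masked columns; the trade-off is that the masking key becomes something you must protect at least as carefully as the data it hides.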


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.