Securosis

Research

OpenStack Security Guide Released

An OpenStack Security Guide epub was released this week, and among the contributors was our friend Andrew Hay. Trying to find this info before was like locating a piece of hay in a haystack (not an Andrew Hay – he would be considerably easier to find in a haystack). We use OpenStack for the Cloud Security Alliance training labs, and I had to figure out a lot of this myself through painful reading of barely legible documentation. The book was created in a 5-day sprint, so it's a little rough. Some sections are pretty light, but the authors intend to improve it over time. The sections on hardening the Keystone identity service, picking a hypervisor, hardening core services such as the message queue, and secure networking are pretty decent. You can't secure OpenStack just by reading this – you need to understand the platform first – but this guide will definitely point you in the right direction.


Want Privacy? Have Your Kids Browse for You

The FTC has issued new rules on data collection for minors: Now, the list of what counts as "personal information" has been expanded to include geolocation markers, IP addresses, pictures or audio of the child, and persistent cookies that can track users across sites. The rules also now apply to companies that make plug-ins or advertising networks, which often collect information but aren't thought of as discrete sites that fall under the rules. I'm pulling my kids from daycare and having them do all my browsing. Then I can sue Google and anyone else who tracks them.


The Battle over Active Defense Continues

One of our favorite friends, Jack Daniel, has a new post on Active Defense: If you make the claim that “active defense” is only a euphemism for “hacking back”, you are either hyping an agenda, or selling a (probably outdated) security model. Or perhaps you’ve just been misled by the previously mentioned shysters. By my count that’s three flavors of wrong, although one may be slightly less bitter. … Let’s start with “active defense”. It is not a new idea, and it doesn’t necessarily mean hacking back. It may encompass counterattacks, but there are a lot of active defenses far short of attack. I refer you back to my post on active defense definitions last summer. I really don't know where all the confusion is coming from – I meet almost no security professionals who don't understand the difference. It seems to be more of a press/PR issue.


API Gateways: Key Management

For developers, one of the most visible API gateway operations is key management. But, dear reader, this is not your father's key management – the kind laden with X.509, PKI, and baroque foofaraw that security teams had to beg developers to implement. This is 2013 and the keys are OAuth access keys! And developers are asking us for the keys too, so what should we do? Before we answer that question, for those of you who are not programmers, let's describe these "access keys" in a little detail. OAuth is a method for authorizing clients (end users and client applications) to use the third-party APIs served by the API gateway. It is essentially how developers give access to consumers without consumers needing to share information such as user name and password. OAuth relies upon a trusted identity service to vouch for the client and pass an authorization token to the API, which in turn gives the client access. OAuth enables four parties (a user or consumer, a client application created by a third-party developer, the owner of the APIs, and an identity service provider such as Google or Facebook) to cooperate to deliver services. As we have discussed, developers are not much keener on the theoretical underpinnings of different identity protocols than the consumers who use their applications. They just want to get their users access to the application so they can move on to more 'meaningful' development tasks – like building the client application itself… This shifts the responsibility for identity and authorization onto security teams, which is a new position for them: managing the process instead of cleaning up afterward. Rather than engaging toward the end of a project to conduct a vulnerability assessment, security teams may select the identity protocols to be used, establish identity requirements, and guide developers through the process of building them into their applications.
This is an unusual collaboration between developers and security – in both degree and kind. The role of the security team as leader for a portion of the development process sets them up as a true design and development partner.

Key setup & distribution

Setting up keys can be handled in several different ways, but the process is typically initiated through self-service features of the gateway (we told you it's not your father's PKI). The developer registers their application and client(s). The steps of the OAuth protocol dance vary by implementation, but the core generally includes:

  • Developer account: A master account for the developer, which could span multiple clients and services
  • Client ID: The key that identifies the consumer and grants access
  • Client secret: How the consumer authenticates
  • Client types: Gateways use these to distinguish between different clients such as iOS and Android
  • Resource: The URLs, redirects, and other services the client is requesting access to

Once this bootstrap process is complete – whatever variation your API gateway uses – the client application developer should have everything they need. Once the client has their authorization access token, they can call the APIs and access data. Each subsequent call to the APIs protected by the API gateway includes an OAuth access token, passed along with every call from the client app so the API can make access control decisions. This brings up an important part of OAuth's value proposition: the process of acquiring a token and the process of using a token are kept separate. One implication is that the enterprise security architect must ensure that although these two processes – token issuance and token usage – are independent, their policy and governance models are consistent. Users should only be allowed access to the APIs they are authorized for, and should not see other APIs or other users' data.
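The separation between token issuance and token usage can be made concrete with a small sketch. Everything here – class, method names, TTLs – is hypothetical, not any gateway's actual API; it only illustrates that the scopes granted at issuance are what gets enforced at usage time:

```python
import secrets
import time

class TokenService:
    """Hypothetical sketch showing token issuance and token usage as
    separate steps governed by one consistent policy."""
    def __init__(self, registered):
        self.registered = registered   # client_id -> secret, from registration
        self._tokens = {}              # token -> (client_id, scopes, expiry)

    def issue(self, client_id, client_secret, scopes):
        # Issuance side: authenticate the client registered with the gateway
        if self.registered.get(client_id) != client_secret:
            raise PermissionError("unknown client or bad secret")
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (client_id, frozenset(scopes), time.time() + 3600)
        return token

    def check(self, token, scope):
        # Usage side: the API asks whether this token grants this scope
        entry = self._tokens.get(token)
        if entry is None:
            return False
        _, scopes, expiry = entry
        return time.time() < expiry and scope in scopes

svc = TokenService({"mobile-app": "s3cret"})
tok = svc.issue("mobile-app", "s3cret", ["read:profile"])
assert svc.check(tok, "read:profile")        # granted at issuance, honored at use
assert not svc.check(tok, "write:profile")   # never granted, so denied
assert not svc.check("forged-token", "read:profile")
```

The point of the sketch is the last three lines: the usage-side check can only honor what the issuance-side policy granted, which is exactly the consistency the security architect must guarantee.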
The access rights requested at token issuance must match runtime behavior.

Key verification services

Developers may not be that interested in identity protocols, but they are all interested in whether their code works. Distributed applications are notoriously difficult to debug, so anything fundamental to operations must be tested. Once access keys are issued and ready for use, the API gateway should offer testing tools to ensure there are no surprises at runtime. The API provider should actively help validate the client code to protect their API! There are a number of considerations:

  • Ensure a production-like system is available for testing. Any networked application must deal with a myriad of issues such as ports, routing, and redirects. A token cannot simply be appended to access and refresh requests – each variant of API usage requires its own test cases.
  • Make simple tools available – many APIs include simple cURL scripts to test applications. For example: curl https://example.com/API/myservice -H 'Authorization: your OAuth access token'. The gateway should include several scripts to validate client usage of the API.
  • Provide documentation and guidance for more testing and debug functionality as needed for the client environment.

Key lifecycle management

OAuth isn't magic security dust, and using it doesn't make an application secure. API developers and consumers need to be clear on safe handling of OAuth tokens across their entire lifecycle. Some rules are straightforward, such as always using TLS/SSL. But most are context dependent, such as secure storage for tokens and safe handling of redirects. Two operations that generally require special attention in security policy are refresh and revocation. OAuth access tokens provide shorter-lived access but can create long-lived sessions through refresh tokens. The refresh token is effectively a protection against an access token being replayed. So each consumer may have two different types of tokens.
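The cURL-style spot check above can also be automated. A minimal pre-flight validator a gateway's testing tools might run (the function name is hypothetical, and OAuth 2.0 "Bearer" header style is assumed) could look like:

```python
def validate_request_headers(headers):
    """Hypothetical pre-flight check: does this request carry a usable
    OAuth bearer token? Mirrors what a gateway test tool might verify
    before the client ever hits the live API."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    problems = []
    if scheme != "Bearer":
        problems.append("Authorization scheme must be 'Bearer'")
    if not token:
        problems.append("access token is missing")
    return problems

# A well-formed request passes; a malformed one gets actionable feedback.
assert validate_request_headers({"Authorization": "Bearer abc123"}) == []
assert validate_request_headers({"Authorization": "abc123"}) == [
    "Authorization scheme must be 'Bearer'",
    "access token is missing",
]
```

Cheap checks like this catch the most common client mistakes (wrong header, missing token) before developers burn time debugging a live distributed system.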
Security policy makers should align these policies and make use of the separation between shorter-lived access tokens and longer-lived refresh tokens. Policy is not as simple as “one and done”. In addition to refreshing sessions, access revocation requires consideration. Token revocation may seem minor but anyone who has lost their mobile device can say with authority that it is nice to be able to log into twitter.com and turn off access to your lost mobile phone so its clients no longer
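The refresh and revocation handling described above can be sketched as follows. All TTLs, names, and methods are purely illustrative – real implementations live in the gateway or identity provider – but the shape is the same: short-lived access tokens, a longer-lived refresh token, and revocation that kills the session at the root:

```python
import time

class TokenStore:
    """Illustrative lifecycle sketch: short-lived access tokens paired
    with a longer-lived refresh token, plus revocation."""
    ACCESS_TTL = 900            # 15 minutes (illustrative)
    REFRESH_TTL = 86400 * 14    # two weeks (illustrative)

    def __init__(self, now=time.time):
        self.now = now
        self.access = {}    # access token -> expiry
        self.refresh = {}   # refresh token -> expiry
        self.revoked = set()

    def grant(self, access_tok, refresh_tok):
        t = self.now()
        self.access[access_tok] = t + self.ACCESS_TTL
        self.refresh[refresh_tok] = t + self.REFRESH_TTL

    def refresh_access(self, refresh_tok, new_access_tok):
        if refresh_tok in self.revoked or self.refresh.get(refresh_tok, 0) <= self.now():
            raise PermissionError("refresh token invalid")
        self.access[new_access_tok] = self.now() + self.ACCESS_TTL

    def revoke(self, refresh_tok):
        # e.g. the user reports a lost phone: kill the session at the root
        self.revoked.add(refresh_tok)

clock = [0.0]
store = TokenStore(now=lambda: clock[0])
store.grant("a1", "r1")
clock[0] += 1000                    # the 900-second access token has expired...
store.refresh_access("r1", "a2")    # ...but the refresh token still works
store.revoke("r1")
try:
    store.refresh_access("r1", "a3")
    raise AssertionError("revoked refresh token must not work")
except PermissionError:
    pass
```

Note how revocation targets the refresh token: even if a stolen access token rides out its last few minutes, the session cannot be renewed.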


The doctor is in the house (and knocking your site down)

Andy Ellis (yes, @csoandy) had a good educational post on DNS Reflection attacks. DrDoS (no, Digital Research DOS isn't making a comeback – dating myself FTW) has proven an effective way for attackers to scale Denial of Service (DoS) attacks to over 100 Gbps. Andy explains how DNS Reflection works, why it's hard to deal with, and what targets can do to defend themselves. The first line of defense is always capacity. Without enough bandwidth at the front of your defenses, nothing else matters. This needs to be measurable both in raw bandwidth, as well as in packets per second, because hardware often has much lower bandwidth capacity as packet sizes shrink. He also mentions filtering out DNS requests and protecting your DNS servers, among other tactics. If you haven't had the pleasure of being pummeled by a DoS attack, magnified by reflection, you probably will. So learning as much as you can and making sure you have proper defenses will help you keep sites up and running.
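Some back-of-the-envelope arithmetic (the byte counts are illustrative, not from Andy's post) shows why reflection scales so well for attackers:

```python
# A small spoofed DNS query can elicit a much larger response from an
# open resolver, so the attacker's bandwidth is multiplied "for free".
query_bytes = 64        # small query with a spoofed source address (illustrative)
response_bytes = 3000   # large EDNS0 response sent to the victim (illustrative)
amplification = response_bytes / query_bytes

attacker_uplink_gbps = 1.0
victim_traffic_gbps = attacker_uplink_gbps * amplification
print(f"amplification ~{amplification:.0f}x -> ~{victim_traffic_gbps:.0f} Gbps at the victim")
```

With numbers in this (hypothetical) range, a single gigabit of attacker bandwidth lands as tens of gigabits on the target, which is why raw capacity is the first line of defense.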


Black Hat Schedule

Our schedules are already filling up for Black Hat this year, so if you want to meet please drop us a line. And for those who want a real schedule, [James Arlen put one together for easy import into your calendar](https://www.google.com/calendar/ical/f9lvmur9pjc2r1oi7psi3li40s%40group.calendar.google.com/public/basic).


Standards don’t move fast enough

Branden Williams is exactly right: 2013 is a pivotal year for PCI DSS. A new version of the guidance will hit later this year. So why is 2013 so important for PCI DSS? In this next revision (which will be released this year, enforced in 2015, and retired at the end of 2017) the standard has to play catch up. It’s notoriously been behind the times when it comes to the types of attacks that merchants face (albeit, most merchants don’t even follow PCI DSS well enough to see if compliance could prevent a breach), but now it’s way behind the times on the technologies that drive business. Enforced in 2015. Yeah, 2015. You know, things change pretty quickly in technology – especially for attackers. But the rub is that the size and disruption of infrastructure changes for the large retailers who control PCI DSS mean they cannot update their stuff fast enough. So they only update the DSS on a 3-year cycle to allow time to implement the changes (and keep the ROC). Let's be clear: attackers are not waiting for the new version of PCI to figure out ways to bust new technologies. Did you think they were waiting to figure out how to game mobile payments? Of course not – but no specific guidance will be in play for at least 2 years. Regardless of whether it's too little, it's definitely too late. So what to do? Protect your stuff, and put PCI (and the other regulatory mandates) into the box where it belongs: a low bar you need to go beyond if you want to protect your data. Photo credit: “Don’t let this happen to you! too little, too late order coal now!” originally uploaded by Keijo Knutas


Friday Summary: June 28, 2013—“Summer’s here” edition

Normally by this time of year things slow down, people go on vacation, and we get to relax a bit, but not this year. At least not for me. It has been seven days a week here for a while, playing catch-up with all the freakin’ research projects going on. And I have wanted to comment on a ton of news items, but have not had the time. So this week’s summary consists of comments on a few headlines I have not otherwise had the chance to comment on. Here we go: All I can think about when I read these stories on NSA spying and Snowden news items: It is criminal for you, the public, to know our secrets. But it’s totally okay for us to spy on you. Nothing to worry about. Move along now. Love Square. Great product. Disruptive payment medium. But it has been reported they want to create a marketplace to compete with eBay, Amazon and – my interpretation, not something they have stated – craigslist. So let me ask you: Are they friggin’ nuts? Speaking of crazy, why would anyone claim HP is too late to enter the big data race? Has their tardiness in rolling out big data or big-data-like technologies hurt them in the SIEM space? No question. But general big data services is a very new market, and the race for leadership in packaged services has not even begun yet. Was I the only one shocked to learn RSA’s call for papers started this week? WTF? Didn’t I just get back from that conference? We are still a month away from Black Hat. It is currently 109F here in Phoenix, and all I want to do is find a cold beer and keep out of the heat. This just does not feel like the time to be thinking about presentation outlines… But if you want to present next February consider this a friendly reminder. For those three of you who have been emailing me about passwords and password managers because of my comments during the Key Management webcast last week, it’s okay. We will continue to use passwords here and there. I like password managers. Corporate and personal. I use them every day.
But passwords will be replaced by tokens and identity certificates for Internet services because a) identity tokens allow us to do much more with identity and authorization than we can with passwords, and b) tokens remove the need to store password hashes on the server. Which is another way of saying passwords can’t do what certificates do. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian’s white paper on 10 Common Database Vulnerabilities. Mike’s DR Post: The Slippery Slope Of Security Invisibility. Rich’s DR Post: Security Needs More Designers, Not Architects. Adrian’s Dark Reading post Database Configuration Standards. Adrian’s Key Management webcast. Rich’s Macworld article on Apple’s Security Strategy. It’s older, but I just saw Mike’s Security Myth-busting video and it’s funny. Favorite Securosis Posts Rich: Adrian on SQLi. He gets a little pedantic, but that’s what we love about him. Mike Rothman: Security Analytics with Big Data: Deployment Issues. Adrian did a fantastic job with this series. Read all the posts and learn about the future of SIEM… Adrian Lane: Top 10 Stupid Sales/Press/Analyst Presentation Tricks. We see stupid human tricks every week and I don’t think most companies understand how they or their slide decks are perceived. Other Securosis Posts Database Denial of Service [New Series]. API Gateways: Developer Tools. iOS 7 Adds Major Data Security Improvements. Incite 6/26/2013: Camp Rules. The Black Hole of DLP. Automation Awesomeness and Your Friday Summary (June 21, 2013). Full Disk Encryption (FDE) Advice from a Reader. Scamables. Talking Head Alert: Adrian on Key Management. How China Is Different. Microsoft Offers Six Figure Bounty for Bugs. Project Communications. Network-based Malware Detection 2.0: Deployment Considerations. Favorite Outside Posts Adrian Lane: Data Leakage In A Google World. People forget that Google is a powerful tool, which often finds data companies did not want exposed.
It’s a tool to hack with, and yes, a tool to phish with. Chris Pepper: Solaris patching is broken because Oracle is dumb and irresponsible. Feh. Mike Rothman: Wences Casares: Teach Your Children to be Doers. Great post here by a start-up CEO about how to teach your kids to get things done. If only all those “entitlement kids” got a similar message from their parents. Dave Lewis: Opera Software Hit by ‘Infrastructure Attack’; Malware Signed with Stolen Cert Rich: TheStreet on Brian Krebs. I think it’s awesome that Brian is doing so well – he writes circles around everyone else on the cybercrime beat. Needless to say, we are fans of the low-overhead direct model. Seems to be working for us at least. Research Reports and Presentations Email-based Threat Intelligence: To Catch a Phish. Network-based Threat Intelligence: Searching for the Smoking Gun. Understanding and Selecting a Key Management Solution. Building an Early Warning System. Implementing and Managing Patch and Configuration Management. Defending Against Denial of Service (DoS) Attacks. Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments. Tokenization vs. Encryption: Options for Compliance. Pragmatic Key Management for Data Encryption. The Endpoint Security Management Buyer’s Guide. Top News and Posts Oracle releases critical security update for Java, Apple follows suit. The DEA Seized Bitcoins In A Silk Road Drug Raid. Turkey seeks to tighten control over Twitter. Why Snowden Asked Visitors in Hong Kong to Refrigerate Their Phones. Snowden distributed encrypted copies of NSA docs around the world. Pentagon’s failed flash drive ban policy: A lesson for every CIO. U.S. Surveillance Is Not Aimed at Terrorists. Attackers sign malware using crypto certificate stolen from Opera Software. Software Flaw Threatens LG Android Smartphones. South Korean cyberattacks. Researcher nets $20K for finding serious Facebook flaw. Vast majority of malware attacks spawned from legit sites.
More from Google’s Safe Browsing disclosures. Google Adds Malware and Phishing Data to Transparency Report. HP Confirms Backdoor In StoreOnce Backup Product Line. Blog Comment of the Week This week’s best comment goes to Guillaume, in response to iOS 7 Adds Major Data Security Improvements. The share sheet thing is pretty


Database Denial of Service [New Series]

We have begun to see a shift in Denial of Service (DoS) tactics by attackers, moving up the stack from networks to servers and from servers to the application layer. Over the last 18 months we have also witnessed a new wave of vulnerabilities and isolated attacks against databases, all related to denial of service. We have seen recent issues with Oracle with invalid object pointers, a serious vulnerability in the workload manager, the TNS listener barfing on malformed packets, a PostgreSQL issue with unrestricted networking access that was rumored to allow file corruption to crash the database, the IBM DB2 XML feature, and multiple vulnerabilities in MySQL, including the remote ability to crash the database. A vulnerability does not mean that exploitation has occurred, but we hear more off-the-record accounts of database attacks. We cannot quantify the risk or likelihood of attack, but this seems like a good time to describe these attacks briefly and offer some mitigation suggestions. It may come as a surprise, but database denial of service attacks have been common over the last decade. We don’t hear much about them because they are lost amid the din of SQL injection (SQLi) attacks, which cause more damage and offer attackers a wider range of options. All things being equal, attackers generally prefer SQLi attacks as more directly useful for their objectives. Database DoS doesn’t make headlines compared to SQLi because injection attacks often take control of the database and can be more damaging. But interruption of service is no longer a trivial matter. Ten years ago it was still common practice to take a database or application off the Internet while an attack was underway. But now web services and the databases tied into them are critical business infrastructure. Take down a database and a company loses money – quite possibly a lot of money.
As Mike noted in his recent research on Denial of Service attacks, the most common DoS approaches are “flooding the pipes” rather than “exhausting the servers”. Flooding the pipes is accomplished by sending so many network packets that they simply overwhelm the network equipment. This type of volumetric attack is the classic denial of service, most commonly performed as a Distributed Denial of Service (DDoS) because it takes hundreds or thousands of malicious clients to flood a large network. Legitimate network traffic is washed away in the tide of junk, and users cannot reach servers. Exhausting servers is different – these attacks target software running on the server, such as the operating system or web application components, to waste all its CPU, memory, or other resources and effectively disable it. These attacks can target either vulnerabilities or features of application stacks to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions. The insidious part of these attacks is that, as you consume more than roughly 80% of hardware or software resources, the platforms become less efficient. The closer they get to maximum utilization, the more they slow down. Push them to the limit and they may simply lock up, waiting for resources to become available. In some cases a reduction in load does not bring servers back – you need to reset or restart them. Databases have their own networking features and offer a full complement of services, so both these models apply. The motivation for these attacks is very similar to traditional DoS attacks. Hacktivism is a major trend, and taking down a major commercial web site is a weapon for people who dislike a company but lack legal or financial means to voice their complaints. “Covering attacks” are very common, where criminals flood servers and networks – including security systems – in order to mask an ongoing attack.
Common scenarios also include shutting down a competitor, criminal racketeers threatening DoS and demanding ransom, and financial trading manipulation – the list goes on. The motivations behind database DoS are essentially the same. The current tactics are a response to a couple of new factors. Network and server defenses are getting better with the next generation of firewall technologies, and it has become nearly impossible to DoS cloud service providers with their seemingly limitless, redundant, and geographically dispersed resources. Attackers are looking for new ways to keep old crimes profitable. But attackers are not discriminatory – they are happy to exploit any piece of hardware or software that allows them to accomplish their attacks, including web applications and databases sitting atop servers. Database denial of service is conceptually no different than traditional DoS attacks at the server or application layers, but there are many more clever ways to create a denial of service attack against a database. Unlike DDoS you don’t need to throw everything including the kitchen sink at a site – often you just need to find a small logic flaw in a database function to push it over. Relational database platforms are some of the most complex application platforms in existence, so there is a lot of room for mischief. Attackers sometimes morph traditional protocol and server based denial of service attacks to move up the stack. But in most cases they exploit specific database features in novel ways to take down their targets. Current defensive systems are geared to block DoS-based network flooding and server attacks, so attackers are seeking greener fields in the application layer, where they can better blend their incursions with legitimate customer transactions. With protection resources poured into the lower layers, relatively little is done at the application layer, and virtually nothing to stop database attacks.
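The "small logic flaw" asymmetry is easy to demonstrate. The following sketch (an in-memory SQLite database, purely illustrative) shows how dropping a single join predicate turns two modest tables into a million-row result the server must materialize – the kind of query shape an attacker can provoke without any flood of traffic:

```python
import sqlite3

# Illustrative only: a missing join predicate (cartesian product) makes
# the database do a million rows of work from two 1,000-row tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER)")
conn.execute("CREATE TABLE b (id INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(1000)])
conn.executemany("INSERT INTO b VALUES (?)", [(i,) for i in range(1000)])

# Proper join: 1,000 rows. Cartesian product: 1,000,000 rows.
(joined,) = conn.execute(
    "SELECT COUNT(*) FROM a JOIN b ON a.id = b.id").fetchone()
(cartesian,) = conn.execute(
    "SELECT COUNT(*) FROM a, b").fetchone()
assert joined == 1_000
assert cartesian == 1_000_000
```

Scale the table sizes up to production numbers and a handful of such requests can consume the server's memory and CPU while looking like ordinary application traffic.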
Worse, application layer attacks are much more difficult to detect because most look like legitimate database requests! Our next post will take a look at the different classes of database DoS attacks. I will look at some historic examples of database DoS attacks and discuss current ones to help you understand the difficulty of defending databases from DoS.


API Gateways: Developer Tools

Our previous post discussed the first step in the development process: getting access to the API gateway through access provisioning. Now that you have access, it's time to discuss how the gateway supports your code development and deployment processes. An API gateway must accomplish two primary functions: help developers build, test, and deploy applications; and help companies control use of their API. They are part development environment and part operational security tool.

API Catalog

The API catalog is basically a menu of APIs, services, and support services that provide developers front-end integration to access back-office applications, external APIs (for mashups), data, and related services, along with all the supporting tools to build and deploy applications. Catalogs typically include APIs, documentation, coding help, build tools, configuration requirements, testing tools, guidance, and sample code for each supported function. They offer other relevant details such as network controls, access controls, integration options, orchestration, brokering and messaging options – all bundled into a management interface for selecting and configuring the services you want. Developer time is expensive, so anything that streamlines this process is a win. Security controls such as identity protocols are notoriously difficult to fully grasp and implement. If your security architects want developers to "do it right", this is the place to invest time to show them how. Traditionally security tools are bolted onto – or in front of – applications, generating howls of displeasure from developers who want neither the added complexity nor the performance impact. With third-party APIs things are different, as security is part of the core value. API gateways offer features that enable network, interface, and data security as part of the core feature set.
For example, it is faster and easier to enable built-in SAML or OAuth identity services than to build them from scratch – or worse, to build a password management system. Even better, the features are available at design time, before you assemble the application, so they can be bundled into the development process. Reference implementations are extremely helpful. For example, consider OAuth: if you look at 10 different companies' OAuth implementations you will probably find a dozen different implementations. Don't assume developers will just figure it all out – connect the dots. To have a chance at a secure deployment, developers need concrete guidance for security services – especially for things as abstract as identity protocols. Reference implementations show end-to-end examples of the identity protocol in practice. For a developer trying to "do it right" this is like finding diamonds in the backyard. The reference implementation is even more effective if it is backed up by testing tools that can verify developer implementations. Access management is a principal feature of API gateways. The gateway helps you enforce access controls, building authentication and authorization services into the API set. Gateways typically rely on token-based security services, and support one or more token services such as SAML and OAuth. All API gateways offer authentication support, and most integrate with other identity sources to support federation. Gateways provide basic role-based authorization support, sometimes with fine-grained authorization to constrain data access by user identity or endpoint device. Beyond identity protocols, some gateways offer services to defend against replay attacks and other forms of session hijacking. API gateways provide dynamic filtering of requests, allowing policy-based routing and response to API calls. Developers get tools to parse incoming calls, filter or transform messages, and then route them to appropriate services.
This facilitates modification of application function, debugging of application functions, and application of different security or compliance controls in response to user requests. Filters also provide a mechanism for sending requests to different locations, modifying workflow, or even sending requests to different applications. This flexibility is a powerful security capability, particularly for analysis of and protection against suspect clients – access to services and data can be dynamically adjusted. API gateway providers offer a range of pre-deployment tools to validate applications prior to deployment. Sandbox testing and runtime simulators both validate correct API usage, and can also verify that the application developer properly handles input variables and simulated attacks. Some test harnesses are provided with gateways and others are custom implementations by API service owners. Pre-deployment validation is a good way to ensure all third-party developers meet a minimum security standard, and that no single user becomes the proverbial weak link. If possible, tests should be executed as part of the normal integration process (e.g., Jenkins) so implementation quality can be continually tested.

Deployment Support

The API catalog provides options for how to build security into your application, but API gateways also offer deployment support. When you push APIs that connect the world to internal systems, you need to account for a myriad of threats at the network, protocol, application, and data layers. Denial of service, parser attacks, code injection, replay attacks, HTTP protocol abuse, and network sniffing are all things to consider. API gateways can optionally provide privacy and security for network sessions through SSL. Most also offer network firewall capabilities such as IP whitelisting, blacklisting, and signature-based detection.
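A gateway front door layering several of these controls – an IP allow list, role-based route authorization, and (deliberately naive) injection signature matching – might be sketched like this. All names, address ranges, and patterns are illustrative, not any vendor's actual API:

```python
import ipaddress

# All values illustrative: a real gateway loads these from policy configuration.
ALLOWED_NETS = [ipaddress.ip_network("203.0.113.0/24")]
ROLE_ROUTES = {
    "partner":  {("GET", "/api/orders"), ("POST", "/api/orders")},
    "readonly": {("GET", "/api/orders")},
}
BAD_PATTERNS = ["' OR 1=1", "<SCRIPT>", "UNION SELECT"]  # naive signatures

def admit(client_ip, role, method, path, payload):
    """Return (allowed, reason) – default deny at every step."""
    if not any(ipaddress.ip_address(client_ip) in net for net in ALLOWED_NETS):
        return False, "source address not on allow list"
    if (method, path) not in ROLE_ROUTES.get(role, set()):
        return False, "role not authorized for this route"
    if any(p in payload.upper() for p in BAD_PATTERNS):
        return False, "payload matched injection signature"
    return True, "ok"

assert admit("203.0.113.7", "partner", "POST", "/api/orders", '{"sku": 1}') == (True, "ok")
assert not admit("198.51.100.9", "partner", "GET", "/api/orders", "{}")[0]   # wrong network
assert not admit("203.0.113.7", "readonly", "POST", "/api/orders", "{}")[0]  # role denied
assert not admit("203.0.113.7", "partner", "POST", "/api/orders", "x' OR 1=1 --")[0]
```

Real gateways go well beyond signature matching (parser-aware validation, schema enforcement, rate limits); the sketch only shows the default-deny layering of network, identity, and content checks.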
While network security is a must-have for many, it is not really these gateways' core security value. The key security features are overall security of the API and message-level filtering. API gateways provide capabilities to detect code injection, cross-site scripting, and various encoding attacks; most also offer off-the-shelf filters for input validation and sanitization.

Logging, Monitoring, and Reporting

As application platforms, API gateways capture activity and generate audit logs. Sitting between developer applications and the API, they are perfectly positioned to capture API usage – useful for throttling, billing, and metering API access, as well as security. Log files are essential for security, operations, and compliance, so these teams all rely upon gateway audit trails. Most API gateways provide flexible configuration of which audit events are collected, record format, and record destination. Audit capabilities are mostly designed for the gateway owner rather than developers. But the audit trail captures sessions of all


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.