Securosis

Research

Understanding and Selecting RASP 2019: Use Cases

Updated 9-13 to include business requirements

The primary function of RASP is to protect web applications against known and emerging threats. In some cases it is deployed to block attacks at the application layer before vulnerabilities can be exploited; in other cases a RASP tool allows a request to proceed until it detects an attack, and then blocks the action. Astute readers will notice that these are basically the classic use cases for Intrusion Detection Systems (IDS) and Web Application Firewalls (WAF). So why look for something new, if other tools in the market already provide the same application security benefits? The answer lies not in what RASP does, but in how it does it, which makes it more effective in a wide range of scenarios. Let's delve into what clients are asking for, so we can bring this into focus.

Primary Market Drivers

RASP is a relatively new technology, so current market drivers are tightly focused on addressing the security needs of two distinct "buying centers" which have been largely unaddressed by existing security products. We discovered this important change since our last report in 2017 through hundreds of conversations with buyers, who expressed remarkably consistent requirements. The two buying centers are security and application development teams. Security teams are looking for a reliable WAF replacement without burdensome management requirements, while development teams ask for a security technology which protects applications within the framework of their existing development processes. The security team requirement is controversial, so let's start with some background on WAF functions and usability. It is essential to understand the problems driving firms toward RASP. Web Application Firewalls typically employ two methods of threat detection: blacklisting and whitelisting. Blacklisting is detection – and often blocking – of known attack patterns spotted within incoming application requests.
SQL injection is a prime example. Blacklisting is useful for screening out many basic attacks against applications, but new attack variations keep showing up, so blacklists cannot stay current, and attackers keep finding ways to bypass them – SQL injection's many variants illustrate this well. But whitelisting is where WAFs provide their real value. A whitelist is created by watching and learning acceptable application behaviors, recording legitimate behaviors over time, and blocking any request which does not match the approved behavior list. This approach offers substantial advantages over blacklisting: the list is specific to the application monitored, which makes it feasible to enumerate good functions – instead of trying to catalog every possible malicious request – and therefore easier (and faster) to spot undesirable behavior. Unfortunately, developers complain that in the normal course of application deployment, a WAF can never complete whitelist creation – 'learning' – before the next version of the application is ready for deployment. The argument is that WAFs are inherently too slow to keep up with modern software development, so they devolve to blacklist enforcement. Developers and IT teams alike complain that WAF is not fully API-enabled, and that setup requires major manual effort. Security teams complain they need full-time personnel to manage and tweak rules. And both groups complain that, when they try to deploy into Infrastructure as a Service (IaaS) public clouds, the lack of API support is a deal-breaker. Customers also complain of deficient vendor support beyond basic "virtual appliance" scenarios – including a lack of support for cloud-native constructs like application auto-scaling, ephemeral application stacks, templating, and scripting/deployment support for the cloud. As application teams become more agile, and as firms expand their cloud footprint, traditional WAF becomes less useful.
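The contrast between the two detection methods can be sketched in a few lines of illustrative Python. Everything here – the patterns, the endpoints, the request shape – is invented for the example; real WAF rulesets and learned behavior profiles are vastly larger:

```python
import re

# Blacklist: enumerate known-bad patterns -- inherently incomplete.
BLACKLIST = [
    re.compile(r"(?i)union\s+select"),   # classic SQL injection signature
    re.compile(r"(?i)<script"),          # reflected XSS signature
]

# Whitelist: enumerate learned-good behavior -- specific to one application.
WHITELIST = {
    ("GET", "/products"),
    ("POST", "/cart/add"),
}

def blacklist_allows(method, path, body):
    """Block only requests matching a known attack signature."""
    return not any(p.search(body) for p in BLACKLIST)

def whitelist_allows(method, path, body):
    """Block anything not previously learned as legitimate."""
    return (method, path) in WHITELIST

# A novel attack variant slips past the blacklist...
print(blacklist_allows("POST", "/cart/add", "id=1;exec xp_cmdshell"))  # True (missed)
# ...but an unlearned endpoint fails the whitelist outright.
print(whitelist_allows("GET", "/admin", ""))  # False (blocked)
```

The sketch shows why the whitelist only works if 'learning' finishes: the approved-behavior set must be rebuilt for every new application version, which is exactly where fast-moving development teams say WAF falls behind.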
To be clear, WAF can provide real value – especially commercial WAF "Security as a Service" offerings, which focus on blacklisting and additional protections like DDoS mitigation. These commonly run as a cloud proxy service, filtering requests "in the cloud" before they reach your application and/or RASP solution. But they are limited to a 'half-a-WAF' role, without the sophistication or integration to leverage whitelisting. Traditional WAF platforms continue to work for on-premise applications with slower deployment cadences, where the WAF has time to build and leverage a whitelist. So existing WAF is largely not being "ripped and replaced", but it goes largely unused in the cloud and by more agile development teams. Security teams are therefore looking for an effective application security tool to replace WAF – one which is easier to manage. They need it to cover application defects and technical debt, because not every defect can be fixed in code in a timely fashion. Developer requirements are more nuanced: they cite the same end goal, but tend to ask which solutions can be fully embedded into existing application build and certification processes. To work with development pipelines, security tools need to go the extra mile: protecting against attacks while accommodating the disruption underway in the developer community. A solution must be as agile as application development, which often starts with compatible automation capabilities. It needs to scale with the application, typically by being bundled with the application stack at build time. It should 'understand' the application and tailor its protection to the application runtime. And a security tool should not require developers to be security experts. Development teams working to "shift left" – to get security metrics and instrumentation earlier in their process – want tools which work in pre-production as well as production.
RASP offers a distinct blend of capabilities and usability options which make it a good fit for these use cases. This is why, over the last three years, we have been fielding several calls each week to discuss it.

Functional Requirements

The market drivers mentioned above change traditional functional requirements – the features buyers are looking for. Effectiveness: This seems like an odd buyer requirement. Why buy a product which does not actually work? The short answer is 'false positives' which waste time and effort. The longer answer is that many security tools don't work well, produce too many false positives to be usable, or require so much maintenance that building your own bespoke tool seems like a better option.


Understanding and Selecting RASP: 2019

During our 2015 DevOps research conversations, developers consistently turned the tables on us, asking dozens of questions about embedding security into their development process. We were surprised to discover how much developers and IT teams are taking larger roles in selecting security solutions, working to embed security products into tooling and build processes. Just as they use automation to build and test product functionality, they automate security too. But the biggest surprise was that every team asked about RASP: Runtime Application Self-Protection. Each team was either considering RASP or already engaged in a proof-of-concept with a RASP vendor. This was typically in response to difficulties with existing Web Application Firewalls (WAF) – most teams still carry significant "technical debt", which requires runtime application protection. Since 2017 we have engaged in over 200 additional conversations on what gradually evolved into 'DevSecOps' – with both security and development groups asking about RASP, how it deploys, and the benefits it can realistically provide. These conversations solidified the requirement for more developer-centric security tools which offer the agility developers demand, provide metrics prior to deployment, and either monitor or block malicious requests in production.

Research Update

Our previous RASP research was published in the summer of 2016. Since then Continuous Integration for application build processes has become the norm, and DevOps is no longer considered a wild idea. Developers and IT folks have embraced it as a viable and popular approach to producing more reliable application deployments. But it has raised the bar for security solutions, which now need to be as agile and embeddable as developers' other tools to be taken seriously. The rise of DevOps has also raised expectations for integration of security monitoring and metrics.
We have witnessed the disruptive innovation of cloud services, with companies pivoting from "We are not going to the cloud." to "We are building out our multi-cloud strategy." in three short years. These disruptive changes have spotlit the deficiencies of WAF platforms: both their lack of agility and their inability to go "cloud native". Similarly, we have observed advancements in RASP technologies and deployment models. With all these changes it has become increasingly difficult to differentiate one RASP platform from another. So we are kicking off a refresh of our RASP research. We will dive into the new approaches, deployment models, and revised selection criteria for buyers.

Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically offer the following capabilities:

  • Unpack and inspect requests in the application context, rather than at the network or HTTP layer
  • Monitor and block application requests; products can sometimes alter requests to strip out malicious content
  • Fully functional through RESTful APIs
  • Protect against all classes of application attacks, and detect whether an attack would succeed
  • Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
  • Instrument application functions and report on usage

As with all our research, we welcome public participation in comments to augment or discuss our content. Securosis is known for research positions which often disagree with vendors, analyst firms, and other researchers, so we encourage civil debate and contribution. The more you add to the discussion, the better the research! Next we will discuss RASP use cases and how they have changed over the last few years.
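To make "inspect requests in the application context" and "monitor or block" concrete, here is a toy Python WSGI wrapper. This is purely illustrative – the class name and crude heuristic are invented, and a real RASP product instruments the language runtime far more deeply than a request wrapper can:

```python
class RaspMiddleware:
    """Toy RASP-style wrapper: inspects each request inside the
    application process, and can either monitor (log an alert) or
    block before the handler runs. Illustrative sketch only."""

    def __init__(self, app, block=True):
        self.app = app      # the wrapped WSGI application
        self.block = block  # True = blocking mode, False = monitor mode

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if "'" in query or "--" in query:  # crude SQL injection heuristic
            if self.block:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"blocked"]
            print(f"ALERT: suspicious request: {query}")  # monitor-only mode
        return self.app(environ, start_response)
```

Because the wrapper runs inside the application, it ships with the application stack at build time and scales with it automatically – the deployment property the use cases above keep demanding.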


Firestarter: Multicloud Deployment Structures and Blast Radius

In this, our second Firestarter on multicloud deployments, we start digging into the technological differences between the cloud providers. We begin with how to organize your account(s). Each provider uses different terminology, but all support similar hierarchies. From the overlay of AWS organizations to the org-chart-from-the-start of an Azure tenant, we dig into the details and make specific recommendations. We also discuss the inherent security barriers and cover a wee bit of IAM. Watch or listen:


DisruptOps: Breaking Attacker Kill Chains in AWS: IAM Roles

Over the past year I've seen a huge uptick in interest in concrete advice on handling security incidents inside the cloud, with cloud-native techniques. As organizations move their production workloads to the cloud, it doesn't take long for security professionals to realize that the fundamentals, while conceptually similar, are quite different in practice. One of those core concepts is the kill chain, a term first coined by Lockheed Martin to describe the attacker's process. Break any link and you break the attack, so this maps well to combining defense in depth with the active components of incident response. Read the full post at DisruptOps.


Firestarter: So you want to multicloud?

This is the first in a series of Firestarters covering multicloud. Using more than one IaaS cloud service provider is, well, a bit of a nightmare. This is widely recognized by anyone with hands-on cloud experience, but that doesn't mean reality always matches our desires. From executives worried about lock-in to M&A activity, we are finding that most organizations are being pulled into multicloud deployments. In this first episode we lay out the top-level problems and recommend some strategies for approaching them. Watch or listen:


What We Know about the Capital One Data Breach

I'm not a fan of dissecting complex data breaches when we don't have any information. In this case we know more than usual, thanks to the details in the complaint filed by the FBI. I want to be very clear that this post isn't to blame anyone, and we have only the most basic information on what happened. The only person we know is worthy of blame here is the attacker. As many people know, Capital One makes heavy use of Amazon Web Services. We know AWS was involved in the attack because the federal complaint specifically mentions S3. But this wasn't a public S3 bucket. Again, all from the filed complaint:

  • The attacker discovered a server (likely an instance – it had an IAM role) with a misconfigured firewall. It presumably had a software vulnerability or was vulnerable due to a credential exposure.
  • The attacker compromised the server and extracted its IAM role credentials. These ephemeral credentials allow AWS API calls. Role credentials are rotated automatically by AWS, and are much more secure than static credentials, but with persistent access you can obviously update credentials as needed.
  • Those credentials (an IAM role with 'WAF' in the title) allowed listing S3 buckets and read access to at least some of them. This is how the attacker exfiltrated the files.
  • Some buckets (maybe even all) were apparently encrypted, and a lot of the data within those files (which included credit card applications) was encrypted or tokenized. But the impact was still severe.
  • The attacker exfiltrated the data and then discussed it in Slack and on social media. Someone in contact with the attacker saw that information, including attack details on GitHub, and reported it to Capital One through their reporting program.
  • Capital One immediately involved the FBI and very quickly closed the misconfigurations. They also began their own investigation, and were able to determine exactly what happened very quickly, likely through CloudTrail logs.
Those contained the commands issued by that IAM role from that server (which are very easy to find). They could then trace back the associated IP addresses. There are many other details in the complaint on how they found the attacker, and it looks like Capital One did quite a bit of the investigation themselves. So: misconfigured firewall (Security Group?) > compromised instance > IAM role credential extraction > bucket enumeration > data exfiltration. Followed by a rapid response and public notification. As a side note, it looks like the attacker may have been a former AWS employee, but nothing indicates that was a factor in the breach. People will say the cloud failed here, but we saw breaches like this long before the cloud was a thing. Containment and investigation seem to have actually run far faster than would have been possible on traditional infrastructure. For example, Capital One didn't need to worry about the attacker turning off local logging – CloudTrail captures everything that touches AWS APIs. Normally we hear about these incidents months or years later, but in this case we went from breach to arrest and disclosure in around two weeks. I hope that someday Capital One will be able to talk about the details publicly, so the rest of us can learn. No matter how good you are, mistakes happen. The hardest problem in security is solving simple problems at scale. Because simple doesn't scale, and what we do is damn hard to get right every single time.
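That CloudTrail trail of evidence can be approximated with a short sketch. The field names (`eventSource`, `eventName`, `userIdentity.arn`, `sourceIPAddress`) come from the standard CloudTrail record schema, but the role name, account ID, and sample records below are invented for illustration:

```python
# S3 calls that indicate enumeration or bulk reads with role credentials.
SUSPECT_EVENTS = {"ListBuckets", "ListObjects", "GetObject"}

def suspicious_s3_calls(records, role_name):
    """Return (eventName, sourceIPAddress) pairs for S3 enumeration and
    read calls made with a given IAM role's assumed-role credentials --
    the same evidence investigators can pull from CloudTrail logs."""
    hits = []
    for r in records:
        arn = r.get("userIdentity", {}).get("arn", "")
        if (r.get("eventSource") == "s3.amazonaws.com"
                and r.get("eventName") in SUSPECT_EVENTS
                and f":assumed-role/{role_name}/" in arn):
            hits.append((r["eventName"], r.get("sourceIPAddress")))
    return hits

# Invented sample records mimicking CloudTrail's JSON layout:
sample = [
    {"eventSource": "s3.amazonaws.com", "eventName": "ListBuckets",
     "sourceIPAddress": "203.0.113.5",
     "userIdentity": {"arn": "arn:aws:sts::111122223333:assumed-role/example-WAF-Role/i-0abc"}},
    {"eventSource": "ec2.amazonaws.com", "eventName": "DescribeInstances",
     "sourceIPAddress": "203.0.113.5",
     "userIdentity": {"arn": "arn:aws:sts::111122223333:assumed-role/example-WAF-Role/i-0abc"}},
]
print(suspicious_s3_calls(sample, "example-WAF-Role"))
# [('ListBuckets', '203.0.113.5')]
```

Matching the role's API activity to source IP addresses is exactly the kind of trace-back that is trivial when CloudTrail is on, and nearly impossible when an attacker controls local logging.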


DisruptOps: Build Your Own Multi-Cloud Security Monitoring in 30 Minutes or Less with StreamAlert

One of the most difficult problems in cloud security is building comprehensive multi-account/multi-cloud security monitoring and alerting. I'd say maybe 1 out of 10 organizations I assess or work with has something effective in place when I first show up. That's why I added a major monitoring lab based on Airbnb's StreamAlert project to the Securosis Advanced Cloud Security and Applied DevSecOps training class (we still have some spots available for our Black Hat 2019 class). Read the full post at DisruptOps.


Apple Flexes Its Privacy Muscles

Apple events follow a very consistent pattern, which rarely changes beyond the details of the content. This consistency has gradually become its own language. Attend enough events and you start to pick up the deliberate undertones Apple wants to communicate, but not express directly. They are the facial and body expressions beneath the words of the slides, demos, and videos. Five years ago I walked out of the WWDC keynote with a feeling that those undertones were screaming a momentous shift in Apple’s direction. That privacy was emerging as a foundational principle for the company. I wrote up my thoughts at Macworld, laying out my interpretation of Apple’s privacy principles. Privacy was growing in importance at Apple for years before that, but that WWDC keynote was the first time they so clearly articulated that privacy not only mattered, but was being built into foundational technologies. This year I sat in the WWDC keynote, reading the undertones, and realized that Apple was upping their privacy game to levels never before seen from a major technology company. That beyond improving privacy in their own products, the company is starting to use its market strength to pulse privacy throughout the tendrils that touch the Apple ecosystem. Regardless of motivations – whether it be altruism, the personal principles of Apple executives, or simply shrewd business strategy – Apple’s stance on privacy is historic and unique in the annals of consumer technology. The real question now isn’t whether they can succeed at a technical level, but whether Apple’s privacy push can withstand the upcoming onslaught from governments, regulators, the courts, and competitors. Apple has clearly explained that they consider privacy a fundamental human right. Yet history is strewn with the remains of well-intentioned champions of such rights. 
How privacy at Apple changed at WWDC19

When discussing these shifts in strategy, at Apple or any other technology firm, it's important to keep in mind that the changes typically start years before outsiders can see them, and are more gradual than we perceive. Apple's privacy extension efforts started at least a couple of years before WWDC14, when Apple first required privacy protections to participate in HomeKit and HealthKit. The most important privacy push from WWDC19 is Sign In with Apple, which offers benefits to both consumers and developers. In WWDC sessions it became clear that Apple is using a carrot-and-stick approach with developers: the stick is that App Review will require support for Apple's new service in apps which leverage competing offerings from Google and Facebook, but in exchange developers gain Apple's high security and fraud prevention. Apple IDs are vetted by Apple and secured with two-factor authentication, and Apple provides developers with the digital equivalent of a thumbs-up or thumbs-down on whether a request is coming from a real human being. Apple uses the same mechanisms to secure iCloud, iTunes, and App Store purchases, so this seems to be a strong indicator. Apple also emphasized that it extends this privacy to developers themselves: it isn't Apple's business to know how developers engage with users inside their apps. Apple serves as an authentication provider and collects no telemetry on user activity. This isn't to say that Google and Facebook abuse their authentication services; Google denies this accusation and offers features to detect suspicious activity. Facebook, on the other hand, famously abused phone numbers supplied for two-factor authentication, as well as a wide variety of other user data. The difference between Sign In with Apple and previous privacy requirements within the iOS and Mac ecosystems is that this feature extends Apple's privacy reach beyond its own walled garden.
Previous requirements, from HomeKit to data usage limitations on apps in the App Store, really only applied to apps on Apple devices. This is technically true for Sign In with Apple as well, but practically speaking the implications extend much further. When developers add Apple as an authentication provider on iOS, they also need to add it on other platforms if they expect customers to ever use anything other than Apple devices. Either that or support a horrible user experience (which, I hate to say, we will likely see plenty of). Once you create your account with an Apple ID, there are considerable technical complexities to supporting non-Apple login credentials for that account. So providers will likely support Sign In with Apple across their platforms, extending Apple's privacy reach beyond its own.

Beyond sign-in

Privacy permeated WWDC19 in both presentations and new features, but two more features stand out as examples of Apple extending its privacy reach: a major update to Intelligent Tracking Prevention for web advertising, and HomeKit Secure Video. Privacy preserving ad click attribution is a surprisingly ambitious effort to drive privacy into the ugly user and advertising tracking market, and HomeKit Secure Video offers a new privacy-respecting foundation for video security firms which want to be feature-competitive without the mess of building (and securing) their own back-end cloud services. Intelligent Tracking Prevention is a Safari feature to reduce the ability of services to track users across websites. The idea is that you can and should be able to enable cookies for one trusted site, without additional trackers monitoring you as you browse to other sites. Cross-site tracking is endemic to the web, with typical sites embedding dozens of trackers. This is largely to support advertising and answer a key marketing question: did an ad lead you to visit a target site and buy something?
Effective tracking prevention is an existential risk to online advertisements and the sites which rely on them for income, but this is almost completely the fault of overly intrusive companies. Intelligent Tracking Prevention (combined with other browser privacy and security features) is the stick, and privacy preserving ad click attribution is the corresponding carrot: it promises to enable advertisers to track conversion rates without violating user privacy. An upcoming Safari feature, and a proposed web standard, it has browsers remember ad clicks for seven days. If


DisruptOps: The Security Pro’s Quick Comparison: AWS vs. Azure vs. GCP

I've seen a huge increase in the number of questions about cloud providers beyond AWS over the past year, especially in recent months. So I decided to write up an overview comparison over at DisruptOps. This will be part of a slow-roll series going into the differences across the major security program domains – including monitoring, perimeter security, and security management. Here's an excerpt: The problem for security professionals is that security models and controls vary widely across providers, are often poorly documented, and are completely incompatible. Anyone who tells you they can pick up these nuances in a few weeks or months with a couple of training classes is either lying or ignorant. It takes years of hands-on experience to really understand the security ins and outs of a cloud provider. … AWS is the oldest and most mature major cloud provider. This is both good and bad, because some of their enterprise-level options were basically kludged together from underlying services which weren't architected for the scope of modern cloud deployments. But don't worry – their competitors are often kludged together at lower levels, creating entirely different sets of issues. … Azure is the provider I run into the most when running projects and assessments. Azure can be maddening at times, due to lack of consistency and poor documentation. Many services also default to less secure configurations. For example, if you create a new virtual network and a new virtual machine on it, all ports and protocols are open. AWS and GCP always start with default deny, but Azure starts with default allow. … Like Azure, GCP is better centralized, because many capabilities were planned out from the start – unlike AWS features which were only added in recent years. Within your account, Projects are isolated from each other except where you connect services. Overall GCP isn't as mature as AWS, but some services – notably container management and AI – are class leaders.


Selecting Enterprise Email Security: the Buying Process

To wrap up this series we will walk through the process of narrowing down the shortlist, and then testing the products and/or services in play. With email it's less subjective, because malicious email is… well, malicious. But given the challenges of policy management at scale (discussed in our last post), you'll want to ensure a capable UX and sufficient reporting capabilities as well. Let's start with the first rule of buying anything: you drive the process. You'll have vendors who want you to use their process, their RFI/RFP language, their PoC guide, and their contract language. That is all well and good if you want to buy their product. But what you want is the best product to solve your problems, which means you need to drive your own selection process. We explained in our introduction that a majority of attacks start with a malicious email, so selecting the best platform remains critical for enterprises. You want to ensure your chosen vendor addresses the email-borne threats of not just today, but tomorrow as well. A simple fact of the buying process is that no vendor ever says "We're terrible at X, but you should buy us because Y is what's most important to you." Even though they should. It's up to you to figure out each vendor's real strengths and weaknesses and line them up against your requirements. That's why it's critical to have a firm handle on your requirements before you start talking to vendors. The first step is to define a short list of 2-3 vendors who appear to meet your needs. You accomplish this by talking to folks on all sides of the decision. Start with vendors, but also talk to friends, third parties (like us), and possibly resellers or managed service providers. When meeting vendors, stay focused on how their tool addresses your current threats and their expectations for the next wave of email attacks. Make any compliance or data protection issues (or both) very clear, because they drive the architecture and capabilities you need to test.
Don't be afraid to go deep with vendors. You will spend a bunch of time testing platforms, so you should ask every question you can to make an educated decision. The point of the short list is to disqualify products that won't work early in the process, so you don't waste time later.

Proof of Concept

Once you have assembled the short list it's time to get hands-on with the email security platforms, and run each through its paces in a Proof of Concept (PoC) test. The PoC is where sales teams know they have a chance to win or lose, so they bring their best and brightest. They raise doubts about competitors and highlight their own capabilities and successes. They have phone numbers for customer references handy. But forget all that now. You are running this show, and the PoC needs to follow your script – not theirs.

Preparation

Vendors design PoC processes to highlight their product's strengths and hide its weaknesses, so before you start any PoC be clear about your evaluation criteria. They don't need to be complicated. Your requirements should spell out the key capabilities you need, with a plan to further evaluate each challenger on squishier aspects such as set-up/configuration, change management, customization, user experience/ease of use, etc. With email it all starts with accuracy, so you'll want to see how well the email security platforms detect and block malicious email. Of course you could stop there and determine the winner based on who blocks 99.4%, which is better than 99.1%, right? Yes, we're kidding. You also need to pay attention to manageability at scale. Preparation involves figuring out the policies you'll want to deploy on the product. These policies need to be consistent across all the products and services you test. Here are some ideas on policies to think about:

  • Email routing
  • Blocked attacks (vs. quarantined)
  • Spam/phishing reporting
  • Email plug-in
  • Threat intelligence feeds to integrate
  • Disposition of email which violates policy
  • Attributes requiring email encryption
  • Integration with enterprise security systems: SIEM, SOAR, help desk

And we're sure there are a bunch of other policy drivers we missed. Work with each vendor's sales team to make sure you can exercise each product or service to its fullest capabilities. Make sure to track any additional policies, above and beyond those you defined for all competitors – you want an apples-to-apples comparison, but you also want to factor in additional capabilities offered by any competitor. One more thing: we recommend investing in screen capture technology. It is hard to remember what each tool did and how – especially after you have worked a few unfamiliar tools through the same paces. Capture as much video of the user experience as you can – it will come in handy as you reach the decision point. Without further ado, let's jump into the PoC.

Testing

Almost every email system (Exchange, Office 365, G Suite, etc.) provides some means of blocking malicious email, so that is the base level for comparison. The next question is whether to take an active or passive approach during the PoC. In an active test you introduce malicious messages (known malware and phishing messages) into the environment, to track whether the product or service catches messages which should be detected. A passive test runs the product against your actual mail stream, knowing it will see a bunch of spam, phishes, and attacks if you look at enough messages. To undertake an active test you need access to malicious messages – not a huge impediment, as there are sites which provide known phishing messages, and plenty of places to get malware for testing. Of course you'll want to take plenty of precautions to ensure you don't self-inflict a real outbreak.
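One safe way to seed an active test without handling live malware is a standardized harmless test string: GTUBE is the anti-spam equivalent of the EICAR anti-virus test file, and any compliant spam filter should flag a message containing it. A minimal sketch using Python's standard library (the addresses are placeholders):

```python
from email.message import EmailMessage

# GTUBE: the standard harmless anti-spam test string.
GTUBE = "XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X"

def build_test_message(sender, recipient):
    """Build a message a working spam filter should block or quarantine."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "PoC active test message"
    msg.set_content(f"This is a PoC active-test message.\n{GTUBE}\n")
    return msg

# Inject msg.as_string() into the mail stream under test, then record
# each product's disposition: blocked, quarantined, tagged, or missed.
msg = build_test_message("poc-test@example.com", "target@example.com")
```

Known-bad samples beyond this (real phishing messages, defanged malware) should be handled in an isolated test tenant, per the precautions above.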
There is risk in an active test, but it enables you to evaluate false negatives (missed malicious messages), which create far more damage than false positives (flagging


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.