Incite 7/10/2013: Selfies

Before she left for camp, XX1 asked me to download her iPhone photos to our computer, so she could free up some space. Evidently 16GB isn’t enough for these kids today. What would Ken Olson say about that? (Dog yummy for those catching the reference.) I happened to notice that a large portion of her pictures were these so-called selfies. Not in a creeper, micro-managing Dad way, but in a curious, so that’s what the kids are up to today way. A selfie is where you take a picture of yourself (and your friends) with your camera phone. Some were good, some were bad. But what struck me was the quantity. No wonder she needed to free up space – she had all these selfies on her phone. Then I checked XX2 and the Boy’s iTouch devices, and sure enough they had a bunch of selfies as well.

I get it, kind of. I have been known to take a selfie or two, usually at a Falcons game to capture a quick memory. Or when the Boss and I were at a resort last weekend and we wanted to capture the beauty of the scene. My Twitter avatar remains a self-defense selfie, and has been for years. I haven’t felt the need to take a new selfie to replace it.

Then I made a critical mistake. I searched Flickr for selfies. A few are interesting, and a lot are horrifying. I get that some folks want to take pictures of themselves, but do you need to share them with the world? Come on, man (or woman)! There are some things we don’t need to see. Naked selfies (however pseudo-artistic) are just wrong.

But that’s more a statement about how social media has permeated our environment. Everyone loves to take pictures, and many people like to share them, so they do. On the 5th anniversary of the iTunes App Store, it seems like the path to success for an app is to do photos or videos. It worked for Instagram and Snapchat, so who knows… Maybe we should turn the Nexus into a security photo sharing app. Pivoting FTW.

As for me, I don’t share much of anything. I do a current status every so often, especially when I’m somewhere cool. But for the most part I figure you don’t care where I am, what my new haircut looks like (pretty much the same), or whether the zit on my forehead is pulsating or not (it is). I guess I am still a Luddite.

–Mike

Photo credit: “Kitsune #selfie” originally uploaded by Kim Tairi

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Continuous Security Monitoring
  • Defining CSM
  • Why. Continuous. Security. Monitoring?

Database Denial of Service
  • Attacks
  • Introduction

API Gateways
  • Key Management
  • Developer Tools
  • Access Provisioning
  • Security Enabling Innovation

Security Analytics with Big Data
  • Deployment Issues
  • Integration
  • New Events and New Approaches
  • Use Cases
  • Introduction

Newly Published Papers
  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System

Incite 4 U

If it’s code it can be broken: A fascinating interview in InfoWorld with a guy who is a US government-funded attacker. You get a feel for how he got there (he likes to hack things) and that they don’t view what they do as over the line – it’s a necessary function, given that everyone else is doing it to us. He maintains they have tens of thousands of 0-day attacks for pretty much every type of software. Nice, eh? But the most useful part of the interview for me was: “I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinated teams trying to exploit stuff. But we don’t have any large teams that I know of for defending ourselves. In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now.” Yeah, man! The offensive stuff is definitely sexy, but at some point we will need to focus on defense. – MR

Open to the public: A perennial area of concern with database security is user permission management, as Pete Finnigan discussed in a recent examination of default users in Oracle 12cR1. Default user accounts are a security problem because pretty much everything comes with default access credentials. That usually means a default password, or the system may require the first person to access the account to set a password. But regardless, it is helpful to know the 36 issues you need to immediately address after installing Oracle. Pete also notes the dramatic increase in use of PUBLIC permissions, a common enabler of 0-day database exploits. More stuff to add to your security checklist, and if you rely upon third-party assessment solutions it’s time to ask your provider for updated policies. By the way, this isn’t just an issue with Oracle, or databases for that matter. Every computing system has these issues. – AL

Want to see the future of networking? Follow the carriers… I started my career as a developer but I pretty quickly migrated down to the network. It was a truism back then (yes, 20+ years ago – yikes) that the carriers were the first to play around with and deploy new technologies, and evidently that is still true today. Even ostriches have heard of software-defined networking at this point. The long-term impact on network security is still not clear, but clearly carriers will be leading the way with SDN deployment, given their need for flexibility and agility. So those of you in the enterprise should be paying attention, because as inspection and policy enforcement (the basis of security) happens in software, it will have


RSA Acquires Aveksa

EMC has announced the acquisition of Aveksa, one of the burgeoning players in the identity management space. Aveksa will be moved into the RSA security division, and no doubt merged with existing authentication products. From the Aveksa blog:

… business demands and the threat landscape continue to evolve, and organizations now expect even more value from IAM platforms. As a standalone company, Aveksa began this journey by connecting our IAM platform to DLP and SIEM solutions – allowing organizations to connect identity context, access policies, and business processes to these parts of the security infrastructure. This has been successful, and also led us to recognize the massive and untapped potential for IAM as part of a broader security platform – one that includes Adaptive Authentication, GRC, Federation, and Security Analytics.

At first blush it looks like RSA made a good move, identifying their weakest solution areas and acquiring a firm that provides many of the missing pieces they need to compete. RSA has been trailing in this space, focusing most of its resources on authentication issues and filling gaps with partnerships rather than building their own. They have been trailing in provisioning, user management, granular role-based access, and – to a lesser extent – governance. Some of RSA’s recent product advancements, such as risk-based access control, directly address customer pain points. But what happens after authentication is the real question, and that is the question this purchase is intended to answer. Customers have been looking for platforms that offer the back-end plumbing needed to link together existing business systems, and the Aveksa acquisition correctly targets the areas RSA needs to bolster. It looks like EMC has addressed a need with a proven solution, and acquired a reasonable customer base for their money. We expect to see more moves like this in the mid-term as more customers struggle to coalesce authentication, authorization, and identity management issues – which have been turned on their heads by cloud and mobile computing demands – into more unified product suites.


Multitenancy is the Least Interesting Security Property of Cloud Computing

Today I was mildly snarky on the Security Metrics email list when a few people suggested that instead of talking about cloud computing we should talk about shared infrastructure. In their minds, ‘shared’ = ‘cloud’. I fully acknowledge that I may be misinterpreting their point, but this is a common thread I hear. Worse yet, very frequently when I discuss security risks, other security professionals key in on multitenancy as their biggest concern in cloud computing. To be honest, it may be the least interesting aspect of the cloud from a security perspective.

Shared infrastructure and applications are definitely a concern – I don’t mean to say they do not pose any risk. But multitenancy is more an emergent property of cloud computing than an essential characteristic – and yes, I am deliberately using NIST terms.

In my humble opinion – please tell me if I’m wrong in the comments – the combination of resource pooling (via abstraction) and orchestration/automation creates the greatest security risk. This is primarily for IaaS and PaaS, but can also apply to SaaS when it isn’t just a standard web app. With abstraction and automation we add a management layer that effectively network-enables direct infrastructure management. Want to wipe out someone’s entire cloud with a short bash script? Not a problem if they don’t segregate their cloud management and harden admin systems. Want to instantly copy the entire database and make it public? That might take a little PHP or Ruby code, but well under 100 lines. In neither of those cases is relying on shared resources a factor – it is the combination of APIs, orchestration, and abstraction.

These risks aren’t fully obvious until you start really spending time using and studying the cloud directly – as opposed to reading articles and research reports. Even our cloud security class only starts to scratch the surface, although we are considering running a longer version where we spend a bunch more time on it. The good news is that these are also very powerful security enablers, as you will see later today or tomorrow when I get up some demo code I have been working on.
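To illustrate how little it takes once the management plane is network-enabled, here is a minimal sketch of the “short bash script” scenario, rendered in Python with boto3 against an AWS-style API. The region, and the premise that an attacker holds unsegregated admin credentials, are our assumptions for illustration – this is not the demo code mentioned above.

```python
# A sketch of management-plane risk: with overly broad credentials, two API
# calls are enough to destroy an environment. No shared-tenant trickery
# involved - just APIs, orchestration, and abstraction.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# One call enumerates every instance the credentials can see
# (a real script would also handle pagination).
reservations = ec2.describe_instances()["Reservations"]
instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

# ...and one more terminates all of them.
if instance_ids:
    ec2.terminate_instances(InstanceIds=instance_ids)
```

The point is not the specific provider: any cloud whose admin plane is reachable over the network, with credentials that are not segregated and hardened, is exposed to exactly this class of script.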


How Not to Handle a Malware Outbreak

Malware is a pervasive problem in enterprises today. It can often be insidious as hell and difficult to ferret out. But sometimes the response to a malware outbreak defies basic common sense. The CIO for the Economic Development Administration (EDA) thought a scorched earth policy was the best approach… From the Department of Commerce audit report (.pdf):

EDA’s CIO concluded that the risk, or potential risk, of extremely persistent malware and nation-state activity (which did not exist) was great enough to necessitate the physical destruction of all of EDA’s IT components. EDA’s management agreed with this risk assessment and EDA initially destroyed more than $170,000 worth of its IT components, including desktops, printers, TVs, cameras, computer mice, and keyboards. By August 1, 2012, EDA had exhausted funds for this effort and therefore halted the destruction of its remaining IT components, valued at over $3 million. EDA intended to resume this activity once funds were available. However, the destruction of IT components was clearly unnecessary because only common malware was present on EDA’s IT systems.

And there was this:

Not only was EDA’s CIO unable to substantiate his assertion with credible evidence, EDA’s IT staff did not support the assertion of an infection in the e-mail server.

There are no words to express my complete amazement at this abjectly irresponsible waste of taxpayer dollars. The real rub from the report:

  • There was no widespread malware infection
  • There was no indication of an infection in the e-mail server

The fundamental disconnect here is mind-boggling.


Kudos: Microsoft’s App Store Security Policy

Today on the Microsoft Security Response Center Blog:

Under the policy, developers will have a maximum of 180 days to submit an updated app for security vulnerabilities that are not under active attack and are rated Critical or Important according to the Microsoft Security Response Center rating system. The updated app must be submitted to the store within 180 days of the first report that reproduces the issue. Microsoft reserves the right to take swift action in all cases, which may include immediate removal of the app from the store, and will exercise its discretion on a case-by-case basis.

But the best part:

If you have discovered a vulnerability in a store application and have unsuccessfully attempted to work with the developer to address it, you can request assistance by contacting secure@microsoft.com.

Clear, concise, and puts users first. My understanding is that Apple is also pretty tight on suspending vulnerable apps, but they don’t have it formalized into a visible policy with a single contact point. If anyone knows Google’s policy (formal or otherwise), please drop it in the comments, but that is clearly a different ecosystem.


Continuous Security Monitoring: Defining CSM

In our introduction to Continuous Security Monitoring we discussed the rapid advancement of attacks, and why that means you can never “get ahead of the threat”. You need to react faster to what’s happening, which requires shortening the window of exposure by embracing extensive security monitoring. We tipped our hats to both the PCI Council and the US government for requiring monitoring as a key aspect of their mandates. The US government pushed it a step further by including continuous in its definition of monitoring. We love the term ‘continuous’, but this one word has caused a lot of confusion among folks responsible for monitoring their environments. As we are prone to do, it is time to wade through the hyperbole to define what we mean by Continuous Security Monitoring, and then identify some of the challenges you will face in moving toward this ideal.

Defining CSM

We will not spend any time defining security monitoring – we have been writing about it for years. But now we need to delve into how continuous any monitoring really needs to be, given recent advances in attack tactics. Many solutions claim to offer “continuous monitoring”, but all too many simply scan or otherwise assess devices every couple of days – if that often. Sorry, but no. We have heard many excuses for why it is not practical to monitor everything continuously, including concerns about consumption of device resources, excessive bandwidth usage, and inability to deal with an avalanche of alerts. All those issues ring hollow because intermittent assessment leaves a window of exposure for attackers, and for critical devices you don’t have that luxury. Our definition of continuous is more in line with the dictionary definition:

con·tin·u·ous: adjective \kən-ˈtin-yü-əs\ – marked by uninterrupted extension in space, time, or sequence

The key word there is uninterrupted: always active. The constructionist definition of continuous security monitoring would be that the devices in question are monitored at all times – there is no window where attackers can make a change without it being immediately detected. But we are neither constructionist nor religious – we take a realistic and pragmatic approach, which means accepting that not every organization can or should monitor all devices at all times. So we include asset criticality in our usage of CSM.

Some devices have access to very important stuff. You know, the stuff that if leaked will result in blood (likely yours and your team’s) flowing through the halls. The stuff that just cannot be compromised. Those devices need to be monitored continuously. And then there is everything else. In the “everything else” bucket land all those devices you still need to monitor and assess, but not as urgently or frequently. You will monitor these devices periodically, so long as you have other methods to detect and identify compromised devices, like network analytics/anomaly detection and/or aggressive egress filtering.

The secret to success with CSM is choosing your high-criticality assets well, so we will get into that later in this series. Another critical success factor is discovering when new devices appear, classifying them quickly, and getting them into the monitoring system promptly. This requires strong process and technology to ensure you have visibility into all of your networks, can aggregate the data you need, and have sufficient computational horsepower for analysis.
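To make the two-bucket idea concrete, here is a minimal sketch of criticality-driven monitoring cadence. The tier names and intervals are illustrative assumptions, not prescriptions from the post:

```python
# A sketch of asset criticality driving monitoring cadence: critical assets
# get uninterrupted (continuous) monitoring, everything else a periodic one.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: str  # "critical" or "standard" (assumed tier names)

# Continuous means no window of exposure; 0 stands in for "always on".
SCAN_INTERVAL_HOURS = {"critical": 0, "standard": 48}

def monitoring_cadence(asset: Asset) -> str:
    interval = SCAN_INTERVAL_HOURS[asset.criticality]
    if interval == 0:
        return f"{asset.name}: continuous (event-driven, always on)"
    return f"{asset.name}: assess every {interval} hours"

for a in [Asset("payment-db", "critical"), Asset("print-server", "standard")]:
    print(monitoring_cadence(a))
```

The design point is that cadence is a policy decision keyed off classification – which is why choosing high-criticality assets well matters so much.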
Adapting the Network Security Operations process map we published a few years back, here is our Continuous Security Monitoring process, broken down into three phases. In the Plan phase you define policies, classify assets, and continuously discover new assets within your environment. In the Monitor phase you pull data from devices and other sources, to aggregate and eventually analyze, in order to fire an alert if a potential attack or other situation of concern becomes apparent. You monitor not only to detect attacks, but also to confirm changes, identify unauthorized changes, and substantiate compliance with organizational and regulatory standards (mandates). In the final phase you take action (really, determine what action, if any, to take) by validating the alert and escalating as needed. As with all our process models, not all these activities will work or fit in your environment. We publish these maps to give you ideas about what you’ll need to do – they always require customization to your own needs.

The Challenge of Full Visibility

As we mentioned above, the key challenge in CSM is classifying assets, but your ability to do so is directly related to the visibility of your environment. You cannot monitor or protect devices you don’t know about. So the key enabler for this entire CSM concept is an understanding of your network topology and the devices that connect to your networks. The goal is to avoid an “oh crap” moment, when a bunch of unknown devices and/or applications show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. So we need to be sure you are clear on how to do discovery in this context.

There are a number of discovery techniques, including actively scanning your entire address space for devices and profiling what you find. That works well enough and is how most vulnerability management offerings handle discovery, so active discovery is one requirement. But a full address space scan can have a substantial network impact, so it isn’t appropriate during peak traffic times. And be sure to search both your IPv4 and IPv6 address spaces. You don’t have IPv6, you say? You will want to confirm that – many devices have IPv6 turned on by default, broadcasting those addresses to potential attackers.

You should supplement active discovery with a passive capability that monitors network traffic and identifies new devices from their network communications. Sophisticated passive analysis can profile devices and identify vulnerabilities, but passive monitoring’s primary goal is to find new unmanaged devices faster, then trigger a full active scan upon identification. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments, which block active discovery and vulnerability scanning. It is also important to visualize your network topology – a drill-down map
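To ground the active/passive discovery loop described above, here is a minimal sketch: a passive watcher flags never-before-seen source addresses and queues them for a full active scan. The traffic feed and scan queue are stand-in assumptions – a real deployment would read from a network tap and hand off to a vulnerability scanner.

```python
# A sketch of passive discovery triggering active scanning: watch traffic,
# note any source IP we have never profiled, and queue it for a full scan.
known_devices: set[str] = set()
scan_queue: list[str] = []

def observe_packet(src_ip: str) -> None:
    """Passive step: note any device that talks on the network."""
    if src_ip not in known_devices:
        known_devices.add(src_ip)
        scan_queue.append(src_ip)  # trigger the active step
        print(f"new device seen passively: {src_ip} -> queued for active scan")

# Simulated traffic, including one device we have never seen before.
for ip in ["10.0.0.5", "10.0.0.5", "10.0.0.99"]:
    observe_packet(ip)

print("pending active scans:", scan_queue)
```

The same pattern extends naturally to classification: each newly discovered device gets assigned a criticality tier before it enters the monitoring system.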


Proactive WebAppSec

Earlier this week rsnake blogged about the Top 10 Proactive Web Application Security Measures. He has a very good set of recommendations – a highly recommended read for web application developers and webmasters alike:

anti-CSRF cryptographic nonces on all secure functions: We recommend building nonces (one time tokens tied to user sessions) into each form and validating that to ensure that your site can’t be forced to perform actions. This can be a huge pain to retrofit because it means touching a database or shared memory on every hit – not to mention the code that needs to be inserted into each page with a form and subsequent function to validate the nonce.

…

DAL (data/database access layer): DALs help to prevent SQL injection. Few companies know about them or use them correctly, but by front ending all databases with an abstraction layer many forms of SQL injection simply fail to work because they are not correctly formed. DALs can be expensive and extremely complex to retrofit because every single database call requires modification and interpolation at the DAL layer.

What I appreciate most is that his recommendations are direct, development-centric responses to security issues with web apps. Unfortunately most companies don’t think critically and ask, “How should I solve this web security problem?” The more common approach is just to wonder, “What’s the cheapest, fastest way to minimize the issue?” That is not necessarily wrong, but that difference in mindset is why most customers go for bolt-on partial solutions, and it will probably prevent people from embracing these sound recommendations. Rsnake stresses that these ideas are best implemented before deployment, but I argue that agile web application development teams can still retrofit them without too much pain. I will drill into a few of these recommendations in coming posts, where I have been fortunate enough to implement the ideas at previous companies and can offer advice.

A couple of his recommendations are far outside the norm. I am willing to bet you have never encountered database abstraction layers for security, and while you have probably heard of immutable logs and security frameworks in source code, you have probably never used them. That’s because you are probably using WAF, DAM, log management, and piecemeal SQLi protection. The depth of rsnake’s experience could fill volumes – he is pulling from years of experience to hit only the highlights – and these recommendations warrant more discussion. The recommendations themselves are really good, and the tools are not that difficult to build – the difficulty is in the management and deployment considerations.
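To make the anti-CSRF recommendation concrete, here is a minimal sketch of per-session, one-time nonces: mint a token tied to the session, embed it in each form, and validate and burn it on submit. The in-memory session store is an assumption standing in for the database or shared memory the quote mentions.

```python
# A sketch of anti-CSRF nonces: one-time tokens tied to user sessions,
# generated per form and consumed on validation.
import hmac
import secrets

session_nonces: dict[str, set[str]] = {}  # session_id -> outstanding nonces

def issue_nonce(session_id: str) -> str:
    """Generate a token to embed in a form rendered for this session."""
    nonce = secrets.token_urlsafe(32)
    session_nonces.setdefault(session_id, set()).add(nonce)
    return nonce

def validate_nonce(session_id: str, submitted: str) -> bool:
    """Accept a form post only if its nonce matches; burn it after use."""
    # Iterate over a snapshot so we can safely discard a matched nonce.
    for nonce in tuple(session_nonces.get(session_id, ())):
        if hmac.compare_digest(nonce, submitted):  # constant-time compare
            session_nonces[session_id].discard(nonce)
            return True
    return False

sid = "session-abc"
token = issue_nonce(sid)
print(validate_nonce(sid, token))  # True: legitimate form post
print(validate_nonce(sid, token))  # False: replay of a used token
```

This also shows why retrofitting hurts: every form needs the token inserted, and every handler needs the validation call.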


Calendar Bites Google Security in the Ass

Well, this is embarrassing:

Blowing hash and signing functions so that the underlying code can be changed without the hash and sigs changing is horrifyingly atrocious. This is the code equivalent of impersonating a person with a mask so good nobody, not even the real person themselves, can tell the difference.

…

Google espouses 60 days to fix exploitable bugs and going public one week after private notification. According to Bluebox they told Google about this via bug 8219321 in February 2013. That’s a little bit more than 60 days ago. Seeing as it’s now July, I think (and I’m not very good at math, so bear with me here) that’s at least twice as many. It’s especially more than 7 days. I’m not sure how Google are following their own disclosure policy.

I suspect the people motivated to publish Google’s disclosure policy were all or mostly on the web side. It is a much different problem when you are dealing with software updates, especially on a platform you often cannot update. I have yet to find a ROM past Android 4.0 (current is 4.2) that I can get running on my test phone. HTC certainly isn’t providing it, which means many millions of phones will be vulnerable… forever. There was little doubt that publishing that policy would eventually come back to haunt them.
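The class of bug described here is a parser mismatch: one component verifies a set of bytes while another loads different bytes from the same file. Here is a toy Python sketch of that class, using duplicate ZIP entry names – loosely analogous to the Android “master key” issue, but emphatically not Android’s actual verification code.

```python
# A toy demonstration of "verify one thing, run another" via duplicate
# ZIP entry names. Illustrative only - not Android's signing logic.
import hashlib
import io
import warnings
import zipfile

buf = io.BytesIO()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # zipfile warns about duplicate names
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("app.code", b"benign payload (this one gets 'signed')")
        z.writestr("app.code", b"malicious payload (this one gets loaded)")

archive = zipfile.ZipFile(io.BytesIO(buf.getvalue()))

# A naive "verifier" hashes the first entry carrying that name...
first = next(i for i in archive.infolist() if i.filename == "app.code")
digest = hashlib.sha256(archive.open(first).read()).hexdigest()
print("verifier hashed:", digest[:16])

# ...while a loader that looks the entry up by name gets the LAST one.
print("loader executed:", archive.read("app.code").decode())
```

Two parsers, two answers, one signature – which is why the mask analogy in the quote is apt.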


Why. Continuous. Security. Monitoring? [New Series]

Remember the old marketing tagline, “Get Ahead of the Threat”? It seems pretty funny now, doesn’t it? Given the kinds of attacks we are facing and attackers’ increasing sophistication, we never see the threats coming, and being even marginally proactive seems like a pipe dream. The bad news is that it will not get easier any time soon. Don’t shoot the messenger, but understand that this is the reality of today’s information security landscape.

The behavior of most organizations over the past decade hasn’t helped, either. Most companies spend the bulk of their security budget on protective controls that have been proven over and over again to be ineffective. Part of this is due to compliance mandates for ancient technologies, and only very forward-thinking organizations have invested sufficiently in the detection and response aspects of their security programs. Unfortunately, organizations tend to become enlightened only after cleaning up major data breaches. For the unenlightened, detection and response remain horribly under-resourced and underfunded.

At the same time the US government has been pushing a “continuous monitoring” (CM) agenda on both military and civilian agencies to provide “situational awareness,” which is really just a fancy term for understanding what the hell is happening in your environment at any given time. The problem is that CM applies to a variety of operations disciplines in the public sector, and it doesn’t really mean ‘continuous’. CM is a good first step, but as with most first steps, too many organizations take it for the destination rather than the first step of a long journey.

We have always strongly advocated security monitoring, and have published a ton of research on these topics, from our philosophical foundation (Monitor Everything) to our SIEM research (Understanding and Selecting, SIEM Replacement). And don’t forget our process modeling of Network Security Operations, which is all about security monitoring. So we don’t need to be sold on the importance of security monitoring, but evidently the industry still needs to be convinced, given the continued failure of even large organizations to realize they must combine a strong set of controls with (at least) equally strong capabilities for detection, monitoring, and incident response. To complicate matters, technology continues to evolve, which means the tools and processes for comprehensive security monitoring look different than they did even 18 months ago, and they will look different again 18 months from now. So we are spinning up a series called Continuous Security Monitoring (CSM) to evaluate these advancements, flesh out our definition of CSM, and break down the decision points and technology platforms that provide this cornerstone of your security program.

React Faster and Better

We have gotten a lot of mileage from our React Faster and Better concept, which really just means you need to accept and plan for the reality that you cannot stop all attacks. Even more to the point (and potentially impacting your wallet), success is heavily determined by how quickly you detect attacks and how effectively you respond to them. We suggest you read that paper for a detailed perspective on what is involved in incident response – along with ideas on the organization, processes, and tools required to do it well. This series is not a rehash of that territory – instead it will help you assemble a toolkit (including both technology and process) to monitor your information assets to detect attacks more quickly and effectively.
If you don’t understand the importance of this aspect of security, just consider that a majority of breaches (at least according to the latest Verizon Business Data Breach Report) continue to be identified by third parties, such as payment processors and law enforcement. That means organizations have no idea when they are compromised. And that is a big problem.

Why CSM?

We can groan all day and night about how behind the times the PCI-DSS remains, or how the US government has defined Continuous Monitoring. But attackers innovate and move much more quickly than regulation, and that is not going to change. So you need to understand these mandates for what they are: a low bar to get you moving toward the broader goal of continuous security monitoring. But before we take the cynical approach and gripe about what’s wrong, let’s recognize the yeoman’s work already done to highlight the importance of monitoring in protecting information (data). Without PCI and the US government mandating security data aggregation and analysis, we would still be spending most of our time evangelizing the need for even simplistic monitoring. The fact that we don’t have to is a testament to the industry’s ability to parlay a mandate into something productive.

That said, if you are looking to solve security problems and identify advanced attackers, you need to go well beyond the mandates. This series will introduce what we call Continuous Security Monitoring and dig into the different sources of data you need to figure out how big your problem is. See what we did there? You have a problem and we won’t argue that point – your success hinges on determining what has been compromised and for how long. As with all our research we will focus on tangible solutions that can be implemented now, while positioning yourself for future advances. We will make sure to discuss the technologies that enable Continuous Security Monitoring, and identify pitfalls to avoid as you progress.

As a reminder, we develop our research using our Totally Transparent Research methodology to make sure you all have an opportunity to let us know when we are right – and more importantly when we are wrong. Finally, we would like to thank Qualys, Tenable, and Tripwire for agreeing to potentially license the paper at the end of this process. After the July 4th holiday we will get going fast and furious. But no race cars will be harmed in the production of this series…


New Paper: Quick Wins with Website Protection Services

Simple website compromises can feel like crimes with no clear victims. Who cares if the Joey’s Bag of Donuts website gets popped? But that is not a defensible position any more. Attackers don’t just steal data from these websites – they also use them to host malware, command and control nodes, and proxies to defeat IP reputation systems. Even today, strange as it sounds, far too many websites have no protection at all. They are built on vulnerable technologies without a thought for securing critical data, and then let loose in a very hostile world. These sites are sitting ducks for script kiddies and organized crime.

In this paper we took a step back to write about protecting websites using Security as a Service (SECaaS) offerings. We used our Quick Wins framework to focus on how Website Protection Services can protect web properties quickly and without fuss. Of course it’s completely valid to deploy and manage your own devices to protect your websites; but Mr. Market tells us every day that the advantages of an always-on, simple-to-deploy, and secure-enough service consistently win out over yet another complex device in the network perimeter.

The landing page is in our Research Library. You can also download Quick Wins with Website Protection Services (PDF) directly.

We would like to thank Akamai Technologies for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you folks for this most excellent price, without companies licensing our content.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.