ATM PIN Thefts

The theft of Citibank ATM PINs is in the news again, as it appears that indictments have been handed down on the three suspects. This case will be interesting to watch, to see what the fallout will be. It is still not really clear whether the PINs were leaked in transit, or whether the clearing house servers were breached.

There are a couple of things about this story that I still find amusing. The first is that Fiserv, the company that operates the majority of the network, is pointing fingers at Cardtronics Inc. The quote from the Fiserv representative, “Fiserv is confident in the integrity and security of our system”, is great. They both manage elements of the ‘system’. When it comes down to it, this is like two parties standing in a puddle of gasoline, accusing each other of lighting a match. It won’t matter who is at fault when they both go up in flames. In the public mind, no one is going to care: they will be blamed equally, and quite possibly both go out of business if their security is shown to be grossly lacking.

My second thought on this subject was that once you breach the ‘system’, you still have to get the money out. In this case, it has been reported that over $2M was ‘illegally gained’. If the average account is hacked for $200.00, we are talking about at least 10,000 separate ATM withdrawals. That is a lot of time spent at the 7-11! But seriously, that is a lot of time to spend making ATM withdrawals. I figure the way they got caught is that the thief’s picture kept turning up on security cameras … otherwise this is a difficult crime to detect and catch.

I also got to thinking about ATMs: the entire authentication process is not much more than basic two-factor authentication combined with some simple behavioral checks at the back end. The security of these networks is really not all that advanced. PIN codes are typically four digits long, and it really does not make a lot of sense to use hash algorithms given the size of the PIN and the nature of the communications protocol. And while it requires some degree of technical skill, the card itself can be duplicated, making for a fairly weak two-factor system. Up until a couple of years ago, DES was still the typical encryption algorithm in use, and only parts of the overall transaction processing systems keep the data encrypted. Many ATMs are not on private networks, but use the public Internet and airwaves. Given the amount of money and the number of transactions processed around the world, it is really quite astonishing how well the system as a whole holds up.

Finally, while I have been known to bash Microsoft for various security miscues over the years, it seems somewhat specious to state “Hackers are targeting the ATM system’s infrastructure, which is increasingly built on Microsoft Corp.’s Windows operating system.” Of course they are targeting the infrastructure; that is the whole point of electronic fraud. They probably meant the back end processing infrastructure. And why mention Windows? Familiarity with Windows may make the software easier to attack, but this case does not show that any MS product was at fault for the breach. Throwing that into the story seems like an attempt to cast blame on MS software without any real evidence.
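Since hashing comes up above: a quick sketch shows why hashing a four-digit PIN buys so little. The entire PIN space is only 10,000 values, so a captured hash falls to exhaustive search almost instantly. This is purely illustrative Python; real ATM networks verify PINs inside hardware security modules using DES/3DES PIN blocks, not application-level hashes.

```python
import hashlib

def crack_hashed_pin(target_hash):
    """Exhaustively try all 10,000 four-digit PINs against a captured hash."""
    for candidate in range(10000):
        pin = f"{candidate:04d}"
        if hashlib.sha256(pin.encode()).hexdigest() == target_hash:
            return pin
    return None

# A 'stolen' hash of PIN 4271 falls in well under a second on any laptop.
stolen = hashlib.sha256(b"4271").hexdigest()
print(crack_hashed_pin(stolen))  # -> 4271
```

Salting or a slower hash only changes the constant factor; with a keyspace this small, the protection has to come from the surrounding system (hardware, key management, velocity checks at the back end), not the PIN itself.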


What’s My Motivation?

Or more appropriately, “Why are we talking about ADMP?” In his first post on the future of application and database security, Rich talked about Forces and Assumptions heading us down an evolutionary path towards ADMP. I want to offer a slightly different take on my motivation for, and belief in, this strategy.

One of the beautiful things about modern application development is our ability to cobble together small, simple pieces of code into a larger whole in order to accomplish some task. Not only do I get to leverage existing code, but I get to bundle it together in such a way that I alter the behavior depending upon my needs. With simple additions, extensions, and interfaces, I can make a body of code behave very differently depending upon how I organize and deploy the pieces. Further, I can bundle different application platforms together in a seamless manner to offer extraordinary services without a great deal of re-engineering. A loose confederation of applications cooperating to solve business problems is the typical implementation strategy today, and I think the security challenge needs to account for that model rather than the specific components within it. Today, we secure components. We need to be able to ‘link up’ security in the same way that we do the application platforms (I would normally go off on an Information Centric Security rant here, but that is pure evangelism, and a topic for another day).

I have spent the last four years with a security vendor that provided assessment, monitoring, and auditing of databases, and databases specifically. Do enough research into security problems, customer needs, and general market trends, and you start to understand the limitations of securing just a single application in the chain of events. For example, I found that database security issues detected as part of an assessment scan may have specific relevance to the effectiveness of database monitoring. I believe web application security providers witness the same phenomenon with SQL injection, as they may lack some context for the attack, or at least for the more subtle subversions of the system or exploitation of logic flaws in the database or database application. A specific configuration might be necessary for business continuity and processing, but could open an acknowledged security weakness that I would like to address with another tool, such as database monitoring.

That said, where I am going with this line of thought is not just the need for detective and preventative controls on a single application like a web server or database server, but rather the inter-application benefit of a more unified security model. There were many cases where I wanted to share some aspect of the database setup with the application or access control system, which could make for a more compelling security offering (or vice versa, for that matter). It is hard to understand context when looking at security from a single point outside an application, or from the perspective of a single application component. I have said many times that the information we have at any single processing node is limited. Yes, my bias towards application-level data collection vs. network-level data collection is well documented, but I am advocating collection of data from multiple sources. A combination of monitoring across multiple information sources, coupled with a broad security and compliance policy set, would be very advantageous.
I do not believe this is simply a case of more (monitoring) is better, but of solving specific problems where it is most efficient to do so. There are certain attacks that are easier to address at the web application level, others that are best dealt with in the database, and still others that should be intercepted at the network level. But the sharing of policies, policy enforcement, and suspect behaviors can be both more effective and more efficient.

Application and Database Monitoring and Protection is a concept I have been considering, researching, and working towards for several years now. With my previous employer, this was a direction I wanted to take the product line, as well as some of the partner relationships, to make this happen across multiple security products. When Rich branded the concept with the “ADMP” moniker it just clicked with me, for the reasons stated above, and I am glad he posted more on the subject last week. But I wanted to put a little more focus on the motivation for what he is describing and why it is important. This is one of the topics we will both be writing about more often in the weeks and months ahead.
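To make the idea of sharing context between components a bit more concrete, here is a minimal sketch of what cross-component policy sharing could look like. Every name, finding, and rule format below is invented for illustration; no vendor’s actual API or policy language is implied.

```python
from dataclasses import dataclass

@dataclass
class AssessmentFinding:
    component: str   # e.g. "database"
    weakness: str    # e.g. a risky configuration accepted for business reasons
    severity: str

@dataclass
class MonitoringRule:
    component: str
    condition: str
    action: str

def compensating_rules(finding):
    """Turn a known-but-accepted weakness into compensating rules on
    other components, not just the one that was scanned."""
    rules = []
    if finding.component == "database" and finding.weakness == "xp_cmdshell_enabled":
        # Watch the database itself...
        rules.append(MonitoringRule("database", "statement contains 'xp_cmdshell'", "alert"))
        # ...and give the web tier enough context to block the injection path.
        rules.append(MonitoringRule("web_app", "request parameter contains 'xp_cmdshell'", "block"))
    return rules

for rule in compensating_rules(AssessmentFinding("database", "xp_cmdshell_enabled", "high")):
    print(rule)
```

The point is simply that a weakness found, and accepted, in one component can generate compensating controls in the others, which is exactly the kind of context a single-point product never sees.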


Best Practices For Endpoint DLP: Part 1

As the first analyst to ever cover Data Loss Prevention, I’ve had a bit of a tumultuous relationship with endpoint DLP. Early on I tended to exclude endpoint-only solutions because they were more limited in functionality, and couldn’t help at all with protecting against data loss from unmanaged systems. But even then I always said that, eventually, endpoint DLP would be a critical component of any DLP solution. When we’re looking at a problem like data loss, no individual point solution will give us everything we need.

Over the next few posts we’re going to dig into endpoint DLP. I’ll start by discussing how I define it, and why I don’t generally recommend stand-alone endpoint DLP. I’ll talk about key features to look for, then focus on best practices for implementation. It won’t come as any surprise that these posts are building up into another one of my whitepapers. This is about as transparent a research process as I can think of. And speaking of transparency, like most of my other papers this one is sponsored, but the content is completely objective (sponsors can suggest a topic, if it’s objective, but they don’t have input on the content).

Definition

As always, we need to start with our definition for DLP/CMP: “Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis”. Endpoint DLP helps manage all three parts of this problem. The first is protecting data at rest when it’s on the endpoint, or what we call content discovery (which I wrote up in great detail). Our primary goal is keeping track of sensitive data as it proliferates out to laptops, desktops, and even portable media. The second part, and the most difficult problem in DLP, is protecting data in use. This is a catch-all term we use to describe DLP monitoring and protection of content as it’s used on a desktop: cut and paste, moving data in and out of applications, and even tying in with encryption and enterprise Document Rights Management (DRM). Finally, endpoint DLP provides data-in-motion protection for systems outside the purview of network DLP, such as a laptop out in the field.

Endpoint DLP is a little difficult to discuss, since it’s one of the fastest changing areas in a rapidly evolving space. I don’t believe any single product has every little piece of functionality I’m going to talk about, so (at least where functionality is concerned) this series will lay out all the recommended options, which you can then prioritize to meet your own needs.

Endpoint DLP Drivers

In the beginning of the DLP market we nearly always recommended organizations start with network DLP. A network tool allows you to protect both managed and unmanaged systems (like contractor laptops), and is typically easier to deploy in an enterprise (since you don’t have to muck with every desktop and server). It also has advantages in terms of the number and types of content protection policies you can deploy, how it integrates with email for workflow, and the scope of channels covered. During the DLP market’s first few years, it was hard to even find a content-aware endpoint agent. But customer demand for endpoint DLP quickly grew thanks to two major needs: content discovery on the endpoint, and the ability to prevent loss through USB storage devices. We continue to see basic USB blocking tools with absolutely no content awareness brand themselves as DLP.
The first batches of endpoint DLP tools focused on exactly these problems: discovery, and content-aware portable media/USB device control. The next major driver for endpoint DLP is supporting network policies when a system is outside the corporate gateway. We all live in an increasingly mobile workforce, where we need to support consistent policies no matter where someone is physically located or how they connect to the Internet. Finally, we see some demand for deeper integration of DLP with how a user interacts with their system. In part, this is to support more intensive policies to reduce malicious loss of data. You might, for example, disallow certain content from moving into certain applications, such as encryption tools. Some of these same kinds of hooks are used to limit cut/paste, print screen, and fax, or to enable more advanced security like automatic encryption or application of DRM rights.

The Full Suite Advantage

As we’ve already hinted, there are some limitations to endpoint-only DLP solutions. The first is that they only protect managed systems where you can deploy an agent. If you’re worried about contractors on your network, or you want protection in case someone tries to use a server to send data outside the walls, you’re out of luck. Also, because some content analysis policies are processor and memory intensive, it is problematic to run them on resource-constrained endpoints. Finally, there are many discovery situations where you don’t want to deploy a local endpoint agent for your content analysis, e.g. when performing discovery on a major SAN.

Thus my bias towards full-suite solutions. Network DLP reduces losses on the enterprise network from both managed and unmanaged systems, from servers as well as workstations. Content discovery finds and protects stored data throughout the enterprise, while endpoint DLP protects systems that leave the network and reduces risks across vectors that circumvent the network. It’s the combination of all these layers that provides the best overall risk reduction. All of this is managed through a single policy, workflow, and administration server, rather than forcing you to create different policies for different channels and products, with different capabilities, workflow, and management.

In our next post we’ll discuss the technology and major features to look for, followed by posts on best practices for implementation.
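As a toy illustration of the difference between plain USB blocking and content-aware endpoint DLP, here is a sketch of the kind of check an agent might run before allowing a file copy to removable media. Real products hook the operating system and use far richer analysis (validation, fingerprinting, partial document matching); the pattern and function below are invented for this example.

```python
import re

# Naive pattern for payment-card-like numbers; real DLP adds validation
# (e.g. Luhn checks), data fingerprints, and partial document matching.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def allow_copy_to_usb(path):
    """Return True if the file may be copied to removable media."""
    try:
        with open(path, "r", errors="ignore") as f:
            content = f.read()
    except OSError:
        return False  # fail closed if the file cannot be inspected
    if CARD_PATTERN.search(content):
        print(f"Blocked: {path} appears to contain payment card data")
        return False
    return True
```

A USB blocker with no content awareness can only say yes or no to the device; a check like this, however crude, is what makes the control a DLP control.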


The Future Of Application And Database Security: Part 2, Browser To WAF/Gateway

Since Friday is usually “trash” day (when you dump articles you don’t expect anyone to read) I don’t usually post anything major. But thanks to some unexpected work that hit yesterday, I wasn’t able to get part 2 of this series out when I wanted to. If you can tear yourself away from those LOLCatz long enough, we’re going to talk about web browsers, WAFs, and web application gateways. These are the first two components of Application and Database Monitoring and Protection (ADMP), which I define as: Products that monitor all activity in a business application and database, identify and audit users and content, and, based on central policies, protect data based on content, context, and/or activity.

Browser Troubles

As we discussed in part 1, one of the biggest problems in web application security is that the very model of web browsers and the World Wide Web is not conducive to current security needs. Browsers are the ultimate mashup tool, designed to take different bits from different places and seamlessly render them into a coherent whole. The first time I started serious web application programming (around 1995/96), this blew my mind. I was able to embed disparate systems in ways never before possible. And not only can we embed content within a browser, we can embed browsers within other content and applications. The main reason I, as a developer, converted from Netscape to IE was that Microsoft allowed IE to be embedded in other programs, which let us drop it into our thick VR application. Netscape was stand-alone only, seriously limiting its deployment potential.

This also makes life a royal pain on the security front, where we often need some level of isolation. Sure, we have the same-origin policy, but browsers and web programming have bloated well beyond what little security that provides. Same-origin isn’t worthless, and is still an important tool, but there are just too many ways around it, especially now that we all use tabbed browsers with a dozen windows open all the time. Browsers are also stateless by nature, no matter what AJAX trickery we use. XSS and CSRF, never mind some more sophisticated attacks, take full advantage of the weak browser/server trust models that result from these fundamental design issues. In short, we can’t trust the browser, the browser can’t trust the server, and individual windows/tabs/sessions in the browser can’t trust each other. Fun stuff!

WAF Troubles

I’ve talked about WAFs before, and their very model is also fundamentally flawed, at least as we use WAFs today. The goal of a WAF is, like a firewall, to drop known bad traffic or only allow known good traffic. We’re trying to shield our web applications from known vulnerabilities, just as we use a regular firewall to block ports, protocols, sources, and destinations. Actually, a WAF is closer to IPS than it is to a stateful packet inspection firewall. But web apps are complex beasts, every single one a custom application with custom vulnerabilities. There’s no way a WAF can know all the ins and outs of the application behind it, even after it’s well tuned. WAFs also only protect against certain categories of attacks, mostly some XSS and SQL injection. They don’t handle logic flaws, CSRF, or even all XSS. I was talking yesterday with a reference customer for one of the major WAFs, and he had no trouble slicing through it during their evaluation phase using some standard techniques. To combat this, we’re seeing some new approaches.
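One of those approaches is “virtual patching”: letting a web application vulnerability assessment drive narrowly scoped blocking rules in the WAF. Here is a minimal, hedged sketch of the idea; the finding and rule formats are invented for illustration and do not reflect any particular vendor’s feed or rule language.

```python
def virtual_patch(finding):
    """Build a narrowly scoped blocking rule for one confirmed flaw."""
    if finding["class"] == "sql_injection":
        return {
            "match_path": finding["path"],            # e.g. "/search"
            "match_parameter": finding["parameter"],  # e.g. "q"
            "condition": "contains SQL metacharacters",
            "action": "block",
            "expires": "when the application fix is deployed",
        }
    raise ValueError("no virtual patch template for this finding class")

rule = virtual_patch({"class": "sql_injection", "path": "/search", "parameter": "q"})
print(rule)
```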
F5 and WhiteHat have partnered to feed the WAF specific vulnerability information from the application vulnerability assessment. Imperva just announced a similar approach, with a bunch of different partners. These advances are great to see, but I think WAFs will also need to evolve in some different ways. I just don’t think the model of managing all this from the outside will work effectively enough.

Enter ADMP

The idea of ADMP is that we build a stack of interconnected security controls from the browser to the database. At all levels we both monitor activity and include enforcement controls. The goal is to start with browser session virtualization connected to a web application gateway/WAF. Then traffic hits the web server and web application server, both with internal instrumentation and anti-exploitation. Finally, transactions drop to the database, where they are again monitored and protected. All of the components for this model exist today, so it’s not science fiction. We have browser session virtualization, WAFs, SSL-VPNs (that will make sense in a minute), application security services and application activity monitoring, and database activity monitoring. In addition to the pure defensive elements, we’ll also tie in to the applications at the design and code level through security services for adaptive authentication, transaction authentication, and other shared services (happy, Dre? 🙂). The key is that this will all be managed through a central console via consistent policies.

In my mind, this is the only thing that makes sense. We need to understand the applications and the databases that back them. We have to do something at the browser level, since even proper parameterization and server-side validation can’t meet all our needs. We have to start looking at transactions, business context, and content, rather than just packets and individual requests. Point solutions at any particular layer have limited effectiveness. But if we stop looking at our web applications as pieces, and instead design security that addresses them as a whole, we’ll be in much better shape. Not that anything is perfect, but we’re looking at risk reduction, not risk elimination. A web application isn’t just a web server, just some J2EE code, or just a DB; it’s a collection of many elements working together to perform business transactions, and that’s how we need to look at them for effective security.

The Browser and Web Application Gateway

A little while back I wrote about the concept of browser session virtualization. To plagiarize myself and save a little writing time so I can get


Don’t Use chmod To Block Mac OS X ARDAgent Vulnerability

Just a quick note: if you used chmod to change the permissions of ARDAgent to block the privilege escalation vulnerability being used by the new trojans, you should still go compress or remove it. Repairing permissions restores ARDAgent and opens the vulnerability again. I suppose you could also make sure you don’t repair permissions, but it’s easiest to just remove it. I removed the chmod recommendation from the TidBITS article.
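If you want to confirm where you stand, here is a small read-only check in Python. The ARDAgent path is taken from public reports of the vulnerability, so treat it as an assumption and verify it on your own system before acting on the result.

```python
import os
import stat

# Path as publicly reported for the ARDAgent privilege escalation;
# verify it on your own system (this is an assumption, not gospel).
ARDAGENT = ("/System/Library/CoreServices/RemoteManagement/"
            "ARDAgent.app/Contents/MacOS/ARDAgent")

try:
    st = os.stat(ARDAGENT)
    setuid_root = bool(st.st_mode & stat.S_ISUID) and st.st_uid == 0
    print("ARDAgent present; setuid root:", setuid_root)
except FileNotFoundError:
    print("ARDAgent binary not found (already removed or compressed away)")
```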


Let’s Start At The Very Beginning

Last week Jeremiah “Purple Belt” Grossman posted the following question: “You’re hired on at a new company placed in charge of securing their online business (websites). You know next to nothing about the technical details of the infrastructure other than they have no existing web/software security program and a significant portion of the organization’s revenues are generated through their websites. What is the very first thing you do on day 1?”

Day one is going to be a long day, that’s for certain. Like several commentators on the original post, I’d start by talking with the people who own the application at both a business and a technology level. Basically, this is a prime opportunity not only to understand what the goals of the business are, but also to get everyone’s perceptions of their needs and, equally important, their perceptions of the cost of their systems being unavailable. The next few weeks would be used to determine where reality diverged from perception.

But day one is when I get to make my first impression, and if I can successfully convince people that I really am on their side, it will make the rest of my tenure much easier. I’ve found that I can do so by demonstrating that my prime concern is enabling the business to accomplish its goals with a minimum of hassle from me. One of the key ways of doing this is spending my time listening, and limiting my talking to asking questions that lead my interviewee to the necessary logical conclusions, rather than being a dictator….

…not that I don’t reserve the right to hit things with a hammer later to protect the business, but day 1 sets the tone for the future, and that’s far more important than putting in X fix or blocking Y vulnerability.


The Future Of Application And Database Security: Part 1, Setting The Stage

I’ve been spending the past few weeks wandering around the country for various shows, speaking to some of the best and brightest in the world of application and database security. Heck, I even hired one of them. During some of my presentations I laid out my vision for where I believe application (especially web application) and database security are headed. I’ve hinted at it here on the blog, discussing the concepts of ADMP, the information-centric security lifecycle, and DAM, but it’s long past time I detailed the big picture.

I’m not going to mess around and write these posts so they are accessible to the non-geeks out there. If you don’t know what secure SDLC, DAM, SSL-VPN, WAF, and connection pooling mean, this isn’t the series for you. That’s not an insult; it’s just that this would drag out to 20+ pages if I didn’t assume a technical audience.

Will all of this play out exactly as I describe? No way in hell. If everything I predict is 100% correct, I’m just predicting common knowledge. I’m shooting for a base level of 80% accuracy, with hopes I’m closer to 90%. But rather than issuing some proclamation from the mount, I’ll detail why I think things are going where they are. You can make your own decisions as to my assumptions and the accuracy of the predictions that stem from them. Also, apologies to Dre’s friends and family. I know this will make his head explode, but that’s a cost I’m willing to pay. Special thanks to Chris Hoff and the work we’ve been doing on disruptive innovation, since that model drives most of what I’m about to describe. Finally, this is just my personal opinion as to where things will go. Adrian is also doing some research on the concept of ADMP, and may not agree with everything I say. Yes, we’re both Securosis, but when you’re predicting uncertain futures no one can speak with absolute authority. (And, as Hoff says, no one can tell you you’re wrong today.)

Forces and Assumptions

Based on the work I’ve been doing with Hoff, I’ve started to model future predictions by analyzing current trends and disruptive innovations: those innovations that force change, rather than ones that merely nudge us to steer slightly around some new curves. In the security world, these forces (disruptions) come from three angles: business innovation, threat innovation, and efficiency innovation. The businesses we support are innovating for competitive advantage, as are the bad guys. For both of them, it’s all about increasing the top line. The last category is more internal: efficiency innovation to increase the bottom line. Here’s how I see the forces we’re dealing with today, in no particular order:

  • Web browsers are inherently insecure. The very model of the World Wide Web is to pull different bits from different places, and render them all in a single view through the browser. Images from over here, text from over here, and, using iframes, entire sites from yet someplace else. It’s a powerful tool, and I’m not criticizing this model; it just is what it is. From a security standpoint, this makes our life more than a little difficult. Even with a strictly enforced same-origin policy, it’s impossible to completely prevent cross-site issues, especially when people keep multiple sessions to multiple sites open all at the same time. That’s why we have XSS, CSRF, and related attacks. We are trying to build a trust model where one end can never be fully trusted.
  • We have a massive repository of insecure code that grows daily (there’s a short illustration at the end of this post). I’m not placing the blame on bad programmers; many of the current vulnerabilities weren’t well understood when much of this code was written. Even today, some of these issues are complex and not always easy to remediate. We are also discovering new vulnerability classes on a regular basis, requiring review and remediation of any existing code. We’re talking millions of applications, never mind many millions of lines of code. Even the coding frameworks and tools themselves have vulnerabilities, as we just saw with the latest Ruby issues.
  • The volume of sensitive data that’s accessible online grows daily. The Internet and web applications are powerful business tools. It only makes sense that we connect more of our business operations online, and thus more of our sensitive data and business operations are Internet accessible.
  • The bad guys know technology. Just as it took time for us to learn and adopt new technologies, the bad guys had to get up to speed. That window is closed, and we have knowledgeable attackers.
  • The bad guys have an economic infrastructure. Not only can they steal things, they have a market to convert the bits to bucks. Pure economics gives them viable business models that depend on generating losses for us. Bad guys attack us to steal our assets (information) or hijack them to use against others (e.g., to launch a big XSS attack). They also sometimes attack us just to destroy our assets, but not often (less economic incentive, even for DoS blackmail).
  • Current security tools are not oriented to the right attack vectors. Even WAFs offer limited effectiveness, since they are more tied to our network security models than our data/information-centric models.
  • We do not have the resources to clean up all existing code, and we can’t guarantee future code, even using a secure SDLC, won’t be vulnerable. This is probably my most contentious assumption, but most of the clients I work with just don’t have the resources to completely clean what they do have, and even the best programmers will still make mistakes that slip through to production. Code scanning tools and vulnerability analysis tools can’t catch everything, and can’t eliminate all false positives. They’ll never catch logic flaws, and even if we had a perfect tool, the second a new vulnerability appeared we’d have to go back and fix everything we’d built up to that point.
  • We’re relying on more and more code and
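To make the “massive repository of insecure code” force concrete, here is a minimal sketch of the classic pattern: SQL built by string concatenation versus a parameterized query. Python and sqlite3 are used purely for illustration; note that parameterization fixes injection but does nothing for logic flaws.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# The pattern much legacy code still uses: SQL built by string concatenation.
vulnerable = f"SELECT ssn FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # returns every row

# The remediation: a parameterized query. Fixes injection, not logic flaws.
safe = "SELECT ssn FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```

Multiply that single bad habit across the millions of applications written before it was widely understood, and the scale of the cleanup problem becomes obvious.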


Network Security Podcast, Episode 109

This week, Martin and I are joined by Adam Shostack, bandleader of the Emergent Chaos Jazz Combo of the Blogosphere and co-author of The New School of Information Security. (And he sorta works for a big software company, but that’s not important right now). You can get the show notes and episode over at netsecpodcast.com. We spend a lot of time talking about statistics and the New School concepts. I’m a big fan of the book, and Adam and I share a lot of positions on where we are as an industry, and where we need to go.


Improving OS X Security

There’s been a bunch of news on the Mac security front in the past couple of weeks, from the Safari carpet bombing attack to a couple of trojans popping up. Over the weekend I submitted an email response to a press interview where I outlined my recommended improvements to OS X to keep Macs safer than Windows. On the technical side they included elements like completing the implementation of library randomization (ASLR), adding more stack protection to applications, enhancing and extending sandboxing to most major OS X applications, running fewer processes as root/system, and more extensive use of DEP. I’m not bothering to lay this out in any more depth, because Dino Dai Zovi did a much better job of describing them over on his blog. Dino’s one of the top Mac security researchers out there, so I highly suggest you read his post if you’re interested in OS X security.

There are a few additional things I’d like to see, outside of the OS-level changes:

  • A more deeply staffed Apple Security Response Center, with a public-facing side to better communicate security issues and engage the research community. Apple absolutely sucks at working with researchers and communicating on security issues. Improvements here would go a long way toward increasing confidence, managing security issues, and avoiding many of the kinds of flareups we’ve seen in the past few years.
  • Better policies on updating open source software included with OS X. In some cases, we’ve seen vulnerabilities in OS X due to included open source software, like Samba and Apache, that are unpatched for MONTHS after they are publicly known. These are fully exploitable on Macs and other Apple products until Apple issues an update. I realize this is a very tough issue, because Apple needs to run through extensive evaluation and testing before releasing updates, but they can shorten this window by engaging deeply with the various open source teams to reduce the time users are exposed to the vulnerabilities.
  • An Apple CSO: someone who is both the internal leader and external face of Apple security. They need an evangelist with credibility in the security world (no, I’m not trolling for a job; I don’t want to move to California, even for that).
  • A secure development lifecycle for Apple products. The programmers there are amazing, but even great programmers need to follow secure coding practices that are enforced with tools and process.

I have suspicions we might see some of these technical issues fixed in Snow Leopard, but the process issues are just as important for building and maintaining a sustainable, secure platform.
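On the “fewer processes running as root” point, here is a quick, read-only way to see how much runs as root on your own Mac. It assumes the BSD-style ps that ships with OS X; nothing here changes the system.

```python
import subprocess

# List processes and count the ones owned by root (assumes BSD-style ps).
out = subprocess.run(["ps", "-axo", "user,pid,comm"],
                     capture_output=True, text=True, check=True).stdout
root_procs = [line for line in out.splitlines()[1:] if line.split()[0] == "root"]
print(f"{len(root_procs)} processes running as root")
for line in root_procs[:10]:
    print(line)
```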


I’m With Ptacek: I Run My Mac As Admin

I’m still in New York for the FISD conference, listening to Team Cymru talk about the state of cybercrime as I wait for my turn at the podium (to talk about information-centric security and DLP). One problem with travel is keeping up with the news, so I pretty much missed the AppleScript vulnerability and now have to write it up for TidBITS on the plane before Monday.

I was reading Thomas Ptacek’s post on the vulnerability, and I think it’s time I joined Tom and came out of the closet. I run as admin on my Mac. All the time. And I’m not ashamed.

Why? As Ptacek said, even without root/admin there’s a ton of nasty things you can do on my system. In fact, you can pretty much get to anything I really worry about. I even once wrote some very basic AppleScript malware that ran on boot (after jailbreaking an improperly configured virtual machine). It didn’t need admin to work.

There. I feel better now. Glad to get that out there.

(If you’re going to criticize this, go read Tom’s post and talk to him first. He’s smarter than me, and not on an airplane.)


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.