Need Brains. User Brains

As part of our support for the Open Web Application Security Project (OWASP), we participate in their survey program, which runs quarterly polls on various application security issues. The idea is to survey a group of users to gain a better understanding of how they are managing or perceiving web application security. We also occasionally run our own surveys to support research projects, such as Project Quant. All of these results are released free to the public, and if we're running the survey ourselves we also release the raw anonymized data.

One of our ongoing problems is getting together a good group of qualified respondents – it's the toughest part of running any survey. Although we post most of our surveys directly on the blog, we would also like to run some closed surveys so we can maintain consistency over time.

We are going to try putting together a survey board of people in end user organizations (we may also add a vendor list later) who are willing to participate in the occasional survey. There would be no marketing to this list, and no more than 1-2 short (10 minutes or less is our target) surveys per quarter. All responses will be kept completely anonymous (we're trying to set it up to scrub the data as we collect it), and we will return the favor to the community by releasing the results and raw data wherever possible. We're also working on other ideas to give back to participants – such as access to pre-release research, or maybe even free Q&A emails/calls if you need some advice on something.

No marketing. No spin. Free data.*

If you are interested, please send an email to survey@securosis.com and we'll start building the list. We will never use any email addresses sent to this project for anything other than these occasional short surveys. Private data will never be shared with any outside organization. We obviously need to hit a certain number of participants to make this meaningful, so please spread the word.

*Obviously we get some marketing for ourselves out of publishing data, but hopefully you don't consider that evil or slimy.


What Do DLP and Condoms Have in Common?

They both work a heck of a lot better if you use them ahead of time.

I just finished reading the Trustwave Global Security Report, which summarizes their findings from incident response and penetration tests during 2009. In over 200 breach investigations, they encountered only one case where the bad guy encrypted the data during exfiltration. That's right, only once. One. The big uno. This makes it highly likely that a network DLP solution would have detected, if not prevented, the other 199+ breaches.

Since I started covering DLP, one of the biggest criticisms has been that it can't detect sensitive data if the bad guys encrypt it. That's like telling a cop to skip the body armor because the bad guy can just shoot them someplace else. Yes, we've seen cases where data was encrypted – I've been told that in the recent China hacks the outbound channel was encrypted – but based on the public numbers available, more often than not (in a big way) encryption isn't used. This will probably change over time, but we also have other techniques for trying to detect encrypted and other covert exfiltration methods.

Those of you currently using DLP need to remember that if you are only using it to scan employee email, it won't really help much either. You need to scan all outbound TCP/IP to get full value, and make sure the tool is configured in true promiscuous mode rather than locked to specific ports and protocols. This might mean adding boxes, depending on which product you are using. Yes, I know I just used the words 'promiscuous' and 'condom' in a blog post, which will probably get us banned (hopefully our friends at the URL filtering companies will at least give me a warning).

I realize some of you will be thinking, "Oh, great, but now the bad guys know and they'll start encrypting." Probably, but that's not a change they'll make until their exfiltration attempts fail – no reason to change until then.
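To make the port-agnostic monitoring point concrete, here is a minimal sketch of the idea in Python using scapy. Everything specific here is an assumption for illustration – the 10.0.0.0/8 internal range, the simplistic card-number pattern, and printing instead of real alerting – and a commercial DLP product does far deeper content analysis:

```python
import re
from scapy.all import IP, Raw, TCP, sniff

# Generic card-number pattern: 16 digits, optionally space/dash separated.
CARD_RE = re.compile(rb"\b(?:\d[ -]?){15}\d\b")

INTERNAL = "10."  # assumption: internal hosts live in 10.0.0.0/8

def inspect(pkt):
    # Only inspect TCP segments that actually carry a payload.
    if IP in pkt and TCP in pkt and Raw in pkt:
        src, dst = pkt[IP].src, pkt[IP].dst
        # Outbound = internal source talking to an external destination.
        if src.startswith(INTERNAL) and not dst.startswith(INTERNAL):
            if CARD_RE.search(pkt[Raw].load):
                print(f"ALERT: card-like data {src} -> {dst}:{pkt[TCP].dport}")

# Watch ALL outbound TCP, not just mail ports (scapy captures
# promiscuously by default).
sniff(filter="tcp", prn=inspect, store=False)
```

The point of the sketch is the capture filter: it watches every outbound TCP session, not just email, which is exactly the promiscuous configuration argued for above.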


Pragmatic Data Security: Discover

In the Discovery phase we figure out where the heck our sensitive information is, how it's being used, and how well it's protected. If performed manually, or with too broad an approach, Discovery can be quite difficult and time consuming. In the pragmatic approach we stick with a very narrow scope and leverage automation for greater efficiency. A mid-sized organization can see immediate benefits in a matter of weeks to months, and usually finish a comprehensive review (including all endpoints) within a year or less.

Discover: The Process

Before we get into the process, be aware that your job will be infinitely harder if you don't have a reasonably up to date directory infrastructure. If you can't figure out your users, groups, and roles, it will be much harder to identify misuse of data or build enforcement policies. Take the time to clean up your directory before you start scanning and filtering for content. Also, the odds are very high that you will find something that requires disciplinary action. Make sure you have a process in place to handle policy violations, and work with HR and Legal before you start finding things that will get someone fired (trust me, those odds are pretty darn high).

You have a couple of choices for where to start – depending on your goals, you can begin with applications/databases, storage repositories (including endpoints), or the network. If you are dealing with something like PCI, stored data is usually the best place to start, since avoiding unencrypted card numbers on storage is an explicit requirement. For HIPAA, you might want to start on the network, since most of the violations in organizations I talk to relate to policy violations over email/web/FTP due to bad business processes. For each area, here's how you do it:

Storage and Endpoints: Unless you have a heck of a lot of bodies, you will need a Data Loss Prevention tool with content discovery capabilities (I mention a few alternatives in the Tools section, but DLP is your best choice). Build a policy based on the content definition you built in the first phase. Remember, stick to a single data/content type to start. Unless you are in a smaller organization and plan on scanning everything, you need to identify your initial target range – typically major repositories, or endpoints grouped by business unit. Don't pick something too broad or you might end up with too many results to do anything with. You'll also need some sort of access to each server – either by installing an agent or through access to a file share. Once you get your first results, tune your policy as needed and start expanding your scope to scan more systems. (A stripped-down illustration of storage scanning follows below.)

Network: Again, a DLP tool is your friend here, although unlike with content discovery you have more options to leverage other tools for some sort of basic analysis. They won't be nearly as effective, and I really suggest using the right tool for the job. Put your network tool in monitoring mode and build a policy to generate alerts using the same data definition we talked about when scanning storage. You might focus on just a few key channels to start – such as email, web, and FTP – with a narrow IP range/subnet if you are in a larger organization. This will give you a good idea of how your data is being used, identify some bad business processes (like unencrypted FTP to a partner), and show which users or departments are the worst abusers. Based on your initial results you'll tune your policy as needed. Right now our goal is to figure out where we have problems – we will get to fixing them in a different phase.
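Before moving on to applications and databases, here is a minimal sketch of the storage-scanning idea above: walk a mounted file share and flag files containing validated card numbers. The mount point is a hypothetical example, and a real content discovery tool adds file-type cracking, agents, and remediation workflow on top of this core mechanic:

```python
import os
import re

# Loose pattern for 13-19 digit card-like runs, with optional separators.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_ok(digits):
    # Standard Luhn mod-10 check to weed out random digit runs.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_share(root):
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for match in CARD_RE.finditer(text):
                digits = re.sub(r"[ -]", "", match.group())
                if luhn_ok(digits):
                    print(f"HIT: {path} (ends in {digits[-4:]})")
                    break  # one validated hit per file is enough for discovery

scan_share("/mnt/finance_share")  # hypothetical mount point for the target share
```

The Luhn check is what keeps a scan like this from drowning you in false positives – exactly the "too many results to do anything with" problem described above.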
Applications & Databases: Your goal is to determine which applications and databases have sensitive data, and you have a few different approaches to choose from. This is the part of the process where a manual effort can be somewhat effective, although it's not as comprehensive as using automated tools. Simply reach out to different business units, especially the application support and database management teams, to create an inventory. Don't ask them which systems have sensitive data – ask them for an inventory of all systems. The odds are very high your data is stored in places you don't expect, so to check these systems perform a flat file dump and scan the output with a pattern matching tool. If you have the budget, I suggest using a database discovery tool – preferably one with built-in content discovery (there aren't many on the market, as we'll mention in the Tools section). Depending on the tool you use, it will either sniff the network for database connections and then identify those systems, or scan based on IP ranges. If the tool includes content discovery, you'll usually give it some level of administrative access to scan the internal database structures.

I just presented a lot of options, but remember we are taking the pragmatic approach. I don't expect you to try all this at once – pick one area, with a narrow scope, knowing you will expand later. Focus on wherever you think you might have the greatest initial impact, or where you have known problems. I'm not an idealist – some of this is hard work and takes time, but it isn't an endless process, and you will have a positive impact.

We aren't necessarily done once we figure out where the data is – for approved repositories, I really recommend you also re-check their security. Run at least a basic vulnerability scan, and for bigger repositories I recommend a focused penetration test. (Of course, if you already know it's insecure you probably don't need to beat a dead horse with another check.) Later, in the Secure phase, we'll need to lock down the approved repositories, so it's important to know which security holes to plug.

Discover: Technologies

Unlike the Define phase, here we have a plethora of options. I'll break this into


You Have to Buy Data Security Tools

When Mike was reviewing the latest Pragmatic Data Security post, he nailed me on being too apologetic for telling people they need to spend money on data-security-specific tools. (The line isn't in the published post.) Just so you don't think Mike treats me any nicer in private than he does in public, here's what he said:

Don't apologize for the fact that data discovery needs tools. It is what it is. They can be like almost everyone else and do nothing, or they can get some tools to do the job. Now helping to determine which tools they need (which you do later in the post) is a good thing. I just don't like the apologetic tone.

As someone who is often a proponent of tools that aren't in the typical security arsenal, I've found myself apologizing for telling people to spend money. Partially it's because it isn't my money… and I think analysts all too often forget that real people have budget constraints. Partially it's because certain users complain or look at me like I'm an idiot for recommending something like DLP.

I have a new answer for the next time someone asks me if there's a free tool to replace whatever data security tool I recommend: Did you build your own Linux box running ipfw to protect your network, or did you buy a firewall?

The important part is that I only recommend these purchases when they will provide you with clear value in terms of improving your security over the alternatives. Yep, this is going to stay a tough sell until some regulation or PCI-like standard requires them. Thus I'm saying, here and now, that if you need to protect data you likely need DLP (the real thing, not merely a feature of some other product) and Database Activity Monitoring. I haven't found any reasonable alternatives that provide the same value.

There. I said it. No more apologies – if you have the need, spend the money. Just make sure you really have the need, and the tool you are looking at really delivers the value, since not all solutions are created equal.


The Network Forensics (Full Packet Capture) Revival Tour

I hate to admit that, of all the various technology areas, I'm probably best known for my work covering DLP. What few people know is that I 'fell' into DLP: one of my first analyst assignments at Gartner was network forensics. Yep – the good old fashioned "network VCRs", as we liked to call them in those pre-TiVo days. My assessment at the time was that network forensics tools like Niksun, Infinistream, and Silent Runner were interesting, but really only viable in certain niche organizations. These vendors usually had a couple of really big clients, but were never able to grow adoption in the broader market. The early DLP tools were sort of lumped into this monitoring category, which is how I first started covering them (long before the term DLP was in use).

Full packet capture devices haven't really done that well since my early analysis. SilentRunner and Infinistream both bounced around various acquisitions and re-spin-offs, and some even tried to rebrand themselves as something like DLP. Many organizations decided to rely on IDS as their primary network forensics tool, mostly because they already had the devices. We also saw Network Behavior Analysis, SIEM, and deep packet inspection firewalls offer some of the value of full capture, but focused more on analysis to provide actionable information to operations teams. This offered a clearer value proposition than capturing all your network data just to hold onto it.

Now the timing might be right for full capture to make a comeback, for a few reasons. First, Mike mentioned full packet capture in Low Hanging Fruit: Network Security, and underscored the need to figure out how to deal with these new, more subtle and targeted attacks. Full packet capture is one of the only ways we can prove some of these intrusions even happened, given the patience and skills of the attackers and their ability to prey on the gaps in existing SIEM and IPS tools. Second, the barriers between inside and outside aren't nearly as clean as they were 5+ years ago, especially once the bad guys get their initial foothold inside our 'walls'. Where we once were able to focus on gateway and perimeter monitoring, we now need ever greater ability to track internal traffic. Additionally, given the increase in processing power (thank you, Moore!), improvements in algorithms, and the decreasing price of storage, we can actually leverage the value of the full captured stream. Finally, the packet capture tools are playing better with existing enterprise capabilities. For instance, SIEM tools can analyze content from the capture tool, using the packet captures as a secondary source if a behavioral analysis tool, DLP, or even a ping off a server's firewall from another internal system kicks off an investigation. This dramatically improves the value proposition.

I'm not claiming that every organization needs, or has sufficient resources to take advantage of, full packet capture network forensics – especially those on the smaller side. Realistically, even large organizations only have a select few segments (with critical/sensitive data) where full packet capture makes sense. But driven by APT hype, I strongly suspect we'll see adoption start to rise again, and a ton of vendors with parallel technologies, such as NBA and network monitoring, starting to market their tools in the space.
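For readers who have never touched these tools, here is a toy sketch of the core "network VCR" mechanic using Python and scapy: capture everything on a segment into rotating pcap files. The interface name and rotation interval are assumptions, and real products use dedicated capture hardware, indexing, and retention policies rather than a Python loop:

```python
import time
from scapy.all import sniff
from scapy.utils import PcapWriter

ROTATE_SECONDS = 300  # assumption: start a new capture file every 5 minutes

def capture(iface):
    while True:
        pcap = PcapWriter(f"capture-{int(time.time())}.pcap", sync=True)
        # store=False keeps memory flat; timeout ends this sniff() call
        # so we can rotate to a fresh file.
        sniff(iface=iface, prn=pcap.write, store=False, timeout=ROTATE_SECONDS)
        pcap.close()

capture("eth0")  # hypothetical monitoring interface on a SPAN/tap port
```

Everything else in this market – the indexing, the search, the SIEM integration mentioned above – is about making the resulting mountain of pcap files actually useful during an investigation.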


Pragmatic Data Security: Define Phase

Now that we've described the Pragmatic Data Security Cycle, it's time to dig into the phases. As we roll through each of them I'm going to break the discussion into three parts: the process, the technologies, and a case study. For the case study we're going to follow a fictional organization through the entire process. Instead of showing you every single data protection option at each phase, we'll focus on a narrow project that better represents what you will likely experience.

Define: The Process

From a process standpoint, this is both the easiest and hardest of the phases: easy, since there's only one thing you need to do and it isn't very technical or complex; hard, since it may involve coordination across multiple business units and the quest for executive sponsorship.

1. Identify an executive sponsor to support your efforts. Without management support, the rest of the process will be extremely difficult.
2. Identify the one piece of information/content/data you want to protect. The definition shouldn't be too broad. For example, "engineering plans" is too broad, but "engineering plans for project X" is acceptable. Using "PCI/NPI/HIPAA" is acceptable, assuming you narrow it down in the next step.
3. Define and model the information you identified in the step above. For totally unstructured content like engineering plans, identify a repository to use for your definition, or any watermarking/labels you are certain will be available to identify and protect the information. For PCI/NPI/HIPAA, determine the exact fields/pieces of data to protect. For PCI it might be only the credit card number, for NPI it might be names and addresses, and for HIPAA it might be ICD-9 billing codes. If you are protecting data from a database, also identify the source repository.
4. Identify key business units with a stake in the information, and contact them to verify the priority, structure, and repositories for this information. It's no fun if you think you're going to protect a database of customer data, only to find out halfway through that it's not really the important one from a business perspective.

That's it: find a sponsor, identify the category, identify the data/repository, and confirm with the business folks.

Define: Technologies

None. This is a manual business process, and the only technology you need is something to take notes with… or maybe email to communicate.

Define: Case Study

Billy Bob's Bait Shop and Sushi Outlet is a mid-sized, multi-site retail organization that specializes in "The freshest seafood, for your family or aquatic friends". Billy Bob's consists of a corporate headquarters and a few dozen retail outlets in three states. There are about 1,000 employees, and a growing web business thanks to their ability to ship fresh bait or sushi to any location in the US overnight.

Billy Bob's is struggling with PCI compliance and wants to avoid a major security breach after seeing the damage caused to their major competitor (John Boy's Worms and Grub) during a breach. They do not have a dedicated security team, but their CIO designated one of their top network administrators (the former firewall manager) to head up security operations. Frank has a solid history as a network administrator and is familiar with security (including some SANS training and a CISSP class). Due to problems with their first PCI assessment, Frank has the backing of the CIO.

The category of data is PCI. After some research, Frank decides to go with a multilevel definition – at the top is credit card numbers.
Since they are (supposedly) not storing card numbers in a database they could feed to any data protection tools, Frank is starting with a regular expression to identify credit card numbers, and then plans on refining it using customer names (which are stored in the database). He is hoping that whatever tools he picks can use a generic credit card number definition for low-priority alerts, and a credit card number tied to a customer name to trigger higher-priority alerts. Frank also plans on using violation counts to help find real problem areas. (A rough sketch of this tiered matching follows at the end of this post.)

Frank now has a generic category (PCI), a specific definition (a generic regex plus customer names from a database), and the repository location (the customer database itself). From the heads of customer relations and billing, he learned that there are really two databases he needs to worry about: the main transaction processing/records system for the web outlet, and the point of sale transaction processing system for the retail outlets. The web outlet does not store unencrypted credit card numbers, but the retail outlets currently do, and they are working with the transaction processor to fix that. Thus he is adding credit card numbers from the retail database to his list of data sources. Fortunately, they are only stored in the central processing database, and not at the individual retail outlets.

That's the setup – in our next post we will cover the Discovery process, to figure out where the heck all that data is.
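As an illustration of Frank's tiered matching idea, here is a minimal sketch in Python. The customer names and the 200-character proximity window are assumptions for the example; a real DLP policy would pull the names directly from the database and use the product's own correlation features:

```python
import re

# Generic 16-digit card pattern, per Frank's starting definition.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

# Assumption: names exported from the customer database.
CUSTOMER_NAMES = {"billy bob", "jane doe"}

def classify(text):
    match = CARD_RE.search(text)
    if not match:
        return None
    # Escalate if a known customer name appears near the card number.
    lo, hi = max(0, match.start() - 200), match.end() + 200
    window = text[lo:hi].lower()
    if any(name in window for name in CUSTOMER_NAMES):
        return "HIGH: card number tied to a customer name"
    return "LOW: generic card number match"

print(classify("Ship to Jane Doe, card 4111 1111 1111 1111"))  # -> HIGH
print(classify("Invoice ref 4111 1111 1111 1111"))             # -> LOW
```

The two-tier output maps directly to Frank's plan: the generic match generates a low-priority alert, while the combination with a customer name is what deserves immediate attention.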


Security Strategies for Long-Term, Targeted Threats

After writing up the Advanced Persistent Threat in this week's FireStarter, a few people started asking for suggestions on managing the problem. Before I lay out some suggestions, it's important to understand what we are dealing with here. APT isn't some sort of technical term – in this case the threat isn't a type of attack, but a type of attacker. They are advanced – possessing strong skills and capabilities – and persistent, in that if you are a target they will continue to attempt attacks until they succeed or the costs outweigh the potential rewards. You don't just have to block them once so they move on – they will continue to probe and strike until they achieve their goal. Thus my recommendations will by no means "eliminate" APT. I could make a jazillion recommendations on different technology solutions to block this or that attack technique, but in the end a persistent threat actor will just shift tactics in response. Rather, these suggestions will help detect, contain, and mitigate successful attacks.

I also highly suggest you read Andrew Jaquith's post, with this quote:

If you fall into the category of companies that might be targeted by a determined adversary, you probably need a counter-espionage strategy – assuming you didn't have one already. By contrast, thinking just about "APT" in the abstract medicalizes the condition and makes it treatable by charlatans hawking miracle tonics. Customers don't need that, because it cheapens the threat.

If you believe you are a target, I recommend the following:

Segregate your networks and information. The more internal barriers an attacker needs to traverse, the greater your chance of detection. Network segregation also improves your ability to tailor security controls (especially monitoring) to the needs of each segment. It may also assist with compartmentalization, but if you allow VPN access across these barriers, segregation won't help nearly as much. The root cause of many breaches has been a weak endpoint connecting over VPN to a secured network.

Invest heavily in advanced monitoring. I don't mean only simple signature-based solutions, although those are part of your arsenal. Emphasize two categories of tools: those that detect unusual behavior/anomalies, and those with extensive collection capabilities to help in investigations once you detect something. Advanced monitoring changes the playing field! We always say the reason you will eventually be hacked is that when you are on defense only, the attacker needs only a single mistake to succeed. Advanced monitoring gives you the same capability – now the attacker needs to execute with greater perfection, over a sustained period of time, or you have a greater chance of detection.

Upgrade your damn systems. Internet Explorer 6 and Windows XP were released in 2001; these technologies were not designed for today's operating environment, and are nearly impossible to defend. The anti-exploitation technologies in current operating systems aren't a panacea, but they do raise the barrier to entry significantly. This is costly, and I'll leave it to you to decide if the price is worth the risk reduction. When possible, select 64-bit options, as they include even stronger security capabilities. No, new operating systems won't solve the problem, but we might as well stop making it so damn easy for the attackers. Longer term, we also need to pressure our application vendors to update their products to utilize the enhanced security capabilities of modern operating systems.
For example, those of you in Windows environments could require all applications you purchase to enable ASLR and DEP (sorry, Adobe). A quick way to check a binary for this is sketched at the end of this post.

By definition, advanced persistent threats are as advanced as they need to be, and they won't be going away. Compartmentalization and monitoring will help you better detect and contain attacks, and are fairly useful no matter what tactics your opponent deploys. They are also pretty darn hard to implement comprehensively in current operating environments. But again, nothing can "solve" APT, since we're talking about determined humans with time and resources, who are out to achieve the specific goal of breaking into your organization.
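On the ASLR/DEP point above: whether a Windows binary opts in is recorded in its PE header flags, so vetting vendor software is scriptable. Here is a hedged sketch using the pefile library; the install path is a hypothetical example:

```python
import pefile  # pip install pefile

DYNAMIC_BASE = 0x0040  # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR opt-in
NX_COMPAT    = 0x0100  # IMAGE_DLLCHARACTERISTICS_NX_COMPAT   -> DEP opt-in

def check(path):
    pe = pefile.PE(path, fast_load=True)  # headers only; no full parse needed
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    print(f"{path}: ASLR={'yes' if flags & DYNAMIC_BASE else 'NO'}, "
          f"DEP={'yes' if flags & NX_COMPAT else 'NO'}")

check(r"C:\Program Files\VendorApp\app.exe")  # hypothetical vendor binary
```

Running a check like this across a software inventory is one concrete way to put the "pressure your application vendors" recommendation into procurement practice.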


FireStarter: APT—It’s Called “Espionage”, not “Information Warfare”

There's been a lot of talk on the Interwebs recently about the whole Google/China thing. While there are a few bright spots (like anything from the keyboard of Richard Bejtlich), most of it's pretty bad. Rather than rehashing the potential attack details, I want to step back and start talking about the bigger picture and its potential implications. The Google hack – Aurora, or whatever you want to call it – isn't the end (or the beginning) of the Advanced Persistent Threat, and it's important for us to evaluate these incidents in context and use them to prepare for the future.

As usual, instead of banding together, parts of the industry turned on each other to fight over the bones. On one side are pundits claiming how incredibly new and sophisticated the attack was. The other side insisted it was a stupid basic attack of no technical complexity, and that they had way better zero days which wouldn't ever have been caught. Few realize that those two statements are not mutually exclusive – some organizations experience these kinds of attacks on a continuing basis (that's why they're called "persistent"). For other organizations (most of them), the combination of a zero-day with encrypted channels is way more advanced than what they're used to or prepared for. It's all a matter of perspective, and of your ability to detect this stuff in the first place.

The research community pounced on this, with many expressing disdain at the lack of sophistication of the attack. Guess what, folks – the attack was only as sophisticated as it needed to be. Why burn your IE8/Win7 zero day if you don't have to? I don't care if an attack isn't elegant – if it works, it's something to worry about. Do not think, for one instant, that the latest wave of attacks represents the total offensive capacity of our opponents.

This is espionage, not 'warfare', and it is the logical extension of how countries have been spying on each other since the dawn of human history. You do not get to use the word 'war' if there aren't bodies, bombs, and blood involved. You don't get to tack 'cyber' onto something just because someone used a computer.

There are few to no consequences if you're caught. When you need a passport to spy, you can be sent home or killed. When all you need is an IP address, the worst that can happen is your wife gets pissed because she thinks you're browsing porn all night.

There is no motivation for China to stop. They own major portions of our national debt and most of our manufacturing capacity, and are perceived as an essential market for US economic growth. We (the US and much of Europe) are in no position to apply any serious economic sanctions. China knows this, and it allows them great latitude to operate.

Every vendor who tells me they can 'solve' APT instantly ends up on my snake oil list. There isn't a tool on the market, or even a collection of tools, that can eliminate these attacks. It's like the TSA – trying to apply new technologies to stop yesterday's threats. We can make it a lot harder for the attacker, but when they have all the time in the world and the resources of a country behind them, it's impossible to build insurmountable walls.

As I said in Yes Virginia, China Is Spying and Stealing Our Stuff, advanced attacks from a patient, persistent, dangerous actor have been going on for a few years, and will only increase over time.
As Richard noted, we've seen these attacks move from targeting only military systems, to general government, to defense contractors and infrastructure, and now to general enterprise. Essentially, any organization that produces intellectual property (including trade secrets and processes) is a potential target, as is any widely adopted technology service with private information (hello, ISPs, email services, and social networks), any manufacturer (especially chemical/pharma), any infrastructure provider, and any provider of goods to infrastructure providers.

The vast majority of our security tools and defenses are designed to prevent crimes of opportunity. We've been saying for years that you don't have to outrun the bear, just a fellow hiker. This round of attacks, and the dramatic rise of financial breaches over the past few years, tell us those days are over. More organizations are being deliberately targeted and need to adjust their thinking. On the upside, even our well-resourced opponents are still far from having infinite resources.

Since this is the FireStarter, I'll put my recommendations into a separate post. But to spur discussion, I'll ask: what would you do to defend against a motivated, funded, and trained opponent?


Some APT Controls

Now, all of that said, the world isn't coming to an end. Just because we can't eliminate a threat doesn't mean we can't contain it. The following strategies aren't specific to any point technology, but they can help reduce the impact when your organization is targeted:

Segregate your networks and information. The more internal barriers an attacker needs to traverse, the greater your likelihood of detection. Network segregation also improves your ability to tailor security controls, especially monitoring, to the needs of each segment.

Invest heavily in advanced monitoring. I don't mean only simple signature-based solutions, although those are part of your arsenal. Emphasize two categories of tools: those that detect unusual behavior/anomalies, and those with extensive collection capabilities to help in investigations once you detect something. Advanced monitoring changes the playing field! We always say the reason you will eventually be hacked is that when you are on defense only, the attacker needs only a single mistake to succeed. Advanced monitoring gives you the same capability – now the attacker needs to execute with near-perfection, over a sustained period of time, or you have a greater chance of detection. (A toy illustration of the behavioral idea follows at the end of this post.)

Upgrade your damn systems. Internet Explorer 6 and Windows XP were released in 2001; these technologies were not designed for today's operating environment, and are nearly impossible to defend. The anti-exploitation technologies in current operating systems aren't a panacea, but they do raise the barrier to entry significantly. This is costly, and I'll leave it to you to decide if the price is worth the risk reduction. When possible, select 64-bit options, since they include even stronger security capabilities. Longer term, we also need to pressure our application vendors to update their products to utilize the enhanced security capabilities of modern operating systems. For example, those of you in Windows environments could require all applications you purchase to enable ASLR and DEP (sorry, Adobe).

By definition, advanced persistent threats are as advanced as they need to be, and they won't be going away. APT is the logical extension of how nations have always spied on each other – let's not pretend it is anything more or less.
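To make the anomaly-detection category slightly less abstract, here is a toy sketch: baseline outbound bytes per internal host and flag large deviations. The hosts, numbers, and three-sigma threshold are all assumptions for illustration; real behavioral tools model far richer features than a single daily byte count:

```python
from statistics import mean, stdev

# Assumption: daily outbound byte counts per internal host, e.g. from
# flow records collected at the segment boundary.
history = {
    "10.0.1.5": [120e6, 130e6, 110e6, 125e6],
    "10.0.1.9": [40e6, 35e6, 45e6, 38e6],
}
today = {"10.0.1.5": 128e6, "10.0.1.9": 900e6}  # the second host spikes

for host, sent in today.items():
    baseline = history[host]
    mu, sigma = mean(baseline), stdev(baseline)
    if sent > mu + 3 * sigma:  # simple three-sigma deviation rule
        print(f"ANOMALY: {host} sent {sent / 1e6:.0f} MB "
              f"(baseline ~{mu / 1e6:.0f} MB)")
```

The value of even this crude approach is that it doesn't depend on knowing the attacker's tools – a compromised host exfiltrating data simply stops looking like itself.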


Pragmatic Data Security: The Cycle

Back in Part 1 of our series on Pragmatic Data Security we covered some of the guiding concepts of the process, and now it's time to dig in and show you the process itself.

Before I introduce the process cycle, it's important to remember that Pragmatic Data Security isn't about trying to instantly protect everything – it's a structured, straightforward process to protect a single information type, which you then expand in scope incrementally. It's designed to answer the question, "How can I protect this specific content at this point in time, in my existing environment?" rather than, "How can I protect all my sensitive data right now?" Once we nail down one type of data, we can move on to other sensitive information. Why? Because, as we mentioned in Part 1, if you start with too broad a scope you dramatically increase your chance of failure.

I previously covered the cycle in another post, but for continuity's sake here it is, slightly updated:

1. Define what information you want to protect (specifically – not general data classification). I suggest something very discrete, such as private customer data (specify which exact fields) or engineering documents for a specific project.
2. Discover where it's located (using any of various tools/techniques, preferably automated, such as DLP, rather than manually).
3. Secure the data where it's stored, and/or eliminate data where it shouldn't be (access controls, encryption).
4. Monitor data usage (various tools, including DLP, DAM, logs, and SIEM).
5. Protect the data from exfiltration (DLP, USB control, email security, web gateways, etc.).

For example, if you want to protect credit card numbers, you'd define them in step 1, use DLP content discovery in step 2 to locate where they are stored, remove them or lock down the repositories in step 3, use DAM and DLP to monitor where they're going in step 4, and use blocking technologies to keep them from leaving the organization in step 5.

For the rest of this series we'll walk through each step, showing what you need to do and tying it all together with a use case.
