Securosis

Research

Friday Summary: August 12, 2011

Believe it or not, I’m not the biggest fan of travel. Oh, I used to be, maybe 10+ years ago when I was just starting to travel as part of my career. Being in your 20’s and getting paid to literally circle the globe isn’t all bad… especially when you’re single. But the truth is I got tired of travel long before I started a family. Traveling every now and then is a wonderful experience that can change the lens with which you view the world. Hitting the airport once or twice a month, on the other hand, does little more than disrupt your life (and I know plenty of people who travel even more than that). I miss being on a routine, and I really miss the strong local social bonds I used to have. Travel killed my chances of moving up to my next Black Belt. It wrecked my fitness consistency (yes, I still work out a ton, but not so much with other people, and bad hotel gyms and strange roads aren’t great for the program). It killed my participation in mountain rescue, although for a couple years it did let me ski patrol in Colorado while I lived in Phoenix. That didn’t suck. It mostly hurt my relationships with my “old” friends because I just wasn’t around much. Folks I basically grew up with, as we all congregated in Boulder (mostly) as we started college, and learned to rely on each other as surrogate family. Complete with Crazy Uncle Wade at the head of the Thanksgiving table (Wade is now in the Marshall Islands, after working as an electrician in Antarctica). On the other hand, I now have a social group that’s scattered across the country and the world. I see some of these people more than my local friends here in Phoenix, and we’re often on expense accounts without a curfew. I was sick last week at Black Hat and DefCon, but managed to spend a little quality time with folks like Chris Hoff, Alex Hutton, Martin and Zach from the Podcast, two good friends from Gartner days, Jeremiah, Ryan, Mike A., and the rest of the BJJ crew, and even some of these people’s spouses. 
Plus so many more that going to DefCon (in particular) now feels more like a week of summer camp than a work conference. With beer. And parties in the biggest clubs in Vegas (open bar). And… well, we’re not 13 anymore. What’s amazing and awesome is almost none of us work together, and most of us don’t live anywhere near each other. And it isn’t unusual to roll into some random city (for a client gig, not even a conference), and find out someone else is also in town. We live strange lives as digital nomads who combine social media and frequent flyer miles to create a personal network that’s different from seeing the same faces every weekend at the Rio (Boulder thing), but likely as strong. I don’t think this could exist without both the technical and physical components. I still miss the consistency of life with a low-travel job. But in exchange I have the kinds of adventures other people write books about, and get to share them with a group of people I consider close friends, even if I can’t invite them over for a BBQ without enough time to get through their personal gropings at the airport.

-Rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian quoted on tokenization.

Favorite Securosis Posts
  • Mike Rothman: NoSQL and No Security. Nothing like poking a big security hole in an over-hyped market. Who needs DB security anyway?
  • Adrian Lane: Data Security Lifecycle 2.0: Functions, Actors, and Controls. Why? Because the standard data security lifecycle fails when applied to cloud services – you need to take location and access into account. Our goal is to make the model simple to use, so please give us your feedback.
  • David Mortman: Use THEIR data to tell YOUR story.
  • Rich: Words matter: You stop attacks, not breaches. I know, I know, we should stop thinking marketing will ever change. But everyone has their windmill.

Other Securosis Posts
  • Say Hello to Chip and Pin.
  • Incite 8/10/2011: Back to the Future.
  • Introducing the Data Security Lifecycle 2.0.
  • Data Security Lifecycle 2.0 and the Cloud: Locations and Access.
  • Fact-Based Network Security: Defining ‘Risk’.
  • Incite 8/3/2011: The Kids Are Our Future.
  • Words matter: You stop attacks, not breaches.
  • Cloud Security Training: August 16-18, Washington DC.
  • Security has always been a BigData problem.
  • New Blog Series: Fact-Based Network Security: Metrics and the Pursuit of Prioritization.

Favorite Outside Posts
  • Mike Rothman: Marcus Ranum: Dangerous Cyberwar Rhetoric. Ranum can pontificate with the best of them, but this perspective is dead on. Attribution is harder, and even more important, as the lines between “cyber” and physical war inevitably blur.
  • Adrian Lane: Comments about the $200,000 BlueHat prize. ErrataRob clarifies the security bounty program.
  • David Mortman: Metricon 6 Wrap-Up.
  • Chris Pepper: Badass of the Week: Abram A. Heller. Totally badass without being an ass.
  • Rich: Sunset of a Blog. Glenn is a good friend and one of the people who helped launch my writing career, especially on the Mac side (via TidBITS). This post shows the difference between a blogger and a writer.

Research Reports and Presentations
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
  • Microsoft Security Program & Vulnerability Data Now Available.
  • Did Airport Scanners Give Boston TSA Agents Cancer? TSA says that’s BS.
  • Survey Finds Smartphone Apps Store Too Much Personal Data. What? No way!
  • 22 Reasons to Patch Your Windows PC via Krebs.
  • Cameron Threatens To Shut Down UK Social Networks.


Data Security Lifecycle 2.0: Functions, Actors, and Controls

In our last post we added location and access attributes to the Data Security Lifecycle. Now let’s start digging into the data flow and controls. To review: so far we’ve completed our topographic map for data, which illustrates, at a high level, how data moves in and out of different environments, and to and from different devices. It doesn’t yet tell us which controls to use or where to place them. That’s where the next layer comes in, as we specify locations, actors (‘who’), and functions.

Functions

There are three things we can do with a given datum:
  • Access: View/access the data, including copying, file transfers, and other exchanges of information.
  • Process: Perform a transaction on the data: update it, use it in a business processing transaction, etc.
  • Store: Store the data (in a file, database, etc.).

The table below shows which functions map to which phases of the lifecycle. Each of these functions is performed in a location, by an actor (person).

Controls

Essentially, a control is what we use to restrict a list of possible actions down to allowed actions. For example, encryption can be used to restrict access to data, application controls to restrict processing via authorization, and DRM storage to prevent unauthorized copies/accesses. To determine the necessary controls, we first list all possible functions, locations, and actors, and then decide which ones to allow. We then determine what controls (technical or process) we need to make that happen. Controls can be either preventative or detective (monitoring), but keep in mind that monitoring controls that don’t tie back into some sort of alerting or analysis merely provide an audit log, not a functional control. This might be a little clearer for some of you as a table: list a function, the actor, and the location, and then check whether each combination is allowed or not. Any time you have a ‘no’ in the allowed box, you implement and document a control.
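The allow/deny exercise above can be sketched in a few lines of code. This is a minimal illustration of the idea, not part of the model itself: every function, actor, and location name below is hypothetical.

```python
# Hypothetical sketch of the controls matrix: enumerate possible
# (function, actor, location) combinations, mark which are allowed,
# and flag every 'no' as needing a documented control.

possible_actions = {
    # (function, actor, location): allowed?
    ("access",  "employee",   "internal_app"):  True,
    ("access",  "contractor", "internal_app"):  False,
    ("process", "employee",   "saas_provider"): True,
    ("store",   "employee",   "mobile_device"): False,
}

def controls_needed(matrix):
    """Each disallowed combination requires a preventative or
    detective control to enforce (and document) the restriction."""
    return sorted(combo for combo, allowed in matrix.items() if not allowed)

for function, actor, location in controls_needed(possible_actions):
    print(f"Control required: block '{function}' by {actor} at {location}")
```

For a real application you would fill the matrix from the data flows you mapped, rather than hand-coding it.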
Tying It Together

In essence, what we’ve produced is a high-level version of a data flow diagram (albeit not using standard programming taxonomy). We start by mapping the possible data flows, including devices and different physical and virtual locations, and at which phases in its lifecycle data can move between those locations. Then, for each phase of the lifecycle in a location, we determine which functions, people/systems, and more-granular locations for working with the data are possible. We then figure out which we want to restrict, and what controls we need to enforce those restrictions. This looks complex, but keep in mind that you aren’t likely to do it for all data within an entire organization. For given data in a given application/implementation you’ll be working with a much more restrictive subset of possibilities. This clearly becomes more involved with bigger applications, but practically speaking you need to know where data flows, what’s possible, and what should be allowed, to design your security. In a future post we’ll show you an example, and down the road we also plan to produce a controls matrix which will show you where the different data security controls fit in.


Introducing the Data Security Lifecycle 2.0

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently. Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors. (As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – and didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days. Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.

The lifecycle includes six phases from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).
  • Create: This is probably better named Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
  • Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
  • Use: Data is viewed, processed, or otherwise used in some sort of activity.
  • Share: Data is exchanged between users, customers, and partners.
  • Archive: Data leaves active use and enters long-term storage.
  • Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.
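For readers who think in code, the six phases can be modeled as a simple enumeration. This is only an illustrative sketch: the phase names come from the lifecycle above, and the sample path is invented to show the non-linear movement just described.

```python
from enum import Enum

class Phase(Enum):
    CREATE = 1   # better thought of as Create/Update
    STORE = 2
    USE = 3
    SHARE = 4
    ARCHIVE = 5
    DESTROY = 6

# The lifecycle is not strictly linear: a datum can revisit phases
# (e.g., pulled back out of archive into use) and may never reach
# some of them (not all data is destroyed).
example_path = [Phase.CREATE, Phase.STORE, Phase.USE, Phase.SHARE,
                Phase.ARCHIVE, Phase.USE]
```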


Data Security Lifecycle 2.0 and the Cloud: Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day-to-day basis.

Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle. We can illustrate this by thinking of the Lifecycle not as a single linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries. As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center. For data security, at this point there are four things to understand:
  • Where are the potential locations for my data?
  • What are the lifecycles and controls in each of those locations?
  • Where in each lifecycle can data move between locations?
  • How does data move between locations (via what channel)?
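One way to answer those four questions is to build a small inventory of locations and the movements between them. A rough sketch, with every location, control, and channel name invented purely for illustration:

```python
# Illustrative inventory answering the four questions above.
# All names here are hypothetical examples, not recommendations.

locations = {
    "internal_dc": {"controls": ["dlp", "database_encryption"]},
    "saas_crm":    {"controls": ["provider_side_encryption"]},
    "backup_site": {"controls": ["tape_encryption"]},
}

# In which lifecycle phase, and over what channel, data moves:
movements = [
    {"src": "internal_dc", "dst": "saas_crm",
     "phase": "share",   "channel": "https_api"},
    {"src": "internal_dc", "dst": "backup_site",
     "phase": "archive", "channel": "storage_replication"},
]

# Each movement crosses a security boundary -- a place to apply controls.
boundaries = {(m["src"], m["dst"]) for m in movements}
print(f"{len(boundaries)} boundaries to control")
```

Even a crude inventory like this makes the boundary-crossing points explicit, which is where the control decisions live.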
Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:
  • Who accesses the data?
  • How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially with applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs. Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers. This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls. In the next few posts we will finish covering background material, and then show you how to use this to pragmatically evaluate and design security controls.


Cloud Security Training: August 16-18, Washington DC

Hey everyone, just a quick announcement that we are holding another CCSK training class in a few weeks. This one is in the DC area (Falls Church) and includes the Basic, Plus, and Train the Trainer options. The Basic class is a day of lecture covering the CSA Guidance and cloud security basics. The second day is all hands-on: you’ll launch instances, build an application stack, encrypt stuff, and get really confused by federated identity. Hope to see you there, and you can register.


Donate Your Bone Marrow

I’m going to keep this short. Dave Lewis (@gattaca)’s wife was diagnosed with leukemia yesterday. Dave is one of our Contributing Analysts and a hell of a great guy, and while I haven’t met her, everyone says his wife is even better (seems to be a common trend). James Arlen (@myrcurial) posted with details over at Liquidmatrix. You may not know this, but Dave’s wife is the second person in our security community suffering from a blood-related disease (the other being Barkode, a fellow Defcon goon). If you aren’t signed up as a bone marrow donor, do it now. Only about 1 in 540 people in the registry is ever matched, so they need massive numbers. A lot of people used to tell me how cool it was that I was “saving lives” when I was working fire/rescue/ambulance. Donating your bone marrow is a far more effective and direct way to save someone. I’m not sure I actually saved too many lives in those days, but I do know that if I’m a match the odds are damn high someone will live who nature tried to take out. Do it. Now.


Friday Summary: July 14, 2011

Some days I think that in fitness, I’m getting wrong everything I advise people in security. I’ve been an athlete all my life – including some stints competing at a reasonably high (amateur) level. Like the time I went to nationals for my martial art. Cool, eh? Other than the part about getting my butt whipped by a 16-year-old. It seems cutting weight in a sport where knockouts aren’t the goal isn’t necessarily a good thing (me strong… me slow… puny teenager stand still so Hulk can kick in head, pleeze?). But running a startup and having kids seriously crimps my workout style. No more 20 hours of training a week, with entire weekends spent climbing or skiing some mountain. Here are a few of the ways in which I’m an idiot:
  • I’m addicted to the toys. I currently use the Rolex of heart rate monitors (the Polar RS800CX). This thing connects to up to 4 external sensors at once to track my heart rate, position, and (I think) the fungus level of my little toe. Does it make me faster? Er… nope. So I’m spending for capabilities far beyond my needs. But damn, I really want that watch that counts my swimming laps. I bet I’d really use that one every day. I promise – now can I buy it?
  • I’m a binge/purge sort of athlete. Rather than hitting a steady state of training and sticking with it, I’m on and off my program like a child actor at rehab. Oh, I always have great excuses like kids and travel, but as much time as I dedicate to working out, I tend to blow it with a bad month here or there. In other words, some days I feel like I flit around worse than a horny butterfly with a narcissism problem.
  • I get hurt. A lot. Then instead of fixing the root cause I freak out that I’m getting out of shape, jump back in at full speed, and get hurt again. I suppose I’m consistent (I have been on this cycle since I was a kid). On the upside, I get my money’s worth from insurance.
  • I have delusions of grandeur. If some dude passes me on the bike I take it personally.
Which is inconvenient, since most folks pass me on the bike. Or the run. Or… whatever. So I try to keep up, ignoring the fact that I train in places that attract professional athletes. Yeah, that doesn’t last too long. What really sucks is that as easy as it is to identify these problems, and much as I do (sometimes) work on them, I still make the same mistakes over and over. Okay, age has mellowed me a bit, but I’d quit my job and work out 8 hours a day in a heartbeat… … which I can measure with extreme accuracy thanks to my watch. And heck, after blowing out my knee by hour 6 I can go start work again. This is depressing. I think I’ll go sign up for a race…

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian quoted on DAM market trends.
  • Rich quoted in eWeek Europe.
  • Rich on NetSecPodcast.
  • Adrian’s Dark Reading Post on Federated Data.
  • Mike’s monthly post on Dark Reading: Low And Slow, Persistence, Loud And Proud, And The Fundamentals.

Favorite Securosis Posts
  • Mike Rothman: Friction and Security. Wouldn’t it be great if we had KY Jelly for making everyone in IT work better together?
  • Adrian Lane: Incite: The King of the House. Chicken McNuggets for vegetarians. Priceless.
  • Rich: Call off the (Attack) Dogs.

Other Securosis Posts (for 2 weeks because we skipped last week’s summary)
  • Security Marketing FAIL: Claims of Risk Reduction.
  • Tokenization vs. Encryption: Healthcare Data Security.
  • Tokenization vs. Encryption: Personal Information Security.
  • How to Encrypt or Tokenize for SaaS (and Some PaaS).
  • Smart Card Laggards.
  • Simple Isn’t Simple.
  • Social Media Security 101.
  • Incite 7/6/2011: Reading Between the Lines.

Favorite Outside Posts
  • Mike Rothman: Space Shuttle: good riddance. Count on Rob Graham to look at the situation, not the nostalgia, then bring it around to security. Compelling arguments about complexity and risk.
  • Adrian Lane: How Digital Detectives Deciphered Stuxnet. Best article documenting Stuxnet I have read. Very entertaining.
  • Rich: While not security specific, James Staten at Forrester has a good summary of this week’s cloud announcements. These are all pretty big developments that will affect your datacenter operations. Eventually.
  • Pepper: Evgeny Kaspersky interviewed by Spiegel. Wide ranging and pretty interesting.

Research Reports and Presentations
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
  • Anti-Sec is not a cause, it’s an excuse.
  • Azeri Banks Corner Fake AV, Pharma Market via Krebs.
  • SIEM Montage. Gotta have a montage!
  • Anonymous Declares War on .mil.
  • Microsoft Patches Bluetooth Hole in July’s Patch Tuesday.
  • Intego Releases iPhone Malware Scanner. Jury’s still out.
  • Google Removes All .CO.CC Subdomains Over Phishing, Spam Concerns.
  • A Journey to the Cloud (Part 2).
  • Inside the Chinese Way of Hacking.
  • Police: Internet providers must keep user logs.
  • Sony Exec Calls PlayStation Network Hack ‘A Great Experience’. In other news, he’s also really into S&M.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Michael, in response to Incomplete Thought: HoneyClouds and the Confusion Control.

We will not be able to tell if the effectiveness of these Proteus tactics actually works, although I would welcome it. I do actually believe these tactics will work against certain people / bots. I am a big believer in time, the longer time it takes the more a person / bot is prone to give up and move


How to Encrypt or Tokenize for SaaS (and Some PaaS)

A few weeks ago I posted on different methods for encrypting IaaS volumes, which tends to be one of the top questions I get about data security in the cloud. Also high on that list is encrypting (or tokenizing) for SaaS and (some) PaaS. I call this the “Salesforce.com Problem”, because more often than not I’m talking to someone on the larger side, specifically about Salesforce.com. Before I go into options, I need to explain why I’m only talking about some PaaS. PaaS covers a very wide range of technologies – from Database as a Service, to things like Google APIs, to full-on application environments like CloudFoundry and Elastic Beanstalk. For this post I’m mostly restricting myself to SaaS-related PaaS like Force.com. In other words, API interfaces to things you can also run completely via a web interface. I know this is a grey line, and in some future post I’ll go into more detail on encrypting for the rest of PaaS. Just recognize that the core architecture described here works for cases beyond this scope, but some of the issues & details may not apply. There are only two options for SaaS encryption:
  • Encrypt it at the SaaS provider.
  • Encrypt it before you send it.

To review quickly, when analyzing encryption systems we look at the locations of three components: the data, the encryption engine, and key management. If your SaaS provider handles the encryption on their side, they hold all three components, so this option requires trust in your provider. Yes, there are many subtleties and options that dramatically affect security, but at the core the provider needs the key and the data at some point. The advantage (for you) is simplicity and flexibility. But if you don’t trust your SaaS provider, you’ll need to encrypt on your side… which means increased cost and complexity. If you encrypt it before you send it, there are two options:
  • Encrypt in a client application before uploading the data.
  • Proxy connections and encrypt at the proxy.
The first option is common for things like backup applications, but as I mentioned that’s more PaaS – the part we aren’t talking about here. Especially because the vast majority of the apps I am talking about today are web-based. So most organizations I know which are looking to do this are evaluating proxy-based solutions such as CipherCloud, PerspecSys (maybe – their website sucks and doesn’t mention how they work), and Navajo Systems. These are application-aware web proxies that intercept browser calls to the SaaS provider and replace sensitive data with encrypted or tokenized values. Instead of connecting directly to the SaaS provider, users go through the proxy. You configure it to encrypt or tokenize sensitive data, although instead of defining every field on every form you should be able to say “account number” and have the product automagically replace it everywhere. In some future post I’ll delve into this architecture in more depth, but there are three main challenges to this approach:
  • The product needs to stay totally up to date with any changes to the SaaS provider UI/application. When you are intercepting and rewriting HTML fields on the fly, you really need to know exactly where they are.
  • Users need to connect back through your enterprise, or a trusted web-based host (e.g., running the proxy at Rackspace). For your internal network, this means you’re back to running VPNs. If you host on the outside, you have another party to trust, but can handle it with bookmarks or such. If you use a cloud-based web proxy for URL filtering and content security, you might be able to map it up there.
  • You might break application functionality/usefulness. This requires a lot of translation, which affects SaaS features that rely on the protected data. This becomes more of an issue as you protect more fields and data types – the more you obfuscate, the less your SaaS app can process. (It can still process the un-tokenized data.)
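To make the tokenization side of such a proxy concrete, here is a minimal sketch. This is not how any particular vendor works: the field names and the in-memory vault are assumptions for illustration, and a real product would persist the vault and rewrite HTML forms in flight.

```python
import secrets

# Sensitive values are swapped for random tokens before data leaves
# your network; the real values stay in a local "vault". The SaaS
# provider only ever stores the tokens.

SENSITIVE_FIELDS = {"account_number", "ssn"}  # illustrative field names
vault = {}  # token -> original value (a real proxy persists this)

def tokenize(record):
    """Replace sensitive fields with random, non-reversible tokens."""
    out = dict(record)
    for field in SENSITIVE_FIELDS & record.keys():
        token = "tok_" + secrets.token_hex(8)
        vault[token] = record[field]
        out[field] = token
    return out

def detokenize(record):
    """Restore original values on the way back from the provider."""
    return {k: vault.get(v, v) for k, v in record.items()}

outbound = tokenize({"name": "Alice", "account_number": "4511-0099"})
```

Note the trade-off this illustrates: because the token is random, the SaaS application can no longer search, sort, or compute on that field, which is exactly the functionality loss described above.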
Because of these challenges I tend to regard this proxy approach as a band-aid for SaaS. It’s definitely not ideal, and a heck of a lot of work for the vendor to keep up and running. I believe it makes more sense for PaaS, where you rely more on APIs than HTML interfaces. In all cases I think the web proxy approach is best used for very discrete and limited data – otherwise there is too much potential loss of core application functionality, at which point you might as well stick to internal systems.


Simple Isn’t Simple

I have to admit that some days I have no idea what will resonate with readers. For example, my latest column over at Dark Reading seems to be generating a lot more interest than I expected. For a few months now I’ve been bothered by all the pile-ons every time some organization gets hacked. Sure, some of them really are negligent, and others are simply lazy or misguided, but the rest really struggle to keep the bad guys out. There’s never any shortage of experts with hindsight bias ready to say X attack would have been stopped if they only used Z security best practice. It’s like a bunch of actors sitting around going “I could have done it better”. Frequently this ‘advice’ is applied to a large organization which “should know better”. But these critics consistently fail to account for the cost and complexity of doing anything at scale, or for (universal) resource constraints. This was the inspiration behind Simple Isn’t Simple. Here’s a quote: This isn’t one of those articles with answers. Sure, I can talk all day about how users need to operationalize security more, and vendors need to simplify, consolidate, and improve functionality. But in the end those problems are every bit as hard as everything else I’m talking about and won’t be solved anytime soon. Especially since the economics aren’t overly favorable. But we can recognize that we rely on complex solutions to difficult problems, and blaming every victim for getting hacked isn’t productive. Especially since you’re next. Security is hard. It’s even harder at scale. And we need to stop pretending that even the most basic of practices are always simple, and start focusing on how to make them more effective and easier to manage in a messy, ugly, real world. I thought it was the usual analyst BS, but I guess there’s something more to it…


Social Media Security 101

It won’t surprise any of you to learn that I don’t follow Fox News on Twitter. I know, I can see the shock in your eyes, but I’m not the biggest fan of our friends on the right. Actually, I hate all 24 hour news stations – Fox biased to the right, MSNBC to the left, and CNN to the stupid. So I missed their announcement of the demise of our commander in chief. It seems one of their Twitter accounts was hacked, and the attackers had a little fun with some bogus tweets. If you read this blog you probably know everything I’m about to write, but it’s probably a good time to review it anyway. If you use these services for business purposes, there are a few precautions to put in place:
  • If you use social media in your business, make sure you set up accounts (or use your personal accounts) to monitor your official account.
  • Be very cautious in how you handle your account credentials (who you give them to, how they are secured, etc.). The list of people with access should definitely be very short.
  • Use an OAuth-based service or application to allow employees to tweet to your account without having to give them your account password. This is how most Twitter clients work today, for example.
  • If you are large enough, talk to your provider ahead of time to understand how to report problems, and who to report them to. The last thing you want to be doing is hanging out waiting for a help desk person to see your request in the queue. Make contact, get a name, and establish a validation process to prove you are the owner of the account in an incident. You’ll also use this process if an employee goes rogue.

Simple stuff, but I suspect very few businesses follow these basics.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote appearing in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and to provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.