
The Blame Game

Get hacked? Blame China. Miss a quarter? Blame China. Serve malware to everyone visiting your site? Don’t take responsibility, just blame your anti-ad-blocking vendor. Or China. Or both. Look, we really can’t keep track of these things, but in this episode Mike and Rich talk about the lack of accountability in our industry (and other industries). One warning… a particular analogy goes a little too far. Maybe we need the explicit tag on this one.


Critical Security Capabilities for Cloud Providers

Between teaching classes and working with clients, I spend a fair bit of time talking about particular cloud providers. The analyst in me never wants to be biased, but the reality is there are big differences in capabilities, and some of them matter. Throwing out all the non-security differentiators, when you look at cloud providers for enterprises there are some critical security capabilities you need for security and compliance. Practically speaking, these quickly narrow down your options. My criteria are more IaaS-focused, but it should be obvious which also apply to PaaS and SaaS:

  • API/admin logging: This is the single most important compliance control, a critical security control, and the single biggest feature gap for even many major providers. If there isn’t a log of all management activity, ideally including activity by the cloud provider itself, you never really know what’s happening with your assets. Your only other options are to constantly snapshot your environment and look for changes, or run all activity through a portal and still figure out a way to watch for activity outside that portal (yes, people really do that sometimes). There’s a short sketch of checking this programmatically after the list.
  • Elasticity and autoscaling: If it’s an IaaS provider and it doesn’t have autoscaling, run away. That isn’t the cloud. If it’s a PaaS or SaaS provider that lacks elasticity (can’t scale cleanly up or down to what you need), keep looking. For IaaS this is a critical capability because it enables immutable servers, which are one of the cloud’s best security benefits. For PaaS and SaaS it’s more of a non-security advantage.
  • APIs for all security features: Everything in the cloud should be programmatically manageable. Cloud security can’t scale without automation, and you can’t automate without APIs.
  • Granular entitlements: An entitlement is an access right/grant. The provider should offer more than just ‘admin’ – ideally rights down to each feature or API call, especially for IaaS and PaaS.
  • Good, easy SAML support that maps to the granular entitlements: Federated identity is the only reasonable way to manage all your users in the cloud. Fortunately, we nearly always see this one available. Unfortunately, some cloud providers make it a pain in the ass to set up.
  • Multiple accounts and cross-account access: One of the best ways to compartmentalize cloud deployments is to use entirely different accounts for different projects and environments, then connect them together with granular entitlements when needed. This limits the blast radius if someone gets into an account and does something bad. I frequently recommend multiple accounts for a single cloud project, and this is considered normal. It does, however, require security automation, which ties into my API requirement.
  • Software Defined Networking: Most major IaaS providers give you near complete control over your virtual networks. But some legacy providers lack an SDN, leaving you stuck with VLANs or other technologies that don’t provide the customization you need to really make things work. Read my paper on cloud network security if you want to understand more.
  • Regions/locations in different countries: Unless the cloud provider only wants business in its country of origin, this is required for legal and jurisdictional reasons. Thanks to Brian Honan for catching my omission.

This list probably looks a hell of a lot different than any of the other ones you’ve seen. That’s because these are the foundational building blocks you realize you need once you start working on real cloud projects.
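To make the logging and automation points concrete, here is a minimal sketch – my own illustration, not something from the original post. It assumes AWS, with CloudTrail standing in for the admin/API log and the boto3 SDK doing the work, and simply walks every region to report whether a trail exists and is actually logging:

```python
# Minimal sketch (assumes AWS, CloudTrail, and boto3): verify that admin/API
# logging is enabled in every region with a script instead of console clicks.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

def audit_api_logging():
    session = boto3.session.Session()
    for region in session.get_available_regions("cloudtrail"):
        try:
            ct = session.client("cloudtrail", region_name=region)
            trails = ct.describe_trails().get("trailList", [])
            if not trails:
                print(f"{region}: NO API logging configured")
                continue
            for trail in trails:
                status = ct.get_trail_status(Name=trail["TrailARN"])
                state = "logging" if status.get("IsLogging") else "NOT logging"
                print(f"{region}: trail {trail['Name']} is {state}")
        except (BotoCoreError, ClientError) as exc:
            # Regions that are disabled for the account will land here.
            print(f"{region}: could not check ({exc})")

if __name__ == "__main__":
    audit_api_logging()
```

The equivalent check should be possible against any provider that clears the “APIs for all security features” bar; if it isn’t, that tells you something too.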
I’m probably missing some, but if I break this out all I’m really talking about are:

  • Good audit logs.
  • Decent compartmentalization/segregation at different levels.
  • Granular rights to enforce least privilege.
  • A way to manage everything and integrate it into operations.

Please let me know in the comments or via Twitter if you think I’m missing anything. I’m trying to keep it relatively concise.


Summary: Refurbished

The grout in my shower isn’t merely cracking, it’s starting to flake out in chunks, backed by the mildew it spent years defending from my cleansing assaults. Our hallway walls downstairs are streaked like the protective concrete edges around a NASCAR track. Black, gray, and red marks left behind from hundreds of minor impacts with injection-molded plastic vehicles. The carpet in our family room, that little section between the sliding glass door to our patio and the kitchen, looks like it misses its cousins at the airport. In other words, our house isn’t new anymore.

This is the second home I have owned. Well, it’s the second home a bank has owned with my name attached to it. The first was an older condo back in Boulder, but this is the house my wife and I custom ordered after we were married. I still have the pictures we took the day we moved in, before we filled the space with our belongings and furniture. Plus all the minor things that lay waste to the last of your post-home disposable income, like window treatments and light fixtures. It was clean. It was exciting. A box of wood and drywall, filled with the future.

That was about 9 years ago. A year before I left Gartner, and near when I started Securosis as a blog. Since then the house isn’t the only thing that’s a little rougher around the edges. Take me, for instance. I’m running a little light on hair, some days I can barely read my Apple Watch, and I’ve never recovered the upper body strength I lost after that rotator cuff surgery. I won’t even mention the long-term effects of a half-decade of sleep deprivation, thanks to having three kids in four years.

Even Securosis shows its age. Despite our updates and platform migrations, I know the time is coming when I will finally need to break down and do a full site refresh. Somehow without losing 90 research papers and 19,000 blog posts. No, those aren’t typos. We also haven’t seen significant blog comments since Twitter entered the scene, and while we know a ton of people read our work, the nature of engagement is different. But that’s fine – it’s the nature of things. We are busy. Busier than ever since my personal blog first transformed into a company. And the nature of the work is frankly the most compelling of my career. We don’t really write as much, although we still write more than anyone else short of full-time news publications.

Pretty soon I need to have the house painted, fix some cracking drywall, and replace some carpet. This house isn’t full of potential anymore – it’s full of life. It’s busy, messy, and sometimes broken. That only means it’s well used. So the next time you find a blog post with a broken image, or our stupid comment system snaps, drop us a line. We aren’t new, exciting, or shiny anymore, but sure as hell we still get shit done. Even if it takes an extra week or so.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich is presenting a webinar on cloud network security next week.
Securosis Posts
  • Critical Security Capabilities for Cloud Providers.
  • Massive, Very Bad Java 0-Day (and, Sigh, Oracle).
  • The Power of Immutable.
  • The Economist Hack: Good Intentions, Bad Execution.
  • Summary: Distract and Deceive.
  • CSA Guidance V4 Content on GitHub.
Favorite Outside Posts
  • Rich: Trey Ford’s SecTor Keynote – Maturing InfoSec: Lessons from Aviation on Information Sharing. Trey is a pilot. Although I considered not putting this link in until he takes me up for a hop next time he’s in town. But that would be selfish.
Research Reports and Presentations
  • Pragmatic Security for Cloud and Hybrid Networks.
  • EMV Migration and the Changing Payments Landscape.
  • Network-based Threat Detection.
  • Applied Threat Intelligence.
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.
Top News and Posts
  • Apple user anger as Mac apps break due to security certificate lapse. You had one job…
  • Latest Android phones hijacked with tidy one-stop-Chrome-pop. You had one job…
  • Tor Project claims FBI paid university researchers $1m to unmask Tor users. This is an interesting situation. There have always been close ties between academic researchers and law enforcement and defense. But if you cross the line from generic research to specific targets, or it involves ‘human’ testing that typically requires an IRB approval, it certainly crosses an academic boundary. And if law enforcement hires civilians to perform actions they are legally restricted from, that also seems more like garbage you would see on a CBS police procedural. I’ll leave this one for the lawyers.
  • With just a password needed to access police databases, the FBI got basic security wrong. I was talking with a client today who asked if they had to use MFA on their (SAML authenticated) cloud accounts because they didn’t require it internally for admins. I told them that’s a great way to end up in the headlines. And, oh yeah, also turn it on for cloud.
  • Comodo Issues Eight Forbidden Certificates. You had one… oh, nevermind.
  • November Patch Tuesday Brings 12 Bulletins, Four Critical.
  • Massive Hack of 70 Million Prisoner Phone Calls Indicates Violations of Attorney-Client Privilege. Guess who needs to write a post on data retention?


Massive, Very Bad Java 0-Day (and, Sigh, Oracle)

Last Friday my wife and I were out at a concert when, thanks to social media, I learned there is a major vulnerability in a common component of Java. I planned to write it up, but spent most of Monday dealing with a 6+ hour flight delay, and all day yesterday in a meeting. I’m glad I waited.

First, if you are technical at all, read the original post at Foxglove Security. Then read Mike Mimoso’s piece at Threatpost. The short version is this is a full, pre-authentication remote code execution vulnerability in a component that isn’t built into Java, but is nearly always installed and used in applications – including things like WebSphere and JBoss.

What’s fascinating is that this one has been floating around for a while but no one really paid attention. It was even reported to Oracle, who (according to Threatpost) didn’t pass the information on to the team that maintains that component!

    While Apache Commons has told Breen and Kennedy that a patch is being developed, there had been debate within the bowels of the Java community as to who should patch the bug: Apache Commons? Affected vendors? Oracle? Breen and Kennedy said Oracle was notified in July but no one had disclosed the issue to the Apache Commons team until recently. Jenkins has already mitigated the issue on its platform.

    …

    “We talked to lots of Java researchers and none of us had heard of [the vulnerability]. It was presented at the conference and made available online, but no one picked it up,” Breen said. “One thing it could be is that people using the library may not think they’re affected. If I told you that Apache Commons has an unserialize vulnerability, it probably wouldn’t mean much. But if I tell you JBoss, Jenkins and WebSphere have pre-authentication, remote code execution vulnerabilities, that means a lot more to people. The way it was originally presented, it was an unserialize vulnerability in Commons.”

I harp on Oracle a lot for their ongoing failures in managing vulnerabilities and disclosures, going back to my Gartner days. In this case I don’t know how they were informed, which team it hit, or why it wasn’t passed on to the Apache Commons team. These things happen, but they do seem to happen more to Oracle than to other major vendors responsible for foundational software components. This does seem like a major internal process failure, although I need to stress I’m basing that on one quote in an article, and I’m happy to correct this if I’m wrong. I’m trying really hard not to be a biased a-hole, but, well, you know…

I don’t blame Oracle for all the problems in Java. Those started long before they purchased Sun. And this isn’t even code they maintain, which is one of the things that really complicates security for Java – or any programming framework. Java vulnerabilities are also a nightmare to patch because the software is used in so many different places, and packaged in so many different ways.

If you use any of the major affected products, go talk to your vendor. If you write your own applications with Java, it’s time to pull out the code scanner.
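The practical advice above is to pull out the code scanner, so here is a rough first-pass sketch of my own – plain Python, filename matching only, and no substitute for a real scanner or your vendor’s guidance. The jar naming convention it matches is an assumption; all it does is walk a project or application server directory and flag any bundled Apache Commons Collections jars so you know where to look:

```python
# Crude first-pass check (not a real code scanner): walk a directory tree and
# flag bundled Apache Commons Collections jars. Whether a given version is
# actually exposed is something to confirm against your vendor's advisory.
import os
import re
import sys

JAR_PATTERN = re.compile(r"commons-collections4?-[\d.]+\.jar$", re.IGNORECASE)

def find_commons_collections(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if JAR_PATTERN.search(name):
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in find_commons_collections(root):
        print(f"Found bundled library: {path}")
```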


The Power of Immutable

I wrote up a post over at the RSA Conference blog this week introducing the idea of immutable infrastructure to security professionals. It is a concept that really highlights some of the massive security benefits when you combine cloud computing and DevOps principles. Here’s a snippet:

    A simple example is when you use autoscaling in a cloud provider. You have a standard image of a server, and when you need more capacity the cloud service starts new instances behind a load balancer. When you don’t need that much capacity anymore (based on preset rules) the cloud service shuts down instances. This is exactly how elasticity in the cloud works.

    …

    No live patching. No remote logins. No antivirus needed (maybe). Any change, at all, to a running server is easily detectable and indicative of an attack.

I skipped a lot… go read the full article.
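For anyone who wants to see what that pattern looks like in practice, here is a minimal sketch assuming AWS and the boto3 SDK – the AMI, subnets, and target group ARN are hypothetical placeholders, not anything from the article. The point is that servers only ever come from a standard image, and capacity changes by launching or terminating instances, never by logging in and patching.

```python
# Minimal immutable-infrastructure sketch (assumes AWS and boto3).
# All IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. The "standard image" lives in a launch template.
ec2.create_launch_template(
    LaunchTemplateName="web-immutable",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # hypothetical hardened image
        "InstanceType": "t3.micro",
    },
)

# 2. The autoscaling group runs copies of that image behind a load balancer,
#    adding and removing instances based on preset rules and health checks.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-immutable-asg",
    LaunchTemplate={"LaunchTemplateName": "web-immutable", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef"
    ],
)
```

To change anything, you build a new image, update the template, and roll the group – the running servers themselves stay untouched.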


The Economist Hack: Good Intentions, Bad Execution

The Economist used a tool on their site to collect stats on, and serve ads to, visitors using ad blockers. I will avoid diving into the ad-blocking debate, but I will note that my quick check showed 16 ad trackers and beacons on the page. I don’t mind ads, but I do mind tracking. It turns out that tool, called PageFair, was compromised by attackers to serve malware to Economist readers. The Economist is one of the few publications I still respect, so this made me more than a little sad.

This one is a good learning case. Ryan Naraine and I discussed it on Twitter. Both of us were critical of The Economist’s hack response, Ryan a bit more than me. I see the seeds of good intent here, but flawed execution. Let’s use this as a learning opportunity.

  • Good: They detected the situation (or, more likely, someone else did and told them) and responded within 6 days.
  • Good: They put up a dedicated page with information on the attack and what people should do.
  • Good: They didn’t say “we care very deeply about the security and privacy of our customers”. I hate that crap.
  • Good: The response page pops up when you visit the home page.
  • Bad: The response page only pops up when you visit the home page from certain browsers (probably the ones they think are affected), and could be stopped if you use certain blockers. That’s a real problem if people use multiple systems, or if the attackers decide to block the popup.
  • Bad: They don’t specify the malware to look for. They mention it was packaged as a fake Adobe update, but that’s it. No specificity, so you cannot know if you cleaned up the right badness.
  • Bad: They recommend you change passwords before you clean the malware. VERY BAD. Thanks to @hacks4pancakes and @malwrhunterteam for finding that and letting me know.
  • Bad: They recommend antivirus, without confirming the recommended tools would really find and remove this particular malware. That should be explicitly called out.

It looks like an even split, but I’d give this response a C-. Right intention, poor execution. They should have used an in-page banner (not just a popup) and a popup to grab attention. They should have identified the malware and advised people to clean it up before changing banking passwords.

There is one issue of contention between myself and Ryan. Ryan said, “No one should ever rely on free anti-malware for any kind of protection”. I often recommend free AV, especially to consumers (usually Microsoft). It’s been many years since I used AV myself. Yes, Ryan works for an AV vendor, but he’s also someone I trust, who actually cares about doing the right thing and providing good advice. I don’t want to turn this into an AV debate, and Ryan and I both seem to agree that the real questions are: Would the AV they recommend have stopped this particular attack? Would the AV they recommend clean an infection? But they don’t provide enough detail, so we cannot know. Even just a line like, “we have tested these products against the malware and confirmed they will completely remove the infection” would be enough.

I’m not a fan of blaming the victim, but this is the risk you always face when embedding someone else’s code in your page. Hell, I talked about that when I was at Gartner over 10 years ago. You have a responsibility to your customers. The Economist seems to have tried to make the right moves, but made some pretty critical mistakes. Let’s not lambaste them, but we should certainly use this as a learning opportunity.


Summary: Distract and Deceive

Today I was sitting in my office, window open, enjoying the cold front that finally shoved the summer heat out of Phoenix. I had an ice pack on my leg because my Achilles tendon has been a little twitchy as I go into the last 8 weeks of marathon training. My wife was going through the mail, walked in, and dropped a nice little form letter from the United States Office of Personnel Management onto my desk.

It’s no secret I’m still an active disaster responder on a federal team. And, as previously mentioned, my data was lost in the OPM hack. However, my previous notification was for the part where they hacked the employment information database. This notification is for the loss of all security investigation records. Which is cool, because I don’t even have a security clearance.

What was on there? Aside from my SSN, every address I’ve lived at (once going back to childhood, but I think the most recent form was only 7 years), most of my jobs, all my relatives, and (I think) my wife’s SSN. I’m not sure about that because I can’t remember exactly what year I most recently filled out that form, but I’m pretty sure it was after we were married.

Here’s the fun part. The OPM just offered me 3 years of identity theft protection. Three. Years. Which I can only assume means my SSN will expire in that time and I’ll be safe afterwards. And it must mean China wasn’t responsible, because they would go after me as espionage, not identity theft. Right? RIGHT?!?

It’s just another example of the old distract and deceive strategy to placate. No one involved in intelligence or security thinks for half a second that ID theft protection for three years is meaningful when an SSN is lost – never mind when it (and all my personal info) is lost to a foreign intelligence adversary. But it sounds good in the press and distracts the many millions of federal workers who don’t work in security and don’t understand the implications. People who trust the government, their employer.

This isn’t limited to the OPM hack – it’s really a shitty playbook for the security industry overall. Been hacked? Call it “advanced and persistent” and then announce you hired a top-tier incident response firm. It doesn’t matter that you used default admin passwords, it’s all about looking like you take security seriously, when you don’t. Well, didn’t. Really. Look at all the breach announcements from the past couple of years. Cut and paste.

And then there are our security tools. Various point technologies, each designed to stop one particular type of attack during a particular time window. Some of them really work. But we don’t acknowledge that security is really about stopping adversaries (Gunnar Peterson constantly hammers on this), and then the window for that particular tech closes. This throws the vendors into a spin cycle because, let’s be honest, their entire livelihood is on the line. Distract. Deceive. Lather. Rinse. Repeat.

Admitting failure is hard. Addressing root causes is hard. Realizing something you built is no longer as valuable as it once was is even harder. Hell, we here at Securosis once spent two years and a couple hundred thousand dollars building something that we had to walk away from because the market shifted. That was cash out of my personal pocket – I get it. This isn’t a security industry problem, it’s basic human behavior. I don’t have an issue with someone covering their ass, but when you deceive and distract to protect yourself, and put others at greater risk? Not cool.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich wrote an article at TidBITS on the million dollar iOS exploit.
  • Rich was quoted at Wired on using AI to stop malware.
Favorite Securosis Posts
  • Rich: Incite 11/4/2015 – The Taper. I’m training for my first marathon right now. Well, second time training, because I got stomach flu the week of my planned first and had to miss it. My entire life right now is focused on starting my taper on December 6th.
Other Securosis Posts
  • CSA Guidance V4 content on GitHub.
  • DevOps’ed To Death.
  • Why I design for one cloud at a time.
  • Million Dollar iOS Exploit? Maybe.
  • Get Your Marshmallows.
  • Summary: Edumacation.
Favorite Outside Posts
  • Rich: Fast, flexible and free, Linux is taking over the online world. But there is growing unease about security weaknesses. A big WaPo piece on the security state of Linux? I sh*t you not. This is an important article that highlights some of the fundamental tensions at the heart of information security.
  • Adrian: How Carders Use eBay as Virtual ATM. A very clever way to launder money through PayPal. This shouldn’t work – the various merchants should match the Zip code of the recipient to the Zip code associated with the credit card. Gas stations and automated kiosks ask for Zip codes for this reason. But I guess some merchants aren’t checking.
Research Reports and Presentations
  • Pragmatic Security for Cloud and Hybrid Networks.
  • EMV Migration and the Changing Payments Landscape.
  • Network-based Threat Detection.
  • Applied Threat Intelligence.
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.
  • Securing Enterprise Applications.
Top News and Posts
  • If the UK government collects browsing data, one day it will be public. How long until you need to register to use the Internet, like they do in cybercafes in China? What an astoundingly bad idea.
  • Pentagon Farmed Out Its Coding to Russia. That’s cool. Maybe they outsourced my identity protection to Russia because China supposedly hacked it, and we can let those two fight it out.
  • Chinese Mobile Ad Library Backdoored to Spy on iOS Devices.
  • Xen Patches ‘Worst’-Ever Virtual Machine Escape Vulnerability. I wonder which cloud providers this affects?
  • Mozilla Embraces Private Browsing with Tracking Protection in Firefox 42. Safari? Chrome? Not-IE-but-can’t-remember-the-name?
Blog Comment of the Week
This week’s best comment


CSA Guidance V4 Content on GitHub

A while back we announced that we were contracted by the Cloud Security Alliance to write the next version of the CSA Guidance. This is actually a community project, not us off writing by ourselves in a corner. The plan is to:

  • Collect feedback on version 3.0 (complete).
  • Post outlines for the updated domains and collect public feedback.
  • Post first drafts for the updated domains and collect more feedback.
  • Post near-final drafts for the last feedback, then complete the final versions.

I’m happy to say the content is now going up on the project site at GitHub. The first draft of the architecture section is up, as is the outline for Domain 5 (data governance). Things will start moving along more quickly from here.

The best way to use GitHub at this point is to submit Issues rather than Pull Requests. We can use Issues like comments. Pull Requests are actual edits we would need to merge, and they will be difficult to handle at scale, especially if we don’t get consensus on a suggested change. I will periodically update things here on the blog, but you can watch all the real-time editing and content creation on GitHub.


DevOpsed to Death

Alan Shimel asks whether we have beaten “What is DevOps” to death yet. Alan illustrates his point with the more-than-beaten-to-death, we-wish-it-would-go-away-right-now Chuck Norris meme. Those of us who have talked about DevOps for a while are certainly beginning to tire of explaining why it is more than automation. But Alan’s question is legit, and I have to say the answer is “No!” We are in the top of the second inning of a game that will be playing out for years.

I know no amount of coffee will stifle a yawn when practitioners are confronted with yet another DevOps definition. People who are past simple automated builds and moving down the pathway to continuous integration do not need to be told what DevOps is. What they need help with is practice in how to do it better. But DevOps is still a small portion of the IT and development community, and the rest of the folks out there may still need to hear it a dozen times more before its importance sinks in. There are very good definitions, but they do not always resonate with developers. Try getting a definition to stick with people who believe they’ll be force choked to death by a Sith Lord before code auto-deploys in an afternoon – not an easy task.

To put this into context with other development trends, you can compare it to Agile. Within the last year I have had half a dozen inquiries on how to start with Agile development. Yes, I have lost count of how many years ago Agile and Scrum were born. Worse, during the RSA Conference this year, I discussed failed Agile deployments with a score of firms. Most fell flat on their faces because they missed one or two of the most basic requirements of what it means to be Agile. If you think you will run a development cycle based on a 200-page specification document and still be Agile, you’re a failure waiting to happen. They failed on the basics, not the hard stuff.

From a security perspective, I have been talking about Database Activity Monitoring and its principal use cases for the last decade. Still, every few months I get asked “How does DAM work?” And don’t even bother asking Rich about DLP – he gets questions every week. We have repetitive strain injuries from slapping our foreheads in disbelief at the same basic questions; but firms still need help with mature technologies like encryption, firewalls, DAM, DLP, and endpoint security. DevOps is still “cutting edge” for Operations at large, and people will be asking how DevOps works for a very long time to come.


Why I design for one cloud at a time

Putting all your eggs in one basket is always a little disconcerting. Anyone who works with risk is always wary of reducing options. So I am never surprised when clients ask about alternative cloud providers and try to design cloud-agnostic applications.

Personally I take a different view. Designing cloud-agnostic applications is like building an entirely self-sufficient home because you don’t want to be locked into the local utilities, weather conditions, or environment. Sure, you could try, but the tradeoffs would be immense. Especially cost. The key for any such project is to understand the risk of lock-in, and then select appropriate techniques to minimize that risk while still getting the most benefit from the platform you are using.

The only way to really get the cost savings and performance advantages of the cloud is to design specifically for the cloud you are working on. For example, use their load balancers and autoscaling groups rather than designing your own. (Don’t worry, I’ll get to containers in a second.) If you are building or bringing all your own software to the cloud platform, at a certain point, why move to the cloud at all? Practically speaking you will likely reduce your agility, resiliency, and economic benefits. I am talking in generic terms, but I have designed and reviewed some of these deployments, so this isn’t just analyst handwaving.

For example, one common scenario is data transfer for batch analysis. The cloud-agnostic way is to set up a file server at your cloud provider, SFTP the data in, and then send it off to analysis servers. The file server becomes a major weak point (if it goes down, so does everything), and it likely uses the cloud provider’s most expensive storage (volumes). And all the analysis servers probably need to be running all the time (the file server certainly does), also racking up charges.

The cloud-native approach is to transfer the data directly to object storage (e.g., Amazon S3), which is typically the cheapest storage option and highly resilient. Amazon even has an option to transfer that data into its ridiculously cheap Glacier long-term storage when you are done. Then you can use a tool like Lambda to launch analysis servers (using spot instance pricing, which can shave off another 40% or more) and link everything together with a cloud message queue, where you only pay when you actually pump data through. Everything spins up when data appears and shuts down when it’s finished; you can load as many simultaneous jobs as you want but still pay nearly nothing when you have no active jobs. That’s only one example – there’s a small sketch of this pattern below.

But I get it – sometimes you really do need to plan for at least some degree of portability. Here’s my personal approach. I tend to go all-in on native cloud features (these days almost always on AWS). I design apps using everything Amazon offers, including SQS, SNS, KMS, Aurora, DynamoDB, etc. However… My core application logic is nearly always self-contained, and I make sure I understand the dependency points. Take my data processing example: the actual processing logic is cloud-agnostic. Only the file transfer and event-driven mechanisms aren’t. Worst case, I could transfer to another service. Yes, there would be overhead, but no more than designing for and running on multiple providers. Even if I used native data analysis services, I’d just ensure I’m good at documenting my logic and code so I could redo it someplace else if needed.
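To make the event-driven example concrete, here is a minimal sketch assuming AWS: an S3 upload triggers a Lambda function, which drops a job onto an SQS queue that the (spot-priced) analysis workers drain. The bucket wiring, queue URL environment variable, and message fields are my own hypothetical choices, not a prescription:

```python
# Minimal event-driven pipeline sketch (assumes AWS Lambda, S3 events, SQS).
# Nothing runs, and nothing accrues charges, until data actually shows up.
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["ANALYSIS_QUEUE_URL"]  # hypothetical job queue

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Queue a job describing the newly arrived object; workers scale out
        # to drain the queue and shut down when it is empty.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"queued": len(records)}
```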
But what about containers? In some cases they really can help with portability, but even when using containers you will likely still lock yourself into some of your cloud provider’s proprietary features. For example, it’s just about suicidal to run your database inside containers. And containers need to run on top of something anyway. And certain capabilities simply work better in your provider than in a container.

Be smart in your design. Know your lock-in points. Have plans to move if you need to. Microservices (or mini-services) are a great design pattern for knowing your dependency points. But in the end, if you aren’t using nearly every little tweak your cloud provider offers, you are probably spending more, more prone to breakage, and slower than the competition who does.

I can’t move my house, but as long as I hit a certain square footage, my furniture fits just fine.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.