Securosis Research

Amazon RDS Announced

Amazon announced its Relational Database Service today: "Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period." It was natural to choose the most popular open source database, MySQL 5.1, at least in the short term. With this introduction they have effectively filled out their cloud offering for database infrastructure services. Alongside the existing capabilities of Amazon's SimpleDB and the generic Amazon Machine Images that provide logical instances of any of the major database platforms, you have just about every option you could want as an application developer. Pricing is tiered by the memory and computational capacity of your web service, and storage is equally flexible, with the ability to select from 5GB to 1TB of capacity. Snapshotting, rollbacks, resource monitoring, and automated backups are all included: pretty much everything needed for basic database setup and maintenance.

What Amazon is doing is very cool, but this is a security blog, so I need to make a few comments on security rather than just acting like an RDS fanboi – a role I sometimes hate, because I feel like the guy yelling "Hey kid, stop running around with that sharp stick! You'll poke your eye out!" With the AMI variants, Amazon takes care of patching and configuration, while the user takes care of access control and identity management. And while the instances most likely have security patches applied on a consistent basis, there is a lot more to security than patching and IDM. I have no evidence that these database instances are insecure, but no one gets the benefit of the doubt in this case. For most relational database platforms I look at about 125 different database settings in an assessment sweep, and most of those checks are to ensure the factory defaults have been changed. There is no reason to believe that Amazon is doing the same, so protection – against insecure defaults as well as SQL injection – falls on the shoulders of client developers.

With MySQL databases on RDS, the situation appears to be a little different, as the user has some configuration options. The RDS Developer Guide shows that we can alter port settings and enforce SSL connections, but the API is limited and far more focused on programming than administration. The security guides don't offer any details on service accounts, default passwords, stored procedure access, networking agents, or other features that are not necessarily masked by the Amazon APIs. Many important security topics are simply not addressed. And odds are, if someone goes after your data, they will use SQL injection, default account access, or external stored procedures – all of which are your responsibility to secure. I would have a tough time putting any sensitive data out there until I could verify the security setup. Use caution or you might… oh, never mind.
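To make the point about defaults concrete, here is a minimal sketch of the kind of configuration sweep I would want to run against an RDS MySQL endpoint before trusting it with sensitive data. This is not Amazon's tooling; the endpoint, credentials, CA bundle path, and the short list of variables checked are all illustrative assumptions. It uses the mysql-connector-python driver over an SSL-verified connection and compares a handful of server variables against expected values.

    # Minimal sketch of a configuration sanity check against an RDS MySQL endpoint.
    # Hostname, credentials, and CA bundle path are placeholders, not real values.
    import mysql.connector  # pip install mysql-connector-python

    CHECKS = {
        "local_infile": "OFF",       # LOAD DATA LOCAL should be disabled
        "skip_show_database": "ON",  # hide other schemas from unprivileged users
        "have_ssl": "YES",           # server-side SSL support should be available
    }

    conn = mysql.connector.connect(
        host="mydb.example.us-east-1.rds.amazonaws.com",  # placeholder endpoint
        port=3306,
        user="master_user",
        password="not-a-real-password",
        ssl_ca="rds-ca-bundle.pem",  # verify the server certificate, not just encrypt
    )
    cur = conn.cursor()

    for variable, expected in CHECKS.items():
        cur.execute("SHOW VARIABLES LIKE %s", (variable,))
        row = cur.fetchone()
        actual = row[1] if row else "unknown"
        status = "ok" if actual == expected else "REVIEW"
        print(f"{variable}: {actual} (expected {expected}) [{status}]")

    # If the master account is allowed to read mysql.user, also look for
    # anonymous or wildcard accounts left at their defaults.
    try:
        cur.execute("SELECT user, host FROM mysql.user WHERE user = '' OR host = '%'")
        for user, host in cur.fetchall():
            print(f"REVIEW: broad or anonymous account '{user}'@'{host}'")
    except mysql.connector.Error as err:
        print(f"Could not inspect mysql.user: {err}")

    cur.close()
    conn.close()

None of this tells you what Amazon does or doesn't do behind the curtain; it only verifies the pieces the customer can actually see, which is exactly the part that falls on your shoulders.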


IDM: Identity?

For Adam, after he harassed me on IRC: calling ‘accounts’ ‘identities’ is broken. Discuss.


Friday Summary – October 23, 2009

The First 90 Days. When you take a new position, what will you do in the first 90 days? What do you want to learn? What do you wish to accomplish? Is it enough to plan a course of action, or do you immediately need to fix something? "What is your plan for your first 90 days?" is a common interview question for executives. The candidate's answer tells the prospective employer a few things: the person's grasp of the challenges ahead, how they typically operate, the efficiency of their approach, and how well their expectations align with the employer's. Most candidates are under no illusions about taking a new role. In the best case they are filling a gap in a growing company, but more often than not they are there to fix something broken. The question cements in the candidate's mind what is expected of them as they step in the door. And more than any other point in your tenure with a company, your first 90 days set your boss's and coworkers' impressions of your effectiveness.

Never in my career has fixing security been in my top 3 challenges for the first 90 days. It has always been quality of service, a failed process, a broken product, or a dysfunctional development team. I have never been a CISO or security officer, so in the context of security I don't really know how I would answer the question "What would my first 90 days look like?" If you are a security practitioner, how would you answer it? Or perhaps it is more interesting to ask non-security professionals what their 90-day plan for security would be. What could you hope to accomplish? Do you think you could come up with a security program in that amount of time? I am interested in your thoughts on this subject. Is research on establishing a security program interesting to you? Let us know what you think.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian's presentation on Creating a Data Classification Procedure for BusinessWeek.
  • Rich's TidBITS article on his trip to the Microsoft Store in Scottsdale, Arizona.
  • Adrian's Dark Reading post on Database Activity Monitoring.
  • Rich presented Pragmatic Data Security and Pragmatic Database Security at TechTarget's Information Security Decisions show in Chicago.

Favorite Securosis Posts
  • Rich: Mort's post on IDM.
  • Adrian: Splunk and Unstructured Data.
  • David Meier: The First Phishing Email I Almost Fell For.
  • David Mortman: Hacking Envelopes.

Other Securosis Posts
  • Rapid7 Acquires Metasploit.

Favorite Outside Posts
  • Rich: Amrit's post on Gartner, and working for Gartner. For the record, analysts are very well insulated from financial considerations that could affect research. That said, people who pay to speak to analysts get more time with them, and that can subtly affect opinions.
  • Adrian: My favorite post was also Amrit's, both for his honest quadrant diagram and for the commentary. To be honest, I felt for ZL, as Gartner has the power to cut a company's sales in half, but I agree with their assessments more than I disagree. My favorite tweet was from @securityincite: "@rmogull Would someone please give Rich some work to do? He's loitering in shopping malls now. Next he'll be upgrading to Windows Mobile".
  • Mortman: @RSnakes on a Plane. (Mort sent this in Monday, he was so convinced.)
  • Meier: Two out of five at risk from Wi-Fi Hijacking. Interesting that Talk Talk (the ISP in the UK) is taking this stance to protect end users from Lord Mandelson's heavy-handed plans to tackle Internet piracy.
  • Chris Pepper: Time Warner Cable Exposes 65,000 Customer Routers to Remote Hacks.

Top News and Posts
  • ChoicePoint breach. Yeah, those guys. Yes, it happened again. Yeah, they claim it's not their fault. Shostack is a little more forceful with his analysis, and received a reply from (I assume) a company rep.
  • Love Jack's post calling out OCABR in Holding a grudge.
  • Russell Thomas on How to Value Digital Assets. Long post, but a reasonably practical methodology.
  • Metasploit sale to Rapid7 from a developer perspective. Do the Evolution.
  • Public Google Voice mails are searchable. Duh. But Google changed the policy to stop this anyway.
  • Joanna's Evil Maid encryption attack via USB stick.
  • Another analysis of the Metasploit acquisition. I still think this will be good for Core Security.

Blog Comment of the Week
This week's best comment comes from Erik Swan (a Splunk employee – Adrian) in response to Splunk and Unstructured Data:

Thanks for mentioning Splunk, and your post brings up interesting points. We recommend that people dump "everything" into splunk and just keep it. I'd go further and say that i'd bet that far less than 1% of that data is ever looked-at/reported on/etc. As you point out, its likely harder and more risky to remove data than keep it. This clearly changes when you talk about multiple T per day ( average large system these days ), where even for a wealthy company, the IO required is very expensive and not sure the data has value/risk. My gut is that data generation growth is clearly outpacing the size/price curve per GB, and will likely do so until massively more scaleable and cost effective media is available. For the time being, keeping everything is likely the best starting point. At the same time, we have seen models that look a lot like email spam filtering, where "uninteresting" data is routed to different instances that have shorter retention policies. Summarization is used to capture and compress the data hopefully with no information loss. Not a great practice for compliance, but for trouble shooting and analytics can work. Longer term its an interesting area for research and something that due to the size of data we deal with needs to be solved.


Rapid7 Acquires Metasploit

Rapid7 acquires Metasploit, the open source penetration testing platform. Wow. All I can say is ‘Wow’. I had been hearing rumors for weeks that Rapid7 was going to make an acquisition, but this was a surprise to both Rich and myself. I am still coming to terms with what it means, and I have no clue what the financial terms look like, but this is almost certainly a cash plus stock deal.

On the surface, it is a very smart move for Rapid7. Metasploit is considerably better known than Rapid7. Metasploit is a fixture in the security research world, and there are far more people using Metasploit than Rapid7 has customers. If nothing else, this gets Rapid7 products into the hands of the people who are shaping web application security and defining how penetration testing and vulnerability management will be conducted. In a quickly evolving market like pen testing, access to that community is invaluable for a commercial vendor. Plus they get HD Moore on staff, which is a huge benefit.

Metasploit is a well-architected framework that provides for easy extensibility and can be customized in innumerable ways. If you want to test anything from smart phones to databases, this platform will do it, from targeted exploits to fuzzing. Sure, there is work required on your part, and accessibility to people other than security researchers is low compared to commercial products like Core Security's Impact, but it's a solid platform, and integrating the two should not be difficult. It's more a question of how best to allow Metasploit to continue its open source evolution while leveraging scan results into meaningful vulnerability chaining and risk scoring.

Neither is exactly an ‘enterprise ready’ product. That's not a slam, as NeXpose performs its primary function as well as most, but Rapid7's platform is just now breaking into larger companies. They have a long way to go in UI, ease of use, pragmatic analysis, integration of risk scoring, SaaS, exploit chaining, and back-end integration. That said, I am not sure they need an enterprise ready product, at least in the short term. It makes more sense to continue their mid-market penetration while they complete the integration. Breadth of function, which is what they now have, has proven to be a major factor in winning deals over the last couple years. They can worry about the advanced non-technical stuff later.

Identity in the market is an issue for Rapid7. They have waffled between general assessment, pen testing, and vulnerability management, without a clear identity or differentiator when going toe-to-toe with Qualys, nCircle, Tenable, Secunia, and the like. Sure, ‘compliance scoring’ is a useful marketing gimmick, but Metasploit gives them a unique identity and real differentiation. Rather than scan-and-patch for known vulnerabilities, focused mostly inside the network, they will now be able to go far deeper into externally facing custom applications. Taking a risk score across multiple applications and/or platforms is a better approach. If the two platforms are properly integrated, they'll be useful to IT, security, and software development. I am sure Rich will chime in with his own take later in the week. Wow.
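To make the scan-plus-exploit idea a little more concrete, here is a minimal sketch of how scanner findings might be prioritized differently once you know which ones have working exploit modules. This is entirely hypothetical data and weighting of my own invention, not Rapid7's or Metasploit's scoring.

    # Hypothetical prioritization: boost scanner findings that have a working exploit.
    # The findings, weights, and formula are illustrative only.

    FINDINGS = [
        {"host": "10.0.0.5", "vuln": "example-vuln-1", "cvss": 10.0, "exploit_available": True},
        {"host": "10.0.0.9", "vuln": "example-vuln-2", "cvss": 7.5,  "exploit_available": False},
        {"host": "10.0.0.9", "vuln": "example-vuln-3", "cvss": 9.3,  "exploit_available": True},
    ]

    def risk_score(finding, exploit_multiplier=1.5):
        """Simple weighted score: base CVSS, boosted when a public exploit module exists."""
        score = finding["cvss"]
        if finding["exploit_available"]:
            score *= exploit_multiplier
        return round(score, 1)

    # Highest-risk findings first: exploitable issues float to the top of the queue.
    for f in sorted(FINDINGS, key=risk_score, reverse=True):
        print(f"{f['host']} {f['vuln']}: {risk_score(f)}")

The interesting integration work is in where those exploit-availability flags come from and how chaining across hosts gets modeled, which is exactly where the two product teams will have to meet.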


Splunk and Unstructured Data

“What the heck is up with Splunk?” It's a question I have been getting a lot lately, from both end users and SIEM vendors. Larry Walsh posted a nice article on how Splunk Disrupts Security Log Auditing, and his post prodded me into getting off my butt and blogging about the question. I wanted to follow up on Splunk after I wrote the post on Amazon's SimpleDB, as it relates to what I am calling the blob-ification of data: basically, creating so much data that we cannot possibly keep it in a structured environment. Mike Rothman more accurately called it “… the further decomposition of application architecture”. In this case we collect some type of data from some type of device, put it onto some type of storage, and then use a Google-esque search tool to find what we are looking for. And the beauty of Google is that it does not care whether it is a web page or a voice mail transcript; it will find what you are looking for if you give it reasonable search criteria. In essence that is the value Splunk provides: a tool to find information in a sea of data.

It is easy to locate information within a structured repository with known attributes and data types, where we know where certain pieces of information are stored. With unstructured data we may not know what we have or where it is located. For some time normalization techniques were used to introduce structure and reduce storage requirements, but that was a short-lived, low-performance approach. Adding attributes to raw data and simply linking back to those attributes is far more efficient. Enter Splunk: throw the data into flat files and index those files. Tokenization, tagging, and indexing help categorize data, with the ultimate goal of correlating events and reporting on unstructured data of differing types. Splunk is not the only vendor who does this – several SIEM and Log Management vendors do the same or similar. My point is not that one vendor is better than another, but to point out the general trend. It is interesting that Splunk's success in this area has even taken their competitors by surprise.

Larry's point – “The growth Splunk is achieving is due, in part, to penetrating deeper into the security marketplace and disrupting the conventional log management and auditing vendors.” – is accurate. But they are able to do this because of the increased volume of data we are collecting. People are data pack-rats. From experience, less than 1% of the logged data I collect has any value. Far too often, organizations do not invest the time to determine what can be thrown away. Many are too chicken to throw useless data away. They don't want to discard data, just in case it has value, just in case they need it, just in case it contains the needle in the haystack required for a forensic investigation. I don't want to be buried under the wash of useless data. My recommendation is to take the time to understand what data you have, determine what you need, and throw the rest away.

The pessimist in me knows this is unlikely to happen. We are not going to start throwing data away. Storage and computing power are cheap, and we are going to store every possible piece of data we can. Amazon S3 will be the digital equivalent of those U-Haul self-storage places where you keep your grandmother's china and all the crap you really don't want, but think has value. That means we must have Google-like search approaches and the indexing strategies vendors like Splunk provide just to navigate the stuff.
Look for unstructured search techniques to be much sought after as data volumes continue to grow out of control. Hopefully the vendors will begin tagging data with an expiration date.
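To illustrate the index-rather-than-normalize idea, here is a minimal sketch of tokenizing raw log lines into an inverted index so arbitrary keyword searches work without a predefined schema. This is my own toy example, not how Splunk or any other product is implemented, and the sample log lines are made up.

    # Toy inverted index over raw, unstructured log lines.
    import re
    from collections import defaultdict

    LOGS = [  # made-up sample events of differing formats
        "Oct 23 10:01:02 web01 sshd[2001]: Failed password for root from 10.1.2.3",
        "2009-10-23T10:01:05Z app02 login user=alice status=success ip=10.1.2.7",
        "firewall deny tcp 10.1.2.3:445 -> 192.168.0.5:445",
    ]

    def tokenize(line):
        """Lowercase and split on anything that is not alphanumeric, '.', or '='."""
        return [t for t in re.split(r"[^\w.=]+", line.lower()) if t]

    # Build the index: token -> set of event ids.
    index = defaultdict(set)
    for event_id, line in enumerate(LOGS):
        for token in tokenize(line):
            index[token].add(event_id)

    def search(*terms):
        """Return events containing all of the given terms (AND semantics)."""
        ids = None
        for term in terms:
            matches = index.get(term.lower(), set())
            ids = matches if ids is None else ids & matches
        return [LOGS[i] for i in sorted(ids or [])]

    if __name__ == "__main__":
        print(search("10.1.2.3"))            # hits the sshd and firewall events
        print(search("failed", "password"))  # hits only the sshd event

The commercial tools obviously go far beyond this, but the core trick is the same: index the raw events and let search impose structure at query time, instead of forcing every source into a schema up front.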


Hacking Envelopes

This story begins early last week with a phone call from a bank where I hold accounts. I didn't actually answer the call, but a polite voice mail informed me of possible fraudulent activity and stated I should call back as soon as possible. At first I suspected this part of my story was a social engineering exercise, but I quickly validated the phone number as legit – unless, of course, this was some fantastic setup that was either man-in-the-middling the bank's site (which would allow them to publish the number as valid) or had hijacked the number itself. Tinfoil hat aside, I called the bank. A friendly fraud services representative handled my call, and in less than twenty minutes we had both come to the conclusion that my card for the account in question was finally compromised. By finally, I mean after roughly seven years as my primary vehicle for payment on a daily basis.

But this, ladies and gentlemen, is not where the fun started. No, I had to wait for the mail for that. Fast forward five to seven business days, when a replacement card showed up in my lockbox – which, interestingly, is an often-ignored benefit of living in a high-density residence. That particular day I received a rather thick stack of mail that included half a dozen similarly sized envelopes. Unfortunately, I quickly knew (without opening any of them) which one contained my new card, and it wasn't based on feel. One would think a financial institution might go to trivial lengths to protect card data within an envelope, but clearly not in this case. The problem was that four of the sixteen numbers were readable because (and I'm assuming here) some automatic feeding mechanism at the post office put enough pressure on the embossed card number to reprint it on the outside of the envelope. It was as if someone had run that part of the card through an old-school carbon copy machine.

At this point my mail turned into a pseudo scratch-off lottery game, and I quickly set to trying household items to finish what had already been started. I was a winner on the second try (the Clinique “smoldering plum – blushing blush powder brush” was a failure; my fiance was not impressed, and clearly I've watched too much CSI: Miami). It turns out a simple brass key is all that is needed to reveal the rest of the numbers, the name, and the expiration date. At this point I'm conflicted, with two different ideas:
  • Relief and confusion: The card security code isn't embossed, so why must the rest of the card be?
  • Social engineering: If obtaining card data this way is easy enough, I could devise a scheme where I call recipients of new cards, armed with enough data to sound like the bank, and plenty of people would hand over their security codes.

After considerable thought I feel it's safe to say the current method of card distribution poses a low but real level of risk, wherein a significant amount of card data can be discerned without resorting to brute force on the envelope itself. Is it possible? Surely. Is it efficient? Not really. Would someone notice the card data on the back of the envelope? Maybe. But damn – now it really makes sense that folks just go after card data TJX-style, considering all the extra effort this route requires.


IDM: Roles, Authorization and Data Centric Security

There were some great comments on my last post, which bring to light a serious problem with the way authorization is done today and how roles don't help as much as we'd like. First we hear from LonerVamp:

"And even if you get the authentication part down, very few apps that I've seen then tie back into whatever is in place for role management."

This is an important point that often gets glossed over by IDM vendors. It turns out that while many applications support third party authentication mechanisms, very few support third party authorization methods. Which means that even if you can centralize your identities for the purposes of account creation and deletion, you still have to manage authorization inside each application. Furthermore, many of the applications that claim to support third party authorization really only support third party groups in LDAP or RADIUS, and you still have to map those groups onto roles within the applications.

Andrew Yeomans followed up with his own comment, which shows he's been a dedicated Securosis reader for a while now:

"I'm starting to think that a data-centric approach may be a way forward. Today, authorizations are generally enforced by applications. Now firstly this leads to high complexity (as you describe) as there is no unifying set of 'policy decision points' and 'policy enforcement points'. Secondly, it allows for authorization restrictions to be bypassed by other applications that have access to the same data."

Andrew really hits the nail on the head here. We need to continue our shift towards Data-centric Security. The Data Security Lifecycle explicitly assumes that you can properly assign and control rights over who has access to what data, which is why IDM is so important. I've said it before and I'll say it again: if you don't know who is accessing the data, how can you possibly tell whether it is being abused or misused?

Finally, Omie asked:

"I've been hearing too much about identity management recently and how the move to roles will solve our compliance problems. And I've been wondering and asking how we plan to keep the roles maintained over time. Of course I've also been under the impression that every other organization has figured that out except ours, but your post is making me rethink that assumption. If there are some best practices/examples of how to approach role maintenance, I would love to learn about them."

Roles can definitely help you with compliance, but you are correct – role maintenance is definitely a challenge. There is often an implicit assumption that roles, like the rest of the application configuration, are static, when in reality roles tend to be dynamic, so you absolutely need a process for adapting them as necessary. Often the complexity of the application leads admins to add roles rather than edit existing ones, because it is easier in the short term; in the long run this just adds complexity. I'll go into more detail on this issue and how to deal with it in a later post, so stay tuned. In the meantime, NIST recently published some documents from their recent Privilege (Access) Management Workshop. In particular, check out A Survey of Access Control Models for an idea of some of the ways role based access control is problematic.
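As a small illustration of the "third party groups, but local roles" gap, here is a minimal sketch of what every application ends up re-implementing internally: a mapping from externally managed directory groups to application-local roles, and from roles to permissions. The names, groups, and permissions are all hypothetical.

    # Hypothetical example: the directory only hands the app a list of group names;
    # the app still owns (and must maintain) the group -> role -> permission mapping.

    # Externally managed (e.g., LDAP/RADIUS) group memberships, as the app sees them.
    DIRECTORY_GROUPS = {
        "alice": ["corp-finance", "corp-all"],
        "bob": ["corp-engineering", "corp-all"],
    }

    # Application-local configuration that no central IDM system manages for us.
    GROUP_TO_ROLES = {
        "corp-finance": ["report_viewer", "invoice_approver"],
        "corp-engineering": ["report_viewer"],
    }

    ROLE_TO_PERMISSIONS = {
        "report_viewer": {"reports:read"},
        "invoice_approver": {"invoices:read", "invoices:approve"},
    }

    def permissions_for(user):
        """Resolve a user's effective permissions via groups -> roles -> permissions."""
        perms = set()
        for group in DIRECTORY_GROUPS.get(user, []):
            for role in GROUP_TO_ROLES.get(group, []):
                perms |= ROLE_TO_PERMISSIONS.get(role, set())
        return perms

    def is_authorized(user, permission):
        return permission in permissions_for(user)

    if __name__ == "__main__":
        print(is_authorized("alice", "invoices:approve"))  # True
        print(is_authorized("bob", "invoices:approve"))    # False

Every application that keeps its own copy of these tables is another place where role definitions can drift, which is exactly the maintenance problem the commenters describe.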


The First Phishing Email I Almost Fell For

Like many of you, I get a ton of spam/phishing email to my various accounts. Since my email address is very public, I get a little more than most people. It's so bad I use three layers of spam/virus filtering and still have some messages slip through (one cloud based filter [Postini, which will probably change soon], one on-premise UTM [Astaro], and SpamSieve on my Mac). If something gets through all of that, I take some additional precautions on my desktop to (hopefully) help against targeted malware. Despite all that, I assume that someday I'll be compromised, and it will probably be ugly.

This morning I got the first phishing email in a very long time that almost tricked me into clicking. It came from "Administrator" at one of my hosts and read:

Attention! On October 22, 2009 server upgrade will take place. Due to this the system may be offline for approximately half an hour. The changes will concern security, reliability and performance of mail service and the system as a whole. For compatibility of your browsers and mail clients with upgraded server software you should run SSl certificates update procedure. This procedure is quite simple. All you have to do is just to click the link provided, to save the patch file and then to run it from your computer location. That's all. http://updates.[cut for safety] Thank you in advance for your attention to this matter and sorry for possible inconveniences. System Administrator

A few things tipped me off. First, that system is a private one administered by a friend; while he does send updates like this out, he always signs them with his name. Second, the URL is clearly not really that domain (but you have to read the entire thing). And finally, it leads to an Active Server Pages URL, which that administrator never uses since our system is *nix based. But it was early in the morning, I hadn't had coffee yet, and we often do need to update our SSL certificates after a system upgrade on this server, so I still almost clicked on it.

According to Twitter this is a Zbot generated message:

SecBarbie: RT @mikkohypponen ZBot malware being spammed out right now in emails starting "On October 22, 2009 server upgrade will take place" Ignore it.

Thanks Erin! It's interesting that despite multiple obvious markers that this was malicious, and me being very attuned to these sorts of things, I still almost clicked on it. It just goes to show how easy it is to screw up and make a mistake, even when you're a paranoid freak who really shouldn't be let out of the house.
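The "read the entire URL" tell lends itself to a simple automated check. Here is a minimal sketch, using made-up links and a placeholder trusted domain, that pulls the hostname out of a link and flags it when the registered domain doesn't match the domain you expect that sender to reference.

    # Toy check: does a link's registered domain match the domain we expect?
    # Real phishing detection is far more involved; this only illustrates the
    # "read the whole hostname, not just the start of it" point.
    from urllib.parse import urlparse

    def registered_domain(url):
        """Crude last-two-labels heuristic; a real tool would use the Public Suffix List."""
        host = urlparse(url).hostname or ""
        labels = host.split(".")
        return ".".join(labels[-2:]) if len(labels) >= 2 else host

    def looks_suspicious(link, expected_domain):
        return registered_domain(link) != expected_domain.lower()

    if __name__ == "__main__":
        # A lookalike URL that *starts* with the trusted name but ends elsewhere.
        phish = "http://updates.example.com.evil-host.net/ssl/patch.asp"  # placeholder
        legit = "https://mail.example.com/webmail"
        print(looks_suspicious(phish, "example.com"))  # True
        print(looks_suspicious(legit, "example.com"))  # False

It wouldn't have saved a pre-coffee click by itself, but it is the kind of check a mail gateway can perform tirelessly on every link.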


Friday Summary – October 16, 2009

All last week I was out of the office, on vacation down in Puerto Vallarta. It was a trip my wife and I won in a raffle at the Phoenix Zoo, which was pretty darn cool. I managed to unplug far more than I can usually get away with these days. I had to bring the laptop due to an ongoing client project, but nothing hit and I never had to open it. I did keep up with email, and that's where things got interesting.

Before heading down I added the international plan to my iPhone for about $7, which would bring my per-minute cost in Mexico down from $1 per minute to around $0.69 a minute. Since we talked less than 21 minutes total on the phone down there, we lose. For data, I signed up for the 20MB plan at a wonderfully fair $25. You don't want to know what a 50MB plan costs. Since I've done these sorts of things before (like the Moscow trip where I could never bring myself to look at the bill), I made sure I reset the usage counter on the iPhone so I could carefully track how much I used. The numbers were pretty interesting: checking my email ranged from about 500KB to 1MB per check. I have a bunch of email accounts, and might have cut that down if I had disabled all but my primary accounts. I tried to check email only 2-3 times a day, responding only to the critical messages (1-4 a day). That ate through the bandwidth so quickly I couldn't even conceive of checking the news, using Maps, or doing nearly any other online activity. In 4 days I ran through about 14MB, leaving me a bit of headroom on the last day to occupy myself at the airport.

To put things in perspective, a satellite phone (which you can rent for trips – you don't have to buy) is only $1 per minute, although the data is severely restricted (on Iridium, unless you go for a pricey BGAN). Since I was paying $3 a minute on my Russia trip, next time I go out there I'll be renting the sat phone. So for those of you who travel internationally and want to stay in touch… good luck.

-rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian's Dark Reading post on Getting Around Vertical Database Security.

Favorite Securosis Posts
  • Rich, Mort, and Adrian: Which Bits Are the Right Bits. We all independently picked this one, which either means it's really good, or everything else we did this week sucked.
  • Meier: It Isn't Risk Management If You Can't Lose.

Other Securosis Posts
  • Where Art Thou, Security Logging?
  • IDM: Reality Sets In
  • Barracuda Networks Acquires Purewire
  • Microsoft Security Updates for October 2009
  • Personal Information Dump

Favorite Outside Posts
  • Rich: Michael Howard's post on the SMBv2 bug and the Microsoft SDL. This kind of analysis is invaluable.
  • Adrian: Well, the entire Protect the Data series, really.
  • Mortman: Think, over at the New School blog.
  • Meier: Security Intelligence: Attacking the Kill Chain. (Part three of a series on security principles in network defense.)

Top News and Posts
  • Mozilla Launches Plugin Checker. This is great, but needs to be automatic for Flash/QuickTime.
  • Adobe recommends turning off JavaScript in Acrobat/Reader due to a major vulnerability. I recommend uninstalling Acrobat/Reader, since it's probably the biggest single source of cross-platform 0-days.
  • Air New Zealand describes the reason for its outage. Not directly security, but a good lesson anyway. I've lost content in the past due to these kinds of assumptions.
  • Google to send information about hacked Web sites to owners.
  • Details of Wal-Mart's major security breach.
  • Greg Young on enterprise UTM and Unicorns.
  • Microsoft fixes Windows 7 (and other) bugs.
  • Delta being sued over email hack.
  • 29 bugs fixed by Adobe.
  • California County hoarding data.
  • Mozilla Plugin Check.
  • Paychoice Data Breach.

Blog Comment of the Week
This week's best comment comes from Rob, in response to Which Bits are the Right Bits:

Perhaps it is not well understood that audit logs are generally not immutable. There may also be low awareness of the value of immutable logs: 1) to protect against anti-forensics tools; 2) in proving compliance due diligence; and 3) in providing a deterrent against insider threats.


It Isn’t Risk Management If You Can’t Lose

I was reviewing the recent Health and Human Services guidance on medical data breach notifications, and it's clear that HHS either was bought off or doesn't understand the fundamentals of risk assessment. Having a little bit of inside experience with HHS, my vote is for willful ignorance. Basically, HHS provides some good security guidance, then totally guts it. Here's a bit from the source article with the background:

The American Recovery and Reinvestment Act of 2009 (ARRA) required HHS to issue a rule on breach notification. In its interim final rule, HHS established a harm standard: breach does not occur unless the access, use or disclosure poses "a significant risk of financial, reputational, or other harm to individual." In the event of a breach, HHS' rule requires covered entities to perform a risk assessment to determine if the harm standard is met. If they decide that the risk of harm to the individual is not significant, the covered entities never have to tell their patients that their sensitive health information was breached.

You have to love a situation where the entity performing the risk assessment for a different entity (the patients) is always negatively impacted by disclosure, and never impacted by secrecy. In other words, the group that would be harmed by protecting you gets to decide your risk. Yeah, that will work. This is like the credit rating agencies, many aspects of fraud and financial services, and more than a few breach notification laws. The entities involved face different sources of potential loss, but the entity performing the assessment has an inherent bias to mis-assess (usually under-assess) the risk faced by the target. Now, if everyone involved were altruistic and unbiased, this would all work like a charm. Hell, even in Star Trek they don't portray human behavior as that perfect.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.