Securosis Research

“PIN Crackers” and Data Security

Really excellent article by Kim Zetter on the Wired Threat Level site regarding “PIN cracking”, and some of the techniques being employed to gather large amounts of consumer financial data. I know Rich referenced this post earlier today, but since I already wrote about it and have a few other points I think should be mentioned, hopefully you will not mind the duplicated reference.

Before I delve into some of the technical points, I want to say that I am not certain whether the author desired a little sensationalism to raise interest, whether the security practitioners interviewed were not 100% straight with the author, or whether there was an attempt to disguise deployment mistakes by hyping the skills of the attacker, but the headline and some of the contents are misleading. The attackers are not ‘cracking’ the ATM PINs, as the encryption is not what is being attacked here. Rather, they are ‘scraping’ the memory of the security devices, looking for unencrypted data or the encryption keys. By grabbing the data while it is unencrypted and vulnerable (in a cryptographic sense, if not a physical one) within the Hardware Security Device/Module/Unit used for electronic funds transfers, hackers are in essence sniffing unencrypted data.

The attack is not that sophisticated, nor is it new, as various eavesdropping methods have been employed for years, but that does not mean it is easy. Common tactics include altering the device’s operating system or ‘attaching’ to the hardware bus to access keys and passwords stored in memory, thus bypassing intended interfaces and protections. Some devices of this type are even constructed in such a way that physical tampering will destroy the machine and make it apparent that someone was attempting to monitor information. Some use obfuscation and memory management technologies to thwart these attacks. Any of these approaches requires a great deal of study, and most likely trial and error to perfect. Unless, of course, you leave the HSM interface wide open, your devices are infected with malware, and hackers have plenty of time to scan memory locations to find what they want.

I am going to maintain my statement that, until proven otherwise, this is exactly what was going on with the Heartland breach. For the attack to have compromised as many accounts as it did without penetrating the Heartland facility would require this kind of compromise. It implies that the attackers had access to the HSM, most likely by exploiting negligent security of the command and control interface and infecting the OS with malicious code. Breaking into the hardware or breaking the crypto would have been a huge undertaking, requiring specialized skills and access.

Part of the reason for the security speed-bump post was to illustrate that any type of security measure should be considered a hindrance; with enough time, skill, and access, the security measure can be broken. Enough hindrances in place can provide good security. Way back when in my security career, we used to perform hindrance surveys, proposing how we might break our own systems, under what circumstances this could be done, and what skills and tools would be required. Breaking into an HSM and scraping memory is a separate and distinct skill from cracking encryption keys, and different again from writing SQL injection and malware code. Each attack has a cost in time and skill required. If you had to employ all of them, it would be very difficult even for a team of people to accomplish.
Some of the breaches, both public and undisclosed ones I am aware of, have involved exploitation of sloppy deployments, as well as the other basic exploitation techniques. While I agree with Rich’s point that our financial systems are under a coordinated, multi-faceted attack, the attackers had unwitting help. Criminals are only slightly less lazy than system administrators.

Security people like to talk about thinking like a criminal as a precursor to understanding security, and we pay a lot of lip service to it, but it is really true. We are getting to watch as hackers work through the options, from least difficult to more difficult, over time. Guessing passwords, phishing, and sniffing unencrypted networks are long since passé, but few are actively attacking the crypto systems, as they are usually the strongest link in the chain. I know it sounds really obvious to say that attackers are looking at easy targets, but that is too simplistic. Take a few minutes to think about the problem: if your boss paid you to break into a company’s systems, how would you go about it? How would you do it without being detected? When you actually try to do it, the reality of the situation becomes apparent, and you avoid things that are really freakin’ hard and find one or two easy things instead. You avoid things that are easily detectable and being watched. You learn how to leverage what you’re given and figure out what you can get, given your capabilities. When you go through this exercise, you start to see the natural progression of what an attacker would do, and you often see trends which indicate what an attacker will try and why.

Despite the hype, it’s a really good article and worth your time.


Oracle CPU for April 2009

Oracle released the April 2009 Critical Patch Update; a couple of serious issues are addressed with the database, and a couple more that concern web application developers. For the database server, there are two vulnerabilities that can be remotely exploited without user credentials. As is typical, some of the information that would provide enough understanding or insight to devise a workaround is absent, but a couple of these are serious enough that you really do need to patch, and I will forgo a zombie DBA patching rant here.

If you are an Oracle 9.2 user, and there are a lot of you out there still, there is a vulnerability in the resource manager. Basically, any user with create session privileges can exercise the bug, and since all users are required to have this privilege in order to connect to the database, it is only going to take one “Scott/Tiger” default account or brute-forced user account to take control of the resource manager. Very few details are being published, and the CVSS “Base Score” system is misleading at best, but a score of 9 indicates a takeover of the resource manager, which is often used to enforce policies that stop DoS attacks and other security/continuity problems, and could possibly be leveraged into other serious attacks I am not clever enough to come up with in my sleep-deprived state. If this can be exploited by any valid user, it is likely a hacker will locate one and take advantage.

The second serious issue, referenced in CVE-2009-0985, is with the IMP_FULL_DATABASE procedure created by catexp.sql, which runs automatically when you run catalog.sql after the database installation. This means you probably have this functionality and role installed, and have a database import tool that runs under admin privileges, which a hacker can use on any schema. Attack scenarios over and above a straight DoS may not be obvious, but this would be pretty handy for surreptitious alteration and insertion, and the hacker would then be able to exercise the imported database. As I have mentioned in previous Oracle CPU posts, these packages tend to be built with the same set of assumptions and coding behaviors, so I would not be surprised if we discover that EXP_FULL_DATABASE and EXECUTE_CATALOG_ROLE have similar exploits, but that is conjecture on my part. This is serious enough that you need to patch ASAP! And if you have not already done so, you’ll want to review separation of user responsibilities across admin roles as well. I know it is a pain in the @$$ for smaller firms, but it avoids cascaded privileges in the event of a breach/hack.

Finally, CVE-2009-1006 for JRockit and CVE-2009-1012 for the WebLogic Server are in response to complete compromises (Base Score 10) of the system, and should be considered emergency patch items if you are using either product/platform. If we get enough information to provide any type of WAF signature I will, but it will be faster and safer to download and patch. Red Database Security has been covering many of the details on these attacks, and there are some additional comments on the Tech Target site as well.
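
If you want a quick inventory of your exposure while you schedule the patch, here is a minimal sketch. It fixes nothing, and it assumes the cx_Oracle driver and a DBA-privileged account (the connect string is a placeholder), but it lists which accounts hold CREATE SESSION and which have been granted the IMP_FULL_DATABASE role, which is roughly the population that matters for these two bugs.

    # Hypothetical exposure check, not a fix. Assumes the cx_Oracle driver and
    # a DBA-privileged account; the connect string below is a placeholder.
    import cx_Oracle

    QUERIES = {
        "accounts with CREATE SESSION": (
            "SELECT grantee FROM dba_sys_privs "
            "WHERE privilege = 'CREATE SESSION'"
        ),
        "grantees of IMP_FULL_DATABASE": (
            "SELECT grantee FROM dba_role_privs "
            "WHERE granted_role = 'IMP_FULL_DATABASE'"
        ),
    }

    def report(connect_string="dba_user/password@orcl"):  # placeholder credentials
        conn = cx_Oracle.connect(connect_string)
        cur = conn.cursor()
        for label, sql in QUERIES.items():
            cur.execute(sql)
            grantees = sorted(row[0] for row in cur.fetchall())
            print(f"{label}: {', '.join(grantees)}")
        conn.close()

    if __name__ == "__main__":
        report()

If the first list is everyone and the second list is more than a couple of named DBAs, that is a good argument for the role cleanup mentioned above, patch or no patch.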


RSA Conference: For Real?

Did anyone else get this email?

You are receiving this email because you are registered for RSA® Conference 2009. Your account information needs to be activated so that you can take full advantage of all the Conference activities including access to the Conference Personal Scheduler and access to the Conference wireless network while on-site. … Please take a moment now to log-in and complete your account activation at https://sso.rsaconference.com/sso/LogIn.jsp using the following temporary password – %_DWqwet(M. You will then be prompted to confirm your profile information and reset your password. Your username is not included in this email for security purposes. If you are unsure of what your username is, you can retrieve it online at https://sso.rsaconference.com/sso/RetrieveUserName.jsp. You can log in to your account anytime at https://sso.rsaconference.com/sso/LogIn.jsp. … For more information on RSA Conference Single Sign-on, please visit our website or contact us at loginhelp@rsaconference.com. Sincerely, RSA® Conference Team

Wow, is this a phishing attempt out to the RSA list? Awesome!


Friday Summary, April 3, 2009

The big news at Securosis this week centered around the Conficker worm. As Rich blogged earlier in the week, he got a call from Dan Kaminsky on Saturday with the outline of what was going on. Rich and I scrambled Saturday to reach as many AV vendors as we could to get the word out. While some were initially a little annoyed at getting called on their cell phones Saturday afternoon, everyone was really eager to see what Tillmann Werner and Felix Leder had discovered and get their scanning tools updated. I expected things to be quiet on April 1st. A lot of security researchers have been watching and studying the worm’s behavior, and devising plans for detecting and containing the threat. I imagine the authors of the worm are reading every bit of news they can get their hands on and learning how to improve their code in response. This has been fascinating to watch. Thanks again to the Honeynet Project and Dan Kaminsky for doing a great job, and for involving us in the effort.

On a more personal note, you have probably noticed that neither Rich nor I have been blogging as much lately, partially due to our desire not to create more work for ourselves prior to the new site launch, and partially because, well, family comes first. For those of you who know me, you know I have dogs. When people ask me if I have kids, I typically say “No, I have dogs.” What I mean to say is “Yes, several; of the four-legged variety.” March has been a terrible month for me, because in the first few days one of my puppies went into kidney failure after she was prescribed the wrong pain medication and dosage. I spent 5 days at the emergency vet clinic with her, even signing the DNR papers, as we did not think she would make it. Happy to say she did, and is slowly recovering her ability to walk and some of the 30 lbs. she lost. A couple of days after I got back from Source Boston, her brother, and our all-time favorite, started having trouble breathing. To make a long story short, we found cancer everywhere, and he only made it five days after his first visible symptoms, dying in my lap Tuesday morning. We know that even several of you hardened veterinarians and long-time breeders who have “seen it all” shed a tear over this one, and Emily and I understand and appreciate your heartfelt condolences. Looking forward to a much brighter and happier April.

And now for the week in review… at least what little of it I managed to notice:

Webcasts, Podcasts, Outside Writing, and Conferences:
  • Rich presented “Building a Web Application Security Program” at the Phoenix SANS training. We’ll get it posted once we transfer over to the new site.
  • Rich’s article on SearchSecurity, Data Loss Prevention Benefits in the Real World, is available.
  • Rich and Martin hosted episode 144 of The Network Security Podcast this week, covering not only Conficker news, but also a ton of stuff regarding security on the Mac platform with Dino Dai Zovi. Even recommended by the Macalope!

Favorite Securosis Posts:
  • Rich: Looking forward to getting ASS Certification.
  • Adrian: Rich’s post on Detecting Conficker.

Favorite Outside Posts:
  • Adrian: Know Your Enemy: Containing Conficker was a fascinating paper.
  • Rich: From Anton Chuvakin’s blog: Thoughts and Notes from PCI DSS Hearing in US House of Representatives.

Top News and Posts:
  • Microsoft Security Advisory 969136 for MS Office PowerPoint.
  • Internet too dangerous? I think most people just do not appreciate how dangerous it is.
  • Conficker ‘eye-chart’. This is a great idea and works for several malware variants.
  • One topic I really wanted to blog on this week was the Internet Crime Complaint Center report that incidents (discovered and reported, of course) were up 33% year over year.
  • Mini-botnets. Smaller, but just as much of a problem.
  • The Open Cloud Manifesto. Ugh. Too many grandstanders with too little to say. If Hoff wants to fight that fight, fine, but it feels like yelling at the wind to me. It is just not worth the time jumping into this mess until there is a bit more of a market. Don’t get me wrong: Rich and I will cover cloud and virtualization security in the future, maybe even this year. But not in response to this, and when we do, we will try to have something to say that does not suck.

Blog Comment of the Week: This week’s best comment was from ‘Anonymous’: “@Andre, I think once the Institute store makes its exclusive gear available, you should be the first to buy an ASS hat.” We are working on the merchandise page for the new site … we will be sure to stock those hats.


Comments on “Containing Conficker”

As you have probably read, a method for remotely detecting systems infected with the Conficker worm was discovered by Felix Leder and Tillmann Werner. They have been working with Dan Kaminsky, amongst others, to come up with a tool to detect the worm and give IT organizations the ability to protect themselves. This is excellent news. The bad news is how unprepared most applications are to handle threats like this.

Earlier this morning, the guys at The Honeynet Project were kind enough to forward Rich and me a copy of their Know Your Enemy: Containing Conficker paper. This is a very thorough analysis of how the worm operates. I want to keep my comments on this short, and simply recommend strongly that you read the paper. If you are in software development, you need to read this paper. Their analysis of Conficker illustrates that the people who wrote it are far ahead of your typical application development team in their understanding of application security. Developers need to understand the approach that attackers are taking, understand the dedication to their craft these guys are exhibiting, and increase their own knowledge and dedication if they are going to have a chance of producing code that can counter these types of threats.

Is Conficker a well-written piece of code? Is it architected well? No idea. But it is clear that each iteration has advanced the three core functions (find & infect, maintain, & defend), and that the authors had this flexibility in mind from the beginning. Look at how Conficker uses identification techniques to protect itself and avoid downloading wrong or malicious patches to the worm. And check out the examination of incoming requests to help protect the now-infected system from other viruses. This should serve as an example of how to write internal monitoring code to detect exploit attempts (see section 4), either in lieu of a full-blown patch, or as self-defending code at critical points, or both. And it is done in a manner that gives the worm’s authors a generic tool that, when updated, will be an effective anti-malware tool. Neat, huh? The authors have a pretty good understanding of randomness and used multiple sources, not only to get better randomness, but to avoid an attack on any one source. Smart. These are really good application security practices that very few software authors actually put into practice. Heck, most web applications trust everything that comes in, and it looks like the authors of Conficker understand that you must trust nothing!

Once again, if you are a software developer or IT practitioner, read the paper. The research that Felix and Tillmann have put into this is impressive. They have proof points for everything they believe to be true about the worm’s behavior, and have stuck with the facts. This is really time-consuming, difficult work. Excellent job, guys!
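
The “trust nothing” lesson translates directly to legitimate software that updates itself. Here is a minimal sketch of the general principle, not Conficker’s actual mechanism: an update is applied only if it verifies against a key pinned at install time. It assumes Python and the third-party cryptography package.

    # A sketch of "verify before you trust" for self-updating code. This is NOT
    # Conficker's actual scheme; it only illustrates the principle the paper
    # describes: refuse any update that is not provably from the original author.
    # Assumes the third-party 'cryptography' package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    def accept_update(pinned_key: Ed25519PublicKey, payload: bytes, signature: bytes) -> bool:
        """Apply an update only if it verifies against the key pinned at install time."""
        try:
            pinned_key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False

    # Simulate the author's side: sign a new payload with the private key.
    author_key = Ed25519PrivateKey.generate()
    update = b"replacement code or configuration"
    good_sig = author_key.sign(update)

    # The deployed code ships only the public half.
    pinned = author_key.public_key()
    assert accept_update(pinned, update, good_sig)           # legitimate update
    assert not accept_update(pinned, b"tampered", good_sig)  # rejected outright

Most commercial software still gets this wrong more often than a worm does, which is rather the point of the post.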


Security Speed-bumps

Reading yet another comment on yet another blog about “what good is ABC technology because I can subvert the process” or “we should not use XYZ technology because it does not stop the threats” … I feel a rant coming on. I get seriously annoyed when I hear these blanket statements about how some technologies are no good because they can be subverted. I appreciate zeal in researchers, but am shocked by people’s myopia in applied settings. Seriously, is there any technology that cannot be compromised?

I got a chance to chat with an old friend on Friday, and he reminded me of a basic security tenet: most security precautions are nothing more than ‘speed bumps’. They are not fool-proof, not absolute in the security that they offer, and do not stand on their own without support. What they do is slow attackers down, and make it more difficult and expensive, in time, money, and processing power, for them to achieve their goals. While I may not be able to brute force an already encrypted file, I can subvert most encryption systems, especially if I can gain access to the host. Can I get by your firewall? Yes. Can I get spam through your email filter? Absolutely. Can I find holes in your WAF policy set? Yep. Write malware that goes undetected, escalate user privileges, confuse your NAC, poison your logs, evade IDS, compromise your browser? Yep. But I cannot do all of these things at the same time. Some will slow me down while others detect what I am doing. With enough time and attention, very few security products or solutions would not succumb to attack under the right set of circumstances, but not all of them at one time. We buy anti-spam, even if it is not 100% effective, because it makes the problem set much smaller. We try not to click email links or visit suspect web sites because we know our browsing sessions are completely at risk. When we have solid host security to support encryption systems, we drop the odds of system compromise dramatically.

If you have ever heard me speak on security topics, you will have heard a line that I throw into almost every presentation: embrace insecurity! If you go about selecting security technologies thinking that they will protect you from all threats under all circumstances, you have already failed. Know that all your security measures are insecure to some degree. Admit it. Accept it. Understand it. Then account for it. One of the primary points Rich and I were trying to make in our Web Application Security paper was that there are several ways to address most issues; it is like fitting pieces of a puzzle together to get reasonable security against your risks in a cost-effective manner. Which technologies and process changes you select depends upon the threats you need to address, so adapt your plans to cover these weaknesses.
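
To put some rough numbers behind the “enough hindrances” argument, here is a back-of-the-envelope sketch. The bypass probabilities are invented for illustration, and the layers are treated as independent, which real controls rarely are, but it shows why imperfect speed bumps still compound.

    # Back-of-the-envelope layered-defense arithmetic. The numbers are invented
    # and the independence assumption is generous; the point is only that
    # imperfect controls still compound.
    bypass_odds = {
        "anti-spam filter": 0.20,        # 1 in 5 malicious mails get through
        "user link hygiene": 0.50,       # the user still clicks half the time
        "host security": 0.10,
        "encryption of data at rest": 0.05,
    }

    residual = 1.0
    for control, odds in bypass_odds.items():
        residual *= odds

    print(f"Chance a single attempt beats every layer: {residual:.2%}")
    # Roughly 0.05% here, versus 20% if the anti-spam filter stood alone.

None of those individual controls would survive a determined, targeted attack on its own, which is exactly the point.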


CanSecWest Highlights

I have been reading about the highlights of the CanSecWest show all over the net, and it seems like there were a lot of really cool presentations. TippingPoint’s ‘Pwn2Own’ contest at CanSecWest, which started late last week, concluded over the weekend. The contest awarded $5,000 to each hacker who could uncover an exploit for any of the major browser platforms (Firefox, Internet Explorer, Chrome, & Safari). Firefox, IE, & Safari were all exploited at least once during the contest, with Chrome the only browser to make it through the trials. Perhaps that is to be expected given its newness. Lots more wrap-up details on the DV Labs site. I know a lot of security researchers have a bitter taste from the way companies behave when a security flaw is revealed; still, I am always interested in seeing these types of contests, as they are great demonstrations of creativity, and the ability to share knowledge amongst experts is great for all of the participants. If this method of “No Free Bugs” works to get discoveries back in the public eye, I think that’s great.

I would very much like to have seen the presentation “Sniff keystrokes with lasers/voltmeters: Side Channel Attacks Using Optical Sampling of Mechanical Energy Emissions and Power Line”. Having previously witnessed what information can be gleaned from power lines, and things like over-the-air Tempest attacks, I would like to see how the state of the art on physical side channels has progressed.

One of the other show highlights was covered by Dennis Fisher over on Threatpost: it appears that the Core Security Technologies team has demonstrated a persistent BIOS attack. There are next to no details on this one, but if they are able to perform this trick without the assistance of a secondary device, and by only obtaining admin access, this is a really dangerous attack. If you have access to the physical platform, all bets are pretty much off. Looking forward to seeing the details.


Friday Summary, March 20th, 2009

Happy Friday! Rich is off with the family today, and probably sneaking in some time to play with his new Mac Pro as well. If I know him, at the first opportunity he will be in the garage, soldering iron in hand, making his own 9’ mini-DVI cable to hook up his new monitor. Family, new baby, and cool new hardware mean I have Friday blog duties. But as I just got back from the Source Boston show, there is much to talk about this week.

Across the board, the presentations at Source were really excellent, and some of the finest minds in security were in attendance, so Stacy Thayer and her team get very high marks from me for putting on a great event. Starting out with a bang, Peter Kuper gave a knockout keynote presentation on the state of financial markets and venture funding for startups. A no-nonsense, no-spin, honest look at where we are today was both a little scary and refreshing for its honesty. He has a post here if you want to read more of his work. David Mortman kicked off the morning sessions with I Can Haz Privacy, updating us on a lot of the privacy issues and legislation going on today. He highlighted the natural link between personal privacy and LOLCATS like no one else, and kept audience participation high by rewarding questions with some awesome homemade wheat bread. The always thought-provoking Adam Shostack gave a presentation on The Crisis in Information Security. I am in complete agreement that despite the hype, breached businesses will continue to function and operate as they always have, and the sky is not falling. And as always, his points are backed by solid research. Even if individual companies generally do not fall, I do still wonder about broader risk to the entire credit card system, given its ease of (mis)use, its poor authentication, the millions of stolen credit card numbers floating around, and demonstrated capabilities to automate fraud. Hoff had his best presentation yet with The Frogs Who Desired a King. Whether or not you are interested in security or cloud computing, this is a must-see presentation. Even if you have been reading his Rational Survivability blog posts on the subject, the clarity of the vision he presented regarding the various embodiments of cloud computing and the security challenges of each is more than compelling, and he has backed it up with a staggering amount of research. I’ve got to say, Chris has raised the bar for all of us in the security field for the quality of our presentations.

After almost missing the show because of a number of issues on the home front, including spending 4 days at the emergency vet clinic after someone accidentally poisoned one of my dogs, I got on the plane, and I am glad I made it. I gave a presentation on Data Breaches and Encryption, examining where encryption technologies help and, just as importantly, where they don’t. My personal “Shock and Awe” award went to Mr. James Atkinson of Granite Island Group TSCM for his presentation on “Horseless Carriage Exploits and Eavesdropping Defenses”. I had no idea that all of these devices were in full effect in most automobiles today, nor that eavesdropping was this easy to do. Having now given it some thought, though, I think I may have run into some of the devices he discussed. I will be looking through my car this weekend. It was good to see Dennis Fisher again … and he is just launching a new security news network called Threatpost. This effort is sponsored by Kaspersky, and they have started off with a ton of stuff, so it’s worth checking out.
Now I am off to try and enjoy the weekend, so here it is: the week in review.

Webcasts, Podcasts, Outside Writing, and Conferences:
  • Rich and Adrian presented Building a Business Justification for Data Security through SANS. We co-presented with Chris Parkerson of McAfee … and apologies to Chris, as Rich and I ran a little long.
  • Adrian chatted with Amrit Williams on the subject of Information Centric Security on the Beyond The Perimeter podcast this week. It should be posted soon.
  • Adrian presented Data Breaches and Encryption last week during the Source Boston event.
  • Rich joined Martin McKeay on the Network Security Podcast this week, talking about Google behavioral ad targeting, Comcast passwords exposed, and the new DNS trojan. They were joined by Bill Brenner of CSO Online, so you’ll want to check this one out!

Favorite Securosis Posts:
  • Rich: Adrian’s post on Immutable Log Files.
  • Adrian: My post on the Sprint Data Leak… I try not to post on breaches as there are so many, but this has been so bad for so long that I could not help myself.

Favorite Outside Posts:
  • Adrian: Rafal’s post on the Fox News Fail … not for the original post, but for the dialog afterwards.
  • Rich: Sure, we’re suckers for a plug, but Jeremiah posts a good list of recent web security related topics.

Top News and Posts:
  • Comcast usernames and passwords leaked.
  • Oracle releases multiple Linux security patches.
  • Wikileaks exposes blacklist.
  • The PCI compliance shell game … compliant with the standard right up until the nanosecond after they were breached.

Blog Comment of the Week: From Ariel at Core Security: “Actually, Kelsey reinvented an idea that was previously exposed and published by Futoransky and Kargieman from Core ([1]) and implemented in the msyslog package ([2]) since 1996.” I learn something new every day! Now, if so many great security minds think this is a good idea, why does no one want this technology?


Immutable Log Files

I have been working on a project lately that I don’t really get to talk about much, but it involves a technology that I am quite fond of: immutable log files. For those of you who do not know what these are, immutable logs are log files protected from tampering and erroneous insertion. Depending upon the implementation, the files can have additional protections from poisoning and fictional recreation/forgery as well. There are many other names for this type of technology, such as content integrity verification, court-admissible evidentiary data, incontrovertible data, and even “signed and sequenced” data. Regardless of name, the intent is to create a tamper-resistant archive of events.

A high-level overview of the process might look like the following:
  • Take a log entry, syslog for example, and add a time stamp and/or sequence number to that entry.
  • Create a digital hash of the log entry to ensure integrity, and cryptographically sign it so you know the hash was produced by whatever authority is entrusted with managing the log. Now the log entry contains self-validating information as well.
  • Bundle each subsequent log entry with one or more data points from previous log entries prior to creating its hash, to ensure that the sequence of events has not been altered.

What you end up with is a chain of events that can be verified for data integrity. There are many variants to this process that offer additional assurances, but that is the gist of it; a minimal sketch appears at the end of this post. I had the opportunity back in 1998 to implement a variant of this technology, based upon what I consider to be ground-breaking work by John Kelsey, then of Counterpane. We had a specific problem with dispute resolution we needed to address in our e-Commerce system, and this paper describes a generic approach to solving the problem, but also includes some references that were specific to our technology and not applicable to most needs. There are a few vendors that have advanced the state of the art in this area, but they largely go unnoticed by the security community at large. While this is a valuable technology for solving certain problems, it remains a rare feature.

I am writing this post because I have a request for both the security and IT practitioner communities. I am interested in knowing whether you or your organization uses this type of technology today, or whether it is something you have considered. If you are a product vendor and you are thinking about implementing such a technology as a competitive differentiator, I would greatly appreciate a heads-up. I am seeing some indications that this may become a requirement for government, based upon the recent draft for tamper-resistant syslog files by John Kelsey of NIST, J Callas of PGP, and A Clemm from Cisco, but the status of this draft work remains elusive. I have spoken with a half dozen security strategists who consider this a compelling solution to several different data integrity problems in the areas of eDiscovery and electronic data archival. If this is something you have interest in, please take a minute and post a comment, or shoot me an email at alane at securosis with the obligatory dot and com postfix. I would very much like to know what your thoughts are.
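
For the curious, here is a minimal sketch of the chaining described above, in Python. It is illustrative only: the key handling is deliberately simplified, a real system would keep the signing key with a separate authority or an HSM, and asymmetric signatures would let third parties verify entries without holding the secret.

    # Minimal hash-chained, MAC-signed log. Illustrative only: the signing key
    # is hard-coded here, where a real system would keep it with a separate
    # authority (or HSM) and likely use asymmetric signatures for third-party
    # verification.
    import hashlib
    import hmac
    import json
    import time

    SIGNING_KEY = b"replace-with-a-real-secret"  # placeholder

    def append_entry(chain, message):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {
            "seq": len(chain),               # sequence number
            "timestamp": time.time(),        # time stamp
            "message": message,
            "prev_hash": prev_hash,          # links this entry to its predecessor
        }
        payload = json.dumps(body, sort_keys=True).encode()
        entry_hash = hashlib.sha256(payload).hexdigest()
        signature = hmac.new(SIGNING_KEY, entry_hash.encode(), hashlib.sha256).hexdigest()
        entry = dict(body, hash=entry_hash, sig=signature)
        chain.append(entry)
        return entry

    def verify_chain(chain):
        prev_hash = "0" * 64
        for entry in chain:
            body = {k: entry[k] for k in ("seq", "timestamp", "message", "prev_hash")}
            payload = json.dumps(body, sort_keys=True).encode()
            if entry["prev_hash"] != prev_hash:
                return False    # sequence altered or an entry removed
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False    # entry contents tampered with
            expected = hmac.new(SIGNING_KEY, entry["hash"].encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, entry["sig"]):
                return False    # hash was not produced by the log authority
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, "user alice logged in")
    append_entry(log, "user alice changed password")
    assert verify_chain(log)

Alter, remove, or reorder any entry and verification fails from that point forward, which is exactly the property dispute resolution and eDiscovery use cases care about.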


Sprint Customer Data Leaked … again

Brian Krebs posted last week that Sprint is claiming an employee has stolen customer data, including PINs and the “security question” you can use to recover a password. This is a vendor I have been following for a long time, and I’m surprised we have not seen this type of activity before. From Brian’s blog:

“It appears this employee may have provided customer information to a third party in violation of Sprint policy and state law. We have terminated this employee. The information that may have been compromised includes your name, address, wireless phone number, Sprint account number, the answer to your security question, and the name of the authorized point of contact on your account.”

I wonder if they ever managed to remove the customer’s social security number as the primary key for their customer care database. It would appear that they did at least remove CC# and SSN# from the customer care application UI, which was my primary beef with them:

“We implemented a billing platform about a year ago that has advanced security features designed to catch things like an employee accessing information that they shouldn’t be,” Sullivan said. “That platform limits information that employees can access, such as Social Security numbers, and any sort of payment information.”

I have always considered Sprint lax in regards to their data security practices. They exposed my information before any breach notification laws were in effect, with my personal and billing information going to a third party. Worse, the person who obtained the data called customer care and was subsequently provided my SSN#, and was able to shut off my account. I am not sure what these “advanced security features” are exactly, but I would need to concede that the improvement must be working if the credit card numbers that they require for account creation were not stolen as well. I really do wonder if (and hope) this will prompt some form of internal investigation, and I always wonder whether Sprint could be considered a contributor in this breach case if they provided employees far more data than was necessary to do their jobs. Think of it this way: if it was “thousands” of accounts, clearly the employee must have had access and been able to copy them electronically.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.