Introducing Chris Pepper

I’d like to take a moment and introduce a new contributor to Securosis. Chris Pepper is a senior systems administrator at Rockefeller University in NYC and a longtime contributor to TidBITS and various other publications. Chris is one of the most knowledgeable sysadmins I’ve ever known and the first person I turn to when I need command-line support on various *nix flavors or the Mac. Chris and I have been friends since sometime near the end of high school (we went to different schools). I was insanely jealous of his Apple Newton, and after years of debate he’s the one who finally convinced me to give Macs a shot (I mostly use Macs these days). He’s often reviewed my work before publication, and his skills as an editor are frightening. I’ve asked Chris to join Securosis as a contributor because of his perspective as an end user. He’s not a security vendor, out-of-date industry analyst (that’s me), full-time security professional, or even product vendor. Chris works at a major university, dealing with security issues on a daily basis as an administrator. Over the years Chris has hosted my personal domains out of his apartment, and his attention to detail with regards to security is far beyond most of the professional services I’ve used. He’s even discovered a few vulnerabilities in the course of his admin duties. Chris’ personal blog is located at http://www.reppep.com/weblog/pepper/. He’s here to keep us security pundits honest and bring a little bit of the real world into our discussions…


Just a Spoonful of Obscurity Makes the DefCon Level Go Down!

Rich,

It feels heretical, but I can agree that obscurity can provide some security. The problem comes when people count on secrecy as their only or primary security.

Jim: “Oh, we don’t have to encrypt passwords. Sniffing is hard!”
Bob: “Hey, thank you for those credit card numbers!”
Jim: “What?”
Bob: “Ha ha, my friend Joe got a job at your ISP about a year ago, and started looking for goodies.”

Vendor: “Nobody will ever bother looking in the MySQL DB for the passwords.”
Cracker: “0WNED! Thank you, and let’s see how many of your users use the same passwords for their electronic banking!”
Vendor: “But nobody else has access to the server!”
Cracker: “But I found a hole in your PHP bulletin board. Game over, man!”

GeniousDood: “I just invented a perfect encryption algorithm! Nobody will ever break it!”
Skeptic: “How do you know?”
GeniousDood: “I checked. It’s unbeatable.”
Skeptic: “Thanks, but I’ll stick with something at least one disinterested person has confidence in – preferably Schneier.”

IT Droid: “Check out our new office webcam! It’s at http://camera.example.com”
Paranoid: “What’s the password?”
IT Droid: “Password? No one’s ever going to find it.”
Paranoid: “Google did.”

I can accept that obscurity makes cracking attempts more difficult. This additional difficulty in breaking into a system might be enough to discourage attackers. Remember – you don’t have to outrun the bear, just your slowest friend!

Also, if you have a short period before the fix is available, during which there is a gaping hole in your defenses, it’s obviously going to be easier for people to exploit if they have full details, so it’s hard to see how full disclosure could ever look like a good thing to a commercial vendor. On the other hand, open source projects are more likely to benefit from full disclosure, as it substantially widens the pool of people who can provide a patch, and open source communities attract people who want to deal with security problems themselves (certainly many more Linux & FreeBSD admins want to patch Sendmail or BIND than Windows users want to patch IE or their DLLs). Security companies are like this too – they want enough info to protect their customers. Restricted-access information is fine, as long as the security companies are on the list – such access becomes another asset for the security vendor.

But back to obscurity: it can be used as one component in a layered defense, or it can be used as the only defense. One is useful; the other is dumb. Alternatively, obscurity can be used as a temporary barrier: “It will take them a few days to figure out how to break IE, so we’ll get a chance to distribute the patch before they can start attacking our users.” This is a very different proposition than “permanent obscurity” as (hopefully part of) a defense. The problem, of course, is that not everybody gets the patch immediately. Some people don’t because they don’t know about it; others because they have important systems they can’t change – perhaps because the risk of breakage is unacceptable, or the “fix” is unacceptable. This may last a few days, or forever. Some people don’t have the bandwidth (full dot upgrades for Mac OS, and Service Packs for Windows, are large downloads), and may or may not get the upgrades another way. Some just don’t want to be bothered, and believe they’re invulnerable. Others cannot afford the upgrades. So those people may have no defense aside from obscurity, and they are vulnerable; on the Windows side, they tend to get hacked.
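Jim and the vendor above share the same root mistake: the secrecy of the database was the only control. The boring fix is to assume an attacker will eventually read your tables, and store salted, slow-to-brute-force password hashes instead of plaintext. Here’s a minimal sketch in Python – purely illustrative, and the iteration count is my own assumption, not a vetted recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, key) to store in the DB instead of the plaintext."""
    salt = os.urandom(16)  # unique per user; defeats precomputed tables
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_key)  # constant-time compare

salt, key = hash_password("hunter2")
assert verify_password("hunter2", salt, key)
assert not verify_password("letmein", salt, key)
```

Now the cracker who finds the hole in your PHP bulletin board gets a pile of salted hashes, not a list of banking passwords.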
Obscurity is just not a good long-term defense, since most secrets leak eventually, and patches can be reverse-engineered to find the hole. This leads into the issue of vendors who don’t patch in a timely manner, but I have to leave something for Rich to rant about…


Encryption is Cheaper than Destruction

I like to think Richard Stiennon and I are good friends. He was at my wedding in Mexico. I took him and his son skiing up at Copper Mountain, where I used to patrol. For a time he even rented space in my condo in Boulder while I was slowly moving to Phoenix. We’d swap my car out at the airport parking lot; it was very convenient. But I never suspected he was so violent. Goes to show you that you can never really know someone.

It all started with this post on his blog, where he advocates smashing old hard drives rather than taking the risk of the data being later recovered. I thought, “Okay, he’s just trying to make a point.” But yesterday, over at Emergent Chaos, he expanded his violent tendencies towards cell phones in this post. Now I’m worried. I mean, this is a man I’ve left in my home, who spent evenings in Mexico drinking with my family members. I’ve even loaned him my cellphone for the occasional call! I feel lucky it came back in one piece. Maybe because I had it in silent mode or something.

But here’s some food for thought. I was talking to a client a while back about old hard drives. They were considering encryption, since their SAN (Storage Area Network) was managed by their reseller, who frequently swapped out failed drives. They looked at degaussing or smashing the drives- just as Richard suggested. The cost? $8,000,000.00 a year. $8M a year. Wow. That’s a fair amount of cash, even with the weakened dollar. And those cell phones? The pollutants in them are pretty potent, and many recycled phones end up in needy hands. So Richard might want to consider other options.

We estimated that client would only need to pay $100,000-$200,000 to encrypt that SAN. Keys are stored externally, so the data on any swapped-out drive is unrecoverable. And portable devices? If there’s something sensitive on them you should really be encrypting them anyway. People lose those things, you know.

Richard- I know a good anger management therapist. Call me; your friends are worried.
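A P.S. for the geeks: the trick that makes encryption cheaper than destruction is that “destroying” data becomes destroying a key. Here’s a toy sketch in Python of that crypto-shredding idea, using the third-party cryptography package- my own illustration, not how any particular SAN product actually works:

```python
import os

# Third-party package: pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key lives in an external key manager, never on the drives themselves.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

# What actually lands on the drive is ciphertext.
nonce = os.urandom(12)
stored_on_drive = aesgcm.encrypt(nonce, b"cardholder records", None)

# Drive fails and goes back to the reseller? Shred the key, not the platters.
del aesgcm, key
# Without the key, the ciphertext on the returned drive is unrecoverable noise.
```

No degausser, no hammer, and the reseller can swap drives all day long.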


Dealing with Security Vendor Exaggerations

I generally don’t discuss “industry” issues here, since that’s what I get paid to do at my day job. And if I start offering for free here what I get paid to do over there, I may find myself offered the opportunity to do it for free on a permanent basis. Mike Rothman runs one of the better industry-oriented blogs. He and I used to sit across the table when he ran marketing for one of the vendors I cover. I like Mike a lot better as an analyst. He’s running an interesting debate on the problems with the security market. The debate started with an article in Dark Reading, moves to Mike’s blog here, Alan Shimel responds, then Mike gets the last word (for now). At the crux of their debate is the honesty of vendors and the aggressiveness of their sales and marketing tactics.

My opinion? I work with many excellent security vendors who are out to protect their customers and fairly make a little money on the way. But every single day, either directly to me or relayed by my clients, vendors misrepresent their products or outright lie about capabilities. Usually it’s the marketing or sales teams, not the product teams. Do all vendors lie? No, but the good vendors out there are frequently forced into bad positions by their less scrupulous competition. Yes, vendors lie. So does your Mom (remember the tooth fairy?), but that doesn’t make her the embodiment of pure evil. Probably. And some of this is simply passion for their products. Everyone thinks their baby is the best looking, smartest, most talented in the world, but there are still a lot of dumb, ugly couch potatoes. If you don’t believe in what you do, you shouldn’t be doing it.

So how do you cut through the crap? My self-serving answer is to use your friendly neighborhood analyst. The biggest part of our job, at least for those of us who are end-user focused, is to help make appropriate buying decisions and separate hype from reality. Our testing lab is the production environment of our end-user clients- if a product doesn’t work, we’ll eventually hear about it. But if you don’t trust or can’t afford an analyst firm, just do what we do. Ask your vendors for customer references in production deployments; if a feature isn’t in production, with a reference-able client, it isn’t real. Then talk to your network and see what other companies like yours are doing, and if any have deployed the product. Let’s be honest- most of you readers are either security types, or at least have a passing interest in security. It’s not like we trust anyone anyway.


What I Really Meant About Security Through Obscurity

I’ve been publishing in various formats for nearly 10 years now, and I have to admit I’m really enjoying some of the features of blogging. Aside from writing in a more personal voice, I appreciate the near-instant feedback- from anyone, anywhere- of the blogosphere. I actually enjoy having my ideas challenged and debated. A couple days ago I posted a somewhat lengthy rant on disclosure. Not that I think disclosure is bad, but we aren’t always willing to discuss the deeper motivations of those involved, on all sides, and admit that in many cases the process can favor the bad guys. In the information security world we often state that “security through obscurity” never works and secrets always leak. I stated:

“But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is for a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good.”

And Martin McKeay called me on it here. So did the ever-present Mike Rothman here. Martin stated:

“One more minor issue I have with the article is the use of security through obscurity: while this works for a while, security through obscurity is the most brittle of all types of security. All it takes is one hacker releasing his notes on your security vulnerability and what little security you had because of the lack of knowledge is gone. I sure don’t want my bank relying on security through obscurity to protect my bank account. Not that they’d get much right now, a couple of days before the end of the month.”

I agree completely (and Martin’s bank funds really are running a little low). Security through obscurity only works for a limited amount of time. Eventually someone will reverse engineer the patch or figure out the vulnerability on their own. Also, while it might not be important for every sysadmin to know the details of a flaw, it’s sure important for security vendors to get a peek before the bad guys, so the good guys can try to shield against attacks. Mike says, “Since most of the bad guys would just as soon take the path of least resistance, obscuring information about vulnerabilities is a short term strategy that works.” And that’s the point I meant to make. These days a few weeks can mean the difference between completely shielding and patching your environment, or getting nailed by the early exploits. This wasn’t true a few years ago, but it’s true today. Automated tools are making exploit development much easier and faster- we need to start dropping some obstacles in the bad guys’ path. We’re just trying to slow down the mass exploits and the script kiddies long enough to give us a fighting chance.

That said, product vendors need to work more with security vendors on “staged disclosure” (I like to make up phrases; later I’ll make up an acronym just for the fun of it). Security vendors need more detailed vulnerability information to better tune their products before exploits appear. They shouldn’t have to reverse engineer product patches to do this. This also means those security vendors need to share vulnerability details instead of treating them like their own IP. Finally, product vendors need to provide their customers enough information to make an appropriate risk decision. Too much information helps the bad guys, but too little hurts the good guys.
Then again, perhaps that’s just responsible disclosure… (edited 9/1) Just to clarify- I in no way think security through obscurity is a meaningful security control on its own. I think it can be a useful tool to buy us time, but we should never rely on it. It’s just too fragile.


Security is Like Dentistry

Guess where I spent the day? I’ll warn you now, I have a bad habit of taking metaphors too far. Security is like dentistry:

1. It costs more than you think it should.
2. It’s more painful than the providers ever tell you.
3. If you don’t keep up with ongoing maintenance it costs A LOT more and is WAY more painful.
4. It’s really hard to find a good provider.
5. Most vendors prey on fear.
6. Some vendors sell a pretty smile- not that their products actually work.
7. If you make decisions based only on financial Return On Investment you’ll really screw things up.

And finally:

8. No matter how many times you strap someone to a chair, shine a light in their face, and poke them with sharp objects until they bleed, you can’t make them any smarter.

Time to go rinse… (edited 8/31, adding point 7)


The 3 Dirty Little Secrets of Disclosure No One Wants to Talk About

As a child, one of the first signs of my budding geekness was a strange interest in professional “lingo”. Maybe it was an odd side effect of learning to walk at a volunteer ambulance headquarters in Jersey. Who knows what debilitating effects I suffered due to extended childhood exposure to radon, the air imbued with the random chemicals endemic to Jersey, and the staccato language of the early Emergency Medical Technicians whose ranks I would feel compelled to join later in life. But this interest wasn’t limited to the realm of lights and sirens; it extended to professional subcultures ranging from emergency services, to astronauts, to the military, to professional photographers. As I aged and even joined some of these groups, I continued to relish the mechanical patois reserved for those earning expertise in a domain.

Lingo is often a compression of language: a tool for condensing vast knowledge or concepts into a sound byte easily communicated to a trained recipient, slicing through the restrictive ambiguity of generic language. But lingo is also used as a tool of exclusion, or to mask complexity. The world of technology in general, and information security in particular, is as guilty of lingo abuse as any doctor, lawyer, or sanitation specialist. Nowhere is this more apparent than in our discussions of “Disclosure”- a simple term evoking religious fervor among hackers, dread among vendors, and misunderstanding among normal citizens and the media, who wonder if it’s just a euphemism for online dating (now with photos!). Disclosure is a complex issue worthy of full treatment; but today I’m going to focus on just 3 dirty little secrets. I’ll cut through the lingo to focus on the three problems of disclosure that I believe create most of the complexity. After the jump, that is…

“Disclosure” is a bizarre process nearly unique to the world of information technology. For those of you not in the industry, “disclosure” is the term we use to describe the process of releasing information about vulnerabilities (flaws in software and hardware that attackers use to hack your systems). These flaws aren’t always discovered by the vendors making the products. In fact, after a product is released they are usually discovered by outsiders who either accidentally or purposely find the vulnerabilities. Keeping with our theme of “lingo”, they’re often described as “white hats”, “black hats”, and “agnostic transgender grey hats”. You can think of disclosure as a big-ass product recall where the vendor tells you “mistakes were made” and you need to fix your car with an updated part (except they don’t recall the product, you can only get the part if you have the right support contract and enough bandwidth, you have to pay all the costs of the mechanic (unless you do it yourself), you bear all responsibility for fixing your car the right way, if you don’t fix it or fix it wrong you’re responsible for any children killed, and the car manufacturer is in no way actually responsible for the car working before the fix, after the fix, or in any related dimensions where they may sell said product). It’s really all your fault, you know.

Conceptually, “disclosure” is the process of releasing information about the flaw. The theory is that consumers of the product have a right to know there’s a security problem, and with the right level of detail can protect themselves.
With “full disclosure”, all information is released- sometimes before there’s a patch, sometimes after; sometimes the discoverer works with the vendor (not always), but always with intense technical detail. “Responsible disclosure” means the researcher has notified the vendor, provided them with details so they can build a fix, and doesn’t release any information to anyone until a patch is released or they find someone exploiting the flaw in the wild. Of course, some vendors use the concept of responsible disclosure as a tool to “manage” researchers looking at their products. “Graphic disclosure” refers to either full disclosure with extreme prejudice, or online dating (now with photos!).

There’s a lot of confusion, even within the industry, as to what we really mean by disclosure and whether it’s good or bad to make this information public. Unlike many other industries, we seem to feel it’s wrong for a vendor to fix a flaw without making it public. Some vendors even buy flaws in other vendors’ products; just look at the controversy around yesterday’s announcement from TippingPoint. There was a great panel with all sides represented at the recent Black Hat conference.

So what are the dirty little secrets?

1. Full disclosure helps the bad guys.
2. It’s about ego, control, and competition.
3. We need the threat of full disclosure or vendors will ignore security.

There. I’ve said it. Full disclosure sucks, but many vendors would screw their customers and ignore security without it.

Some of the support for full disclosure originates with the concept that “security through obscurity” always fails: if you keep a hole secret, the bad guys will eventually discover it anyway, so it’s better to make it public so the good guys can protect themselves. We find the roots of the security through obscurity concept in cryptography (early information security theory was dominated by cryptographers). Secret crypto techniques were bad, since they might not work; opening the mathematical equations to public scrutiny reduces the chances of flaws and improves security. As with many acts of creation, it’s nearly impossible to accurately proof your own work [as my friend and unofficial editor Chris just pointed out]. But in the world of traditional security, obscurity sure as hell works. Not all bad guys are created equal, and the harder I make it for them to find the hole in my security system, the harder it is for a successful attack. Especially if I know where the hole is and fix it before they find it. Secrets can be good. The more we disclose, the easier we make life for the bad guys. “Full disclosure” means we release all the little details. It


Off Topic: A Little Perspective

This has nothing to do with security, other than the fact that Mike Rothman is a security analyst. Sometimes it’s worth sitting back and evaluating why you’re in the race in the first place. It’s all too easy to get caught up in the insanity of day-to-day demands or the incredibly deceptive priorities of the corporate and government rat races. A few months ago I took a step back and decided to reduce travel, stay healthy, and start this blog. I wanted a more personal outlet for writing on topics, and in a style, that would be inappropriate at my day job (in other words, more fun). My challenge is running this site in a way that doesn’t create a conflict of interest with my employer, and thus I don’t publish anything here that I should be publishing there. Mike just went off and started his own company to support his real priorities. You should really read this.


Experiences with FileVault- Mac Encryption

Believe it or not, despite accusations that my coverage of the Mac wireless hack is all part of some anti-Apple black PR conspiracy, I’m a Mac user. One so addicted I bought my Mom one and had it shipped to me so I could “configure” it. Okay, really I had to send mine in for service and I needed another Intel Mac so I could run it off an external hard drive with an image of my MacBook Pro. I mean, I might have been without it for, like, 5-7 days, and that’s just not acceptable. How can I carry out my anti-Apple black PR conspiracy without a Mac to write my blog entries on?

But I have something I need to admit. It’s sort of embarrassing, but it’s time to share. You see, I’m a security professional. Not just a security professional, but one who focuses on data security. The kind that gets paid to run around telling the media how stupid everyone is for not protecting their data and doing things like, uh, encrypting their hard drives. Not that I… um… was encrypting my laptop. You see, I was in a bit of denial. At first it was because I still used my corporate PC and didn’t have access to good encryption software that wouldn’t mess up my configuration. Which was really me just lying to myself. Later I told myself I was so good at physical security, and paranoid in general, that I’d never let my laptop get stolen. Yep, another lie. Finally, the ultimate in self-deception: “Well, I really don’t have anything sensitive on there in the first place.” Right. None of those “not for disclosure” PowerPoint presentations from vendors are really sensitive, are they? I mean, how much personal stuff like Social Security numbers or credit card info could really be hiding in Outlook (in my Parallels virtual machine) or Mail.app? I mean really!

When I decided to attend Black Hat and Defcon (home of the world’s most hostile network) right after an international trip to Australia and China, I figured it might be a good time to get off my ass and finally encrypt my laptop. For those of you not familiar with Macs, Apple has included encryption in the OS X operating system for a few years now, in a feature called FileVault. But there’s been a lot of debate over how “safe” FileVault is- not from a security standpoint, but from a reliability/recovery standpoint. In a recent thread on the TidBITS mailing list it didn’t seem that many people had much experience with FileVault, and perhaps some of the rumors were unfounded. Or not. Eventually the guilt caught up with me and it was time to take the encryption plunge. And so far FileVault is working like a 128-bit AES charm. (Details after the jump.)

FileVault isn’t the whole-drive encryption I typically recommend to enterprise clients. Rather than encrypting the entire hard drive, FileVault encrypts the home directory of the user. It’s a model well suited to Unix-style operating systems like OS X, where nearly any personal file or setting is in the home directory, as opposed to Windows systems, where data tends to be more distributed throughout the OS. OS X also includes an option to encrypt the memory cache so even temporary files are protected. The combination of encrypting the home directory and all virtual memory isn’t perfect security, but it’s good for most of us mere mortals worried about losing a laptop or hard drive.

FileVault works by creating an encrypted disk image for your home folder (an encrypted sparse image file). When you log in, the image mounts, and data is transparently encrypted and decrypted using 128-bit AES (Advanced Encryption Standard) as it moves to and from disk. Log out, and it unmounts, appearing as one big encrypted file.
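To make the model concrete, here’s a toy Python sketch of the concept- my analogy only, using the third-party cryptography package, not Apple’s actual implementation:

```python
# Toy model of FileVault's design: the whole home directory lives inside
# one encrypted blob, unlocked at login and opaque the rest of the time.
# Illustrative only; requires `pip install cryptography`.
from cryptography.fernet import Fernet

login_key = Fernet.generate_key()  # stands in for your login password
vault = Fernet(login_key)

# Logged out: the home directory is a single encrypted file on disk.
home_directory = b"Mail, Documents, browser cache, those NDA decks..."
disk_image = vault.encrypt(home_directory)

# Logged in: the image "mounts" and contents decrypt transparently.
assert vault.decrypt(disk_image) == home_directory
```

Everything you care about sits in that one blob- great against a thief, but it concentrates your risk.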
That’s where most people’s fears arise: your entire home directory- every file, including photos, music, video, email, and everything else- is in one big file, just waiting for a few corrupt bits to make it unreadable. If your system suddenly crashes and corrupts the image (yes, even Macs crash), there’s the possibility of losing everything. For more details on the inner workings of FileVault, check out this article at macdevcenter.com.

After doing some research I took a few steps to prep my system. To help with performance I moved my iTunes library to /Users/Shared, so it would be out of my home directory and keep the image file smaller. My photos were already on an external drive, and I only have a few videos. That dropped around 30 GB from my home directory. I then created a new user account for running backups. I use the excellent SuperDuper to back up my Macs to external drives. By using a separate backup account, the entire encrypted disk image is backed up, and thus protected even on the external drive. Since SuperDuper creates bootable copies of hard drives, you get the nice option of being able to run completely off the external drive, on any Mac, should you lose your primary drive or even the entire computer. No restore needed. At this point I also committed to backing up nightly instead of weekly.

From there it was a simple matter of going into the Security preference pane, setting a master password (just in case I forget/screw up my primary), enabling virtual memory encryption, and turning on FileVault. An hour or so later it finished encrypting the drive and I was good to go.

So how did it work? Good. Maybe even great. I’ve been running it for about 6 weeks now and haven’t had any problems. Performance seems as good as before, although I do have 2 GB of memory and a 7200 rpm hard drive. Even with a few system crashes I haven’t experienced any corruption. What’s also nice is that I do most of my work-related computing in a Windows XP Parallels virtual machine, so now even my Windows


Voting Machine Idiocy- and a Proposal for a Reasonable Standard

Ah Diebold, how we’ve missed you. In yet another example of gross negligence with our most sacred political process, we find our favorite manufacturer of ATMs and voting machines yet again in the news- this time with a series of failures in the Alaskan primary.

From Slashdot: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/15859396/article.pl
From Engadget: http://www.engadget.com/2006/08/24/diebold-machines-fail-in-alaska-primary/

For those of you who don’t follow the twists and turns of this seriously shady company, Diebold has a long history of insecure voting machines, battling any attempt to regulate better voting security, and attacking anyone who suggests they might have any teensy-weensy wittle problem that might let someone, you know, hijack an election. For more on the past, check out the work by Black Box Voting and the very respected Avi Rubin.

This really pisses me off. Voting, whatever your political party (except maybe you anarchists and fascists), is the ultimate expression of a democracy. If we can’t protect the voting process, we might as well give up and just sell the country to the highest bidder (and yes, I feel the same way about poll taxes, gerrymandering, and anything else that interferes with the right to vote). I have two simple suggestions to resolve this idiocy:

1. Require a voter-verified paper trail, with random audits, at the federal level for all elections (right now only certain states require it).
2. Hold voting machines to the same security standards as gambling machines!

Think about how highly secure gambling machines are. I first heard this suggestion from Ray Wagner (a fellow analyst at my day job), and it was so simple in concept it amazes me every time someone claims higher standards are just too hard. Heck, we already have testing labs, protocols, and procedures in place. I’m not a conspiracy theorist, despite many hours dedicated to watching The X-Files, but sometimes ya just gotta wonder…
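For the curious, here’s a toy Python sketch of what the mechanics of those random audits could look like: randomly draw precincts whose voter-verified paper trails get hand-counted against the machine totals. The precinct names, sample size, and seed are all hypothetical:

```python
import random

def pick_audit_precincts(precinct_ids, sample_fraction=0.05, seed=None):
    """Randomly select precincts whose paper ballots get hand-counted."""
    rng = random.Random(seed)  # seed could come from a public drawing ceremony
    sample_size = max(1, round(len(precinct_ids) * sample_fraction))
    return sorted(rng.sample(precinct_ids, sample_size))

# Hypothetical precinct list; any resemblance to a real state is coincidental.
precincts = [f"precinct-{n:03d}" for n in range(1, 439)]
print(pick_audit_precincts(precincts, sample_fraction=0.05, seed=2006))
```

If the hand counts don’t match the machine totals, you escalate to a wider audit. Statisticians have far better sampling schemes than this, but even a dumb 5% spot check beats trusting the machines blindly.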


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, the handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote appearing in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community, to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.