Almost Forgot to (Virtually) Smash That Hard Drive

A few months ago I picked up a Western Digital external hard drive at Costco since my MacBook’s internal drive was a bit stuffed with digital photos. The WD drive is a pretty nice USB drive and really portable. The problem? I started having some intermittent failures on the drive. Since this is where I now keep my wedding photos (backed up somewhere else, of course) I decided to return it before it totally died on me. I got the replacement drive, packed up the original, and headed to the shipping store…

…where I realized I hadn’t wiped the drive. While it’s just photos, and none of them are of an embarrassing nature, I still don’t relish the idea of seeing them “enhanced” and posted on MySpace. Lucky for me I use a Mac, and secure erasing is an integral part of Disk Utility. I ran the program, clicked the security options button, and chose the 7-pass overwrite option. Seven passes might be overkill for some non-sensitive photos, but I figured it would be a good test to see how long it takes. For the record, the answer is about 7 hours for a 120 GB USB 2.0 external drive. Oh well, I guess it isn’t going out today.

But I’m darn glad I remembered to wipe the drive before shipping it back. I’d really hate to see any pictures of our cat show up on some sick kitty-porn site. And I’m really glad Apple makes it so easy. Microsoft also has secure formatting options, but generally you need a third party tool (or to write your own script) to get the same degree of security. Unless the data is encrypted, without overwriting it’s pretty likely someone can recover it.

Then again, smashing is probably faster. But Western Digital might not appreciate a smashed return. I’d probably lose my deposit.

(edited 9/10 to add disk size of 120 GB)
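
For anyone stuck on a platform without a built-in secure erase, here’s a minimal sketch of the same idea in Python. It’s illustrative only: the path, pass count, and chunk size are assumptions for the example, and on a Mac you’d just let Disk Utility (or diskutil secureErase) do the work.

    import os

    def multipass_wipe(path, passes=7, chunk=1024 * 1024):
        """Overwrite a file with random data several times before disposal.

        For a raw device you'd pass its size explicitly instead of using
        os.path.getsize(), and you'd need the appropriate privileges.
        """
        size = os.path.getsize(path)
        for p in range(passes):
            with open(path, "r+b") as f:
                written = 0
                while written < size:
                    block = os.urandom(min(chunk, size - written))
                    f.write(block)
                    written += len(block)
                f.flush()
                os.fsync(f.fileno())  # push the pass all the way to the device
            print("pass %d of %d complete" % (p + 1, passes))

    # Sanity check on the timing above: 7 passes over 120 GB is roughly 840 GB
    # written; at USB 2.0's real-world ~35 MB/s that works out to around 7 hours.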


It’s All About the Users (Interface)

I’m sitting in the Martini Monkey in San Jose airport, by far the best airport bar in history and possibly my favorite bar anywhere in the US. This place is a seriously funky oasis for those of us banished to the purgatory of airport terminals and solitary $10 crap beers in our hotel rooms. Okay, I might be on my 2nd-ish beer.

I just spent the past two days working with clients out in the Valley area. Both are security startups, both are in pretty exciting markets, and I’ve worked with both for a while now. One is about to kick serious donkey; the other may fail despite possibly the best technology in the market. What’s the difference? Giving the audience what they want.

Many of the vendors I’ve worked with over the years will probably tell you I’m a royal pain in the ass. I consider my objectivity to be the most important asset I bring to the analyst market and I do everything I can to protect it. You’ll never see a custom quote in a press release out of me (any quotes are lifted from published research), I don’t take gifts over $10 (which limits me to t-shirts I’ll never wear or USB drives of dubious capacity), I rarely do dinners, and I tell all vendors, even the ones I like, that I assume nothing they tell me is true until I hear it from a client. I don’t even tell vendors I have this blog, won’t ever discuss a vendor I’m working with, and won’t talk about this site with anyone I cover. But there’s one way you vendors can influence me- by making a good product that meets customer needs.

Back to the two vendors (who hopefully aren’t reading this). As egotistical as I am, the one point I consistently emphasize with vendors I work with is: shame on you if you don’t validate every piece of advice I give you with your users. End users are a mixed blessing. As a former developer I know they can either save you or destroy you, especially when it comes to interfaces. This is particularly problematic in the security market, where we deal with multiple demographics- ranging from highly technical security experts to some dude who’s just off the help desk. Users can drag you through development cycles where you’re constantly adding features or UI widgets to meet the specific needs of one individual that don’t apply anywhere else. But the best product managers separate the wheat from the chaff and, rather than being distracted, focus efforts on those few fundamental features that appeal to the broadest client base.

Why is this important? Because UI is everything. Not just because it makes your product look pretty, but because a good UI increases the productivity of your users. A bad UI can add hours to someone’s workday, hide the best features of your product, and banish you to the shelf. Not that some UI flash compensates for a lack of function, but a bad UI leads to an unmanageable product that’s nearly useless no matter its core functionality. One of the biggest transitions a startup can make is from an engineering-driven product, focused purely on technical functions, to a polished product that slides right into an enterprise security arsenal. From “cool” to “useful” to “operational”. I know some of you command line geeks disagree, but today’s security professionals can barely keep up with enterprise demands, and an effective management interface makes all the difference. Besides, when you’re looking at two nearly-functionally-identical products, odds are you’ll choose the pretty one. My wife is an extremely intelligent and amazing individual, but the fact that she’s attractive sure didn’t hurt. (If she reads this I might be in a bit of trouble- damn Martini Monkey.)

Back to my vendors- what’s the difference? One of the vendors today showed me the most significant UI advancement in a short time I’ve ever seen- and definitely the biggest I’ve seen in the security market. Aside from making a more marketable product, I believe these changes will seriously impact their user base and increase the usefulness of the product. It’s not perfect, but in one quarter these guys pulled off some hard-core advancements- all validated with their user base. It’s not just looks- they now have a serious competitive advantage because the product is more useful. The best function in the world is worthless if the user can’t find it and use it effectively. And just think how much easier the sales cycle will be when clients see the first product demo and all the functionality is right in their face.

The other vendor? They’ve also made some very significant product advances and have one of the best technologies in the market, but the UI still needs some big work. Not only is it hard for users to find all the functions, but the UI limitations make it seriously hard to pull all the value out of the product. My rough estimate is that some operations take 2-3 times as long as they need to. It’s an excellent product, functionally superior to most of the competition, but those functions are so hidden it hurts in both sales situations and day to day operations.

In rescue work we spend an obsessive amount of time packing and repacking our gear. Our goal is to optimize our ability to operate by making our tools an extension of our body. When I’m hanging off a cliff face 1000 feet off the ground at night I need to know, intuitively, where every piece of gear is hanging on me, and I need to be able to use it effectively blindfolded. Users shouldn’t have to spend weeks in training, and months in operations, to figure out security products. A well designed user interface can hide reams of functionality while increasing user productivity. It’s about helping the users get their day to day jobs done as efficiently as possible. Nothing else matters.


Security is My Business, and Business is Good

It’s been a while since Richard Stiennon and I worked together, and I’m learning one of the more enjoyable aspects of blogging is the opportunity to pick on him again. In a post today over at Threat-Chaos, Richard states:

“Most of the premise of this week’s Security Standard conference in Boston appears to be that CIO’s, CSO’s and IT security practitioners have to treat security as a business process just like any other. My perspective is that treating IT security like a business process is like treating a tactical military strike force as a business. While maintaining the capability of military forces could be a process open for improvement by applying some business discipline, actually fighting battles and overcoming opposing forces does not have much of the ‘business process’ about it. Security is much more akin to fighting a battle than it is to ‘aligning business objectives’.”

I admit I have a penchant for taking analogies a little too far, but I think comparing IT security to a military strike force might be a bit much. Sure, some of us have short haircuts and we like to talk in acronyms, but the whole never-getting-shot-at thing is a pretty significant difference. And the occasional conference t-shirt isn’t nearly as cool as all the free military swag.

Richard is trying to make a valid point: tactical operations in security aren’t as amenable to business objectives and process as perhaps some other areas of IT. But I, of course, disagree.

Back when I was a paramedic and firefighter we spent an inordinate amount of time optimizing our processes for dealing with crisis situations (I’ve moved on to firefighting instead of the military since my 4 years in NROTC probably don’t qualify as hardened battle experience). It was only by turning crisis (battle) into process that we could manage the challenges of life or death emergencies. It’s all about process, from the algorithms of CPR to the steps of rapid sequence intubation. Without process you have chaos. The more efficient you are at process, the more you can operationalize crisis management, the more effectively you can manage incidents. And these processes are even aligned to business objectives- some small (don’t kill the patient too much), some large (retain capacity for multiple operations, manage resources).

Once everyday crisis is process, it takes something really extreme to break operations and force you into incident management mode. I define incident management as “what you do when you’ve exceeded regular process”. This definition is stolen from what we refer to in emergency services as a “Mass Casualty Incident”, which is anything that exceeds your current capacities. In IT security, the more incidents you can manage through efficient process, the less you spend on a day to day operational basis, and the more resources you have available for “the big one”. Security that isn’t optimized and aligned with the business is really expensive, and unsustainable in the long run. Even the Army can’t treat every battle as a one-off.

It’s still all about business objectives…

…and business is good.

(bonus points to whoever identifies the source of the slaughtered paraphrase I used for the title)


Disclosure Humor

Really amusing considering our current discussions: “How to Handle Security Problems in Your Products”. This is from Thomas H. Ptacek, who’s blogging at matasano.com. I’m not sure how old it is.

Ptacek seems to think I’m smart (which I’ll never argue with) but that I have nothing new to say on disclosure. He’s probably right, but since we still don’t have industry consensus around disclosure there are still words to be written, and old thoughts to be repackaged in new ways. This is a pretty old debate, and one where I don’t expect resolution just because Pete Lindstrom, Ptacek, myself, or anyone else drops some blog posts. There are just too many competing interests…


Mac Wi-Fi: Gruber Needs to Let It Go (and Maynor and Ellch Should Ignore the Challenge)

Last Friday I was packing up for a weekend trip with my wife to Tucson when my faithful RSS reader chased me down with the latest post on Daring Fireball. I ignored it over the weekend, but I think it’s time for a response. John Gruber, ever the poker player (his words, not mine), issued an open challenge to Dave Maynor and John Ellch to crack a stock MacBook. If they win, they keep it. If they can’t break in, they pay Gruber the retail price. Today Gruber followed up with this post, upping the ante a bit and explaining why he feels this is a fair challenge. Adding to the data stream, John Ellch broke silence and released some details of a similar exploit using Centrino drivers (now patched) to the Daily Dave security mailing list.

First, some full disclosure of my own. I’ve been a fan of Daring Fireball for some time, John and I share a mutual friend, and we’ve traded a few emails over this. But I really wish he had handled this situation differently. I respect John, and hope this post isn’t taken out of context and used for flame bait.

Now, why do I think Gruber is making a mistake? Because his challenge is putting good people in bad positions, it isn’t necessarily good for security, and he isn’t playing for the right stakes. Maynor, Ellch, and the security community in general should just ignore the challenge.

Check out the original post, but John challenges Maynor and Ellch to take a stock MacBook with a basic configuration and delete a file off the desktop via remote exploit. John’s reason for the challenge?

“As for the earlier analogy to poker, I’m no fool. I don’t expect to lose this particular bet — but I don’t expect to win it, either. I expect to be ignored. I don’t think Maynor and Ellch have discovered such a vulnerability in the default MacBook AirPort card and driver, and so, if I’m right, they certainly won’t accept this challenge. I think what they’ve discovered — if they’ve in fact discovered anything useful at all — is a class of potential Wi-Fi-based exploit, which they demonstrated on a rigged MacBook to generate publicity at the expense of the Mac’s renowned reputation for security, but that they have not found an actual exploit based on this technique that works against the MacBook’s built-in AirPort. If I’m wrong, and they have discovered such a vulnerability, they may or may not choose to accept this challenge. But it’s a bet that they’ll only accept if they can win. It comes down to this. If I’m wrong, it’d be worth $1099 to know that MacBook users are in fact at risk. And if I’m right, someone needs to call Maynor and Ellch on their bullshit.”

John’s challenge is misplaced and he should drop it. Why?

I know the demonstration from Black Hat is real. Aside from being at the presentation, I had a personal demo (over live video) of exactly what they showed in the video. I got to ask detailed questions and walk through each step. Maynor and Ellch haven’t bullshitted anyone- their demo, as shown in the video and discussed in their presentation, is absolutely real. End of story. Want to see for yourself? Read to the end and you’ll have your own opportunity.

Using the third-party card for the demo is responsible. Their goal was to show a class of attack across multiple platforms without disclosing an unpatched vulnerability. By using an anonymous card, no single platform is exposed. Why the Mac? Because it demonstrates that a poorly written device driver can expose even a secure system to exploit. The third-party card highlights device drivers, not the OS, as the point of weakness. They could have shown this on Windows, but everyone would have assumed it was just another Windows vulnerability. But the Mac? Time to pay attention and demand more from device manufacturers.

Responsible disclosure encourages staying silent until a patch is released, or an exploit appears. If responsibility, protecting the good guys, or potential legal issues aren’t good enough for you, just understand it’s the accepted security industry practice. Some vendors and independent researchers might be willing to act irresponsibly, but I respect Maynor and Ellch for only discussing known, patched vulnerabilities. I won’t pretend there’s full consensus around disclosure; I’ve even covered it here, but a significant portion of the industry supports staying silent on vulnerabilities while working with the vendor to get a patch. The goal is to best protect users. Some vendors abuse this (to control image), as do some researchers (to gain attention), but Maynor and Ellch staying silent is very reasonable to many security experts. Remember- the demonstration was only a small part of their overall presentation and probably wouldn’t have gathered nearly as much attention if it weren’t for Brian Krebs’ sensationalist headline. That article quickly spun events out of control and is at the root of most of the current coverage and criticism.

Just confirming an exploit could hurt Maynor and Ellch. Two words: Mike Lynn. This is between Maynor, Ellch, SecureWorks, and any vendors (including Apple) they may or may not be working with. I like Daring Fireball, but SecureWorks has a history of responsible disclosure and working with affected vendors, and I see no reason for them to change that policy to satisfy the curiosity of bloggers, reporters, or any other outsider.

John’s stakes are too low. He’s asking Maynor and Ellch to bet their careers against MacBooks? If John puts Daring Fireball up as his ante the bet might be fair. Besides, Maynor already has a MacBook.

This challenge doesn’t help anyone. At all. Is my MacBook Pro vulnerable? I don’t know, but even if it is there’s not a damn thing I can do about it until Apple issues a patch. It’s not like I’m turning off my wireless until I hear there’s some


Totally Off Topic: A Very Sad Day

There are very few genuine, passionate people in this world. Today, with the death of Steve Irwin, there is one less.

http://www.cnn.com/2006/SHOWBIZ/TV/09/04/australia.irwin/index.html
http://animal.discovery.com/fansites/crochunter/steve/statement.html?clik=www_wh_2

Steve was a personal hero of mine. Not because of any crazy stunts, but because of his integrity, honesty, and utter dedication to his family and what he believed in. This is just a terrible loss, and the only ones who matter now are his family.

Although I never met Steve, I was fortunate enough to visit Australia Zoo twice over the past few years. There’s nothing I can say that’s worth saying, so I’m just posting some pictures from our last trip to Australia Zoo after the jump. When a memorial fund is established we’ll post the information here.

Note: Steve wasn’t there when we visited; that’s his staff in the photos.


Introducing Chris Pepper

I’d like to take a moment and introduce a new contributor to Securosis. Chris Pepper is a senior systems administrator at Rockefeller University in NYC and a longtime contributor to TidBITS and various other publications. Chris is one of the most knowledgeable sysadmins I’ve ever known and the first person I turn to when I need command-line support on various *nix flavors or the Mac.

Chris and I have been friends since sometime near the end of high school (we went to different schools). I was insanely jealous of his Apple Newton, and after years of debate he’s the one who finally convinced me to give Macs a shot (I mostly use Macs these days). He’s often reviewed my work before publication, and his skills as an editor are frightening.

I’ve asked Chris to join Securosis as a contributor because of his perspective as an end user. He’s not a security vendor, out-of-date industry analyst (that’s me), full-time security professional, or even product vendor. Chris works at a major university, dealing with security issues on a daily basis as an administrator. Over the years Chris has hosted my personal domains out of his apartment, and his attention to detail with regard to security is far beyond most of the professional services I’ve used. He’s even discovered a few vulnerabilities in the course of his admin duties. Chris’ personal blog is located at http://www.reppep.com/weblog/pepper/

He’s here to keep us security pundits honest and bring a little bit of the real world into our discussions…


Just a Spoonful of Obscurity Makes the DefCon Level Go Down!

Rich,

It feels heretical, but I can agree that obscurity can provide some security. The problem comes when people count on secrecy as their only or primary security.

Jim: “Oh, we don’t have to encrypt passwords. Sniffing is hard!”
Bob: “Hey, thank you for those credit card numbers!”
Jim: “What?”
Bob: “Ha ha, my friend Joe got a job at your ISP about a year ago, and started looking for goodies.”

Vendor: “Nobody will ever bother looking in the MySQL DB for the passwords.”
Cracker: “0WNED! Thank you, and let’s see how many of your users use the same passwords for their electronic banking!”
Vendor: “But nobody else has access to the server!”
Cracker: “But I found a hole in your PHP bulletin board. Game over, man!”

GeniousDood: “I just invented a perfect encryption algorithm! Nobody will ever break it!”
Skeptic: “How do you know?”
GeniousDood: “I checked. It’s unbeatable.”
Skeptic: “Thanks, but I’ll stick with something at least one disinterested person has confidence in – preferably Schneier.”

IT Droid: “Check out our new office webcam! It’s at http://camera.example.com”
Paranoid: “What’s the password?”
IT Droid: “Password? No-one’s ever going to find it.”
Paranoid: “Google did.”

I can accept that obscurity makes cracking attempts more difficult. This additional difficulty in breaking into a system might be enough to discourage attackers. Remember – you don’t have to outrun the bear, just your slowest friend!

Also, if you have a short period before the fix is available, during which there is a gaping hole in your defenses, obviously it’s going to be easier for people to exploit if they have full details, so it’s hard to see how full disclosure could ever look like a good thing to a commercial vendor. On the other hand, open source projects are more likely to benefit from full disclosure, as it substantially widens the pool of people who can provide a patch, and open source communities attract people who want to deal with security problems themselves (certainly many more Linux & FreeBSD admins want to patch Sendmail or BIND than Windows users want to patch IE or their DLLs). Security companies are like this too – they want enough info to protect their customers. Restricted-access information is fine, as long as the security companies are on the list – such access becomes another asset for the security vendor.

But back to obscurity: it can be used as one component in a layered defense, or it can be used as the only defense. One is useful, the other is dumb. Alternatively, obscurity can be used as a temporary barrier: “It will take them a few days to figure out how to break IE, so we’ll get a chance to distribute the patch before they can start attacking our users.” This is a very different proposition than “permanent obscurity” as (hopefully part of) a defense.

The problem, of course, is that not everybody gets the patch immediately. Some people don’t because they don’t know about it, others because they have important systems they can’t change – perhaps because the risk of breakage is unacceptable, or the “fix” is unacceptable. This may last a few days, or forever. Some people don’t have the bandwidth (full dot upgrades for Mac OS, and Service Packs for Windows, are large downloads), and may or may not get the upgrades another way. Some just don’t want to be bothered, and believe they’re invulnerable. Others cannot afford the upgrades. So those people may have no defense aside from obscurity, and they are vulnerable; on the Windows side, they tend to get hacked.

Obscurity is just not a good long-term defense, since most secrets leak eventually, and patches can be reverse-engineered to find the hole. This leads into the issue of vendors who don’t patch in a timely manner, but I have to leave something for Rich to rant about…
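
The “nobody will ever bother looking in the MySQL DB for the passwords” exchange is the easiest one to fix with actual security instead of secrecy. Here’s a minimal sketch of the alternative (my illustration, not any particular vendor’s code): store a salted, slow hash so a copied table or sniffed backup doesn’t hand out working credentials.

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None, iterations=200_000):
        # Random per-user salt plus an iterated hash; the parameters here are
        # illustrative assumptions, not a recommendation for a specific system.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, digest

    def verify_password(password, salt, expected):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)  # constant-time comparison

    # Store (salt, digest) in the database instead of the password itself.
    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("password123", salt, stored)

Nothing about that design has to stay secret for it to hold up, which is the point about obscurity as one layer rather than the foundation.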


Encryption is Cheaper than Destruction

I like to think Richard Stiennon and I are good friends. He was at my wedding in Mexico. I took him and his son skiing up at Copper Mountain where I used to patrol. For a time he even rented space in my condo in Boulder while I was slowly moving to Phoenix. We’d swap my car out at the airport parking lot; it was very convenient. But I never suspected he was so violent. Goes to show you that you can never really know someone.

It all started with this post on his blog, where he advocates smashing old hard drives rather than taking the risk of the data being recovered later. I thought, “okay, he’s just trying to make a point”. But yesterday, over at Emergent Chaos, he expanded his violent tendencies towards cell phones in this post. Now I’m worried. I mean, this is a man I’ve left in my home, who spent evenings in Mexico drinking with my family members. I’ve even loaned him my cellphone for the occasional call! I feel lucky it came back in one piece. Maybe because I had it in silent mode or something.

But here’s some food for thought. I was talking to a client a while back about old hard drives. They were considering encryption since their SAN (Storage Area Network) was managed by their reseller, who frequently swapped out failed drives. They looked at degaussing or smashing the drives- just as Richard suggested. The cost? $8,000,000.00 a year. $8M a year. Wow. That’s a fair amount of cash, even with the weakened dollar. And those cell phones? The pollutants in them are pretty potent, and many recycled phones end up in needy hands.

So Richard might want to consider other options. We estimated that client would only need to pay $100,000-$200,000 to encrypt that SAN. Keys are stored externally, so the data on any swapped-out drive is unrecoverable. And portable devices? If there’s something sensitive on them you should really be encrypting them anyway. People lose those things, you know.

Richard- I know a good anger management therapist. Call me, your friends are worried.
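
To make the alternative concrete, here’s a rough sketch of the idea (my own illustration, not the client’s actual SAN setup) using Python’s cryptography package: keep the key off the storage it protects, and a failed drive handed back to the reseller is just ciphertext. Destroy or withhold the key and the data is gone, no hammer required.

    from cryptography.fernet import Fernet

    # In a real deployment the key lives in an external key manager or HSM,
    # never on the array it protects; a plain variable here is the assumption
    # that keeps this sketch short.
    key = Fernet.generate_key()

    def write_encrypted(path, data, key):
        # Only ciphertext ever touches the disk.
        with open(path, "wb") as f:
            f.write(Fernet(key).encrypt(data))

    def read_encrypted(path, key):
        with open(path, "rb") as f:
            return Fernet(key).decrypt(f.read())

    write_encrypted("record.bin", b"cardholder data", key)
    assert read_encrypted("record.bin", key) == b"cardholder data"

    # "Crypto-shredding": discard the key and anything left on the swapped-out
    # drive is unrecoverable, without degaussing or smashing a thing.
    key = None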


Dealing with Security Vendor Exaggerations

I generally don’t discuss “industry” issues here, since that’s what I get paid to do at my day job. And if I start offering for free here what I get paid to do over there, I may find myself offered the opportunity to do it for free on a permanent basis.

Mike Rothman runs one of the better industry-oriented blogs. He and I used to sit across the table when he ran marketing for one of the vendors I cover. I like Mike a lot better as an analyst. He’s running an interesting debate on the problems with the security market. The debate started with an article in Dark Reading, moved to Mike’s blog here, drew a response from Alan Shimel, and then Mike got the last word (for now). At the crux of their debate is the honesty of vendors and the aggressiveness of their sales and marketing tactics.

My opinion? I work with many excellent security vendors who are out to protect their customers and fairly make a little money on the way. But, every single day, either directly to me or relayed by my clients, vendors misrepresent their products or outright lie about capabilities. Usually it’s the marketing or sales teams, not the product teams. Do all vendors lie? No, but the good vendors out there are frequently forced into bad positions by their less scrupulous competition.

Yes, vendors lie. So does your Mom (remember the tooth fairy?), but that doesn’t make her the embodiment of pure evil. Probably. And some of this is simply passion for their products. Everyone thinks their baby is the best looking, smartest, most talented in the world, but there are still a lot of dumb, ugly couch potatoes. If you don’t believe in what you do, you shouldn’t be doing it.

So how do you cut through the crap? My self-serving answer is to use your friendly neighborhood analyst. The biggest part of our job, at least for those of us who are end-user focused, is to help make appropriate buying decisions and separate hype from reality. Our testing lab is the production environment of our end user clients- if a product doesn’t work, we’ll eventually hear about it. But if you don’t trust or can’t afford an analyst firm, just do what we do. Ask your vendors for customer references in production deployments; if a feature isn’t in production, with a reference-able client, it isn’t real. Then talk to your network and see what other companies like yours are doing and whether any have deployed the product.

Let’s be honest- most of you readers are either security types, or at least have a passing interest in security. It’s not like we trust anyone anyway.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.