
Friday Summary – July 24, 2009

“Hi, my name is Adrian, and, uh … I am a technologist” … Yep. I am. I like technology. Addicted to it, in fact. I am on ‘Hack A Day’ almost once a day. I want to go buy a PC and over-clock it, and I don’t even use PCs any more. I can get distracted by an interesting new technology or tool faster than a kid at Toys R Us. I have had a heck of a time finishing the database encryption paper, as I have this horrible habit of dropping right down into the weeds. Let’s look at a code sample! What does the API look like? What algorithms can I choose from? How fast is the response in key creation? Can I force a synch across key servers manually, or is that purely a scheduled job? How much of the API does each of the database vendors support? Yippee! Down the rabbit hole I go …

Then Rich slaps me upside the head and I get back to strategy and use cases. Focus on the customer problem. The strategy behind deployment is far more important to the IT and security management audiences than subtleties of implementation, and that should be the case. All of the smaller items are interesting, and may be an indicator of the quality of the product, but they are not a good indicator of the suitability of a product to meet a customer’s needs. I’ll head to the Technologists Anonymous meeting next week, just as soon as I wrap up the recommendations section of this paper.

But the character flaw remains. In college, studying software, I was not confident I really understood how computers worked until I went down into the weeds, or in this case, into the hardware. Once I designed and built a processor, I understood how all the pieces fit together and was far more confident making software design trade-offs. It’s why I find articles like this analysis of the iPhone 3GS design so informative: it shows how all of the pieces are designed and work together, and now I know why certain applications perform the way they do, and why some features kill battery life. I just gotta know how all the pieces fit together!

I think Rich has his addiction under control. He volunteers to do a presentation at Defcon/Black Hat each year, and after a few weeks of frenzied soldering, gets it out of his system. Then he’s good for the remainder of the year. I think that is what he is doing right now: breadboard and soldering iron out, making some device perform in a way nature probably did not intend. Last year it was a lamp that hacked your home network. God only knows what he is doing to the vacuum cleaner this year!

A couple of notes: We are having to manually approve most comments due to the flood of message spam. If you don’t see your comment, don’t fret; we will usually open it up within the hour. And we are looking for some intern help here at Securosis. There is a long list of dubious qualities we are looking for. Basically we need some help with admin and site work, and in exchange we will teach you the analyst game and get you involved with writing and other projects. And since our office is more or less virtual, it really does not matter where you live. And if you can write well enough, you can help me finish this damned paper and write the occasional blog post or two. We are going to look at this seriously after Black Hat, but not before, so get in contact with us next month if you are interested. We’re also thinking we might do this in a social media/community kind of way, and we have some cool ideas for making this more than the usual slave-labor internship.
As both Rich and I will be at Black Hat/Defcon next week, there will not be a Friday Summary, but we will return to our regularly scheduled programming on the 7th of August. We will be blogging live, and I assume we’ll even get a couple of podcasts in. Hope to see you at BH and the Disaster Recovery Breakfast at Cafe Lago!

Hey, I geek out more than once a year! I use microcontrollers in my friggen Halloween decorations, for Pete’s sake! -Rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich and Martin in Episode 159 of the Network Security Podcast.
  • Rich wrote an article on iPhone 3GS security over at TidBITS.

Favorite Securosis Posts
  • Rich: Adrian’s post on the FTC’s Red Flag rules.
  • Adrian: Amazon’s SimpleDB looks like it is going to be a very solid, handy development tool.

Other Securosis Posts
  • Electron Fraud, Central American Style

Project Quant Posts
  • Project Quant: Partial Draft Report

Favorite Outside Posts
  • Adrian: Jack Daniel’s pragmatic view on risk and security.
  • Rich: Techdulla with a short post that makes a very good point. I have a friend in exactly the same situation: their CIO has no idea what’s going on, but spends a lot of time speaking at vendor conferences.

Top News and Posts
  • Get ready for Badge Hacking!
  • RSnake and Inferno release two new browser hacks.
  • I want to be a cyber-warrior, I want to live a life of dang-er-ior, or something like that.
  • A great interview with our friend Stepto on gaming safety.
  • The Pwnie award nominations are up.
  • The dhcpclient vulnerability is very serious, and you should read this post.
  • There is another serious unpatched Adobe Flash/PDF vulnerability.
  • George Hulme with some sanity checking on malware numbers.
  • Medical breach reports flooding California.

Blog Comment of the Week
This week’s best comment comes from Bernhard in response to the Project Quant: Create and Test Deployment Package post:

I guess I’m mosty relying on the vendor’s packaging, being it opatch, yum, or msi. So, I’m mostly not repackaging things, and the tool …


Sorry, Data Labeling is *Not* the Same as DRM/ERM

First, a bit of a caveat: Andrew Jaquith of Forrester is an excellent analyst and someone I know and respect. This is a criticism of a single piece of his research, and nothing more.

Over at the Forrester Security Blog today, Andrew posted a change of policy on their use of two important data security terms. In short, they will now be using the term Data Labeling instead of Enterprise Rights Management:

So, here’s what Forrester will do in our future coverage. The ERM (enterprise rights management) acronym will vanish, except as a “bridge” term to jog memories. In the future, we will practice “truth in labeling” and call this ERM thing data labeling.

Unfortunately, this is a factually incorrect change, since data labeling already exists. I agree with Andrew that ERM is a terrible term – in large part because I’ve covered Enterprise Risk Management, and know there are about a dozen different uses for that acronym. Personally, I refuse to use ERM in this context, and use the term Enterprise DRM (Digital Rights Management) instead. Enterprise Rights Management is a term created to distinguish consumer DRM from enterprise DRM, in no small part because nearly everyone hates consumer DRM. The problem is that data labeling is also a specific technology with an established definition. One we’ve actively criticized in the past. Andrew refers back to the Orange Book:

Here’s what the Orange Book says about data labeling: “Access control labels must be associated with objects. In order to control access to information stored in a computer, according to the rules of a mandatory security policy, it must be possible to mark every object with a label that reliably identifies the object’s sensitivity level (e.g., classification), and/or the modes of access accorded those subjects who may potentially access the object.”

Sounds just like what ERM is doing, no? No – the difference is under the covers. Data labeling refers to tags or metadata attached to structured or unstructured data to define a classification level. Labels don’t normally include specific handling controls, since those are handled at a layer above the label itself (depends on the implementation). DRM is the process of encrypting data, then applying usage rights that are embedded in the encrypted object. For example, you encrypt a file and define who can view it, print it, email it, or whatever. Any application with access to decrypt the file is designed to respect and enforce those policies… as opposed to regular encryption, which includes no usage rights, and where anyone with the key can read the file. This shows the problem with consumer DRM and why it always breaks – in an enterprise we have more control over locking down the operating environment. In the consumer world, the protected file is always in a hostile environment. Since you have to have the key to decrypt the file, the key and the data are both potentially exposed.

Labeling and DRM may work together, but they are distinct technologies. You can label an individual record/row in a database, but you can’t apply DRM rights to it (I suppose you could, but it’s completely impractical and there isn’t a single tool on the market for it). You can apply DRM rights to a file without ever applying a classification level.

I asked Andrew about this over Twitter, and our conversation went like this (Andrew’s post is first):

@rmogull Really? Do you think “ERM” is actually a useful name for that category? Want to discuss alternatives?

@arj I use “Enterprise DRM” I also hate ERM and refuse to use it.

@rmogull Makes sense. Want to send me an e-mail (or do a blog post) critiquing the post? I’m a pretty good sport.

I think we are on the same page now, and thank Andrew for bringing this up and being willing to take some gentle lumps.
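
To make the difference concrete, here is a minimal, purely illustrative Python sketch (not any vendor’s implementation) contrasting a classification label with a DRM-style protected object. The Fernet wrapper from the cryptography library stands in for whatever encryption a real product would use, and the rights structure and viewer check are entirely hypothetical:

# Toy illustration only -- not how any real labeling or enterprise DRM product works.
from cryptography.fernet import Fernet

# Data labeling: metadata attached to the data. Nothing here enforces anything;
# enforcement happens in whatever layer reads the label.
labeled_record = {
    "data": "Q3 acquisition target list",
    "label": {"classification": "Confidential"},
}

# Enterprise DRM: the data is encrypted, and usage rights travel with the
# encrypted object. Only an application holding the key (the "viewer")
# can decrypt it, and that application is expected to enforce the rights.
key = Fernet.generate_key()
protected_object = {
    "ciphertext": Fernet(key).encrypt(b"Q3 acquisition target list"),
    "rights": {"view": ["alice@example.com"], "print": [], "email": []},
}

def drm_viewer(obj, user, action, key):
    """Hypothetical viewer: enforce the embedded rights before decrypting."""
    if user not in obj["rights"].get(action, []):
        raise PermissionError(f"{user} may not {action} this document")
    return Fernet(key).decrypt(obj["ciphertext"])

print(drm_viewer(protected_object, "alice@example.com", "view", key))   # allowed
# drm_viewer(protected_object, "bob@example.com", "print", key)         # raises PermissionError

The label carries no usage controls of its own; the DRM object bundles rights with the ciphertext and depends on a trusted viewer to honor them, which is exactly why it breaks down in a hostile consumer environment.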


Amazon’s SimpleDB

I have always felt the punctuated equilibrium of database technology is really slow, with long periods between the popularity of simple relational ‘desktop’ databases (Access, Paradox, DBIII+, etc.) and ‘enterprise’ platforms (DB2, Oracle, SQL Server, etc.). But for the first time in my career, I am beginning to believe we are seeing a genuine movement away from relational database technology altogether. I don’t really study trends in relational database management platforms like I did a decade or so ago, so perhaps I have been slightly ignorant of the progression, but I am somewhat surprised by the rapidity with which programmers and product developers are moving away from relational DB platforms and going to simple indexed flat files for data storage.

Application developers need data storage and persistence as much as ever, but it seems simpler is better. Yes, they still use tables, and they may use indices, but complex relational schemata, foreign keys, stored procedures, normalization, and triggers seem to be unwanted and irrelevant. Advanced relational technologies are being ignored, especially by web application developers, both because they want to manage the functions within the application they know (as opposed to the database they don’t), and because it makes for a cleaner design and implementation of the application. What has surprised me is the adoption of indexed flat files for data storage in lieu of any relational engine at all. Flat files offer a lot of flexibility, they can deal with bulk data insertions very quickly, and depending upon how they are implemented they may offer extraordinary query response. It’s not like ISAM and other variants ever went away, as they remain popular in everything from mainframes to control systems. We moved from basic flat files to relational platforms because they offered more efficient storage, but that requirement is long dead. We have stuck with relational platforms because they offer data integrity and transactional consistency lacking in simpler data storage platforms, as well as excellent lookup speed on reasonably static data sets, and they provide a big advantage with pre-compiled, execution-ready stored procedure code. However, when the primary requirement is quick collection and scanning of bulk data, you don’t really care about those features so much. This is one of the reasons many security product vendors moved to indexed flat files for data storage, as it offers faster uploads, dynamic structure, and correlation capabilities, but that is a discussion for another post.

I have been doing some research into ‘cloud’ service & security technologies of late, and a few months ago I was reminded of Amazon Web Services’ offering, Amazon SimpleDB. It’s a database, but in the classic sense, or what databases were like prior to the relational database model we have been using for the last 25 years. Basically it is a flat file, with each entry having attached name/value attribute pairs. Sounds simple because it is. It’s a bucket to dump data in, and you have the flexibility to introduce as much or as little virtual structure into it as you care to. It has a query interface, with many of the query language constructs that SQL dialects offer. It appears to have been quietly launched in 2007, and I am guessing it was built by Amazon to solve their own internal data storage needs. In May of this year they augmented the query engine to support comparison operators such as ‘contains’ and several features for managing result sets.

At this point, the product seems to have reached a state where it offers enough functionality to support most web application developers. You will be giving up a lot of (undesired?) functionality, but if you just want a simple bucket to dump data into with complete flexibility, this is a logical option. I am a believer that ‘cheaper, faster, easier’ always wins, and Amazon’s SimpleDB fits that model. It’s feasible that this technology could snatch away the low end of the database market that is not interested in relational functions.
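
To give a sense of how bare-bones the programming model is, here is a rough sketch using the Python boto library (the domain name, attributes, and query are made up for illustration, and you would supply your own AWS credentials):

# Sketch only -- assumes the boto library and valid AWS credentials.
import boto

sdb = boto.connect_sdb()  # picks up credentials from the environment

# A "domain" is just a bucket of items; there is no schema to define up front.
domain = sdb.create_domain("security_events")

# Each item is a set of name/value attribute pairs -- add whatever you like.
domain.put_attributes("event-0001", {
    "source_ip": "10.1.2.3",
    "action": "login_failure",
    "timestamp": "2009-07-24T10:15:00Z",
})

# The query interface looks a lot like SQL, minus joins and the rest.
results = domain.select(
    "select * from `security_events` where action = 'login_failure'"
)
for item in results:
    print(item.name, dict(item))

No schema, no joins, no stored procedures: just items, attributes, and a SQL-ish select. That simplicity is the whole point.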


Premature Cyberjaculation: Security, Skepticism, and the Press

Over the past few weeks we’ve seen yet another two security stories get completely blown out of proportion in the press. The first was, of course, the DDoS attacks that were improperly attributed by most commentators to North Korea. The second, no surprise, was the Great Twitter Hack of 2009, which might also be referred to as the Great Cloud Security Collapse. In both cases the stories were not only blown completely out of proportion, but many of the articles devoted more space to hyperbole and innuendo than to facts. In the meantime, we had a series of unpatched vulnerabilities being exploited in Internet Explorer and Firefox, placing users at very real risk of becoming victims.


Electron Fraud, Central American Style

When I was a kid, the catchphrase “Computers don’t lie” was very common, implying that machines were unbiased and accurate, in order to engender faith in the results they produced. Maybe that’s why I am in security – because I found the concept to be very strange. Machines, and certainly computers, do pretty much exactly what we tell them to do, and implicit trust is misguided. As their inner workings are rarely transparent, they are perfectly suited to hiding all sorts of shenanigans, especially when under the control of power-hungry despots.

It is being reported that Honduran law enforcement has seized a number of computers that contain certified results for an election that never took place. It appears that former President Manuel Zelaya attempted to rig the vote on constitutional reform, and might have succeeded if he had not been booted prior to the vote. I cannot vouch for the quality of the translated versions, but here is an excerpt:

The National Direction of Criminal Investigation confiscated computers in the Presidential House in which were registered the supposed results of the referendum on the reform of the Constitution that was planned by former President Manuel Zelaya on last June 28, the day that he was ousted. “This group of some 45 computers, by the appearance that present, they would be used for the launch of the supposed final results of the quarter ballot box”, he explained. The computers belonged to the project ‘Learns’ of the Honduran Counsel of Science and Technology directed towards rural schools. All of the computers had been lettered with the name of the department for the one that would transmit the information accompanied by a document with the headline: “Leaf of test”, that contained all the data of the centers of voting.

From the translated articles, it’s not clear to me whether these computers were going to be used in the polling places and would submit the pre-loaded results, or whether they were going to mimic the on-site computers and upload fraudulent data. You can do pretty much anything you want when you have full access to the computer. Had this effort been followed through, it would have been difficult to detect, and the results would have been considered legitimate unless proven otherwise.


FTC Requirements for Customer Data

There was an article in Sunday’s Arizona Republic regarding the Federal Trade Commission’s requirements for any company handling sensitive customer information. Technically this law went into effect back in January 2008, but it was not enforced due to lack of awareness. Now that the FTC has completed its education and awareness program, and enforcement begins August 1st of this year, it’s time to begin discussing these guidelines. This means that any business that collects, stores, or uses sensitive customer data needs a plan to protect data use and storage.

The FTC requirements are presented in two broad categories. The first part spells out what companies can do to detect and spot fraud associated with identity theft. The Red Flags Rule spells out the four required components:

  • Document specific ‘red flags’ that indicate fraud for your type of business.
  • Document how your organization will go about detecting those indicators.
  • Develop guidelines on how to respond when they are encountered.
  • Periodically review the process and indicators for effectiveness and changes to business processes.

The second part is about protecting personal information and safeguarding customer data. It’s pretty straightforward: know what you have, keep only what you need, protect it, periodically dispose of data you don’t need, and have a plan in case of a breach. And, of course, document these points so the FTC knows you are in compliance. None of this is really ground-breaking, but it is a solid generalized approach that will at least get businesses thinking about the problem. It’s also broadly applied to all companies, which is a big change from what we have today.

After reviewing the overall program, there are several things I like about the way the FTC has handled this effort. It was smart to cover not just data theft, but how to spot fraudulent activity as part of normal business operations. I like that the recommendations are flexible, and that the FTC did not mandate products or processes, only that you document. I like that they were pretty clear on who this applies to and who it does not. I like the way that reducing sensitive data retention is shown as a natural way to simplify requirements for many companies. Finally, providing simple educational materials, such as this simplified training video, is a great way to get companies jump-started, and gives them some material to train their own people.

Most organizations are going to be besieged by vendors with products that ‘solve’ this problem, and to them I can only say ‘caveat emptor’. What I am most interested in is the fraud detection side: both what the red flags are for various business verticals, and how and where they detect them. I say that for several reasons, but specifically because the people who know how to detect fraud within the organization are going to have a hard time putting it into a checklist and training others. For example, most accountants I know still use Microsoft Excel to detect fraud on balance sheets! Basically they import the balance sheet and run a bunch of macros to see if there is anything ‘hinky’ going on. There is no science to it, but practical experience tells them when something is wrong. Hopefully we will see people share their experiences and checklists with the community at large.

I think this is a good basic step forward to protect customers and make companies aware of their custodial responsibility to protect customer data.
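
The article doesn’t say what those Excel macros actually check, but as one illustration of how a ‘hinky’ test can be reduced to a documented red flag, here is a small Python sketch of a Benford’s-law leading-digit check, a common (if crude) fraud-screening heuristic; the line items below are made up:

# Illustrative sketch: compare the leading-digit distribution of a set of figures
# against Benford's law, a common (if crude) fraud-screening heuristic.
import math
from collections import Counter

def benford_deviation(amounts):
    """Return per-digit (observed, expected) frequencies for leading digits 1-9."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    if not digits:
        return {}
    counts = Counter(digits)
    total = len(digits)
    report = {}
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = counts.get(d, 0) / total
        report[d] = (observed, expected)
    return report

# Example: line items from a (made-up) balance sheet
line_items = [4521.10, 1820.00, 1175.43, 920.12, 3310.00, 1499.99, 2875.50]
for digit, (obs, exp) in benford_deviation(line_items).items():
    print(f"digit {digit}: observed {obs:.3f} vs expected {exp:.3f}")

Large deviations from the expected distribution don’t prove fraud, but they are exactly the kind of indicator that can be documented, detected, and periodically reviewed under the FTC’s framework.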


Friday Summary – July 17, 2009

I apologize to those of you reading this on Saturday morning – with the stress of completing some major projects before Black Hat, I forgot that to push the Summary out Friday morning, we have to finish it off Thursday night. So much for the best laid plans and all.

The good news is that we have a lot going on at Black Hat. Adrian and I will both be there, and we’re running another Disaster Recovery Breakfast, this time with our friends over at Threatpost. I’m moderating the VC panel at Black Hat on Wednesday, and will be on the Defcon Security Jam 2: The Fails Keep on Coming panel. This is, by far, my favorite panel, mostly because of the on-stage beverages provided. Since I goon for the events (that means work), Adrian will be handling most of our professional meetings for those of you who are calling to set them up. To be honest, Black Hat really isn’t the best place for these unless you catch us the first day (for reasons you can probably figure out yourself). This is the one conference a year when we try to spend as much of our time as possible in talks, absorbing information. There is some excellent research on this year’s agenda, and if you have the opportunity to go I highly recommend it.

I think it’s critical for any security professional to keep at least half an eye on what’s going on over on the offensive side. Without understanding where the threats are shifting, we’ll always be behind the game. I’ve been overly addicted to the Tour de France for the past two weeks, and it’s fascinating to watch the tactical responsiveness of the more experienced riders as they intuitively assess, dismiss, or respond to the threats around them. While the riders don’t always make large moves, they can best sense what might happen around the next turn and position themselves to take full advantage of any opportunities, or head off attacks (yes, they’re called attacks) before they pose a risk. Not to over-extend another sports analogy, but by learning what’s happening on the offensive side, we can better position ourselves to head off threats before they overly impact our organizations. And seriously, it’s a great race this year with all sorts of drama, so I highly recommend you catch it, especially starting next Tuesday when they really hit the mountains and start splitting up the pack.

-Rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Martin interviews Steve Ocepek on this week’s Network Security Podcast (plus we cover a few major news stories).
  • Rich is quoted in a Dark Reading article on implementing least privileges.
  • Rich is quoted alongside former Gartner co-worker Jeff Wheatman on database privileges over at Channel Insider.
  • John Sawyer refers to our Database Activity Monitoring paper in another Dark Reading article.

Favorite Securosis Posts
  • Rich: Adrian’s Technology vs. Practicality really hit home. I miss liking stuff.
  • Adrian: Database Encryption, Part 6: Use Cases. Someone has already told us privately that one of the use cases exactly described their needs, and they are off and implementing.

Other Securosis Posts
  • Oracle Critical Patch Update, July 2009
  • Microsoft Patched; Firefox’s Turn
  • Second Unpatched Microsoft Flaw Being Exploited
  • Subscribe to the Friday Summary Mailing List
  • Pure Extortion

Project Quant Posts
We’re getting near the end of phase 1, and here’s the work in progress:
  • Project Quant: Partial Draft Report

Favorite Outside Posts
  • Adrian: Amrit Williams’ North Korea Cyber Scape Goat of the World. The graphic is priceless!
  • Rich: David and Alex over at the New School preview their Black Hat talk.

Top News and Posts
  • Critical JavaScript Vulnerability in Firefox 3.5.
  • Microsoft Windows and Internet Explorer security issues patched.
  • Oracle CPU for July 2009.
  • Goldman Trading Code Leaked.
  • Mike Andrews has a nice analysis of Google Web “OS”.
  • Twitter Hack makes headlines.
  • LexisNexis breached by the mob?
  • Vulnerability scanning the clouds.
  • State Department worker sentenced for snooping passports.
  • Casino sign failure (pretty amusing).
  • PayPal reports security blog to the FBI for a phishing screenshot.
  • A school sues a bank over theft due to a hacked computer. This is a tough one; the school was hacked and proper credentials stolen, but according to their contract those transfers shouldn’t have been allowed, even from the authenticated system/account.
  • Nmap 5 released – Ed’s review.

Blog Comment of the Week
This week’s best comment comes from SmithWill in response to Technology vs. Practicality:

Be weary of the CTO/car fanatic. Over-built engines=over instrumented, expensive networks. But they’re smoking fast!


Oracle Critical Patch Update, July 2009

If you have read my overviews of Oracle database patches for any length of time, you are probably aware of my bias against the CVSS scoring system. It’s a yardstick to measure the relative risk of a vulnerability, but it’s a generic measure, and a confusing one at that. You have to start somewhere, but it’s just a single indicator, and you do need to take the time to understand how the threats apply (or don’t) to your environment. In cases where I have had a complete understanding of the nature of a database threat, and felt that the urgency was great enough to disrupt patching cycles to rush the fix into production, CVSS has only jibed with my opinion around 60% of the time. This is because access conditions typically push the score down, and most developers have preconceived notions about how a vulnerability would be exploited. They fail to understand how attackers turn all of your assumptions upside down, and are far more creative in finding avenues to exploit than developers anticipate. CVSS scores reflect this overconfidence.

Oracle announced the July 2009 “Critical Patch Update Advisory” today. There are three fairly serious database security fixes, and two more fixes for serious issues in Oracle Secure Backup. The problem with this advisory (for me, anyway) is that none of my contacts know the specifics behind CVE-2009-1020, CVE-2009-1019, or CVE-2009-1963. Further, NIST, CERT, and MITRE have not published any details at this time. The best information I have seen is in Eric Maurice’s blog post, but it’s little more than the security advisory itself. Most of us are in the dark on these, so meaningful analysis is really not possible at this time. Still, remotely exploitable vulnerabilities that bypass authentication are very high on my list of things to patch immediately. And compromise of the TNS service in the foundation layer, which two of the three database vulnerabilities appear to involve, gives an attacker both a way to probe for available databases and a way to exploit peer database trust relationships. I hate to make the recommendation without a more complete understanding of the attack vectors, but I have to recommend that you patch now.
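
To illustrate why access conditions drag scores down, here is a quick sketch of the CVSS v2 base score equation with the published metric weights (transcribed from memory, so treat it as illustrative and use the official calculator for real scoring):

# CVSS v2 base score sketch -- weights per the published v2 specification,
# transcribed from memory; illustrative only.

AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}      # Access Vector
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}              # Access Complexity
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}        # Authentication
IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}   # C/I/A impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Same total impact, different access conditions:
print(cvss2_base("network", "low", "none", "complete", "complete", "complete"))   # ~10.0
print(cvss2_base("local", "high", "single", "complete", "complete", "complete"))  # ~6.0

The same complete confidentiality/integrity/availability impact drops from roughly 10.0 to roughly 6.0 once the access vector is local, complexity is high, and authentication is required – regardless of how creative an attacker might be in getting there.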


Technology vs. Practicality

I am kind of a car nut. Have been since I was little, when my dad took me to my first auto race at the age of four (it was at Laguna Seca, a Can-Am race. Amazing!). I tend to get emotionally attached to my vehicles. I buy them based upon how they perform, how they look, and how they drive. I am fascinated by the technology of everything from tires to turbos. I am a tinkerer, and I do weird things like change bushings that don’t need to be changed, rebuild a perfectly good motor, or tweak engine management computer settings just because I can make them better. I have heavily modified every vehicle I have ever owned except the current one. I acknowledge it’s not rational, but I like cars, and this has been a hobby now for many years.

My wife is the opposite. She drives a truck. For her, it’s a tool she uses to get her job done. Like a drill press or a skill saw, it’s just a mechanical device on a depreciation curve. Any minute of attention it requires above filling the tank with gasoline is one too many. It’s stock except for the simple modifications I made to it, and it is fabulously maintained, both facts she is willfully unaware of. Don’t get me wrong, she really likes her truck because it’s comfortable, with good air and plenty of power, but that’s it. After all, it’s just a vehicle.

As a CTO, I was very much in the former camp when it came to security and technology. I love technology, and I get very excited about the possibilities of how we might use new products, and the philosophical advantages new developments may bring. It’s common, and I think that is why so many CTOs become evangelists. But things are different as an analyst. I have been working with Rich for a little over a year now, and it dawned on me how much my opinion on technology has changed, and how differently I now approach discussing technology with others.

We had a conference call with an email security vendor a couple weeks ago, and they have some really cool new technology that I think will make their products better. But I kept my mouth shut about how cool I think it is because, as an analyst, that’s not really the point. I kept my mouth shut because most of their customers are not going to care. They are not going to care because they don’t want to spend a minute more considering email security and anti-spam than they have to. They want to set policies and forget about it. They want to spend a couple hours a month remediating missing email or investigating complaints of misuse, but that’s it. It’s a tool used to get their job done, and they completely lack any emotional attachment their vendor might have. Cool technology is irrelevant. It has been one of my challenges in this role to subjugate enthusiasm to practicality, and what is possible to just what is needed.


Microsoft Patched; Firefox’s Turn

While Microsoft has released patches for various vulnerabilities, including the two active zero-day attacks, Firefox is now being actively exploited. According to the Mozilla Security Blog, there is a flaw in how Firefox handles JavaScript. We suggest you follow the instructions in that post to mitigate the flaw until they release a patch (which should be soon). Not that we plan to post every time some piece of software is exploited or patched, but this series seems to… bring some balance to the Force.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.