Securosis Research

If You Had a 3G iPad Before June 9, Get a New SIM

If you keep up with the security news at all, you know that on June 9th the email addresses and device ICC-IDs of at least 114,000 3G iPad subscribers were exposed. Leaving aside any of the hype around disclosure, FBI investigations, and bad PR, here are the important bits:

  • We don’t know if bad guys got their hands on this information, but it is safest to assume they did.
  • For most of you, having your email address potentially exposed isn’t a big deal. It might be a problem for some of the famous and .gov types on the list.
  • The ICC-ID is the unique code assigned to the SIM card. This isn’t necessarily tied to your phone number, but…
  • It turns out there are trivial ways to convert the ICC-ID into the IMSI here in the US, according to Chris Paget (someone who knows about these things).
  • The IMSI is the main identifier your mobile operator uses to identify your phone, and it is tied to your phone number.
  • If you know an IMSI, and you are a hacker, it greatly aids everything from location tracking to call interception. This is a non-trivial problem, especially for anyone who might be a target of an experienced attacker… like all you .gov types.
  • You don’t make phone calls on your iPad, but any other 3G data is potentially exposed, as is your location.

Everything you need to know is in this presentation from the Source Boston conference by Nick DePetrillo and Don Bailey: http://www.sourceconference.com/bos10pubs/carmen.pdf

Realistically, very few iPad 3G owners will be subject to these kinds of attacks, even if bad guys accessed the information, but that doesn’t matter. Replacing the SIM card is an easy fix, and I suggest you call AT&T and request a new one.
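As background on why an ICC-ID leak matters, here is a minimal Python sketch (my illustration, not from the post) of the ICCID layout defined by ITU-T E.118: an “89” telecom prefix, a country code, a carrier-assigned issuer identifier, an account number, and a trailing Luhn check digit. Field widths vary by country and carrier, the sample number is made up, and this deliberately does not attempt the ICC-ID-to-IMSI conversion Chris Paget describes.

```python
def luhn_valid(number: str) -> bool:
    """True if a digit string passes the Luhn check (used for ICCIDs)."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def describe_iccid(iccid: str) -> dict:
    """Split an ICCID into rough ITU-T E.118 fields.

    Field widths vary by country and carrier; the slices below assume the
    common layout of US SIMs, purely for illustration.
    """
    assert iccid.startswith("89"), "ICCIDs begin with 89, the telecom industry code"
    return {
        "industry": iccid[:2],    # 89 = telecommunications
        "country": iccid[2:4],    # US SIMs commonly encode 01 here
        "issuer": iccid[4:7],     # carrier-assigned issuer identifier
        "account": iccid[7:-1],   # individual account number, often sequential
        "luhn_ok": luhn_valid(iccid),
    }


# A made-up, Luhn-valid ICCID -- not a real subscriber's number.
print(describe_iccid("8901410321111185108"))
```

The structured, often sequential account field is exactly why a leaked batch of ICC-IDs is more useful to an attacker than a list of random identifiers would be.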


Draft Data Security Survey for Review

Hey everyone,

As mentioned the other day, I’m currently putting together a big data security survey to better understand which data security technologies you are using, and how effective they are. I’ve gotten some excellent feedback in the comments (and a couple of emails), and have put together a draft survey for final review before we roll this out. A couple of things to keep in mind if you have the time to take a look:

  • I plan on trimming this down more, but I wanted to err on the side of including too many questions/options rather than too few. I could really use help figuring out what to cut.
  • Everyone who contributes will be credited in the final report.
  • After a brief bit of exclusivity (45 days) for our sponsor, all the anonymized raw data will be released to the community so you can perform your own analysis. This will be in spreadsheet format, just the same as I get it from SurveyMonkey.

The draft survey is up at SurveyMonkey for review, because it is a bit too hard to replicate here on the site. To be honest, I almost feel like I’m cheating when I develop these on the site with all the public review, since the end result is way better than what I would have come up with on my own. Hopefully giving back the raw data is enough to compensate all of you for the effort.


Friday Summary: June 4, 2010

There’s nothing like a crisis to bring out the absolute stupidity in a person… especially if said individual works for a big company or government agency. This week alone we’ve had everything from the ongoing BP disaster (the one that really scares me) to the Israeli meltdown. And I’m sure Sarah Palin is in the mix there someplace. Crisis communications is an actual field of study, with many examples of how to manage your public image even in the midst of a major meltdown. Heck, I’ve been trained on it as part of my disaster response work. But it seems that everyone from BP to Gizmodo to Facebook is reading the same (wrong) book:

  • Deny that there’s a problem.
  • When the first pictures and videos show up, state that there was a minor incident and you value your customers/the environment/the law/supporters/babies.
  • Quietly go to full lockdown and try to get government/law enforcement to keep people from finding out more.
  • When your lockdown attempts fail, go public and deny there was ever a coverup.
  • When pictures/video/news reports show everyone that this is a big fracking disaster, state that although the incident is larger than originally believed, everything is under control.
  • Launch an advertising campaign with a lot of flowers, babies, old people, and kittens. And maybe some old black and white pictures with farms, garages, or ancestors who would be the first to string you up for those immoral acts.
  • Get caught on tape or in an email/text blaming the kittens.
  • Try to cover up all the documentation of failed audits and/or lies about security and/or safety controls.
  • State that you are in full compliance with the law and take safety/security/fidelity/privacy/kittens very seriously.
  • As the incident blows completely out of control, reassure people that you are fully in control.
  • Get caught saying in private that you don’t understand what the big deal is. It isn’t as if people really need kittens.
  • Blame the opposing party/environmentalists/puppies/your business partners.
  • Lie about a bunch of crap that is really easy to catch.
  • Deny lying, and ignore those pesky videos showing you are (still) lying.
  • State that your statements were taken out of context. When asked about the context, lie.
  • Apologize. Say it will never happen again, and that you would take full responsibility, except your lawyers told you not to.
  • Repeat.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mike Rothman on Tabnapping at SC Magazine.
  • The Network Security Podcast, Episode 199.
  • Rich presented on Data Breaches for whitehatworld.com; it should show up on their archive page soon.

Favorite Securosis Posts

  • Rich: NSO Quant: Monitor Process Map. These Quant projects keep getting bigger each time we do one, but it’s nice to do some real primary research.
  • Adrian Lane: The Hidden Costs of Security.
  • Mike Rothman: Understanding and Selecting SIEM/LM: Correlation and Alerting. We are working through the SIEM/Log Management research. Check it out and provide comments, whether you agree or disagree with our perspectives.

Other Securosis Posts

  • The Public/Private Pendulum Keeps Swinging.
  • White Paper Released: Endpoint Security Fundamentals.
  • Thoughts on Privacy and Security.
  • Incite 6/2/2010: Smuggler’s Blues.
  • On “Security engineering: broken promises”.
  • FireStarter: In Search of… Solutions.

Favorite Outside Posts

  • Rich: Inside the heart of a QSA. As much as we complain about how bad PCI assessors can be, the good ones often find themselves struggling with organizations that only want a rubber stamp. The bad news is there are very few jobs that don’t end up being driven by rote over time. That’s why I like security – it is one of the few careers with options to refresh yourself every few years.
  • Pepper: Android rootkit is just a phone call away. It’s actually triggered by a call, not installed by one, but still very cool – in a bad way.
  • Adrian Lane: Detecting malicious content in shell code.
  • Mike Rothman: Windows, Mac, or Linux: It’s Not the OS, It’s the User. The weakest link in the chain remains the user. But we can’t kill them, so we need to deal with them.

Project Quant Posts

  • DB Quant: Secure Metrics, Part 1, Patch.
  • NSO Quant: Monitor Process Map.
  • DB Quant: Discovery Metrics, Part 4, Access and Authorization.
  • DB Quant: Discovery and Assessment Metrics, Part 3, Assess Vulnerabilities and Configuration.

Research Reports and Presentations

  • White Paper: Endpoint Security Fundamentals.
  • Understanding and Selecting a Database Encryption or Tokenization Solution.
  • Low Hanging Fruit: Quick Wins with Data Loss Prevention.

Top News and Posts

  • MS plans 10 new patches. SharePoint and IE are the big ones.
  • Cyber Thieves Rob Treasury Credit Union.
  • Ukrainian arrested in India on TJX data-theft charges. These incidents go on for years, rather than days or even months.
  • iPhone PIN code worthless. Rich published on this a long time ago, and while it was a known flaw, the automounting on Ubuntu is new and disturbing. Previously it looked like you had to jailbreak the iPhone first.
  • Viral clickjacking ‘Like’ worm hits Facebook users.
  • ATM Skimmers. Another installment from Brian Krebs on ATM skimmers.
  • 30 vs. 150,000. Adam teaches Applied Risk Assessment 101.
  • Trojan targets Anti-Phish software.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Michael O’Keefe, in response to Code Re-engineering:

  Re-engineering can work, Spolsky inadvertently provides a great example of that, and proves himself wrong. I guess that’s the downside to blogs, and trying to paint things in a black or white manner. He had some good points, one was that when Netscape open sourced the code, it wasn’t working, so the project got off to a slow start. But the success of Mozilla (complete rewrite of Netscape) has since proved him wrong. Once Bill Gates realized the importance of the internet, and licensed the code from Spyglass (I think) for IE, MS started including it on every new release of Windows. In this typical fashion, they slowly whittled away at Netscape’s market share, so Netscape had to innovate. The existing code base was


Thoughts on Privacy and Security

I was catching up on my reading today, and this post by Richard Bejtlich reminded me of the tension we sometimes see between security and privacy. Richard represents the perspective of a Fortune 5 security operator who is tasked with securing customer information and intellectual property while facing a myriad of international privacy laws – some of which force us to reduce security for the sake of privacy (read the comments).

I’ve always thought of privacy from a slightly different perspective. Privacy traditionally falls into two categories:

  • The right to be left alone (just ask any teenage boy in the bathroom).
  • The right to control what people know about you.

According to the dictionary on my Mac, privacy is: the state or condition of being free from being observed or disturbed by other people: she returned to the privacy of her own home. My understanding is that it is only fairly recently that we’ve added personal information into the mix. We are also in the midst of a massive upheaval of social norms, enabled by technology and the distribution and collection of information, that changes the scope of “free from being observed.” Thus, in the information age, privacy is becoming as much about controlling information about us as it is about physical privacy.

Now let’s mix in security, which I consider a mechanism to enforce privacy – at least in this context. If we think about our interactions with everyone from businesses and governments to other individuals, privacy consists of three components:

  • Intent: What I intend to do with the information you give me, whether it is the contents of a personal conversation or a business transaction.
  • Communication: What I tell you I intend to do with said information.
  • Capability: My ability to maintain and enforce the social (or written) contract defined by my intent and communications.

Thus I see security as a mechanism of capability. The role of “security” is to maintain whatever degree of protection around personal information the organization intends and communicates through its privacy policy – which might be the best or worst in the world, but the role of security is to enforce that policy as well as possible, whatever it is. Companies tend to get into trouble either when they fail to meet their stated policies (for business or technical/security reasons), or when their intent is incompatible with their legal requirements.

This is how I define privacy on the collection side – but it has nothing to do with protecting or managing your own information, nor does it address larger societal issues such as changing ownership of information, changing social mores, changes in personal comfort over time, or collection of information in non-contracted situations (e.g., public movement).

The real question then emerges: is privacy even possible? As Adam Shostack noted, our perceptions of privacy change over time. What I deem acceptable to share today will change tomorrow. But once information is shared, it is nearly impossible to retract. Privacy decisions are permanent, no matter how we may feel about them later. There is no perfect security, but once private information becomes public, it is public forever. Isolated data will be aggregated and correlated. It used to require herculean efforts to research and collect public records on an individual. Now they are for sale. Cheap. Online. To anyone.

We share information with everyone, from online retailers, to social networking sites, to the blogs we read. There is no way all of these disparate organizations can effectively protect all our information, even if we wanted them to. Privacy decisions and failures are sticky.

I believe we are in the midst of a vast change in how our society values and defines privacy – one that will evolve over years. This doesn’t mean there’s no such thing as privacy, but it does mean that today we lack consistent mechanisms to control what others know about us. Without perfect security there cannot be complete privacy, and there is no such thing as perfect security. Privacy isn’t dead, but it is most definitely changing in ways we cannot fully predict. My personal strategy is to compartmentalize and use a diverse set of tools and services, limiting how much any single one collects on me. It’s probably little more than privacy theater, but it helps me get through the day as I stroll toward an uncertain future.


Quick Wins with DLP Presentation

Yesterday I gave this presentation as a webcast for McAfee, but somehow my last 8 slides got dropped from the deck. So, as promised, here is a PDF of the slides. McAfee is hosting the full webcast deck over at their blog. Since we don’t host vendor materials here at Securosis, here is the subset of my slides. (You might still want to check out their full deck, since it also includes content from an end user.) Presentation: Quick Wins with DLP


Thoughts on Diversity and False Diversity

Mike Bailey highlights a key problem with web applications in his post on diversity. Having dealt with these issues as a web developer (a long time ago), I want to add a little color.

We tend to talk about diversity as being good, usually with biological models and discussions of monoculture. I think Dan Geer was the first to call out the dangers of using only a single computing platform, since one exploit then has the capability of taking down your entire organization. But the heterogeneous/homogeneous tradeoffs aren’t so simple. Diversity reduces the risk of a catastrophic single point of failure, at the cost of increasing the attack surface and the number of potential points of failure.

Limited diversity is good for something like desktop operating systems. A little platform diversity can keep you running when something very bad hits the primary platform and takes those systems down. The tradeoff is that you now have multiple profiles to protect, with a greater number of total potential vulnerabilities. For example, the Air Force standardized its Windows platforms to reduce patching costs and time. What we need, on the OS side, is limited diversity: a few standard platform profiles that strike the balance between reducing the risk that a single problem will take us completely down, and maintaining manageability through standardization.

But back to Mike’s post and web applications… With web applications, what we mostly see is false diversity. The application itself is a monolithic entity, but the use of multiple frameworks and components only increases the potential attack surface. With desktop operating systems, diversity means a hole in one won’t take them all down. With web applications, use of multiple languages/frameworks and even platforms increases the number of potential vulnerabilities, since exploitation of any one of those components can generally take down/expose the entire application.

When I used to develop apps, like every web developer at the time, I would often use a hodgepodge of different languages, components, widgets, etc. Security wasn’t the same problem then that it is now, but early on I learned that the more different things I used, the harder it was to maintain my app over time. So I tended towards standardization as much as possible. We’re doing the same thing with our sooper sekret project here at Securosis – sticking to as few base components as we can, which we will then secure as well as we can.

What Mike really brings to the table is the concept of how to create real diversity within web applications, as opposed to false diversity. Read his post, which includes things like centralized security services and application boundaries. Since with web applications we don’t control the presentation layer (the web browser, which is a ‘standard’ client designed to accept input from nearly anything out there), new and interesting boundary issues are introduced – like XSS and CSRF. Adrian and I talk about this when we advise clients to separate encryption from both the application and the database, or to use tokenization. Those architectures increase diversity and boundaries, but that’s very different from using 8 languages and widgets to build your web app.
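To make that last point concrete, here is a minimal sketch of a tokenization boundary – my illustration, not anything from Mike’s post or a specific product. The TokenVault name and its in-memory storage are hypothetical; a real service would persist mappings encrypted, authenticate callers, and audit every crossing of the boundary.

```python
import secrets


class TokenVault:
    """Toy tokenization service: the only component that ever sees real values.

    The application and database layers store and pass around only random
    tokens, so compromising either does not expose the underlying data.
    """

    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, value: str) -> str:
        # The token is random, with no mathematical relationship to the value.
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Crossing back over the boundary is an explicit, auditable event.
        return self._token_to_value[token]


vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # only the vault holds the real PAN
print(token)                     # this is all the app and database ever store
print(vault.detokenize(token))   # retrieval happens only through the vault
```

The architectural payoff is real diversity with a real boundary: one small component to secure, instead of sensitive data smeared across every language, framework, and widget in the stack.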


FireStarter: The Only Value/Loss Metric That Matters

As some of you know, I’ve always been pretty critical of quantitative risk frameworks for information security, especially the Annualized Loss Expectancy (ALE) model taught in most of the infosec books. It isn’t that I think quantitative is bad, or that qualitative is always materially better, but I’m not a fan of funny math.

Let’s take ALE. The key to the model is that your annual predicted losses are the losses from a single event, times the annual rate of occurrence. This works well for some areas, such as shrinkage and laptop losses, but is worthless for most of information security. Why? Because we don’t have any way to measure the value of information assets. Oh, sure, there are plenty of models out there that fake their way through this, but I’ve never seen one that is consistent, accurate, and measurable. The closest we get is Lindstrom’s Razor, which states that the value of an asset is at least as great as the cost of the defenses you place around it. (I consider that an implied or assumed value, which may bear no correlation to the real value.)

I’m really only asking for one thing out of a valuation/loss model: The losses predicted by a risk model before an incident should equal, within a reasonable tolerance, those experienced after an incident. In other words, if you state that asset X has $Y value, when you experience a breach or incident involving X, you should experience $Y + (response costs) in losses. I added “within a reasonable tolerance” since I don’t think we need complete accuracy, but we should at least be in the ballpark. You’ll notice this also means we need a framework, process, and metrics to accurately measure losses after an incident.

If someone comes into my home and steals my TV, I know how much it costs to replace it. If they take a work of art, maybe there’s an insurance value or similar investment/replacement cost (likely based on what I paid for it). If they steal all my family photos? Priceless – since they are impossible to replace, and I can’t put a dollar sign on their personal value. What if they come in and make a copy of my TV, but don’t steal it? Er… Umm… Ugh.

I don’t think this is an unreasonable position, but I have yet to see a risk framework with a value/loss model that meets this basic requirement for information assets.
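For those who haven’t seen it written out, here is the ALE arithmetic in a minimal sketch, with made-up numbers, plus the before/after validation test argued for above. The function names, figures, and 25% tolerance are all my own illustration, not part of any formal framework.

```python
def ale(sle: float, aro: float) -> float:
    """Annualized Loss Expectancy: single loss expectancy times annual rate of occurrence."""
    return sle * aro


def model_holds(predicted: float, actual: float, tolerance: float = 0.25) -> bool:
    """The post's test: pre-incident predictions should match post-incident
    losses within a reasonable tolerance (25% is an arbitrary choice here)."""
    return abs(actual - predicted) <= tolerance * predicted


# ALE works tolerably for hard assets, e.g. laptops with a known replacement cost:
predicted = ale(sle=2_000, aro=15)            # $30,000 predicted per year
print(model_holds(predicted, actual=33_000))  # True -- within 25% of prediction

# For information assets there is no reliable way to set the SLE -- what is
# the "loss from a single event" when data is copied but not destroyed? --
# so the test fails in exactly the way the post describes.
print(model_holds(predicted, actual=4_000_000))  # False
```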


The Laziest Phisher in the World

I seriously got this last night and just had to share. It’s the digital equivalent of sending someone a letter that says, “Hello, this is a robber. Please put all your money in a self-addressed stamped envelope and mail it to…”

  Dear Valued Member,

  Due to the congestion in all Webmail account and removal of all unused Accounts,we would be shutting down all unused accounts, You will have to confirm your E-mail by filling out your Login Info below after clicking the reply botton, or your account will be suspended within 48 hours for security reasons.

  UserName: ……………………………………
  Password: …………………………………….
  Date Of Birth: ……………………………….
  Country Or Territory: ………………………….

  After Following the instructions in the sheet,your account will not be interrupted and will continue as normal.Thanks for your attention to this request. We apologize for any inconvinience.

  Webmaster
  Case number: 447045727401
  Property: Account Security


Australian Border Security Insanity

Australia is my second-favorite place on the planet to visit (New Zealand is first). But it’s a darn good thing I’m not a porn fiend, since they now require you to declare porn at the border, and, well, here’s a quote:

  Australian customs officers have been given new powers to search incoming travellers’ laptops and mobile phones for pornography, a spokeswoman for the Australian sex industry says. … Fiona Patten, president of the Australian Sex Party, is demanding an inquiry into why a new question appears on Incoming Passenger Cards asking people if they are carrying “pornography”.

They are also working on a big Internet filter. You know, kind of like China and many Middle East countries. Gotta love democracy. (Thanks to Slashdot for the pointer.)


Privacy Is (Still) Personal

I want to respond to something Adam wrote about Facebook over at Emergent Chaos, but first I’m going to excerpt my own article from TidBITS:

Privacy is Personal – In the Information Age, determining what you want others to know about you isn’t always a simple decision. Aside from the potential tradeoffs of avoiding particular features or services, we all have different thresholds for what we are comfortable sharing. It’s also extremely difficult to control our information even when we do make informed decisions, and often impossible to eradicate information that escaped our control before we realized the rules of the game had changed.

For example, I use both Amazon and Netflix, even though those services also collect personal information like my buying and viewing habits. I am trading my data (and money) for a combination of convenience and personalization. I’m less concerned with these services than with Facebook, since their privacy practices and policies are clearer, my information is compartmentalized within each service, and they have much more consistent and stable records. On the other hand, I have minimized my usage of Google services due to privacy concerns. Google’s reach is incredibly expansive, and despite their addition of Google Dashboard to help show some of what they record, and much clearer policies than Facebook, I’m generally uncomfortable with any single company or government having that much potential information on me. I fully understand this is a somewhat emotional response.

Facebook is building a similar Internet-wide ecosystem as they expand connections to external Web sites and services. In exchange for allowing them access to your information and activities, Facebook enables new kinds of services and personalization. The question each of us must answer is whether those new services and personalization options are worth the privacy tradeoff.

Deciding where to draw your own privacy lines is a very personal, complex, and sometimes even arbitrary decision. I trust Amazon and Netflix to a certain extent based on their privacy policies, even though they sometimes make mistakes (I didn’t use Amazon for years after a policy change that they later reversed). Yet I’ve limited my usage of both Google and Facebook due to general concerns (Google) or outright distrust (Facebook). Facebook, to me, is a tool to keep me connected to friends and family I don’t interact with on a daily basis. I restrict what information it has on me, and always assume anything I do on Facebook could be public. I’m willing to trade a little privacy for the convenience of being able to stay connected with an expanded social circle. I manage Facebook privacy by not using it for anything that’s actually private.

Adam has a lot in his article, and I think his criticisms of my original post come down to:

  • Your perceptions of your own privacy change within different contexts and over time, so what you are okay with today may not be acceptable tomorrow.
  • If you only use the service to post things you’d want public anyway, why use it at all?

I completely agree with Adam’s first point – what you share when you are 19 years old at college is very different than what you might want people to know about you once you are 35. Even things you might share at 35 as a member of the workforce might come back to haunt you when you are 55 and running for political office. But I disagree that this means your only option is to completely opt out of all centralized social media services.

I believe we as a society are reaching the point where some degree of social networking is the norm. Even “private” communications like email, IM, and SMS are open to potential disclosure and subsequent inclusion in public search results. The same used to be true of the written and spoken word, but clearly the scale and scope are dramatically larger in the Information Age. We are losing the insular layers that created our current social norms of privacy – which already vary around the world. The last time society needed to adapt to such changes in privacy was with the Industrial Age and the movement from rural to urban society. Before that, it was probably the change from hunter/gatherers to an agrarian society.

I see three possible scenarios that could develop:

  • Society adopts a combination of laws and social mores to better protect privacy. It will be expected that you own your own data, and in the future you retain a right to edit your past. Essentially, we work to protect our current expectations of privacy – which will require active effort, as the terrain has already shifted under us, and will continue to do so.
  • Social expectations change. You’ll be able to run for political office and no one will care that you called some chick or dude hot and joined the “I love some stupid emo vampire” movement. We gain better abilities to protect our privacy, but at the same time society becomes more accepting of greater amounts of personal information being public – partially through sheer boredom at the inanity and popularity of our embarrassing peccadilloes.
  • There is no privacy.

We have many years before these issues resolve, if ever, and it’s going to be a rough road no matter where we are headed. The end result probably won’t match any of my scenarios, but will instead be some mish-mash of those options and others I haven’t thought of. My rough guess is that society will slowly become more accepting of youthful indiscretions (or we won’t have anyone to hire or elect), but we will also gain more control over our personal information. Privacy isn’t dead, but it is definitely changing. We all need to make personal decisions about the level of risk we are willing to accept in the midst of changing social norms, government/business influence, and degrees of control.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.