Database Security Webcast Tomorrow

Tomorrow I’ll be giving the first webcast in a three-part series I’m presenting for Oracle. It’s actually a cool concept (the series) and I’m having a bit more fun than usual putting it together. The first session is Database Security for Security Professionals. If you are a security professional and want to learn more about databases, this is targeted right between your eyes. Rather than rehashing the same old issues, we’re going to start with an overview of some database principles and how they mess up our usual approaches to security. Then we’ll dig into those things that the security team can control and influence, and how to work with DBAs. Although we are focusing on Oracle, all the core principles will apply to any database management system. And I swear to keep the relational calculus to myself.

The next webcast flips the story and we’ll be talking about security principles for DBAs. Yes, you DBAs will finally learn why those security types are so neurotic and paranoid. The final webcast in the series will be a “build your own”. We’ll be soliciting questions and requests ahead of time, and then I’ll crawl into a cave and throw it all together into a complete presentation.

The webcast tomorrow (December 17th) will be at 11 am PT and you can sign up here.


Securosis Hits Macworld (And San Francisco)

Just a quick note that I’ll be out in San Francisco for Macworld on January 5-8. While most of my time is dedicated to the conference, I will be able to take some meetings in the SF area. You can drop me a line at rmogull@securosis.com. I’m under strict orders to not come home with any new shiny Apple devices. We’ll have to see how that goes. (Last year I came home with an iPhone, totally against orders.)


Building a Web Application Security Program: Part 6, Secure Deployment

In our last episode, we continued our series on building a web application security program by looking at the secure development stage (see also Part 1, Part 2, Part 3, Part 4, and Part 5). Today we’re going to transition into the secure deployment stage and talk about vulnerability assessments and penetration testing.

Keep in mind that we look at web application security as an ongoing, and overlapping, process. Although we’ve divided things up into phases to facilitate our discussion, that doesn’t mean there are hard and fast lines slicing things up. For example, you’ll likely continue using dynamic analysis tools in the deployment stage, and will definitely use vulnerability assessments and penetration testing in the operations phase.

We’ve also been getting some great feedback in the comments, and will be incorporating it into the final paper (which will be posted here for free). We’ve decided this feedback is so good that we’re going to start crediting anyone who leaves comments that result in changes to the content (with permission, of course). It’s not as good as paying you, but it’s the best we can do with the current business model (for now- don’t assume we aren’t thinking about it). As we dig into this, keep in mind that we’re showing you the big picture and everything that’s available. When we close the series we’ll talk prioritization and where to focus your efforts for those of you on a limited budget- it’s not like we’re so naive as to think all of you can afford everything on the market.

Vulnerability Assessment

In a vulnerability assessment we scan a web application to identify anything an attacker could potentially use against us (some assessments also look for compliance/configuration/standards issues, but the main goal in a VA is security). We can do this with a tool, service, or combination of approaches. A web application vulnerability assessment is very different from a general vulnerability assessment, where we focus on networks and hosts. In those, we scan ports, connect to services, and use other techniques to gather information revealing the patch levels, configurations, and potential exposures of our infrastructure. Since, as we’ve discussed, even “standard” web applications are essentially all custom, we need to dig a little deeper, examine application function and logic, and use more customized assessments to determine if a web application is vulnerable. With so much custom code and implementation, we have to rely less on known patch levels and configurations, and more on actually banging away on the application and testing attack pathways. As we’ve said before, custom code equals custom vulnerabilities. (For an excellent overview of web application vulnerability layers please see this post by Jeremiah Grossman. We are focusing on the top three layers- third-party web applications, and the technical and business logic flaws of custom applications.)

The web application vulnerability assessment market includes both tools and services. Even if you decide to go the tool route, it’s absolutely critical that you place the tools in the hands of an experienced operator who will understand and be able to act on the results. It’s also important to run both credentialed and uncredentialed assessments. In a credentialed assessment, the tool or assessor has usernames and passwords at various access levels for the application. This allows them inside access to assess the application as if they were an authorized user attempting to exceed authority.
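To make the “banging away on the application” idea concrete, here is a minimal sketch of the kind of probing an automated scanner automates: sending known-bad inputs to a parameter and watching the response for signs of trouble. The target URL, parameter name, payloads, and error signatures are all hypothetical examples rather than anything from a particular product- a real tool crawls the application, handles sessions and authentication, and uses far richer test sets.

```python
# Illustrative toy probe in the spirit of automated web application scanning.
# The target, parameter, payloads, and signatures below are hypothetical;
# real scanners crawl the app, authenticate for credentialed scans, and
# validate findings rather than trusting simple string matches.
import requests

TARGET = "https://test.example.com/search"   # hypothetical test application
PARAM = "q"

PAYLOADS = {
    "sql_injection": "' OR '1'='1",
    "cross_site_scripting": "<script>alert(1)</script>",
    "directory_traversal": "../../../../etc/passwd",
}

# Strings that often show up when a back-end database error leaks into the page.
ERROR_SIGNATURES = ["sql syntax", "odbc", "ora-", "unclosed quotation mark"]

def probe(session: requests.Session) -> None:
    for name, payload in PAYLOADS.items():
        resp = session.get(TARGET, params={PARAM: payload}, timeout=10)
        body = resp.text.lower()
        # A reflected payload or database error string is only a *potential*
        # finding; a human still needs to validate it (see penetration testing).
        if payload.lower() in body or any(sig in body for sig in ERROR_SIGNATURES):
            print(f"[!] possible {name} issue on parameter '{PARAM}'")
        else:
            print(f"[ ] no obvious {name} indicator")

if __name__ == "__main__":
    # A credentialed scan would log in first and reuse the session cookies here.
    probe(requests.Session())
```

Notice how little this toy covers compared to the vulnerability classes discussed below- which is exactly why tool selection, and an experienced operator, matter.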
Tools

There are a number of commercial, free, and open source tools available for assessing web application vulnerabilities, each with varying capabilities. Some tools only focus on a few kinds of exploits, and experienced assessors use a collection of tools and manual techniques. For example, there are tools that focus exclusively on finding and testing SQL injection attacks. Enterprise-class tools are broader, and should include a wide range of tests for major web application vulnerability classes, such as SQL injection, cross site scripting, and directory traversals. The OWASP Top 10 is a good starting list of major vulnerabilities, but an enterprise-class tool shouldn’t limit itself to just one list or category of vulnerabilities. An enterprise tool should also be capable of scanning multiple applications, tracking results over time, providing robust reporting (especially compliance reports), and providing reports customized to local needs (e.g., add/drop scans). Tools are typically software, but can also include dedicated appliances. Tools can run either manual scans with an operator behind them, or automatic scans on a schedule. Since web applications change so often, it’s important to scan any modifications or new applications before deployment, as well as live applications on an ongoing basis.

Services

Not all organizations have the resources or need to buy and deploy tools to assess their own applications, and in some cases external assessments may be required for compliance. There are three main categories of web application vulnerability assessment services:

  • Fully automatic scans: These are machine-run automatic scans that don’t involve a human operator on the other side. The cost is low, but they are more prone to false positives and false negatives. They work well for ongoing assessments on a continuous basis, but due to their limitations you’ll likely still want more in-depth assessments from time to time.
  • Automatic scans with manual evaluation: An automatic tool performs the bulk of the assessment, followed by human evaluation of the results and additional testing. They provide a good balance between ongoing assessments and the costs of a completely manual assessment. You get deeper coverage and more accurate results, but at a higher cost.
  • Manual assessments: A trained security assessor manually evaluates your web application to identify vulnerabilities. Typically an assessor uses their own tools, then validates the results and provides custom reports. The cost is higher per assessment than the other options, but a good assessor may find more flaws.

Penetration Testing

The goal of a vulnerability assessment is to find potential avenues an attacker can exploit, while a penetration test goes a step further and validates whether attack pathways result in risk to the organization. In a web


I Do Not Have A Relationship With GDS International Or Business Management Magazine (Updated With GDS Response)

It came to my attention today that Business Management Magazine (www.busmanagement.com- not linked on purpose), part of GDS International, is using my name to sell sponsorship of their publication and some roundtable event at the RSA conference. Not only do I have NOTHING to do with them, they were advised over a year ago to stop using my name or the Gartner brand to sell their reports. I participated in an interview nearly 2 years ago, mistakenly thinking they were a valid publication. Reports started coming in that they were using my name to sell themselves, implying endorsement, and I retracted the interview before publication. The editor I worked with quickly left the company afterwards based on seeing the deceptive practices himself. He warned me that his computer was seized and the interview used without permission. It’s over a year later and they are still using my name without permission. They are also implying that they are timing the release of their publication with a major report I’m releasing. This is completely false- I have not revealed my publishing schedule. I don’t even know exactly when the report is coming out. I’m pissed. The only people who can use my name to sell anything are Gartner. If you ever hear anyone else implying my sponsorship, endorsement, or participation, please let me know.

Update on 16 December, 2008: For some reason, this post started receiving a large number of comments about 2 months ago, many of which were inflammatory and inconsistent with this site. GDS then contacted us to discuss the incident. They provided a statement/apology that we agreed to add to this post, and we also offered to just remove all the comments and lock future comments. The incidents occurred years ago, and we see no reason to let this drag on. Here is a response from Spencer Green, Chairman of GDS:

Dear Mr Mogull — while it is not my practice to respond to each and every comment on my company, I feel that this thread warrants particular attention. I too have the strange compulsion to defend. GDS International employed a member of staff two years ago who misrepresented our relationship with yourself and Gartner. He was caught before we received your letter and dealt with accordingly — fired for gross misconduct. The editor you mention did not leave the company based on seeing our “deceptive practices”: they too were sacked (for a number of reasons, yours included). No computers were “seized”. We made a full and frank apology to Gartner at the time, which was accepted, and our two companies moved forward. Misrepresentation is completely against GDS policies. It is antithetical to our business model — a short-term act that benefits the individual over long-term thinking that benefits the organisation. GDS is proud of the work we do, of the many long- and short-term business relationships that we maintain, and of our employees, who — this example excluded — consistently perform to our high standards. GDS has been trading for 15 years and currently employs over 450 people. In these last two years, we have grown 50% year-on-year. We are a robust, ambitious company with a solid, proven and scaleable business model — not a house of cards. It is a real shame that the actions of one GDS employee affected you. Hundreds more are working to produce the best business magazines, events and websites. I hope you will take the time to check us out. Thank you for the opportunity to draw a line under this incident.

Regards, Spencer Green
Chairman, GDS International


Stop Using Internet Explorer 7 (For Now), Or Deploy Workarounds

There is an unpatched vulnerability for Internet Explorer 7 being actively exploited in the wild. The details are public, so any bad guy can take advantage of this. It’s a heap overflow in the XML parser, for you geeks out there. It affects all current versions of Windows. Microsoft issued an advisory with workarounds that prevent exploitation:

  • Set Internet and Local intranet security zone settings to “High” to prompt before running ActiveX Controls and Active Scripting in these zones.
  • Configure Internet Explorer to prompt before running Active Scripting or to disable Active Scripting in the Internet and Local intranet security zone.
  • Enable DEP for Internet Explorer 7.
  • Use ACL to disable OLEDB32.DLL.
  • Unregister OLEDB32.DLL.
  • Disable Data Binding support in Internet Explorer 8.
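For admins who script this sort of change, here is a rough sketch of what the “prompt before ActiveX Controls and Active Scripting” workaround touches in the per-user registry. The zone numbers and action value names are my assumptions about how IE stores zone policy, not something taken from the advisory itself, so verify against Microsoft’s documentation before deploying anything like this.

```python
# Illustrative sketch only (Windows): set IE's Internet and Local intranet zones
# to prompt before running ActiveX controls and Active Scripting for the current
# user. The zone numbers and action value names are assumptions about how IE
# stores zone policy - verify against Microsoft's advisory/KB docs before use.
import winreg

ZONES = {"1": "Local intranet", "3": "Internet"}                      # assumed zone numbering
ACTIONS = {"1200": "ActiveX controls", "1400": "Active Scripting"}    # assumed action value names
PROMPT = 1                                                            # assumed: 0=allow, 1=prompt, 3=disable

for zone, zone_name in ZONES.items():
    path = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones" + "\\" + zone
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0, winreg.KEY_SET_VALUE) as key:
        for value_name, action_name in ACTIONS.items():
            winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, PROMPT)
            print(f"{zone_name} zone: {action_name} set to prompt")
```

Group Policy or the advisory’s manual steps remain the safer route for most shops; the point is simply that these workarounds can be pushed quickly if you need to.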


Totally Transparent Research And Sponsorship

Things seem a little strange over here at Securosis HQ- we’re getting a ton of feedback on an old post from November of 2006, but so far only one person has left us any real comments on our Building a Web Application Security Program series. Just to make it clear, once we are done with the series we will be pulling the posts together, updating them to incorporate feedback, and publishing it as a whitepaper. We already have some sponsorship lined up, with slots open for up to two more.

This is a research process we like to call “Totally Transparent Research”. One of the criticisms against many analysts is that the research is opaque and potentially unduly influenced by vendors. The concern of vendor influence is especially high when the research carries a vendor logo on it somewhere. It’s an absolutely reasonable and legitimate worry, especially when the research comes from a small shop like ours.

To counter this, we decided from the start to put all our research out there in the open. Not just the final product, but the process of writing it in the first place. With few exceptions, all of our whitepaper research, sponsored or otherwise, is put out as a series of blog posts as we write it. At each stage we leave the comments wide open for public peer review- and we never delete or filter comments unless they are both off topic and objectionable (not counting spam). Vendors, competitors, users, or anyone else can call us on our BS or compliment our genius. This is all of our pre-edited content that eventually comes together for the papers.

We also require that even sponsored papers always be freely available here on the site. Sponsors may get to request a topic, but they don’t get to influence the content (we do provide them with a rough outline so they know what to expect). We write the contracts so that if they don’t like the content in the end, they can walk without penalties and we’ll publish the work anyway. We do take the occasional suggestion from a sponsor when they catch something we miss, and it’s still objective (hey, it happens).

While we realize this won’t fully assuage the concerns of everyone out there, we really hope that by following a highly transparent process we can provide free research that’s as objective as possible. We also find that public peer review is invaluable and produces less insular results than us just reviewing internally. Yes, we take end user and vendor calls like every other analyst, but we also prefer to engage in a direct dialog with our readers, friends, and others. We also like Open Source, kittens, and puppies.

Not that we’ll be giving everything away for free- we have some stuff in development we’ll be charging for (that won’t be sponsored). But either we get sponsors, or we have to charge for everything. It’s not ideal, but that’s how the world works. Adrian has something like 12 dogs and I’m about to have a kid on top of 3 cats, and that food has to come from someplace.

So go ahead and correct us, insult us, or tell us a better way. We can handle it, and we won’t hide it. And if you want to sponsor a web application security paper…


How The Cloud Destroys Everything I Love (About Web App Security)

On Tuesday, Chris Hoff joined me to guest host the Network Security Podcast and we got into a deep discussion on cloud security. And as you know, for the past couple of weeks we’ve been building our series on web application security. This, of course, led to all sorts of impure thoughts about where things are headed. I wouldn’t say I’m ready to run around in tattered clothes screaming about the end of the Earth, but the company isn’t called Securosis just because it has a nice ring to it. If you think about it a certain way, cloud computing just destroys everything we talk about for web application security. And not just in one of those, “oh crap, here’s one of those analysts spewing BS about something being dead” ways.

Before jumping into the details, in this case I’m talking very specifically of cloud based computing infrastructure- e.g., Amazon EC2/S3. This is where we program our web applications to run on top of a cloud infrastructure, not dedicated resources in a colo or a “traditional” virtual server. I also sprinkle in cloud services- e.g., APIs we can hook into using any application, even if the app is located on our own server (e.g., Google APIs). Stealing from our as-yet incomplete series on web app sec and our discussions of ADMP, here’s what I mean:

  • Secure development (somewhat) breaks: We’re now developing on a platform we can’t fully control- in a development environment we may not be able to isolate/lock down. While we should be able to do a good job with our own code, there is a high probability that the infrastructure under us can change unexpectedly. We can mitigate this risk more than some of the other ones I’ll mention- first through SLAs with our cloud infrastructure provider, and second by adjusting our development process to account for the cloud. For example, make sure you develop on the cloud (and secure it as best you can) rather than completely developing in a local virtual environment that you then shift to the cloud. This clearly comes with a different set of security risks (putting development code on the Internet) that also need to be, and can be, managed. Data de-identification becomes especially important.
  • Static and dynamic analysis tools (mostly) break: We can still analyze our own source code, but once we interact with cloud based services beyond just using them as a host for a virtual machine, we lose some ability to analyze the code (anything we don’t program ourselves). Thus we lose visibility into the inner workings of any third party/SaaS APIs (authentication, presentation, and so on), and they are likely to randomly change under our feet as the providing vendor continually develops them. We can still perform external dynamic testing, but depending on the nature of the cloud infrastructure we’re using we can’t necessarily monitor the application during runtime and instrument it the same way we can in our test environments. Sure, we can mitigate all of this to some degree, especially if the cloud infrastructure service providers give us the right hooks, but I don’t hold out much hope this is at the top of their priorities. (Note for testing tools vendors- big opportunity here.)
  • Vulnerability assessment and penetration testing… mostly don’t break: So maybe the cloud doesn’t destroy everything I love. This is one reason I like VA and pen testing- they never go out of style. We still lose some ability to test/attack service APIs.
  • Web application firewalls really break: We can’t really put a box we control in front of the entire cloud, can we? Unless the WAF is built into the cloud, good luck getting it to work. Cloud vendors will have to offer this as a service, or we’ll need to route traffic through our WAF before it hits the back end of the cloud, negating some of the reasons we switch to the cloud in the first place. We can mitigate some of this through either the traffic routing option, virtual WAFs built into our cloud deployment (we need new products for it), or cloud providers building WAF functionality into their infrastructure for us.
  • Application and Database Activity Monitoring break: We can no longer use external monitoring devices or services, and have to integrate any monitoring into our cloud-based application (see the sketch at the end of this post for what that can look like). As with pretty much all of this list it’s not an impossible problem, just one people will ignore. For example, I highly doubt most of the database activity monitoring techniques will work in the cloud- network monitoring, memory monitoring, or kernel extensions. Native audit might, but not all database management systems provide effective audit logs, and you still need a way to collect them as your app and db shoot around the cloud for resource optimization.

I could write more about each of these areas, but you get the point. When we run web applications on cloud based infrastructure, using cloud based software services, we break many of the nascent web application security models we’re just starting to get our fingers around. The world isn’t over*, but it sure just moved out from under our feet.

*This doesn’t destroy the world, but it’s quite possible that the Keanu Reeves version of The Day the Earth Stood Still will.
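To illustrate the point above about Application and Database Activity Monitoring- in the cloud, monitoring has to be baked into the application rather than bolted on as a network appliance- here is a minimal sketch of application-embedded query auditing. Everything in it (the wrapper function, the field names, the log destination) is hypothetical; it shows the shape of the approach, not any particular ADMP product or service.

```python
# Hypothetical sketch: application-embedded activity monitoring for a cloud app.
# Since we can't sniff the network or drop an appliance in front of the database,
# the application itself records who ran what and emits the events as audit records.
import json
import logging
import time
from datetime import datetime, timezone

# In a real deployment this handler would forward events to a central collector;
# a plain console handler keeps the sketch self-contained.
audit_log = logging.getLogger("db_activity")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())

def audited_query(cursor, sql, params=(), app_user="unknown"):
    """Run a query through the app's own DB cursor and emit an audit event."""
    start = time.time()
    cursor.execute(sql, params)
    rows = cursor.fetchall()
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "app_user": app_user,        # application-level identity, not just the DB login
        "statement": sql,
        "row_count": len(rows),
        "duration_ms": round((time.time() - start) * 1000, 1),
    }))
    return rows

if __name__ == "__main__":
    # SQLite stands in for whatever database the cloud application actually uses.
    import sqlite3
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
    conn.execute("INSERT INTO accounts VALUES (1, 'alice')")
    audited_query(conn.cursor(), "SELECT * FROM accounts WHERE owner = ?", ("alice",), app_user="web_user_42")
```

The hard part the post calls out still stands: as the application and database move around the cloud for resource optimization, those events need to be shipped somewhere reliable and tamper-resistant, and that collection path is exactly what external monitoring products don’t give us today.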


The Hoff Co-Hosts The Network Security Podcast

Martin was out of town this week and put our fine show into my trustworthy hands. A trust I quickly dashed as I invited Chris Hoff to join the show. We managed to avoid any significantly bad language, and both of us were completely sober. I think. Chris and I started with a discussion of the latest national cybersecurity recommendations, moving on to the CheckFree attack, the DNSChanger trojan, DLP/DRM advances by Microsoft/EMC and McAfee/Liquid Machines, and finishing with one of our pontificating discussions about the cloud. Here’s the show, and the show notes:

The Network Security Podcast, Episode 131, December 9, 2008.

Show Notes:

  • The Commission on Cyber Security for the 44th Presidency releases their long-awaited report.
  • CheckFree online bill payment redirected to a malicious site.
  • The DNSChanger trojan starts its own internal DHCP server.
  • The future of DLP/information-centric security as Microsoft and EMC partner, then McAfee and Liquid Machines (and a few other vendors we talk about).


A Good (Potential) Risk Management IQ Test For Management

It looks like China is thinking about requiring in-depth technical information on all foreign technology products before they will be allowed into China. I highly suspect this won’t actually happen, but you never know. If it does, here is a simple risk related IQ test for management:

  1. Will you reveal your source code and engineering documents to a government with a documented history of passing said information on to domestic producers, who often clone competitive technologies and sell at lower prices than you’d like?
  2. Do you have the risk tolerance to accept domestic Chinese abuse of your intellectual property should you reveal it?

If the answer to 1 is “yes” and 2 is “no”, the IQ is “0”. Any other answer shows at least a basic understanding of risk tolerance and management.

I worked a while back with an Indian company that engaged in a partnership with China to co-produce a particular high value product. That information was promptly stolen and spread to other local manufacturers. I don’t have a problem with China, but not only do they culturally view intellectual property differently than us, there is a documented history of what the western world would consider abuse of IP. If you can live with that, you should absolutely engage with that market. If you can’t accept the risk of IP theft, stay away.

(P.S.- This is also true of offshore development. Stop calling me after you have offshored and asking how to secure your data. You know, closing barn doors and cows and all.)


Mortality, Integrity, and Risk Management

I despise the very concept of mortality. That everything we were, are, and can be comes to a crashing close at some arbitrary deadline. I’ve never been one to accept someone telling me to do something just because “that’s the way it is”, and I feel pretty much the same way about death. Having seen far more than my fair share of it, I consider it nothing but random and capricious.

For those that follow Twitter, yesterday afternoon mortality bitch slapped me upside the head. I found out that my cholesterol is two points shy of the thin black line that defines “high”. Being thirty-seven, a lifetime athlete, and a relatively healthy eater since my early twenties, my number shouldn’t even be on the same continent as “high”, never mind the same zip code. I clearly have my parents’ genes to blame, and since my father passed away many years ago of something other than heart disease, I get to have a long conversation with my mother this weekend on her poor gene selection. I might bring up the whole short thing while I’m at it (seriously, all I asked for was 5’9”).

I tend to look at situations like this as risk management problems. With potential mitigating actions, all of which come at a cost, and a potential negative consequence (well, negative for me), it slots nicely into a risk-based approach. It also highlights what is the single most important factor in any risk analysis- integrity. If you deceive yourself (or others) you can never make an effective risk decision. Let’s map it out:

  • Asset Valuation – Really fracking high for me personally, $2M to the insurance company (time limited to 20 years), and somewhere between zero and whatever for the rest of the world (and, I suspect, a few negative values circulating out there).
  • Risk Tolerance – Low. Oh sure, I’d like to say “none”, but the reality is if my risk tolerance was really 0, I’d mentally implode in a clash of irreconcilable risk factors as fear of my house burning around me conflicts with the danger of a meteor smashing open my skull like a ripe pumpkin when I walk outside. Since anything over 100 years old isn’t realistically quantifiable (and 80 is more reasonable), I’ll call 85 the low end of my tolerance, with no complaints if I can double that.
  • Risk/Threat Factors – Genetics, lifestyle, and medication. This one is pretty easy, since there are really only 3 factors that affect the outcome (in this dimension; I’m skipping cancer, accidents, and those freaky brain eating bacteria found in certain lakes). I can only change two of the factors, each of which comes with both a financial cost, and, for lack of a better word, a “pleasure” cost.
  • Risk Analysis – I’m going to build three scenarios:
    1. Since some of my cholesterol is good to normal (HDL and triglycerides), and only part of it bad (LDL and total serum), I can deceive myself into thinking I don’t need to do anything today and ignore the possibility of slowly clogging my arteries until a piece of random plaque breaks off and kills me in excruciating pain at an inconvenient moment. Since that’s what everyone else tends to do, we’ll call this option “best practices”.
    2. I can meet with my doctor, review the results, and determine which lifestyle changes and/or medication I can start today to reduce my long term risks. I can reduce the intake of certain foods, switch to things like Egg Beaters, and increase my intake of high fiber food and veggies. I’ll pay an additional financial cost for higher quality food, a time cost for the extra workouts, and a “pleasure” cost for fewer chocolate chip cookies. In exchange for those french fries and gooey burritos I’ll be healthier overall and live a higher quality of life until I’m disemboweled by an irate ostrich while on safari in Africa.
    3. I can immediately switch to a completely heart-healthy diet and disengage from any activity that increases my risk of premature death (and isn’t all death premature?). I’ll never eat another cookie or french fry, and I’ll move to a monastery in a meteor-free zone to eliminate all stress from my life as I engage in whatever the latest medical journals define as the optimum diet and exercise plan. I will lead a longer, lower quality life until I’m disemboweled by an irate monk who is sick of my self righteous preaching and mid-chant calisthenics. We’ll call this option the “consultant/analyst” recommendations.
  • Risk Decision and Mitigation Plan – Those three scenarios represent the low, middle, and high option. In every case there is a cost- but the cost is either in the short term or the long term. None of the scenarios guarantees success. This is where the integrity comes in- I’ve tried to qualify all the appropriate costs in each scenario, and I don’t try to fool myself into thinking I can avoid those costs to steer myself towards the easy decision.

It would be easy to look at my various cholesterol levels and current lifestyle, then decide that maybe if I read the numbers from a certain angle nothing bad will happen. Or maybe I can just hang out without making changes until the numbers get worse, and fix things then. On the other end, I could completely deceive myself and decide that a bunch of extreme efforts will fix everything and I can completely control the end result, ignoring the cost and all the other factors out there.

But if I’m really honest with myself, I know that despite my low tolerance for an early death, I’m unwilling to pay the costs of extreme actions. Thus I’m going to make immediate changes to my diet that I know I can tolerate in the long term, I’ll meet with my doctor and start getting annual tests, and I’ll slip less on my fitness plan when work gets out of control. I’m putting metrics in place


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.