Securosis Research

External Database Procedures

Just ran across this 'new' SQL Server vulnerability in my news feed. This should not be an issue, because you should not be using this set of functions. If you are using external stored procedures on a production database, stop. In fact, you want to stop using them altogether, either by locking them down or by removing them entirely, and not just because of this reported instance. External stored procedure exploits are favorites of database hackers, and have been used to alter database functionality and to run arbitrary code, in both externally and internally launched attacks. SQL Server has historically had issues with buffer overflow attacks against the pre-built procedures (see Microsoft Technical Bulletin MS02-020), and while known issues have been cleared up, XPs (extended stored procedures) are a complex and powerful extension ripe for exploits. Database vendors generally recommend, as a security best practice, restricting these to administrative use at a minimum. Even then, their use violates the segregation of OS and database functionality required by compliance and operational security. Use of external stored procedures is flagged by all of the database vulnerability assessment tools, as both a security and a compliance issue. And in case you think I am picking on SQL Server, many similar problems have been reported against Oracle ExtProc as well. The DBA in me loves the ability to run native platform utilities to support database admin efforts. It's a really handy extension, and I know it is tempting to leave these enabled so you can make administration easier, but then you are relying on security through obscurity. That is a really big risk in a production environment, and one that every database hacker will have scripts to find and exploit.
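To make this concrete: on SQL Server, the canonical example is xp_cmdshell. A minimal hardening sketch (assuming SQL Server 2005 or later, where this surface area option exists in sp_configure) looks like this:

```sql
-- Hardening sketch (assumes SQL Server 2005+): disable the xp_cmdshell
-- extended stored procedure so the database engine cannot be used to
-- launch operating system commands.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```

Vulnerability assessment tools check exactly this setting, so turning it off buys you both the security improvement and the compliance checkbox.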


Database Security Webcast Tomorrow

Tomorrow I'll be giving the first webcast in a three-part series I'm presenting for Oracle. It's actually a cool concept (the series) and I'm having a bit more fun than usual putting it together. The first session is Database Security for Security Professionals. If you are a security professional and want to learn more about databases, this is targeted right between your eyes. Rather than rehashing the same old issues, we're going to start with an overview of some database principles and how they mess up our usual approaches to security. Then we'll dig into those things that the security team can control and influence, and how to work with DBAs. Although we are focusing on Oracle, all the core principles will apply to any database management system. And I swear to keep the relational calculus to myself. The next webcast flips the story, and we'll be talking about security principles for DBAs. Yes, you DBAs will finally learn why those security types are so neurotic and paranoid. The final webcast in the series will be a 'build your own': we'll be soliciting questions and requests ahead of time, and then I'll crawl into a cave and throw it all together into a complete presentation. The webcast tomorrow (December 17th) will be at 11 am PT, and you can sign up here.


Securosis Hits Macworld (And San Francisco)

Just a quick note that I'll be out in San Francisco for Macworld on January 5-8. While most of my time is dedicated to the conference, I will be able to take some meetings in the SF area. You can drop me a line at rmogull@securosis.com. I'm under strict orders not to come home with any new shiny Apple devices. We'll have to see how that goes. (Last year I came home with an iPhone, totally against orders.)


Structured Security Program, meet Agile Process

Bryan Sullivan's thought-provoking post on Streamlining Security Practices for Agile Development caught my attention this morning. Reading it gave me the impression of a genuine generational divide. If you have ever witnessed a father and son talk about music, you know that while they are talking about the same subject, there is little doubt their views are incompatible. The post is in line with what Rich and I have been discussing in the web application series, especially in the area of why web apps are different, albeit on a slightly more granular level. The article is about process simplification and integration, and spells out a few of the things you need to consider if you are moving from a more formalized waterfall process into Agile with security. The two nuggets of valuable information are the risk-based inclusion of requirements, where higher-risk issues are placed into the sprints, and the approach to accounting for lower-priority issues that require periodic inspection within a non-linear development methodology. The risk-based approach of addressing higher-risk security issues as code gets created in each sprint is very effective. It requires that issues and threats be classified in advance, but it makes the sprint requirements very clear while keeping security as a core function of the product. It is a strong motivator for code and test case re-use to reduce overhead during each sprint, especially in critical areas like input validation. Bryan also discusses the difficulties of fitting other, lower-priority security requirements extracted from the SDL into Agile for web development. In fact, he closes the post with the conclusion that retrofitting waterfall-based approaches to secure Agile development is not a good fit. Bravo to that! This is the heart of the issue, and while the granular inclusion of high-risk issues into the sprint works, the rest of the 'mesh' is pretty much broken. Checks and certifications triggered upon completed milestones must be rethought.
The bucketing approach can work for you, but what you label the buckets and when you give them consideration will vary from team to team. You may decide to make them simple elements of the product and sprint backlogs. But that's the great thing about process: you get to change it to suit your purpose. Regardless, this post has some great food for thought and is worth a read.


Building a Web Application Security Program: Part 6, Secure Deployment

In our last episode, we continued our series on building a web application security program by looking at the secure development stage (see also Part 1, Part 2, Part 3, Part 4, and Part 5). Today we're going to transition into the secure deployment stage and talk about vulnerability assessments and penetration testing. Keep in mind that we look at web application security as an ongoing, and overlapping, process. Although we've divided things up into phases to facilitate our discussion, that doesn't mean there are hard and fast lines slicing things up. For example, you'll likely continue using dynamic analysis tools in the deployment stage, and will definitely use vulnerability assessments and penetration testing in the operations phase. We've also been getting some great feedback in the comments, and will be incorporating it into the final paper (which will be posted here for free). We've decided this feedback is so good that we're going to start crediting anyone who leaves comments that result in changes to the content (with permission, of course). It's not as good as paying you, but it's the best we can do with the current business model (for now- don't assume we aren't thinking about it). As we dig into this, keep in mind that we're showing you the big picture and everything that's available. When we close the series we'll talk about prioritization and where to focus your efforts, for those of you on a limited budget- it's not like we're so naive as to think all of you can afford everything on the market.

Vulnerability Assessment

In a vulnerability assessment we scan a web application to identify anything an attacker could potentially use against us (some assessments also look for compliance/configuration/standards issues, but the main goal in a VA is security). We can do this with a tool, service, or combination of approaches. A web application vulnerability assessment is very different from a general vulnerability assessment, where we focus on networks and hosts.
In those, we scan ports, connect to services, and use other techniques to gather information revealing the patch levels, configurations, and potential exposures of our infrastructure. Since, as we've discussed, even "standard" web applications are essentially all custom, we need to dig a little deeper, examine application function and logic, and use more customized assessments to determine if a web application is vulnerable. With so much custom code and implementation, we have to rely less on known patch levels and configurations, and more on actually banging away on the application and testing attack pathways. As we've said before, custom code equals custom vulnerabilities. (For an excellent overview of web application vulnerability layers please see this post by Jeremiah Grossman. We are focusing on the top three layers- third-party web applications, and the technical and business logic flaws of custom applications). The web application vulnerability assessment market includes both tools and services. Even if you decide to go the tool route, it's absolutely critical that you place the tools in the hands of an experienced operator who will understand and be able to act on the results. It's also important to run both credentialed and uncredentialed assessments. In a credentialed assessment, the tool or assessor has usernames and passwords of various levels to access the application. This allows them inside access to assess the application as if they were an authorized user attempting to exceed authority.

Tools

There are a number of commercial, free, and open source tools available for assessing web application vulnerabilities, each with varying capabilities. Some tools only focus on a few kinds of exploits, and experienced assessors use a collection of tools and manual techniques. For example, there are tools that focus exclusively on finding and testing SQL injection attacks.
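As a toy illustration (not from any particular product), one heuristic such a SQL injection tool automates is simple: submit input containing a stray quote character, then look for database error signatures leaking into the HTTP response.

```python
# Illustrative sketch of an error-based SQL injection check: the
# signature list and function names are hypothetical, but the error
# strings themselves are real database error messages.
SQL_ERROR_SIGNATURES = (
    "You have an error in your SQL syntax",  # MySQL
    "Unclosed quotation mark",               # Microsoft SQL Server
    "ORA-01756",                             # Oracle: quoted string not properly terminated
)

def looks_injectable(response_body: str) -> bool:
    """Flag a response that appears to echo a raw database error."""
    return any(sig in response_body for sig in SQL_ERROR_SIGNATURES)

# Example: a page that leaks a SQL Server error after we send a lone quote
page = "<html>Unclosed quotation mark after the character string ''.</html>"
print(looks_injectable(page))                       # True
print(looks_injectable("<html>No results.</html>")) # False
```

A real scanner layers many such probes (blind, time-based, union-based) on top of full crawling and session handling, which is exactly why an experienced operator matters.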
Enterprise-class tools are broader, and should include a wide range of tests for major web application vulnerability classes, such as SQL injection, cross-site scripting, and directory traversals. The OWASP Top 10 is a good starting list of major vulnerabilities, but an enterprise-class tool shouldn't limit itself to just one list or category of vulnerabilities. An enterprise tool should also be capable of scanning multiple applications, tracking results over time, providing robust reporting (especially compliance reports), and providing reports customized to local needs (e.g., add/drop scans). Tools are typically software, but can also include dedicated appliances. Tools can run either manual scans with an operator behind them, or automatic scans on a schedule. Since web applications change so often, it's important to scan any modifications or new applications before deployment, as well as live applications on an ongoing basis.

Services

Not all organizations have the resources or need to buy and deploy tools to assess their own applications, and in some cases external assessments may be required for compliance. There are three main categories of web application vulnerability assessment services:

  • Fully automatic scans: These are machine-run automatic scans that don't involve a human operator on the other side. The cost is low, but they are more prone to false positives and false negatives. They work well for ongoing assessments on a continuous basis, but due to their limitations you'll likely still want more in-depth assessments from time to time.
  • Automatic scans with manual evaluation: An automatic tool performs the bulk of the assessment, followed by human evaluation of the results and additional testing. They provide a good balance between ongoing assessments and the costs of a completely manual assessment. You get deeper coverage and more accurate results, but at a higher cost.
  • Manual assessments: A trained security assessor manually evaluates your web application to identify vulnerabilities. Typically an assessor uses their own tools, then validates the results and provides custom reports. The cost is higher per assessment than the other options, but a good assessor may find more flaws.

Penetration Testing

The goal of a vulnerability assessment is to find potential avenues an attacker can exploit, while a penetration test goes a step further and validates whether attack pathways result in risk to the organization. In a web


I Do Not Have A Relationship With GDS International Or Business Management Magazine (Updated With GD

It came to my attention today that Business Management Magazine (www.busmanagement.com- not linked on purpose), part of GDS International, is using my name to sell sponsorship of their publication and some roundtable event at the RSA conference. Not only do I have NOTHING to do with them, they were advised over a year ago to stop using my name or the Gartner brand to sell their reports. I participated in an interview nearly 2 years ago, mistakenly thinking they were a valid publication. Reports started coming in that they were using my name to sell themselves, implying endorsement, and I retracted the interview before publication. The editor I worked with quickly left the company afterwards, based on seeing the deceptive practices himself. He warned me that his computer was seized and the interview used without permission. It's over a year later and they are still using my name without permission. They are also implying that they are timing the release of their publication with a major report I'm releasing. This is completely false- I have not revealed my publishing schedule. I don't even know exactly when the report is coming out. I'm pissed. The only people who can use my name to sell anything are Gartner. If you ever hear anyone else implying my sponsorship, endorsement, or participation, please let me know. Update on 16 December, 2008: For some reason, this post started receiving a large number of comments about 2 months ago, many of which were inflammatory and inconsistent with this site. GDS then contacted us to discuss the incident. They provided a statement/apology that we agreed to add to this post, and we also offered to just remove all the comments and lock future comments. The incidents occurred years ago, and we see no reason to let this drag on. Here is a response from Spencer Green, Chairman of GDS: Dear Mr Mogull — while it is not my practice to respond to each and every comment on my company, I feel that this thread warrants particular attention.
I too have the strange compulsion to defend. GDS International employed a member of staff two years ago who misrepresented our relationship with yourself and Gartner. He was caught before we received your letter and dealt with accordingly — fired for gross misconduct. The editor you mention did not leave the company based on seeing our "deceptive practices": they too were sacked (for a number of reasons, yours included). No computers were "seized". We made a full and frank apology to Gartner at the time, which was accepted, and our two companies moved forward. Misrepresentation is completely against GDS policies. It is antithetical to our business model — a short-term act that benefits the individual over long-term thinking that benefits the organisation. GDS is proud of the work we do, of the many long- and short-term business relationships that we maintain, and of our employees, who — this example excluded — consistently perform to our high standards. GDS has been trading for 15 years and currently employs over 450 people. In these last two years, we have grown 50% year-on-year. We are a robust, ambitious company with a solid, proven and scaleable business model — not a house of cards. It is a real shame that the actions of one GDS employee affected you. Hundreds more are working to produce the best business magazines, events and websites. I hope you will take the time to check us out. Thank you for the opportunity to draw a line under this incident. Regards, Spencer Green Chairman, GDS International


Database Security, Statistics and You

Doing some research on business justification for several projects Rich and I are working on, I ran across the Aberdeen Group research paper referenced on the Imperva Blog, which talks about business justification for database security spending. You can download a copy for free. It's worth a read, but certainly needs to be kept in perspective.

"Don't you know about the new fashion honey? All you need are looks and a whole lotta money."

Best-in-Class companies are 2.4 times more likely to have DB encryption. Best-in-Class companies are more likely to employ data masking, monitoring, patch management, and encryption than Laggards. Hmmm, people who do more and spend more are leaders in security and compliance. Shocker! And this is a great quote: "… current study indicates that the majority of their data is maintained in their structured, back end systems." As opposed to what? Unstructured front end systems? Perhaps I am being a bit unfair here, but valuable data is not stored on the perimeter. If the data has value, it is typically stored in a structured repository, because that makes it easier to query by a wider group for multiple purposes. I guess people steal data that has no value as well, but really, what's the point?

"Well, duh."

Saying it without saying it, I guess, but the Imperva comments are spot on. You can do more for less. The statistics show what we have been talking about for data security, specifically database security, for a long time. I have witnessed many large enterprises realize reduced compliance and security costs through changes in education, changes in process, and implementation of software and tools that automate their work. But these reductions came after a significant investment. How long it takes to pay off in terms of reduced manpower, costs, and efficiencies in productivity varies widely. And yes, you can screw it up. False starts are not uncommon. Success is not a given. Wrong tool, wrong process, lack of training, whatever. Lots of expense, Best-in-Class, poor results.

"But mom, everyone's doing it!"

The paper provides some business justification for DB security, but raises as many questions as it answers. "Top Pressures Driving Investments" is baffling; if 'Security-related incidents' is its own category, what does 'Protect the organization' mean? Legal? Barbed wire and rent-a-cops? And how can 41% of the 'Best-in-Class' respondents be in three requirement areas? Is everything a top priority? If so, something is seriously wrong. "Best-in-Class companies are two times more likely than Laggards to collect, normalize, and correlate security and compliance information related to protecting the database". I read that as saying SIEM is kinda good for compliance and security stuff around the database, at least most of the time. According to my informal poll, this is 76.4% likely to confuse 100% of the people 50% of the time.

"Does this make me look Phat?"

If you quote these statistics to justify acquisition and deployment of database security, that's great. If you choose to implement a bunch of systems so that you are judged 'best in class', that's your decision. But if I do, call me on it. There is just not enough concrete information here for me to be comfortable creating an effective strategy, nor to cobble together enough data to really know what separates the effective strategies from the bad ones. Seriously, my intention here is not to trash the paper, because it contains some good general information on the database security market and some business justification. You are not going to find someone on this planet who promotes database security measures more than I do. But it is the antithesis of what I want to do and how I want to provide value. Jeez, I feel like I am scolding a puppy for peeing on the rug. It's so cute, but at the same time, it's just not appropriate.

"I call Bu&@% on that!"

I have been in and around security for a long time, but the analyst role is new to me. Balancing the trifecta of raising general awareness, providing specific pragmatic advice, and laying out the justification for why you do it is a really tough trio of objectives. This blog's readership comes from many different backgrounds, which further compounds the difficulty of addressing the audience; some posts are going to be overtly technical, while others are for general users. Sure, I want to raise awareness of available options, but providing clear, pragmatic advice on how to proceed with security and compliance programs is the focus. If Rich or I say 'implement these 20 tools and you will be fine', it is neither accurate nor helpful. If we recommend a tool, ask us why, ask us how, because people and process are at least as important as the technology being harnessed. If you do not feel we are giving the proper weight to various options, tell us. Post a comment on the blog. We are confident enough in our experience and abilities to offer direct advice, but not so arrogant as to think we know everything. The reason Rich and I are hammering on the whole Open Research angle is both so you know how and where our opinions come from, and to give readers the ability to question our research as well as add value to it.


Stop Using Internet Explorer 7 (For Now), Or Deploy Workarounds

There is an unpatched vulnerability for Internet Explorer 7 being actively exploited in the wild. The details are public, so any bad guy can take advantage of this. It's a heap overflow in the XML parser, for you geeks out there. It affects all current versions of Windows. Microsoft issued an advisory with workarounds that prevent exploitation:

  • Set Internet and Local intranet security zone settings to "High" to prompt before running ActiveX Controls and Active Scripting in these zones.
  • Configure Internet Explorer to prompt before running Active Scripting, or disable Active Scripting in the Internet and Local intranet security zones.
  • Enable DEP for Internet Explorer 7.
  • Use an ACL to disable OLEDB32.DLL.
  • Unregister OLEDB32.DLL.
  • Disable Data Binding support in Internet Explorer 8.
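As a sketch of how the two OLEDB32.DLL workarounds might be applied from an administrative command prompt (the path assumes a default 32-bit installation; check Microsoft's advisory for the exact, supported steps on your platform):

```bat
rem Restrict access to OLEDB32.DLL with an ACL (revert with cacls ... /E /R everyone)
cacls "%ProgramFiles%\Common Files\System\Ole DB\oledb32.dll" /E /P everyone:N

rem Unregister OLEDB32.DLL (re-register later by running regsvr32 without /u)
regsvr32.exe /u "%ProgramFiles%\Common Files\System\Ole DB\oledb32.dll"
```

Either change breaks legitimate data-binding functionality until reverted, so test before rolling it out broadly.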


Friday Summary: 12-12-2008

When I was little, I remember seeing a television interview with a Chicago con man who made his living by scheming people out of their money. Back when the term was in vogue, the con man was asked to define what a 'Hustle' was. His reply: "Get as much as you can, as fast as you can, for as little as you can". December is the month when the hustlers come to my neighborhood. I live in a remote area where most of the roads are dirt and the houses are far apart, so we never see foot traffic unless it is December. And every year at this time the con men, hucksters, and thieves come around, claiming they are selling some item or collecting for some charity. Today was an example, but our con man was collecting for a dubious-sounding college fund while dressed as a Mormon missionary, which was not a recipe for success. Rich had a visitor this week claiming to be a student from ASU, going door to door for bogus charity efforts. Last year's prize winner at my place was a guy with a greasy old spray bottle, half-filled with water and Pinesol, claiming he was selling a new miracle cleaning product. He was more interested in looking into the windows of the houses, and we guess he was casing places to rob during Christmas, as he had neither order forms nor actual product to sell. Kind of a tip-off, and one that gets my neighbors riled enough to point firearms. The good hustlers know all the angles, have a solid cover story & reasonable fake credentials, and dress for the part. And they are successful, as there are plenty of trusting people out there, and hustlers work hard at finding ways to exploit your trust. If you read this blog, you know most of the good hustlers are not walking door to door; they work the Internet, extending their reach, reducing their risk, and raising their payday. All they need are a few programming skills and a little creativity. I was not surprised by the McDonald's phish scam this week, for no other reason than that I expect it this time of year.
The implied legitimacy of a URL coupled with a logo is a powerful way to leverage recognition and trust. Sprinkle in the lure of an easy $75, and you have enough to convince some people to enter their credit card numbers for no good reason. This type of scam is not hard to pull off; this mini How-To discussion on GNUCitizen shows how simple psychological sleight-of-hand, when combined with a surfjacking attack, is an effective method of distracting even educated users from noticing what is going on. If you want to give your non-technical relatives an inexpensive gift this holiday season, help them stay safe online. On a positive note, I finally created a Twitter account this month. Yeah, yeah, keep the Luddite jokes to yourself. I was never really interested in talking about what I am doing at any given moment, but I confess I am actually enjoying it, both for meeting people and as an outlet to share some of the bizarre %!$@ I see in any given week. Here is the week's security summary:

Webcasts, Podcasts, Outside Writing, and Conferences:

  • On the Network Security Podcast this week, with Martin in absentia, Rich and Chris Hoff discuss CheckFree, Microsoft, and EMC, plus a few other topics of interest. Chris makes some great points about outbound proxies and security about halfway through; it would be great to have bookmarks into these podcasts so we can fast forward when he goes off on some subject no one is interested in. Worth a listen!

Favorite Securosis Posts:

  • Rich: Is it too narcissistic to pick my own post? How the Cloud Destroys Everything I Love (About Web Application Security).
  • Adrian: As it encapsulates the program we are working on, and I am happy with the content overall: Part 4: The Web Application Lifecycle.

Favorite Outside Posts:

  • Adrian: And not just because the title was one of my favorite Monty Python skits, this discussion was a very interesting give and take on pen testing on RiskAnalys.is.
  • Rich: A two-parter from me. First, Amrit on Amazon AWS security. Then, Hoff on virtualized network security in the cloud.

Top News and Posts:

  • A 50 BILLION dollar Ponzi scheme? How does this go unnoticed?
  • The automaker bailout dies in the Senate.
  • Hack A Day provided nice coverage of the WordPress update.
  • Koobface worm targets MySpace and other social networking sites. This is the future of malware, folks.
  • An Internet Explorer 7 0day on Windows XP is being exploited in the wild.
  • Anton has a must-read short post on HIPAA.
  • HP and Symantec lose unencrypted laptops. Both companies are in the process of deploying encryption, but too late for these incidents.

Blog Comment of the Week:

Skott, on our Building a Web Application Security Program series (too long for the entire comment, here's the best bit): "Tools and plain old testing are going to run into the same void without risk analysis (showing what's valuable) and policy (defining what needs to be done for everything that's valuable). Without them, you're just locking the front door and leaving the windows, and oh, by the way, you probably forgot to put on the roof."


Totally Transparent Research And Sponsorship

Things seem a little strange over here at Securosis HQ- we're getting a ton of feedback on an old post from November of 2006, but so far only one person has left us any real comments on our Building a Web Application Security Program series. Just to make it clear, once we are done with the series we will be pulling the posts together, updating them to incorporate feedback, and publishing it as a whitepaper. We already have some sponsorship lined up, with slots open for up to two more. This is a research process we like to call "Totally Transparent Research". One of the criticisms against many analysts is that the research is opaque and potentially unduly influenced by vendors. The concern of vendor influence is especially high when the research carries a vendor logo on it somewhere. It's an absolutely reasonable and legitimate worry, especially when the research comes from a small shop like ours. To counter this, we decided from the start to put all our research out there in the open. Not just the final product, but the process of writing it in the first place. With few exceptions, all of our whitepaper research, sponsored or otherwise, is put out as a series of blog posts as we write it. At each stage we leave the comments wide open for public peer review- and we never delete or filter comments unless they are both off topic and objectionable (not counting spam). Vendors, competitors, users, or anyone else can call us on our BS or compliment our genius. This is all of our pre-edited content that eventually comes together for the papers. We also require that even sponsored papers always be freely available here on the site. Sponsors may get to request a topic, but they don't get to influence the content (we do provide them with a rough outline so they know what to expect). We write the contracts so that if they don't like the content in the end, they can walk without penalties and we'll publish the work anyway.
We do take the occasional suggestion from a sponsor when they catch something we miss, and it's still objective (hey, it happens). While we realize this won't fully assuage the concerns of everyone out there, we really hope that by following a highly transparent process we can provide free research that's as objective as possible. We also find that public peer review is invaluable and produces less insular results than us just reviewing internally. Yes, we take end user and vendor calls like every other analyst, but we also prefer to engage in a direct dialog with our readers, friends, and others. We also like Open Source, kittens, and puppies. Not that we'll be giving everything away for free- we have some stuff in development we'll be charging for (that won't be sponsored). But either we get sponsors, or we have to charge for everything. It's not ideal, but that's how the world works. Adrian has something like 12 dogs and I'm about to have a kid on top of 3 cats, and that food has to come from someplace. So go ahead and correct us, insult us, or tell us a better way. We can handle it, and we won't hide it. And if you want to sponsor a web application security paper…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.