Securosis Announces Increase In Cybercrime

October 12, 2007, Phoenix, AZ- Securosis, L.L.C., the world's leading provider of security consulting services, announces that cybercrime has reached record levels since the dawn of history.

"Cybercrime continues to increase at a staggering rate," says Rich Mogull, Founder, CEO, Jedi, and part-time neurosurgeon. "Losses are higher this year than at any time in history. We highly advise companies to immediately engage with us at non-discounted rates to assure they are protecting their children and stopping terrorism."

About Securosis

Securosis, L.L.C. is the world's leading provider of IT security consulting services and impractical security dribble. Securosis' customers include all of the Fortune 1000, most major governments, and a few minor religious institutions. Securosis helps customers achieve compliance with all international laws and defend themselves from all known zero-day attacks while leveraging synergies through thought leadership. We're really smart- give us money or we'll scare your grandma.


Symantec to Acquire Vontu (According To InfoWorld)

Remember this post? If InfoWorld is accurate, Symantec will announce next week that they are acquiring Vontu. This would be consistent with the industry rumors that inspired my earlier post. I have no inside knowledge of this deal. The article states:

Security software giant Symantec is preparing to announce an acquisition of Vontu, one of the largest remaining independent providers of data leakage prevention software, which is used to control the flow of sensitive information across corporate networks. Multiple industry sources have confirmed to InfoWorld that Symantec will soon announce a buyout of Vontu, perhaps as early as next week, which will significantly further the trend of consolidation that has played out in the red-hot DLP (data leakage prevention) space over the last year. … Sources said that the proposed deal will have Symantec paying $300-$350 million for privately-held Vontu, whose revenues are estimated at roughly $30 million per year by some industry analysts. Symantec and Vontu representatives declined to comment on the reported acquisition.

This is a far more significant deal than McAfee's acquisition of Onigma. Between Symantec, Websense, and EMC/RSA, I think McAfee is now in the weakest position for DLP among the larger vendors. Since it's late on a Friday, and the deal isn't confirmed yet, I'll save full analysis for next week. I think this is positive for Vontu, and I hope Symantec keeps them as independent as possible internally, similar to the Brightmail acquisition and in opposition to most of their buys. It's also positive for the remaining independent DLP players, especially Reconnex and Vericept. More next week…


Understanding And Selecting A Database Activity Monitoring Solution: Part 1, Introduction

Database Activity Monitoring may not carry the same burden of hype as Data Loss Prevention, but it is one of the most significant data and application security tools on the market. With an estimated market size of $40M last year, and predictions of $60M to $80M this year, it rivals DLP in spending. Database Activity Monitoring also carries the best DAM acronym in the industry. Sorry, couldn't help myself.

DAM is an adolescent technology with significant security and compliance benefits. The market is currently dominated by startups, but we've seen large vendors starting to enter the space, although their products are not currently as competitive as those from the smaller vendors. Database Activity Monitoring tools are also sometimes called Database Auditing and Compliance, or various versions of Database Security.

There's a reason I've picked DAM as the second technology in my Understanding and Selecting series. I believe that DLP and DAM form the lynchpins of two major evolving data security stacks. DLP, as it migrates to CMF and CMP, will be the center of the content security stack, focused on classifying and protecting structured and unstructured content as it's created and used. It's more focused on protecting data after it's moved outside of databases and major enterprise applications. DAM will combine with application firewalls as the center of the application and database security stack, providing activity monitoring and enforcement within databases and applications. One protects content in a structured application and database stack (DAM), and the other protects data as it moves out of this context onto workstations and storage, into documents, and into communications channels (CMP).

Defining DAM

Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms, and can generate alerts on policy violations. While a number of tools can monitor various levels of database activity, Database Activity Monitors are distinguished by five features:

  • The ability to independently monitor and audit all database activity, including administrator activity and SELECT transactions. Tools can record all SQL transactions: DML, DDL, DCL (and sometimes TCL) activity.
  • The ability to store this activity securely outside of the database.
  • The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). Tools can work with multiple DBMSs (e.g., Oracle, Microsoft, IBM) and normalize transactions from different DBMSs despite differences in their flavors of SQL.
  • The ability to enforce separation of duties on database administrators. Auditing activity must include monitoring of DBA activity, and solutions should prevent DBA manipulation of, and tampering with, logs and activity records.
  • The ability to generate alerts on policy violations. Tools don't just record activity; they provide real-time monitoring and rule-based alerting. For example, you might create a rule that generates an alert every time a DBA performs a SELECT query on a credit card column that returns more than 5 results.

Other tools provide some level of database monitoring, including Security Information and Event Management (SIEM), log management, and database management, but DAM products are distinguished by their ability to capture and parse all SQL in real time or near real time, and to monitor DBA activity.
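Since the definition above distinguishes DML, DDL, DCL, and TCL activity, here's a deliberately tiny sketch of how a monitor might bucket captured statements by their leading keyword. This is an illustrative simplification, not how any real DAM product works- actual tools fully parse each DBMS's SQL dialect in order to normalize activity.

```python
# Toy classifier for captured SQL statements, mapping the leading keyword
# to the DML/DDL/DCL/TCL buckets described above. Real DAM products parse
# full vendor-specific SQL; this keyword table is a gross simplification.

SQL_CLASSES = {
    "SELECT": "DML", "INSERT": "DML", "UPDATE": "DML", "DELETE": "DML",
    "CREATE": "DDL", "ALTER": "DDL", "DROP": "DDL", "TRUNCATE": "DDL",
    "GRANT": "DCL", "REVOKE": "DCL",
    "COMMIT": "TCL", "ROLLBACK": "TCL", "SAVEPOINT": "TCL",
}

def classify(statement: str) -> str:
    """Return the activity class of a single captured SQL statement."""
    keyword = statement.lstrip().split(None, 1)[0].upper()
    return SQL_CLASSES.get(keyword, "UNKNOWN")

print(classify("select cc_number from payments"))  # DML
print(classify("GRANT dba TO eve"))                # DCL
```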
Depending on the underlying platform, a key benefit of most DAM tools is the ability to perform this auditing without relying on local database logging, which often comes with a large performance cost. All the major tools also offer other features beyond simple monitoring and alerting, ranging from vulnerability assessment to change management.

Market Drivers

DAM tools are extremely flexible and often deployed for what may appear to be totally unrelated reasons. Deployments are typically driven by one of three drivers:

  • Auditing for compliance. One of the biggest boosts to the DAM market has been increasing auditor requirements to record database activity for SOX (Sarbanes-Oxley) compliance. Some enterprises are required to record all database activity for SOX, and DAM tools can do this with less overhead than alternative approaches.
  • As a compensating control for compliance. We are seeing greater use of DAM tools as a compensating control to meet compliance requirements, even though database auditing itself isn't the specified control. The most common example is using DAM as an alternative to encrypting credit card numbers for PCI compliance.
  • As a security control. DAM tools offer significant security benefits and can sometimes even be deployed in a blocking mode. They are particularly helpful in detecting and preventing data breaches for web-facing databases and applications, or in protecting sensitive internal databases through detection of unusual activity. DAM tools are also beginning to expand into other areas of database and application security, as we'll cover in a future post.

Today, SOX compliance is the single biggest market driver, followed by PCI. Despite impressive capabilities, internally-driven security projects are a distant third motivation for DAM deployments.

Use Cases

Since Database Activity Monitoring is so versatile, here are a few examples of how it can be used (a toy policy sketch follows this list):

  • To enforce separation of duties on database administrators for SOX compliance, by monitoring all their activity and generating SOX-specific reports for audits.
  • If an application typically queries a database for credit card numbers, a DAM tool can generate an alert if the application requests more card numbers than a defined threshold (often a threshold of "1"). This can indicate that the application has been compromised via SQL injection or some other attack.
  • To ensure that a service account only accesses a database from a defined source IP, and only runs a narrow range of pre-approved queries. This can alert on compromise of a service account either a) from the system that normally uses it, or b) if the account credentials are stolen and used from another system.
  • For PCI compliance, you can encrypt the database files or the media where they're stored, then use DAM to audit and alert on access to the credit card field. The encryption protects against physical theft, while DAM protects against insider abuse and certain forms of external attack.
  • As a change and configuration management tool, by recording all DDL and configuration changes to the database.
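To make the alerting examples concrete, here's a minimal, hypothetical sketch of rule evaluation over a captured-and-parsed SQL event. The event fields, the column name, the row threshold, and the approved IP are all assumptions chosen to mirror the rules described above, not any vendor's actual policy language.

```python
# Hypothetical rule evaluation over a parsed SQL event. The two rules
# mirror the use cases in the text: a DBA pulling more than 5 credit
# card rows, and a service account used from an unapproved source IP.

from dataclasses import dataclass, field

@dataclass
class SqlEvent:
    user: str             # database account name
    role: str             # "dba", "service", "app", ...
    statement_type: str   # "SELECT", "UPDATE", ...
    columns: set = field(default_factory=set)
    rows_returned: int = 0
    source_ip: str = ""

APPROVED_SERVICE_IP = "10.0.5.12"  # assumption: the one approved host

def evaluate(event: SqlEvent) -> list[str]:
    alerts = []
    # Rule 1: DBA SELECT on a credit card column returning > 5 rows.
    if (event.role == "dba" and event.statement_type == "SELECT"
            and "credit_card_number" in event.columns
            and event.rows_returned > 5):
        alerts.append(f"DBA {event.user} bulk-read credit card data")
    # Rule 2: service account connecting from an unapproved source IP.
    if event.role == "service" and event.source_ip != APPROVED_SERVICE_IP:
        alerts.append(f"Service account {event.user} used from {event.source_ip}")
    return alerts

# A DBA returning 5,000 card numbers trips the first rule:
print(evaluate(SqlEvent("jsmith", "dba", "SELECT",
                        {"credit_card_number"}, 5000, "10.0.1.9")))
```

In a real product these rules would be expressed through a policy interface and evaluated in-line against the live event stream rather than hand-coded.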


Off Topic: Whoa- This Is Worse For The Record Industry Than Pirating Ever Could Be

As my readers know, I'm not the biggest fan of consumer DRM. I hate being treated like a criminal when I'm not, and I don't believe anyone has the right to control more of my systems than I do. Something about my security being compromised to provide better security for some corporate entity whose products I may or may not purchase just bugs me. A while back I posted about how the Barenaked Ladies distribute their content without DRM. Not for free, but once you buy it you're free to use it as you wish. I like that.

Now, thanks to TechCrunch, we learn that Madonna is leaving the record labels and working with Live Nation to distribute content directly. Nine Inch Nails, Radiohead, and a few others are also jumping the record label ship. Yahoo! Music has stated they won't distribute music with DRM. With MySpace and other social networking sites for promotion, low-cost digital distribution of content either directly to consumers or through online stores, and general frustration and anger with record company pricing, practices, and treatment of artists, it's hard to see how the companies will survive. It won't be an immediate death- years if not decades- but now that some of the biggest names in the business are running to independence, the writing is clearly on the wall. And the record companies can take their damn DRM with them.

Now it's time to get cracking on the MPAA…


On Trust

I was reading a post over at Layer8 and it got me thinking about trust. Shrdlu attended a talk by Larry Ponemon and took away this little tidbit: the trust given to an organization depends not only on how well it protects information, but also on how transparent it is.

A long time ago I spent some time thinking about trust and digital relationships. I broke it down into three components: Intent, Capability, and Communications.

  • Intent: How an organization (or person) intends to act within a relationship. This is their true intent, not necessarily what they communicate as their intent. For example: we collect credit card data solely to perform online transactions, and will protect it from unauthorized disclosure.
  • Capability: Does an organization have the capability to meet its intent? For example, does it collect card numbers and only use them for transactions, but use security which could not stop a targeted attack?
  • Communications: Does an organization effectively and accurately communicate its intentions and capabilities?

If any of these factors fails, so does trust. Let's look at some examples in the security world. Some vendors, I don't even need to bother naming them, make outlandish claims about the security of their products that do not reflect reality. Then, when breaches occur, they spin the facts rather than admitting to an honest mistake. Result? No one trusts those vendors anymore.

I remember our hometown bank as a kid. We'd walk in and it was all marble and stone, with a huge walk-in vault surrounded by guards at the far end. Placing the vault where customers can see it doesn't improve security, but it clearly communicates a capability to protect your money. These days, no one cares. Why? The world changed, and with the FDIC and electronic banking we are far less concerned about a bad guy with a mask stealing our money. Heck, they could steal the entire bank, foundation and all, and we still wouldn't be out a dime.

Breach disclosure is another example of trust. If a company loses my personal information and clearly communicates how it was protected, how it was lost, and a reasonable plan for preventing a recurrence, I am not very likely to leave them. If, on the other hand, they attempt to cover it up, shift blame, or clearly lie about their intent or capability to protect my information, I am far more likely to switch to another provider.

A privacy example? Years ago I cancelled my Amazon account after they changed their privacy policy and started sharing my data. The policy in effect when I signed up stated my information would be kept private; they then summarily changed it without my permission. They clearly either lied about, or changed, their intent, and lost me as a customer. It took me 5 years before I bought from them again.

It's very simple: trust is built on what you intend to do, your ability to do it, and your ability to communicate both.


Network Security Podcast: Episode 80

Once again Martin and I recorded late enough in the day that I could enjoy a fine beer during the taping (Moose Drool this week). I also need to shout out to Paul and Larry at PaulDotCom Security Weekly; based on their advice I picked up a WRTSL54GS for some wireless access point hacking. Too bad I bricked it… by opening the box. Needless to say, that one is on its way back to the online store, and a new one is headed to me. I've been working on this pet project of mine for a year and really hope this is the right box to get the job done. Also, congrats to Martin on re-entering the world of the gainfully employed. He starts with Trustwave on Tuesday.

Show Notes:

  • Microsoft AutoRuns
  • PGP flaw not really a flaw at all:
    • Securosis: Slashdot bias and much ado about nothing PGP encryption issue
    • Slashdot: Undocumented bypass in PGP whole disk encryption
    • Securology: PGP whole disk encryption- barely acknowledged intentional bypass
  • Retailers vs. PCI:
    • Securosis: Retailers b*tch slap PCI Security Standards Council
    • TechTarget: National Retail Federation takes aim at PCI DSS Council
    • SC Magazine: Retail lobby offers alternative to PCI standards
    • Network Security Blog: Merchants mad about credit card retention
  • iPhone jailbreak (missed the link on this one)
  • Suit against Apple for bricking iPhones
  • Six Ticks to Midnight: one plausible journey from here to a total surveillance society
  • Tech Liberation Front: OnStar to stop support
  • RSA Speaking on Security interviews Shon Harris, and I get a mention too
  • CIO.com: Hacker Economics 1: Malware as a service

Tonight's Music: The Moon is Full by Albert Collins, Johnny Copeland, and Robert Cray

Network Security Podcast, Episode 80, October 9, 2007
Time: 46:51


Everything You Need To Know About Security And Risk Is In This Post (Humor)

Meerkat Manor, via the Guerilla CISO. Here's an excerpt:

09 October 2007: Dear diary, I drew sentry duty for the third day this week. I know it's my solemn duty to protect the clan, but my risk assessment has determined that, although a predator is a high-impact event, it is a low rate-of-occurrence activity, and so I think a better use of my time is in foraging for stray eggs. Besides, if the predators come and eat us all, it's not like I'll have to face the Meerkat Manor Board of Directors.

10 October 2007: Dear diary, I grow tired of the incessant looking for predators. I mean, why do us meerkats focus exclusively on detective controls, which use up to 15% of our available manpower, when we could just as easily reduce the sentries to 5% of our efforts and put in place corrective controls such as trap holes and punji sticks to reduce the threats to our home? The true cost savings is that the effort for corrective controls is a one-time installation, where sentry duty is a recurring bill. Didn't the alpha-pair learn anything in their Masters in Meerkat Administration classes?

11 October 2007: Dear diary, today I instituted a metrics program to gauge the effectiveness of our sentry program and to determine if we are getting the best level of risk for the time that we are investing. So far, I've made a bar chart to analyze the total number of predator alerts versus the total number of predator intrusions. I think I have a business case to slowly reduce the ratio of sentries to foragers during the day.


The Five Problems With Data Classification, And Introduction To Practical Data Classification

Data classification is one of the most essential tools of data security. It enables us to translate business priorities into technical and physical controls over the management and protection of data. Applying data security controls without data classification is like trying to protect a pile of cash in an open field filled with piles of leaves by air-dropping concrete barricades from 10,000 feet. At night.

It's also hard. Really hard. So hard that, outside of a few companies in a few industries (mostly financial services, energy production, military/intelligence, and some manufacturing), I'm not sure I've ever seen someone with a useful and effective classification program. I've talked with hundreds, possibly thousands, of organizations struggling with data classification. Some give up, others blow wads of cash on consultants that don't really give them what they want, and others have a well-documented, detailed program that everyone ignores.

Data classification is so hard because it is both non-intuitive and instinctive. Instinctive in that we all innately classify everything we see: from people, to movies, to enterprise data, we humans are judgmental classification machines. We classify as good vs. bad, threat vs. non-threat, important vs. irrelevant. Non-intuitive because in an organization we're asked to classify not based on our instincts, but based on policies designed by someone else.

Thus the first problem with data classification isn't that we can't classify; it's that we always classify. We just classify based on our instincts, not a piece of paper on a shelf. When they differ, our instincts win.

The second problem with data classification is that we overlay it onto business processes, rather than building it in. Classification becomes a task outside of the processes we engage in to complete our jobs; it's an "add-on" that slows us down, and is simple to ignore.

The third problem with data classification is that we fail to provide employees with the tools to get the job done. It's not only manual and non-intuitive, but we don't provide the technical tools needed to even make it meaningful. Quarterly assessments in a spreadsheet aren't very useful.

The fourth problem with data classification is that it's static. We tend to classify data at the time of creation or based on where it's stored, but that classification is never revised based on changing use and business context. Data's sensitivity varies greatly over its lifecycle and based on how it's being used; few data classification systems account for this.

The fifth, and final, problem with data classification is that it's usually too complicated. The classification scheme and process itself is even less intuitive than asking someone to classify against their instincts. We use terms like "sensitive but unclassified" that have little meaning outside the world of the military and government.

But that doesn't mean all hope is lost. As I mentioned before, there are places where data classification works well, mostly because they've adapted it for their specific environment. The military does a good job of overcoming these obstacles: data classification is built into the culture, which redefines native instincts to include enterprise priorities. It's baked into the process of handling information and essential to business (yes, the military is a business) processes. Technology systems are specifically designed and chosen for their suitability to handle classified data. No, it's not perfect, but it does work.
That doesn't mean military classification works in private enterprise. It doesn't. It fails. Badly. Which is unfortunate, because that's how all the books tell you to do it. Over the next two posts I'll suggest something I call Practical Data Classification. It's designed to provide organizations with an effective model that integrates with existing enterprise practices and culture, while still providing value. It's not for you military or financial types that already do this well; consider it data classification for the rest of us.


Product Happenings: Guardium, SafeBoot, Palo Alto, and Vontu

Despite my departure from the analyst world, thanks to the blog some of the vendors out there are still keeping me updated on their products. I also still have to track big swaths of the market to support my consulting work. While I don't intend for this blog to just spew PR dribble, I do see some cool stuff every now and then that's worth mentioning.

Disclaimer: I do not currently have a business relationship with any of the vendors/products in today's post, but based on the nature of my business I do work with vendors and often have discussions about potential projects. I will disclose these relationships when I can, and while I strive to remain objective no matter who I work with, you should never go buy something just because I said it was cool. Do the research, get balanced opinions, trust no one. I'm not endorsing these products over their competitors, just highlighting some interesting advances, and you'll probably see competing products pop up in other posts over time. Here are a few things that have caught my eye:

First up is SafeBoot, just acquired by McAfee. Overall I think the acquisition is positive, but there's really no reason to consolidate whole drive encryption with endpoint DLP. File-level encryption linked to DLP is more interesting, but also very challenging, and I suspect it's at least a couple of years out for McAfee other than some basic content like Social Security Numbers. It's wait and see on this one, but SafeBoot stands up on its own.

Next is Guardium, who just updated their product for the mainframe. Guardium briefed me last Friday on this and I meant to get something up earlier. This is a really smart move, especially since they partnered with NEON, who sells to the mainframe buying center. They can now offer full database monitoring (including SELECT queries) on the mainframe without relying on network sniffing (which misses certain kinds of connections). Why do you care? Now you have an independent way to enforce separation of duties on mainframe administrators without interfering with how they work or affecting performance. And you can integrate the policies for alerts, and the logs, with all your other database monitoring. I think I was more excited about this one than the guys giving me the briefing- it's one of those "small but big" markets.

An industry contact I work with pointed me towards Palo Alto Networks, and I had a brief conversation with them about a month ago. Basically, they parse and secure network traffic based on the application, not just port and protocol. This is a big problem for things like DLP solutions that don't really like it (or work as well) when they have to figure out which application is tunneling over port 80 this week. I think these guys have a lot of partnership opportunities down the road.

Last up today is Vontu, who just released version 8. The news here is increased endpoint capabilities to start blocking, and integration with document management systems. This release isn't notable for any new world-changing feature, but because most of the work was on the back end, increasing the capabilities of the product line. DLP is settling down a bit and focusing on maturing, rather than land-grabbing with hyped-up features. I've had some other DLP briefings lately and I'm seeing this focus on maturing the platforms across the board; moving from start-ups to mature products is some seriously hard work.
Blocking activity on the endpoint is a big deal, and it's nice to see Vontu add it (a few competitors also have their own flavor of it, so it's not unique).

That's it for now. I probably won't do these more than once a month or so, and I'll only include updates that seem interesting to me, either because they are innovative or because they show an industry trend. I'm happy to take briefings from just about anyone, but that by no means guarantees a mention on the blog. Now back to the absolutely thrilling world of data classification…


Practical Data Classification: Type 1, The Hasty Classification

In over thirteen years with mountain rescue and five years as a ski patroller, I participated in countless search and avalanche drills, and a fair number of real incidents. Search in the real world, as in the computing world, is difficult due to the need to balance performance with thoroughness. In a rescue situation you need to find the victim as quickly as possible; a thorough search has a higher Probability of Detection (POD), but takes longer. Assuming you're looking for a live victim, this time can mean the difference between a rescue and a recovery.

Since detailed searches also take time to gather resources (searchers), most searches/rescues start with what's called a hasty. A hasty search is light and fast- you send out a smaller, faster team to scour the area for obvious clues. The probability of detection is low, but you don't need a 50-person team with full gear to find a half-buried skier in an obvious tree well in the middle of a deposition zone (where all the snow ends up after an avalanche). I've been on a bunch of hasty teams in real-world searches (no avalanches) and would guess that we found the victim before the big search was launched somewhere around 20-30% of the time. A hasty is effective because it's designed to maximize speed while finding anything obvious in critical situations.

We can adapt the principle of the hasty for data classification. Many classification programs fail because they attempt to solve the entire problem while taking too long to protect the critical assets. In a hasty classification program you focus on a single critical data type and roll out classification enterprise-wide. Rather than overwhelming users with a massive program, focus on one kind of data that's clearly critical in a very focused program to protect it. It's a baby step to protect a critical asset while slowly changing user habits.

Data Classification Type 1: Hasty Classification

The short version:

  1. Pick one critical type of data. I suggest credit card numbers, Social Security Numbers, or something similar.
  2. Have business units tell you where they use it and store it.
  3. Issue security policies for how that data needs to be secured.
  4. Work with units to secure the systems. Security helps the business units secure the data, while audit plays the enforcement role. This makes security the good guys.
  5. Keep it updated with ongoing audits and regular "compliance" reporting of where and how the data is used and stored.

Same process, with more details:

  1. Design your basic classifications. I suggest no more than 3-4, and use plain English. For example: "Sensitive/Internal/Public". If you deal with personally identifiable information (PII), that can be a separate classification; call it PII, NPI, HIPAA, or whatever term your industry uses.
  2. Pick one type of critical data that is easy to recognize. I highly recommend PII- credit card numbers, Social Security Numbers, or something similar (a toy recognition sketch appears at the end of this post).
  3. Get executive approval/support- this has to come from as high as possible. If you can't get it, and you care about security, update your resume. Beating your head against a wall is painful and only annoys the wall and anyone within earshot.
  4. Issue a memo requiring everyone to identify any business process or IT system that contains this data within 30/60/90 days. Collect the results.
  5. While collecting the results, finalize security standards for how this data is to be used, stored, and secured.
     This includes who is allowed to access it (based on business unit/role), approved business processes (billing only, or billing/CRM, etc.), approved applications/systems (be specific), where it can be stored (specific systems and paper repositories), and any security requirements. Security requirements should be templates and standards with specific, approved configurations: which software, which patch level, which configuration settings, how systems communicate, and so on. If you can't do this yourself, just point to open standards like those at cisecurity.org.
  6. Issue the security standards. Require business units to bring systems into compliance within a specific time frame, or get an approved exception.
  7. IT Security works with business units to bring systems/processes into compliance. They work with the business and do not play an enforcement role. If exceptions are requested, they must figure out how to secure the data for that business need, and the business will be required to adopt the needed alternative security controls for that business process.
  8. After the time period to bring systems into compliance expires, the audit group begins random audits of business units to ensure reporting accuracy and that systems are in compliance with corporate standards.
  9. Business units periodically report (on a rolling schedule) any changes in the use or storage of the now-classified data.
  10. Security continuously evaluates the security standards, issues changes where needed, and helps business units keep the data secure. Audit plays the enforcement role of looking for exceptions.

I know some of you are sitting there going, "This is the easy way? I'd hate to see the hard way!" The hasty classification is really an entire data classification program, but focused on one single kind of easily identified data. When you think about it, you're just picking that critical data, figuring out where it is, helping secure it, and using audit to make sure you're doing what you think you're doing. When I discuss this with people I prefer to lay out all the steps in detail, but most of you will adapt it to suit your own environment. The key is to keep it simple, pick one data type to start, and separate those securing the data from those verifying that the data is secure.

In our next post on this topic we'll talk about how to grow this into a complete program. I'm even working on pretty pictures!
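Since step 2 leans on picking a data type that's easy to recognize, here's a minimal, purely hypothetical sketch of what a "hasty scan" for credit card numbers could look like: a regex finds candidate digit strings, and the standard Luhn checksum weeds out most random numbers. The pattern and sample text are illustrative assumptions; real discovery tools are considerably smarter.

```python
# Hypothetical "hasty scan" for one easy-to-recognize data type:
# credit card numbers. A regex finds candidates; the Luhn checksum
# (used by real payment cards) drops most false positives.

import re

# 13-16 digits, optionally separated by spaces or hyphens (an
# assumption; real tools handle many more formats).
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum over a string of digits."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text: str) -> list[str]:
    """Return candidate card numbers in text that pass the Luhn check."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(match.group())
    return hits

# "4111 1111 1111 1111" is a well-known Luhn-valid test number;
# the 8-digit order ID is ignored.
print(scan("invoice: 4111 1111 1111 1111, order id 1234-5678"))
```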

