Securosis

Research

Network Security Podcast, Episode 101 Up

Ah, RSA. Not much more to say, but we managed to squeeze out a good 30 minutes of recap and conclusions. We spent most of our time on a few issues, especially some of the lessons from our Security Groundhog Day panel, and tried to avoid too many frat-boyish, “I was so drunk at that party dude!”-isms. Overall the conference was pretty much the same as always. The show floor was subdued, with lower traffic, and it started to feel like the industry is maturing a bit. Still, there were far too many biometric and secure USB device vendors floating around. We close off with a discussion of which security shows make the most sense for you, depending on where you are in your career. Unless you’re there for business development and networking/socialization, RSA probably isn’t the show for you. You can download it at NetSecPodcast.com.


Best Practices For DLP Content Discovery: Part 2

Someone call the Guinness records people- I’m actually posting the next part of this series when I said I would! Okay, maybe there’s a deadline or something, but still… In Part 1 we discussed the value of DLP content discovery, defined it a little bit, and listed a few use cases to demonstrate its value. Today we’re going to delve into the technology and a few major features you should look for.

First, I want to follow up on something from the last post. I reached out to one of the DLP vendors I work with, and they said they are seeing around 60% of their clients purchase discovery in their initial DLP deployment. Anecdotal conversations with other vendors and clients support this assertion. We don’t know exactly how soon they roll it out, but my experience supports the position that somewhere over 50% of clients roll out some form of discovery within the first 12-18 months of their DLP deployment. Now on to the…

Technology

Let’s start with the definition of content discovery. It’s merely the definition of DLP/CMP, excluding the in use and in motion components: “Products that, based on central policies, identify, monitor, and protect data at rest through deep content analysis”. As with the rest of DLP, the key distinguishing characteristic (as opposed to other data at rest tools like content classification and e-discovery) is deep content analysis based on central policies. While covering all content analysis techniques is beyond the scope of this post, examples include partial document matching, database fingerprinting (or exact data matching), rules-based, conceptual, statistical, pre-defined categories (like PCI compliance), and combinations of the above. These offer far deeper analysis than simple keyword and regular expression matching. Ideally, DLP content discovery should also offer preventative controls, not just policy alerts on violations. How does this work?

Architecture

At the heart is the central policy server: the same system/device that manages the rest of your DLP deployment. The three key features of the central management server are policy creation, deployment management/administration, and incident handling/workflow. In large deployments you may have multiple central servers, but they all interconnect in a hierarchical deployment. Data at rest is analyzed using one of four techniques/components:

  • Remote scanning: either the central policy server or a dedicated scanning server connects with storage repositories/hosts via network shares or other administrative access, and files are then scanned for content violations. Connections are often made using administrative credentials, and any content transferred between the two should be encrypted, but this may require reconfiguration of the storage repository and isn’t always possible. Most tools allow bandwidth throttling to limit network impact, and placing scanning servers closer to the storage also increases speed and limits impact. Remote scanning supports nearly any storage repository, but even with optimization, performance is limited by its reliance on the network.
  • Server agent: a thin agent is installed on the server and scans content locally. Agents can be tuned to limit performance impact, and results are sent securely to the central management server. While scanning performance is higher than with remote scanning, it requires platform support and local software installation.
  • Endpoint agent: while you can scan endpoints/workstations remotely using administrative file shares, this will rapidly eat up network bandwidth. DLP solutions increasingly include endpoint agents with local discovery capabilities. These agents normally include other DLP functions, such as USB monitoring/blocking.
  • Application integration: direct integration, often using an agent, with document management, content management, or other storage-oriented applications. This integration not only provides visibility into managed content, but allows the discovery tool to understand local context and possibly enforce actions within the system.

A good content discovery tool will understand file context, not just content. For example, the tool can analyze access controls on files and, using its directory integration, understand which users and groups have what access. Thus the accounting department can access corporate financials, but any files with that content allowing all-user access are identified for remediation. Engineering teams can see engineering plans, but access controls are automatically updated to restrict the accounting team if engineering content shows up in the wrong repository. From an architectural perspective you’ll want to look for solutions that support multiple options, with performance that meets your requirements (a rough sketch of the basic scanning model appears at the end of this post).

That’s it for today. Tomorrow we’ll review enforcement options (which we’ve hinted at), management, workflow, and reporting. I’m not going to repeat everything from the big DLP whitepaper, but will concentrate on the aspects important to protecting data at rest.
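As a postscript, here is a deliberately simplified sketch of the remote scanning model described above. It is not based on any vendor’s implementation: the mounted share path, the single regular-expression policy, and the world-readable check are all hypothetical stand-ins for the much deeper content analysis and directory integration a real DLP discovery engine performs.

```python
# Hypothetical sketch of remote content discovery over a mounted share.
# Real DLP tools use deep content analysis and directory integration;
# this only demonstrates the basic scan/analyze/flag loop.
import os
import re
import stat

# Naive example policy: unencrypted 16-digit card-like numbers.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_share(mount_point):
    """Walk a storage repository mounted at mount_point and yield violations."""
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    content = f.read(1_000_000)   # cap read size to limit impact
            except OSError:
                continue
            hits = CARD_PATTERN.findall(content)
            if hits:
                # Context check: is the file readable by everyone?
                world_readable = bool(os.stat(path).st_mode & stat.S_IROTH)
                yield {
                    "file": path,
                    "matches": len(hits),
                    "world_readable": world_readable,
                }

if __name__ == "__main__":
    # "/mnt/finance_share" is a made-up mount point for illustration only.
    for violation in scan_share("/mnt/finance_share"):
        print(violation)
```

A production tool would obviously add bandwidth throttling, encrypted transport back to the central policy server, and the contextual access-control analysis discussed above; this is only meant to make the scan-analyze-flag loop concrete.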


Debix Contest Ending This Week

I really owe you readers (and Debix) an apology. My shoulder knocked me back more than expected, and I let the contest to win a year’s subscription to Debix for identity theft prevention linger. We’re going to close it out on Friday, and David Mortman and I will be announcing the (anonymous) winners. So head over to this thread and add your story before Friday…


Best Practices For Reducing Risks With DLP Content Discovery: Part 1

Boy, RSA was sure a blur this year. No, not because of the alcohol, and not because the event was any more hectic than usual. My schedule, on the other hand, was more packed than ever. I barely walked the show floor and was only able to wave in passing to people I fully intended to sit down with over a beer or coffee for deep philosophical conversations.

Since pretty much everyone in the world knows I spend most of my time on information-centric security, for which DLP is a core tool, it’s no surprise I took a ton of questions on it over the week. Many of these questions were inspired by analysis, including my own, showing that leaks over email and the web really aren’t a big source of losses. People use that to try to devalue DLP, forgetting that network monitoring/prevention is just one piece of the pie- a small piece, in the overall scheme of things.

Let’s review our definition of DLP: “Products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use through deep content analysis”. Content discovery, the ability to scan and monitor data at rest, is one of the most important features of any DLP solution, and one with significant ability to reduce enterprise risk. While network DLP tells you how users are communicating sensitive information, content discovery tells you where sensitive information is stored within the enterprise, and often how it’s used. Content discovery is likely more effective at reducing enterprise risk than network monitoring, and is one reason I tend to recommend full-suite DLP solutions over single-channel options.

Why? Consider the value of knowing nearly every location where you store sensitive information, based on deep content analysis, and who has access to that data. Of being able to continuously monitor your environment and receive notification when sensitive content is moved to unapproved locations, or even when its access rights are changed. Of, in some cases, being able to proactively protect the content by quarantining, encrypting, or moving it when policy violations occur. Content discovery, by providing deep insight into the storage and use of your sensitive information, is a powerful risk reduction tool- one that often also reduces audit costs.

Before we jump into a technology description, let’s highlight a few simple use cases that demonstrate this risk reduction:

  • Company A creates a policy to scan their storage infrastructure for unencrypted credit card numbers (a rough sketch of this kind of check appears at the end of this post). They provide this report to their PCI auditor to reduce audit costs and prove they are not storing cardholder information against policy.
  • Company B is developing a new product. They create a policy to generate an alert if engineering plans appear anywhere except on protected servers.
  • Company C, a software development company, uses their discovery tool to ensure that source code only resides in their versioning/management repository. They scan developer systems to keep source code from being stored outside the approved development environment.
  • Company D, an insurance company, scans employee laptops to ensure employees don’t store medical records locally to work at home, and only access them through the company’s secure web portal.

In each case we’re not talking about preventing a malicious attack, although we are making it a bit harder for an attacker to find anything of value; we’re focused on reducing risk by reducing our exposure and gaining information on the use of content.
Sometimes it’s for compliance, sometimes it’s to protect corporate intellectual property, and at other times it’s simply to monitor internal compliance with corporate policies. In discussions with clients, content discovery is moving from a secondary priority to the main driver in many DLP deals (I hope to get a number out there in the next post). As with most of our security tools, content discovery isn’t perfect. Monitoring isn’t always in real time, and it’s possible we could miss some storage locations, but even without perfection we can materially reduce enterprise risk. Over the next few days we’ll talk a little more about the technology, then focus on best practices for deployment and ongoing management.
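To make the Company A example a little more concrete, below is a minimal, hypothetical illustration of the kind of check such a policy might perform: a loose pattern match for card-like numbers combined with a Luhn checksum to weed out false positives. Real DLP discovery policies use far more sophisticated analysis and scale to enormous repositories, so treat this strictly as a sketch.

```python
# Hypothetical policy check for the "Company A" use case above:
# flag strings that look like card numbers AND pass the Luhn checksum,
# which cuts down on false positives from arbitrary digit runs.
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str):
    """Yield substrings that both look like PANs and pass Luhn validation."""
    for match in CANDIDATE.finditer(text):
        raw = re.sub(r"[ -]", "", match.group())
        if 13 <= len(raw) <= 19 and luhn_valid(raw):
            yield raw

# A well-known test card number is reported; a random 16-digit number is not.
assert list(find_card_numbers("card: 4111 1111 1111 1111")) == ["4111111111111111"]
assert list(find_card_numbers("order id: 1234 5678 9012 3456")) == []
```

Even this trivial combination shows why deeper content analysis beats simple keyword matching: the random 16-digit number in the second test never fires, while the test card number does.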


Whitepaper: Understanding and Selecting a Database Activity Monitoring Solution

Today, in cooperation with SANS, Securosis is releasing Understanding and Selecting a Database Activity Monitoring Solution. This is a compilation of my multipart series on DAM, fully edited with expanded content. The paper is sponsored by Guardium, Imperva, Secerno, Sentrigo, and Tizor, but all content was developed independently by me and reviewed by SANS. It is available here, and will soon be available in the SANS Reading Room or directly from the vendors. It was a fair bit of work and I hope you like it. The content is copyrighted under a Creative Commons license, so feel free to share it and even cut out any helpful bits and pieces, as long as you attribute the source. As always, questions, comments, and complaints are welcome… and there isn’t a DAM joke in the entire thing; I save those for the blog.


And this year’s theme at RSA is…

Nothing. Nada. Zip. While we’ve seen themes emerge most years at RSA- such as DLP, PKI, and compliance- there really doesn’t seem to be a dominant one this year. Sure, we see data security and PCI on every booth, but I don’t see any single technology or theme consistently highlighted. This could indicate maturation, or simply that market demands are so all over the place that vendors are using either shotguns or lasers to target buyers. Good week so far, and don’t forget to check out http://netsecpodcast.com for our micro interviews. I’m sending this from the iPhone, and it’s time to give my thumbs a break.


An Inconvenient Lack Of Truth

On Tuesday morning I’ll be giving a breakfast session at RSA, sponsored by Vericept, entitled Understanding and Preventing Data Breaches. This is the latest update to my keynote presentation, where I dig into all things data breaches to make a best effort at determining what’s really going on out there. Since the system itself is essentially designed to hide the truth and shift risk around like a token ring network, digging to the heart of the matter is no easy task. On Friday Dark Reading published my latest column, which is a companion piece to the presentation. It’s a summary of some of the conclusions I’ve come to based on this research. Like much of what I write, I consider most of this obvious, but it’s not the kind of thing we typically discuss. It’s far easier to count breaches and buy point solutions than to really discuss and solve the root causes. Here are a couple of excerpts, but you should really read the full article:

…When I began my career in information security, I never imagined we would end up in a world where we have as much need for historians and investigative journalists as we do technical professionals. It’s a world where the good guys refuse to share either their successes or failures unless compelled by law. It’s a world where we have plenty of information on tools and technologies, but no context in which to make informed risk decisions on how to use them. Call me idealistic, but there is clearly something wrong with a world where CISOs are regularly prevented by their legal departments from presenting their successful security programs at conferences. …

1. Blame the system, not the victims, for identity fraud. …
2. Blame the credit card companies, not the retailers, for credit card fraud. …
3. Consumers suffer from identity fraud, retailers from credit card fraud. …
4. We need fraud disclosure, not breach disclosure. …
5. We need public root cause analysis. …
6. Breach disclosures teach us the wrong lessons. …

Based on the ongoing research I’ve seen, it’s clear that the system is broken in multiple ways. It’s not our failure as security professionals — it’s the failure of the systems we are dedicated to protecting. While my presentation focuses on using what little information we have to make specific tactical recommendations, the truth is we’ll just be spinning our wheels until we start sharing the right information — our successes and failures — and work on fixing the system, not just patching the holes at the fringes.


Predictions and Coverage for RSA 2008

This morning Dr. Rothman was kind enough to set me up for my last pre-RSA blog post with his Top 3 RSA Themes. It seems that every year there’s some big theme among the show floor vendors. I also can’t make it through a call, especially with VCs, without someone asking, “What’s exciting?” The truth is I agree with Mike that the days of hot have long cooled. We’re very much an industry now, and if I see something creative it’s often so engineering-driven as to be doomed to failure (sorry guys, CLIs don’t cut it anymore). Since Mike was kind enough to post his themes, I’ll be kind enough to post my opinions of them, along with my own predictions. This is pretty negative until the end, mostly because we’re talking macro trends, not the individual innovation and maturation that really advance the industry. (Warning: I use really bad words and uglier metaphors; if you don’t like being offended, skip this one. It’s a Friday, and this isn’t my most professional post.)

Virtualization Security

This is the one theme I can’t argue with. We’ll see a TON of marketing around virtualization, and nearly no products that actually provide any security. Virtualization is hot even if security isn’t, and what we’ll see is a marketing land grab as everyone sprays marketing piss everywhere to cock block the competition.

GRC

I really hope Mike is wrong that GRC will be a big theme. If he’s right, I’ll be spewing vomit all over the show floor before I even start bingeing. GRC is nothing more than a pathetic attempt by technology vendors to ass-kiss their way into an elevator pitch to executives who don’t give a rat’s ass about technology. GRC tools are little more than pretty dashboards that don’t actually help anyone get their jobs done on a day to day basis. Every CEO/CFO loves them when they see them, but there is no one in the organization with operational responsibility to use them day to day. Thus there is practically no market, and what few companies buy these things don’t end up using them except for quarterly reports. On top of that, the vendors charge way too much for this crap. On the other end, we have useful security management and reporting tools that get branded GRC. This isn’t lipstick on a pig; it’s smearing crap on a supermodel. Some people are into it, but they are seriously whacked in the head. These tools still have value, but you might have to dig past the marketing BS to get there. The more “GRC” they pile on, the harder it will be to find the useful bits and get your job done. Here’s a hint, folks: people have jobs; give them tools that directly help them get those jobs done, operationally, on a day to day basis. If the tool craps pretty reports for the auditors, so much the better.

Security in the cloud

I’m going to split this one a bit. On one side is true in-the-cloud security: ISPs and other providers filtering before things hit you. It’s very useful, but I don’t think we’ll see it as a big trend. The next big trend is services in general, but I don’t consider those in the cloud. Services are a great way to gouge clients (as a consultant I should know) and more and more vendors want in on the action. Everyone’s tired of IBM having all the client-reaping fun. Security services in general will definitely be a top 5 trend. It’s not all bad- there are a lot of really good services emerging, but it’s a buyer-beware market and you really need to do your research and make sure you have outs if it isn’t working.
And now a few of my own trend predictions…

Data leakage that isn’t DLP

Everyone here knows I’m a fan of DLP; what I’m not a fan of is random garbage calling itself DLP because it prevents “data leaks”. I blame Nick Selby for this one, since he’s been lumping a bunch of things together under Anti Data Leakage. Yes, your firewall stops data leaks if you turn all the ports off, but that isn’t DLP. This year will be the year of abuse for the term DLP, but hopefully we can move the discussion forward to information-centric security, where many of these non-DLP tools will provide value. Once someone else buys them and stuffs them into a suite, that is.

Network performance you don’t need

Remember, vendors are like politicians and lie to us because we want them to. You probably don’t need 10 gigabit network performance, but you’re going to ask for it, and someone is going to tell you you’re getting it. Even when you’re not- but you’ll never notice anyway.

The Laundry List

Stealing from Mike, here are a few other trends we’ll see:

  • Anti-botnets.
  • Anti-malware we thought our AV vendors were already doing.
  • Encryption integrated with other information-centric tools (this one is good).
  • Encryption integrated with random crap on the endpoint that has nothing to do with encryption.
  • All things with 2.0 in the name.

I’m a bit cynical here, but that’s because RSA is more about marketing than anything else. In every one of these categories there are good products, but RSA isn’t the place to be an honest vendor and have your ass handed to you by your competition. There will definitely be some really great stuff, probably some of it new, but the major trends are always about jumping on the bandwagon (that’s why they’re trends). From a coverage standpoint I’ll be doing my best to give you a feel for RSA, minus the hangovers. I don’t get to attend many sessions, including the keynotes, but the news sites do a good job of covering those (besides, they’re nothing more than $100,000 marketing pitches). Martin and I will be interviewing and podcasting from the event and posting everything in short segments up on


Securosis is Now PCI Certified

I was talking with Jeremiah Grossman out at the SOURCE Conference in Boston, lamenting the state of PCI certification. Although ASVs continue to drop their rates and reduce the requirements for compliance by issuing exceptions, it’s still a costly and intrusive process. Sure, pretty much anyone who signs up and completes payment achieves certification, but adoption rates are still low and only a fraction of the retail community, especially the online community, is compliant. That’s why I got excited when I heard about Scanless PCI. They claim to use a patent-pending technique (doesn’t everyone?) to certify merchants with no setup and no technology changes. The best part? It’s free. As in beer. Absolutely free. Free PCI certification? I don’t get the business model, but after evaluating the technology with Jeremiah and Robert Hansen (RSnake) I’m convinced it works. If the top 2 web application security guys sign off on it, I’m all in. According to Jeremiah:

Sounded too good to be true so I investigated their website. To my amazement I left the site completely convinced that their offering is every bit as effective at stopping hackers as other ASVs we’ve discussed here in the past. Their process was so straightforward I figured there was no excuse for my blog not to be PCI Certified as well. Check out the right side column, compliance was zip zap!

I’m sold, and Securosis is now PCI compliant!


Understanding and Selecting a Database Activity Monitoring Solution: Part 6, The Selection Process

At long last, thousands of words and 5 months later, it’s time to close out our series on Database Activity Monitoring. Today we’ll cover the selection process. For review, you can look up our previous entries here: Part 1 Part 2 Part 3 Part 4 Part 5

Define Needs

Before you start looking at any tools, you need to understand why you might need DAM, how you plan on using it, and the business processes around management, policy creation, and incident handling.

  • Create a selection committee: Database Activity Monitoring initiatives tend to involve four major technical stakeholders, and one or two non-technical business units. On the technical side it’s important to engage the database and application administrators for systems that may be within the scope of the project over time, not just the one database and/or application you plan on starting with. Although many DAM projects start with a limited scope, they can quickly grow into enterprise-wide programs. Security and the database team are typically the main project drivers, and the office of the CIO is often involved due to compliance needs or to mediate cross-team issues. On the non-technical side, you should have representatives from audit, as well as compliance and risk (if they exist in your organization). Once you identify the major stakeholders, you’ll want to bring representatives together into a selection committee.
  • Define the systems and platforms to protect: DAM projects are typically driven by a clear audit or security goal tied to particular systems, applications, or databases. In this stage, detail the scope of what will be protected and the technical specifics of the platforms involved. You’ll use this list to determine technical requirements and prioritize features and platform support later in the selection process. Remember that your needs will grow over time, so break the list into a group of high priority systems with immediate needs, and a second group summarizing all major platforms you may need to protect later.
  • Determine protection and compliance requirements: For some systems you might want strict preventative security controls, while for others you may just need comprehensive activity monitoring for a compliance requirement. In this step you map your protection and compliance needs to the platforms and systems from the previous step. This will help you determine everything from technical requirements to process workflow.
  • Outline process workflow and reporting requirements: Database Activity Monitoring workflow tends to vary based on the use case. When used as an internal control for separation of duties, security will monitor and manage events and have an escalation process should database administrators violate policy. When used as an active security control, the workflow may more actively engage security and database administration as partners in managing incidents. In most cases, audit, legal, or compliance will have at least some sort of reporting role. Since different DAM tools have different strengths and weaknesses in terms of management interfaces, reporting, and internal workflow, knowing your process before defining technical requirements can prevent headaches down the road.

By the completion of this phase you should have defined key stakeholders, convened a selection team, prioritized the systems to protect, determined protection requirements, and roughed out workflow needs.

Formalize Requirements

This phase can be performed by a smaller team working under the mandate of the selection committee.
Here, the generic needs determined in phase 1 are translated into specific technical features, while any additional requirements are considered. This is the time to come up with any criteria for directory integration, additional infrastructure integration, data storage, hierarchical deployments, change management integration, and so on. You can always refine these requirements after you proceed to the selection process and get a better feel for how the products work. At the conclusion of this stage you develop a formal RFI (Request For Information) to release to vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.

Evaluate Products

As with any product, it’s sometimes difficult to cut through the marketing materials and figure out whether it really meets your needs. The following steps should minimize your risk and help you feel confident in your final decision:

  • Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading DAM vendors directly. If you’re a smaller organization, start by sending your RFI to a trusted VAR and emailing a few of the DAM vendors which seem appropriate for your organization.
  • Perform a paper evaluation: Before bringing anyone in, match materials from the vendors or other sources to your RFI and draft RFP. Your goal is to build a short list of 3 products which match your needs. You should also use outside research sources and product comparisons.
  • Bring in 3 vendors for an on-site presentation and demonstration: Instead of generic demonstrations, ask the vendors to walk you through specific use cases that match your expected needs. Don’t expect a full response to your draft RFP; these meetings are to help you better understand the different options out there and eventually finalize your requirements.
  • Finalize your RFP and issue it to your short list of vendors: At this point you should completely understand your specific requirements, so issue a formal, final RFP.
  • Assess RFP responses and begin product testing: Review the RFP results and drop anyone who doesn’t meet your minimal requirements (such as platform support), as opposed to “nice to have” features. Then bring in the remaining products for in-house testing. You’ll want to replicate your highest volume system and the corresponding traffic, if at all possible. Build a few basic policies that match your use cases, then violate them, so you can get a feel for policy creation and workflow (see the sketch at the end of this post).
  • Select, negotiate, and buy: Finish testing, take the results to the full selection committee, and begin negotiating with your top choice.

Internal Testing

  • Platform support and installation to determine compatibility with your database/application environment. This is the single most important factor to test, including monitoring
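To illustrate the “build a few basic policies, then violate them” step above, here is a small, entirely hypothetical lab harness. The connection method, table names, and the statements themselves are invented for the example; in practice you would replay traffic appropriate to your own environment and then confirm each statement shows up in the DAM console with the expected alert and workflow.

```python
# Hypothetical in-house test harness for the evaluation step above:
# replay a few queries that SHOULD violate a draft DAM policy
# (e.g., "no ad hoc SELECTs against tables holding cardholder data")
# and then confirm the monitoring console raised an alert for each one.
# Connection details, table names, and the policy itself are made up.
import sqlite3   # stand-in for your real database driver during a lab test

POLICY_VIOLATIONS = [
    "SELECT card_number, expiry FROM customer_cards;",       # bulk read of a sensitive table
    "SELECT * FROM customer_cards WHERE 1=1;",                # classic injection-style pattern
    "UPDATE employees SET salary = salary * 2 WHERE id = 7;"  # unauthorized data change
]

def replay_violations(db_path="lab_copy.db"):
    """Run each test statement against a lab copy of the monitored database."""
    conn = sqlite3.connect(db_path)
    for stmt in POLICY_VIOLATIONS:
        try:
            conn.execute(stmt)
            print(f"executed: {stmt}")
        except sqlite3.Error as exc:
            # Even failed statements should appear in the activity monitor.
            print(f"failed ({exc}): {stmt}")
    conn.close()

if __name__ == "__main__":
    replay_violations()
    # After running, check the DAM console: every statement above should
    # have generated an alert, and the escalation workflow you defined in
    # the earlier phases should have fired.
```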


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.