Google AdWords

This is not a ‘security’ post. Has anyone had a problem with Google AdWords continuing to bill their credit cards after their account is terminated? Within the last two months, four people have complained to me that their credit cards continued to be charged even though they cancelled their accounts. In fact, the charges were slightly higher than normal. In a couple of cases they had to cancel their credit cards in order to get the charges to stop, resulting in letters from “The Google AdWords Team” threatening to pursue the charges with the issuing bank … and, no, I am not talking about the current spam floating around out there, but a legitimate email. All this despite having the email acknowledgement that the AdWords account had been cancelled.

I did a quick web search (without Google) and only found a few old complaints online about this, but in my small circle of friends this is a pretty high number of complaints, considering how few of them use Google for their small businesses. I was wondering if anyone else out there has experienced this issue?

Okay- maybe it is a security post after all…


Upcoming Webcast- DLP and DAM Together

On July 29th I’ll be giving a webcast entitled Using Data Leakage Prevention and Database Activity Monitoring for Data Protection. It’s a mix of my content on DLP, DAM, and Information Centric security, designed to show you how to piece these technologies together. It’s sponsored by Tizor, and you can register here (the content, as always, is my independent stuff). Here’s the description:

When it comes to data security, few things are certain, but there is one thing that very few security experts will dispute. Enterprises need a new way of thinking about data security, because traditional data security methods are just not working. Data Leakage Prevention (DLP) and Database Activity Monitoring (DAM) are two fundamental components of the new security landscape. Predicated on the need to “know” what is actually happening with sensitive data, DLP and DAM address pressing security issues. But despite the value that these two technologies offer, there is a great deal of confusion about what these technologies actually do and how they should be implemented. At this webinar, Rich Mogull, one of today’s most well respected security experts, will clear up the confusion about DLP and DAM. Rich will discuss:

  • The business problems created by a lack of data centric security
  • How these problems relate to today’s threats and technologies
  • What DLP and DAM do and how they fit into the enterprise security environment
  • Best practices for creating a data centric security model for your organization


ADMP and Assessment

Application and Database Monitoring and Protection. ADMP for short. In Rich’s previous post, under “Enter ADMP”, he discussed coordinating security applications to help address security issues. These applications may gather data in different ways, from different segments within the IT infrastructure, and cooperate with other applications based upon the information they have gathered or gleaned from analysis. What is being described is not shoving every service into an appliance for one-stop shopping; that is decidedly not what we are getting at. Conceptually it is far closer to DLP ‘suites’ that offer endpoint and network security with consolidated policy management. Rich has been driving this discussion for some time, but the concept is not yet fully evolved. We are both advocates and see this as a natural evolution of application security products.

Oddly, Rich and I very seldom discuss the details prior to posting, and this topic is no exception. I wanted to discuss a couple items I believe should be included under the ADMP umbrella, namely Assessment and Discovery. Assessment and Discovery can automatically seed monitoring products with what to monitor, and cooperate with their policy set.

Thus far the focus of the majority of our posts has been monitoring and protection- as in active protection- for ADMP. This reflects a primary area of interest for us, as well as what we perceive as the core value for customers. The cooperation between monitored points within the infrastructure, both for collected data and the resulting data analysis, represents a step forward and can increase the effectiveness of each monitoring point. Vendors such as Imperva are taking steps into this type of strategy, specifically for tracking how a user’s web activity maps to the back end infrastructure. I imagine they will come up with more creative uses for this deployment topology in the future.

Here I am driving at the cooperation between preventative (assessment and discovery, in this context) and detective (monitoring) controls. Or more precisely, how monitoring and various types of assessment and discovery can cooperate to make the entire offering more efficient and effective. When I talk about assessment, I am not talking about a network port scan to guess what applications and versions are running, but rather active interrogation and/or inspection of the application. And for discovery, not just the location of servers and applications, but a more thorough investigation of content, configuration, and functions.

Over the last four years I have advocated discovery, assessment, and then monitoring, in that order. Discover what assets I have, assess what my known weaknesses are, and then fix what I can. I would then turn on monitoring for generic threats that concern me, but also tune my monitoring policies to accommodate weaknesses in my configuration. My assumption is that there will always be vulnerabilities, which monitoring will assist with controlling. But with application platforms- particularly databases- most firms are not and cannot be fully compliant with best practices and still offer the business processing functions the database is intended for. Typically, the security weaknesses that remain part of the daily operation of applications and databases come down to some specific setting or module that is just not that secure. I know that there are some who disagree with this; Bruce Schneier has advocated for a long time that “Monitor First” is the correct approach.

My feeling is that IT is a little different, and (adapting his analogy) I may not know where all of the valuables are stored, and I may not know what type of alarm is needed to protect the safe. I can discover a lot from monitoring, and it allows me to witness both behavior and method during an attack and use that to my advantage in the future. But assessment can provide tremendous value in terms of knowing what and how to protect, and it can do so prior to an attack. Most assessment and discovery tools are run periodically; while they are not continuous, nor designed to find threats in real time, they are still not a “set and forget” part of security- they are best run on a recurring schedule to account for the fluid nature of IT systems.

I would add assessment of web applications, databases, and traditional enterprise applications into this equation. Some of the web application assessment vendors have announced the ability to cooperate with WAF solutions, as WhiteHat Security has done with F5. Augmenting monitoring/WAF this way is a very good idea IMO, both for coping with the limitations inherent in assessing live web applications without causing disaster, and for the impossibility of getting complete coverage of all possible generated content. Being able to shield known limitations of the application, due either to design or patching delay, is a good example of the value here.

In the same way, many back-end application platforms provide functionality that is relied upon for business processing but is less than secure. These might be things like database links or insecure network ‘listener’ configurations, which cannot be immediately resolved, either due to business continuity or timing constraints. An assessment platform (or even a policy management tool, but more on that later), or a rummage through database tables looking for personally identifiable information that is then fed to a database monitoring solution, can help deal with such difficult situations. Interrogation of the database reveals the weakness or sensitive information, and the result set is fed to the monitoring tool to check for inappropriate use of the feature or access to the data. I have covered many of these business drivers in a previous post on Database Vulnerability Assessment. And it is very much because of drivers like PCI that I believe the coupling of assessment with monitoring and auditing is so powerful- the applications compensate for each other, enabling each to do what it is best at and passing off coverage of areas where they are less effective.

Next up, I want to talk about policy formats, and the ability to construct policies that apply across these cooperating products.
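To make the assessment-feeds-monitoring idea concrete, here is a minimal sketch in Python of how a discovery pass might rummage through database columns for personally identifiable information and emit a policy stub a monitoring tool could consume. The table, patterns, and policy format are all hypothetical- no ADMP product exposes exactly this interface- and an in-memory SQLite database stands in for a production system.

```python
# Sketch: discovery scan that seeds a monitoring policy with columns that look like PII.
# The "monitoring policy" JSON format is invented for illustration, not a vendor schema.
import json
import re
import sqlite3

PII_PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{13,16}$"),
}

def discover_pii_columns(conn, sample_rows=100):
    """Return {table: [(column, pii_type), ...]} based on sampled values."""
    findings = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        rows = conn.execute(f"SELECT * FROM {table} LIMIT {sample_rows}").fetchall()
        for idx, col in enumerate(cols):
            for pii_type, pattern in PII_PATTERNS.items():
                if any(pattern.match(str(row[idx])) for row in rows):
                    findings.setdefault(table, []).append((col, pii_type))
                    break
    return findings

def build_monitoring_policy(findings):
    """Translate discovery results into a watch-list the monitor could load."""
    return {
        "policy": "watch-discovered-pii",
        "rules": [
            {"object": f"{table}.{col}", "classification": pii_type,
             "action": "alert-on-select"}
            for table, cols in findings.items()
            for col, pii_type in cols
        ],
    }

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT, cc TEXT)")
    conn.execute("INSERT INTO customers VALUES ('Alice', '123-45-6789', '4111111111111111')")
    print(json.dumps(build_monitoring_policy(discover_pii_columns(conn)), indent=2))
```

The point is the hand-off: the detective control starts life already knowing which objects matter, rather than waiting for an analyst to tell it.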


Dark Reading Column: Attack Of The Consumers (And Those Pesky iPhones)

I have a sneaking suspicion my hosting provider secretly hates me after getting Slashdotted twice this week. But I don’t care, because in less than 48 hours it’s iPhone Day!!! Okay, so I already have one, and all the new one adds is a little more speed and a GPS that probably isn’t good enough for what I need. But I use the friggen thing so darn much I can definitely use that speed.

It’s been up for a few days, but with everything else going on I’m just now getting back to my latest Dark Reading column. This month I take a look at what may be one of the most disruptive trends in enterprise technology- the consumerization of IT. Here’s an excerpt:

That’s the essence of the consumerization of IT. Be it laptops, cellphones, or Web services, we’re watching the walls crumble between business and consumer technology. IT expands from the workplace and permeates our entire lives- from home broadband and remote access, to cellphones, connected cars, TiVos, and game consoles with Web browsers. Employees are starting to adapt technology to their own individual work styles to increase personal productivity. The more valued the knowledge worker, the more likely they are to personalize their technology — work provided or not. Some companies are already reporting difficulties in getting highly qualified knowledge workers and locking them into strict IT environments. No, it’s not like the call center will be running off their own laptops, but they’ll probably be browsing the Web, sending IMs, and updating their blogs off their phones as they sit in front of their terminals. This is far from the end of the world. While we need to change some of our approaches, we’re gaining technology tools and experience in running looser environments without increasing our risk. There are strategies we can adopt to loosen the environment without increasing risks…


More On The DNS Vulnerability

Okay- it’s been a crazy 36 hours since Dan Kaminsky released his information on the massive multivendor patch and DNS issue. I want to give a little background on how I’ve been involved (for full disclosure), as well as some additional aspects of this. If you hate long stories, the short version is: he just walked me through the details, this is a very big deal, and you need to patch immediately.

Dan contacted me about a week or so ago to help get the word out to the CIO-level audience. As an analyst, that’s a group I have more access to. I was involved with the initial press conference and analyst briefings, and helped write the executive overview to put the issue in non-geek terms. At the time he just gave me the information that was later made public. I’ve known Dan for a few years now and trust him, so I didn’t push as deeply as I would with someone I don’t have that relationship with. Thus, as the comments and other blogs descended into a maelstrom of discontent, I didn’t have anything significant to add.

Dan realized he underestimated the response of the security community and decided to let me, Ptacek, Dino, and someone else I won’t mention into the fold. Here’s the deal- Dan has the goods. More goods than I expected. Dino and Ptacek agree. Tom just issued a public retraction/apology. This is absolutely one of the most exceptional research projects I’ve seen. Dan’s reputation will emerge more than intact, although he will still have some black eyes for not disclosing until Black Hat.

Here’s what you need to know:

  • You must patch your name servers as soon as possible. This is real, and it’s probably not what you’re thinking. It’s a really good exploit (which is bad news for us).
  • Ignore the “Important” rating from Microsoft, and other non-critical ratings. Keep in mind that for many of those organizations, nothing short of remote code execution without authentication will result in a critical rating. That’s how the systems are built.
  • Dan screwed up some of his handling of this, and I’m part of that screwup since I set my cynical analyst hat aside and ran totally on trust and reputation. Now that I know more, I stand behind my reaction and statements, but that’s a bad habit for me to get into.
  • This still isn’t the end of the world, but it’s serious enough that you should break your patch cycle (if you have one) on name servers to get them fixed. Then start rolling out to the rest of your infrastructure.
  • CERT is updating their advisory on an ongoing basis. It’s located here.

Next time something like this happens I’ll push for full details sooner, but Dan is justified in limiting exposure of this. His Black Hat talk will absolutely rock this year.


Dan Kaminsky Discovers Fundamental Issue In DNS: Massive Multivendor Patch Released

Today, CERT is issuing an advisory for a massive multivendor patch to resolve a major issue in DNS that could allow attackers to easily compromise any name server (it also affects clients). Dan Kaminsky discovered the flaw early this year and has been working with a large group of vendors on a coordinated patch. The issue is extremely serious, and all name servers should be patched as soon as possible. Updates are also being released for a variety of other platforms, since this is a problem with the DNS protocol itself, not a specific implementation. The good news is this is a really strange situation where the fix does not immediately reveal the vulnerability, and reverse engineering isn’t directly possible.

Dan asked for some assistance in getting the word out and was kind enough to sit down with me for an interview. We discuss the importance of DNS, why this issue is such a problem, how he discovered it, and how such a large group of vendors was able to come together, decide on a fix, keep it secret, and all issue on the same day. Dan, and the vendors, did an amazing job with this one. We’ve also attached the official CERT release and an Executive Overview document discussing the issue.

Executive Overview (pdf)
CERT Advisory (link)

Update: Dan just released a “DNS Checker” on his site Doxpara.com to see if you are vulnerable to the issue.

Network Security Podcast, Episode 111, July 8, 2008

And here’s the text of the Executive Overview:

Fixes Released for Massive Internet Security Issue

On July 8th, technology vendors from across the industry will simultaneously release patches for their products to close a major vulnerability in the underpinnings of the Internet. While most home users will be automatically updated, it’s important for all businesses to immediately update their networks. This is the largest synchronized security update in the history of the Internet, and is the result of hard work and dedication across dozens of organizations.

Earlier this year, professional security researcher Dan Kaminsky discovered a major issue in how Internet addresses are managed (Domain Name System, or DNS). This issue was in the design of DNS and not limited to any single product. DNS is used by every computer on the Internet to know where to find other computers. Using this issue, an attacker could easily take over portions of the Internet and redirect users to arbitrary, and malicious, locations. For example, an attacker could target an Internet Service Provider (ISP), replacing the entire web – all search engines, social networks, banks, and other sites – with their own malicious content. Against corporate environments, an attacker could disrupt or monitor operations by rerouting network traffic, capturing emails and other sensitive business data.

Mr. Kaminsky immediately reported the issue to major authorities, including the United States Computer Emergency Response Team (part of the Department of Homeland Security), and began working on a coordinated fix. Engineers from major technology vendors around the world converged on the Microsoft campus in March to coordinate their response. All of the vendors began repairing their products and agreed that a synchronized release, on a single day, would minimize the risk that malicious individuals could figure out the vulnerability before all vendors were able to offer secure versions of their products. The vulnerability is a complex issue, and there is no evidence to suggest that anyone with malicious intent knows how it works.

The good news is that due to the nature of this problem, it is extremely difficult to determine the vulnerability merely by analyzing the patches, a common technique malicious individuals use to figure out security weaknesses. Unfortunately, due to the scope of this update it’s highly likely that the vulnerability will become public within weeks of the coordinated release. As such, all individuals and organizations should apply the patches offered by their vendors as rapidly as possible.

Since not every system can be patched automatically, and to provide security vendors and other organizations with the knowledge they need to detect and prevent attacks on systems that haven’t been updated, Mr. Kaminsky will publish the details of the vulnerability at a security conference on August 6th. It is expected that by that point the details of the vulnerability will have been independently discovered, potentially by malicious individuals, and it’s important to make the specific details public for our collective defense. We hope that by delaying full disclosure, organizations will have time to protect their most important systems, including testing and change management for the updates. Mr. Kaminsky has also developed a tool to help people determine if they are at risk from “upstream” name servers, such as their Internet Service Provider, and will be making this publicly available.

Home users with their systems set to automatically update will be protected without any additional action. Vendor patches for software implementing DNS are being issued by major software manufacturers, but some extremely out of date systems may need to be updated to current versions before the patches are applied. Executives need to work with their information technology teams to ensure the problem is promptly addressed. There is absolutely no reason to panic; there is no evidence of current malicious activity using this flaw, but it is important that everyone follow their vendor’s guidelines to protect themselves and their organizations.
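For readers who want to check their own resolvers before and after patching, here is a small sketch in Python in the same spirit as Dan’s checker. It assumes the `dig` utility is installed and that DNS-OARC’s public porttest service (porttest.dns-oarc.net) is reachable from your network; the service reports how unpredictable your resolver’s query source ports look, which is one of the behaviors the coordinated patches changed. Treat it as illustrative, not authoritative- the definitive step is simply confirming your name servers are running the patched versions your vendor shipped.

```python
# Sketch: ask DNS-OARC's porttest service how random our resolver's source ports look.
# Assumes `dig` is installed and porttest.dns-oarc.net is reachable; neither is guaranteed.
import subprocess
import sys

def check_source_port_randomness(timeout=15):
    """Query porttest.dns-oarc.net through the system resolver and print its verdict."""
    try:
        result = subprocess.run(
            ["dig", "+short", "porttest.dns-oarc.net", "TXT"],
            capture_output=True, text=True, timeout=timeout, check=True,
        )
    except (OSError, subprocess.SubprocessError) as exc:
        sys.exit(f"Could not run dig: {exc}")

    verdict = result.stdout.strip()
    if not verdict:
        print("No answer- the test service may be unreachable from this network.")
    elif "GOOD" in verdict:
        print(f"Source ports look well randomized: {verdict}")
    else:
        print(f"Resolver may still be vulnerable- review and patch: {verdict}")

if __name__ == "__main__":
    check_source_port_randomness()
```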


Comments on Security Breach Statistics

I still have not quite reached complete apathy regarding breach statistics, but I am really close. The Identity Theft Resource Center statistics made their way into the Washington Post last week, and were reposted on the front page of The Arizona Republic business section this morning. In a nutshell, they say the number of breaches was up 69% for the first half of 2008 over the first half of 2007. I am certain no one is surprised. As a security blogging community we have been talking about how the custodians of the information fail to address security, how security products are not all that effective, how the ‘bad guys’ are creative, opportunistic, and committed to finding new exploits, and, my personal favorite, how the people who set up the (financial, banking, health care, government, insert your favorite here) systems have a serious financial stake in things being quick and easy rather than secure. Ultimately, I would have been surprised if the number had gone down.

I used to do a presentation called “Dr. Strangelog, or: How I Stopped Worrying and Loved the Breach”. No, I was not advocating building subterranean caverns to wait this out; rather a mental adjustment in how to approach security. For the corporate IT audience, the premise is that you are never going to be 100% secure, so plan to do the best you can, and be prepared to react when a breach happens. And I try to point out some of the idiocy in certain policies that invite unnecessary risk … like storing credit card numbers when it is unnecessary, not encrypting backup tapes, and allowing all your customer records to ever sit on a laptop outside the company. While we have gone well beyond these basics, I still think that contrarian thinking is in order to find new solutions, or to redefine the problem itself, as it seems impossible to stop the breaches at this point.

As an individual, as opposed to a security practitioner, is there anything meaningful in these numbers? Is there any value whatsoever? Is it going to be easier to quantify the records that have not been breached? Are we getting close to having every personal record compromised at least once? The numbers are so large that they start to lose their meaning. Breaches are so common that they have spawned several secondary markets, in areas such as tools and techniques for fraudulently gaining additional personal information, partial personal information useful for the same purpose, and of course various anti-fraud tools and services. I start to wonder if the corporations and public entities of the world have already effectively wiped out personal privacy.


Best Practices for Endpoint DLP: Part 3

In our last post we discussed the core functions of an endpoint DLP tool. Today we’re going to talk more about agent deployment, management, policy creation, enforcement workflow, and overall integration.

Agent Management

Agent management consists of two main functions- deployment and maintenance. On the deployment side, most tools today are designed to work with whatever workstation management tools your organization already uses. As with other software tools, you create a deployment package and then distribute it along with any other software updates. If you don’t already have a software deployment tool, you’ll want to look for an endpoint DLP tool that includes basic deployment capabilities. Since all endpoint DLP tools include central policy management, deployment is fairly straightforward. There’s little need to customize packages based on user, group, or other variables beyond the location of the central management server.

The rest of the agent’s lifecycle, aside from major updates, is controlled through the central management server. Agents should communicate regularly with the central server to receive policy updates and report incidents/activity. When the central management server is accessible, this should happen in near real time. When the endpoint is off the enterprise network (without VPN/remote access), the DLP tool will store violations locally in a secure repository that’s encrypted and inaccessible to the user. The tool will then connect with the management server the next time it’s accessible, receiving policy updates and reporting activity. The management server should produce aging reports to help you identify endpoints which are out of date and need to be refreshed. Under some circumstances, the endpoint may be able to communicate remote violations through encrypted email or another secure mechanism from outside the corporate firewall.

Aside from content policy updates and activity reporting, there are a few other features that need central management. For content discovery, you’ll need to control scanning schedule/frequency, and control bandwidth and performance (e.g., capping CPU usage). For real time monitoring and enforcement you’ll also want performance controls, including limits on how much space is used to store policies and the local cache of incident information. Once you set your base configuration, you shouldn’t need to do much endpoint management directly. Things like enforcement actions are handled implicitly as part of policy, and thus integrated into the main DLP policy interface.

Policy Creation and Workflow

Policy creation for endpoints should be fully integrated into your central DLP policy framework for consistent enforcement across data in motion, at rest, and in use. Policies are thus content focused, rather than location focused- another advantage of full suites over individual point products. In the policy management interface you first define the content to protect, then pick channels and enforcement actions (all, of course, tied to users/groups and context).

For example, you might want to create a policy to protect customer account numbers. You’d start by creating a database fingerprinting policy pulling names and account numbers from the customer database; this is the content definition phase. Assuming you want the policy to apply equally to all employees, you then define network protective actions- e.g., blocking unencrypted emails with account numbers, blocking http and ftp traffic, and alerting on other channels where blocking isn’t possible. For content discovery, you quarantine any files with more than one account number that are not on a registered server. Then, for endpoints, you restrict account numbers from unencrypted files, portable storage, or network communications when the user is off the corporate network, switching to a rules-based (regular expression) policy when access to the policy server isn’t available. In some cases you might need to design these as separate but related policies- for example, the database fingerprinting policy applies when the endpoint is on the network, and a simplified rules-based policy when the endpoint is remote. A sketch of this fallback follows below.

Incident management should also be fully integrated into the overall DLP incident handling queue. Incidents appear in a single interface, and can be routed to handlers based on policy violated, user, severity, channel, or other criteria. Remember that DLP is focused on solving the business problem of protecting your information, and thus tends to require a dedicated workflow. For endpoint DLP you’ll need some additional information beyond network or non-endpoint discovery policies. Since some violations will occur when the system is off the network and unable to communicate with the central management server, “delayed notification” violations need to be appropriately stamped and prioritized in the management interface. You’d hate to miss the loss of your entire customer database because it showed up as a week-old incident when the sales laptop finally reconnected. Otherwise, workflow is fully integrated into your main DLP solution, and any endpoint-specific actions are handled through the same mechanisms as discovery or network activity.

Integration

If you’re running an endpoint-only solution, an integrated user interface obviously isn’t an issue. For full suite solutions, as we just discussed, policy creation, management, and incident workflow should be completely integrated with network and discovery policies. Other endpoint management is typically a separate tab in the main interface, alongside management areas for discovery/storage management and network integration/management. While you want an integrated management interface, you don’t want it so integrated that it becomes confusing or unwieldy to use. In most DLP tools, content discovery is managed separately, to define repositories and manage scanning schedules and performance. Endpoint DLP discovery should be included here, and should allow you to specify device and user groups instead of having to manage endpoints individually.

That’s about it for the technology side; in our next posts we’ll look at best practices for deployment and management, and present a few generic use cases. I realize I’m pretty biased towards full-suite solutions, and this is your chance to call me on it. If you disagree, please let me know in the comments…
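To illustrate the fallback described above, here is a minimal sketch in Python of an agent that matches database fingerprints when the policy server is reachable and drops back to a regular-expression rule for account numbers when it is offline, queuing violations locally until it can report them. All names, the fingerprint format, and the connectivity check are hypothetical- real endpoint agents are far more involved.

```python
# Sketch: endpoint agent fallback between fingerprint matching (online) and a
# regex rule (offline), with locally queued violations for delayed reporting.
import hashlib
import re
import time

ACCOUNT_NUMBER_RULE = re.compile(r"\b\d{12,16}\b")  # simplified rules-based policy

class EndpointAgent:
    def __init__(self, fingerprints=None):
        # Fingerprints would normally be pushed by the central policy server,
        # e.g. hashes of real account numbers pulled from the customer database.
        self.fingerprints = set(fingerprints or [])
        self.offline_queue = []          # encrypted local cache in a real agent

    def policy_server_reachable(self):
        return bool(self.fingerprints)   # stand-in for a real connectivity check

    def inspect(self, text, channel):
        """Return True if the content violates policy on the given channel."""
        if self.policy_server_reachable():
            tokens = re.findall(r"\d{12,16}", text)
            hit = any(hashlib.sha256(t.encode()).hexdigest() in self.fingerprints
                      for t in tokens)
        else:
            hit = bool(ACCOUNT_NUMBER_RULE.search(text))
        if hit:
            self.offline_queue.append(
                {"time": time.time(), "channel": channel, "policy": "account-numbers"})
        return hit

    def report(self):
        """Flush queued violations once the management server is reachable again."""
        flushed, self.offline_queue = self.offline_queue, []
        return flushed

# Example: an offline agent falls back to the regex rule, then reports later.
agent = EndpointAgent()
print(agent.inspect("wire to account 4111111111111111", channel="usb-copy"))  # True
print(agent.report())
```

Note the timestamping of queued incidents- that is what lets the management server prioritize a week-old offline violation correctly when the laptop finally reconnects.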


Mozilla Project In Open Document Format

Due to popular demand, there’s now an OpenOffice format (.ods) file for the Mozilla security metrics project. You can pick up the file here… (I have no idea why I didn’t use NeoOffice before- very nice!).


What To Buy?

This is a non-security post… I did not get a lot of work done Thursday afternoon. I was shopping. Specifically, I am shopping for a new laptop. I have a four-year-old Fujitsu running XP. The MTBF on this machine is about 20 months, so I am a little beyond laptop shelf life. A friend lent me a nice laptop with Vista for a week, and I must say, I really do not like it. Don’t like the performance. Don’t like the DRM. Don’t like the new arrangement of the UI. Don’t like the lowest-common-denominator approach to design. Don’t like an OS that thinks it knows what I want and shoves the wrong things at me. The entire direction it’s heading seems to be the antithesis of fast, efficient, & friendly.

So what to buy? If you do not choose Windows, there really are not a lot of options for business laptops. Do you really have a choice? I was reading this story that said Intel had no plans to adopt Windows Vista for their employees. Interesting that this comes out now. Technically speaking, the Microsoft “End of Life” date for Windows XP was June 30th. I sympathize with IT departments, as this makes things difficult for them. I am just curious what departments such as Intel’s will buy employees as their laptops croak. With some 80,000 employees, I am assuming this is a daily occurrence, so I wonder how closely their decision-making process resembles mine. I wonder what they are going to do. Reuse XP keys?

I have used, and continue to use, a lot of OSes. I started my career with CTOS, and I worked on and with UNIX for more than a decade. I have used various flavors of Linux & BSD since 1995. I have had Microsoft’s OSes and Linux dual booting on my home machines for the last decade. I am really not an OS bigot, as there are things about each that I like. For example, I like Ubuntu and the context cube desktop interface, but I am not sure I want that for my primary operating system. I could buy a basic box and install XP with an older key, but I worry I might have trouble finding XP drivers and updates.

Being an engineer, I figured I would approach this logically. I sat down and wrote out all the applications, features, and services I use on a weekly basis and mapped out what I needed. Several Linux variants would work, and I could put XP in a virtual partition to catch anything that was not available, but the more I look, the more I like the MacBook. While I have never owned a Mac, I am beginning to think it is time to buy one. And really, the engineer in me got thrown under the bus when I visited the Mac store http://store.apple.com/. %!&$! logic, now I just kind of want one. If I am going through this thought process, I just wonder how many companies are as well. MS has a serious problem.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.