Individual Privacy vs. Business Drivers

I ended a recent Breach Statistics post with “I start to wonder if the corporations and public entities of the world have already effectively wiped out personal privacy.” It was just a throwaway idea that had popped into my head, but the more I thought about it over the next couple of days, the more it bothered me. Probably because the idea was germinating while I read a series of news stories over the past couple of weeks that made me grasp the sheer momentum of the privacy erosion going on. It is happening now, with little incentive for the parties involved to change their behavior, and there is seemingly little we can do about it.

A Business Perspective

Rich posted a blog entry on “YouTube, Viacom, And Why You Should Fear Google More Than The Government” on this topic as well. Technically I disagree with Rich in one regard: I have a degree of fear of all parties involved, as Viacom, Google, and the US government are each deriving value at the expense of individual privacy. This really ties in, as companies like Google have strong financial incentives to store as much data on people- both at the aggregate and the personal level- as they can. And it’s not just Google, but most Internet companies. Think about Amazon’s business model and their use of statistics and behavior profiling to alter the shopping experience (and pricing) for each visitor to their web site.

My takeaway from Rich’s post was “The government has a plethora of mechanisms to track our activity”, and it is starting to look as if the biggest is the records created and maintained by corporations. Corporate entities are now the third-party data harvesters, and government entities act as the aggregators. While we like to think that we don’t live in a world that does such things, there are reasons to believe this form of data management was a deciding factor in the 2000 presidential election, with Database Technologies/Choicepoint. We already know that domestic spying is a reality. Over the weekend I was catching up on some reading, going over articles about how the government has provided immunity to telecom companies for handing data over to the government. If that is not an incentive to continue data collection without regard for confidentiality- a “get out of jail free” card, if you will- I don’t know what is.

I also got a chance to watch the SuperNova video on Privacy and Security in the Network Age. Bruce Schneier’s comments in the first 10 minutes are pretty powerful. He has been evolving this line of thought over many years, and he has really honed the content into a very compelling story. His example- facial recognition software, storage essentially free, and ubiquitous cameras- is fairly startling once you realize everything you do in a public place could be recorded. Can you imagine having your entire four years of high school filmed, like it or not, and stored forever? Or someone harvesting your worst 5 minutes of driving on film over the last decade? Bruce is exactly right that this conversation is not about our security; the entire effort is about control and policy enforcement. And it is not the government operating the cameras; it is businesses and institutions that make money with the collected data. With businesses that harvest data now seemingly immune to prosecution for privacy violations, there are no “checks and balances” to keep them from pursuing this- rather, they are financially motivated to do so.
From cameras on the freeway to Google, there are always people willing to pay for surveillance data. The collectors are not financially incentivized to care about privacy per se; unless it becomes a major PR nightmare and affects their core business, that is not going to happen. My intention with this post was not to get political, but rather to point out that businesses which collect data need some incentive to keep that consumer information confidential. I don’t think there is a legitimate business motivator right now. CA1386 and associated legislation are not a deterrent. Businesses make their money by collecting information, analyzing it, and then presenting new information based upon what they have previously collected. Many companies’ entire business models are predicated upon doing this successfully. The collection of sensitive and personally identifiable information is part of daily operation. Leakage is part of the business risk. But other than a competitive advantage, do they have any motivation to keep the data safe or to protect privacy? We have seen billions of records stolen, leaked, or willfully provided, and yet there is little change in corporate behavior in regard to privacy.

So I guess what scares me the most about all this is that I see little incentive for firms to protect individual privacy, and that lack of privacy is supported- and taken advantage of- by government. Our government is not only going to approve of the collection of personal data, it is going to benefit from it. This is why I see the problem accelerating. The US government has basically found a way to outsource the costs and risks of surveillance. It is not going to complain about misuse of your sensitive data while it is saving billions of dollars by using data collected by corporations. There are a couple of other angles to this I want to cover, but I will get to those in another post.

NitroSecurity’s Acquisition of RippleTech

I was reading through the NitroSecurity press release last week, thinking about the implications of their RippleTech purchase. This is an interesting move, and not one of the Database Activity Monitoring acquisitions I was predicting. So what do we have here? IPS, DAM, SIM, and log management under one umbrella. Some real-time solutions, some forensic solutions. They are certainly casting a broad net of offerings for compliance and security.

Will the unified product provide greater customer value? Difficult to say at this point. Conceptually I like the combination of network and agent based data collectors working together, I like what is possible with integrated IPS and DAM, and I am personally rather fond of offering real-time monitoring alongside forensic analysis audits. Those who know me are aware I tend to bash IPS as lacking enough application ‘context’ to make meaningful inspections of business transactions, and a combined solution may help rectify this deficiency. Still, there is probably considerable distance between reality and the ideal. Rich and I were talking about this the other day, and I think he captured the essence very succinctly: “DAM isn’t necessarily a good match to integrate into intrusion prevention systems- they meet different business requirements, they are usually sold to a different buying center, and it’s not a problem you can solve on the network alone.”

I do not know a lot about NitroSecurity, and I have not really been paying them much attention as they have been outside the scope of firms I typically follow. I know that they offer an intrusion prevention appliance, and that they have marketed it for compliance, security, and systems management. They also have a SIM/SEM product, which should have some overlapping capabilities with RippleTech’s log management solution.

RippleTech I have been paying attention to since the Incache LLC acquisition back in 2006. I had seen Incache’s DBProbe and later DBProbeSec, but I did not perceive much value to the consumer over and above the raw data acquisition and generic reports for the purpose of database security. It really seemed to have evolved little from its roots as a performance monitoring tool, and was missing much in the way of the policies, reporting, and workflow integration needed for security and compliance.

I was interested in seeing which technology RippleTech chose to grow- the network sniffer or the agent- for several reasons. First, we were watching a major change in the Database Activity Monitoring (DAM) space at that time, from security to compliance as the primary sales driver. Second, the pure network solutions missed some of the critical need for console-based activity monitoring and controls, and we saw most of the pure network vendors move to a hybrid model for data collection. I guessed that the agent would become their primary data collector, as it fit well with a SEM architecture and addressed the console activity issue. It appears that I guessed wrong, as RippleTech seems to offer primarily a network collector with Informant, their database activity monitoring product. I am unsure whether LogCaster actually collects database audit logs, but if memory serves it does not. Someone in the know, please correct me if I am wrong on this one. Regardless, if I read the thrust of this press release correctly, NitroSecurity bought RippleTech primarily for the DAM offering. Getting back to Rich’s point, it appears that some good pieces are in place.
It will come down to how they stitch all of these together, and which features are offered to which buyers. If they remain loosely coupled data collectors with basic reporting, then this is a security mish-mash. If all of the real-time database analytics come from network data, they will miss many of the market requirements. Still, this could be very interesting depending upon where they are heading, so NitroSecurity is clearly on my radar from this point forward.

Upcoming: Database Encryption Whitepaper

We are going to be working on another paper with SANS- this time on database encryption. This is a technology that offers consumers considerable advantages in meeting security and compliance challenges, and we have been getting customer inquiries on what the available options are. As encryption products have continued to mature over the last few years, we think it is a good time to delve into this subject. If you’re on the vendor side and interested in sponsorship, drop us a line. You don’t get to influence the content, but we get really good exposure with these SANS papers.

Stolen Data Cheaper

It’s rare I laugh out loud when reading the paper, but I did on this story. It is a great angle on a moribund topic: there is such a glut of stolen finance and credit data for sale that it is driving prices down.

LONDON (Reuters) – Prices charged by cybercriminals selling hacked bank and credit card details have fallen sharply as the volume of data on offer has soared, forcing them to look elsewhere to boost profit margins, a new report says.

The thieves are true capitalists, and now they are experiencing one of the downsides of their success. What do you know- “supply and demand” works. And what exactly are they going to do to boost profit margins? Sell extended warranties? Maybe it is just the latent marketeer in me coming to the fore, but can you imagine if hackers made television commercials to sell their wares? Cal Hackington? Crazy Eddie’s Datamart? It’s time to short your investments in Cybercriminals, Inc.

Oracle Critical Patch Update- Patch OAS Now!!!

I was just in the process of reviewing the details of the latest Oracle Critical Patch Advisory for July 2008, and found something a bit frightening. As in, ‘could let any random person own your database’ frightening.

I am still sifting through the database patches to see what is interesting. I did not see much in the database section, but while reading through the document something looked troubling. When I see language that says “vulnerabilities may be remotely exploitable without authentication”, I get very nervous. CVE-2008-2589 does not show up on cve.mitre.org, but a quick Google search turns up Nate McFeters’ comments on David Litchfield’s disclosure of the details of the vulnerability. Basically, it allows a remote attacker without a user account to slice through your Oracle Application Server and directly modify the database. If you have any external OAS instance, you probably don’t have long to get it patched.

I am not completely familiar with the WWV_RENDER_REPORT package, but its use is not uncommon. It appears that the web server is allowing parameters to pass through unchecked. As the package is owned by the web server user, whatever is injected will be able to perform any action that the web server account is authorized to do. Remotely. Yikes!

I will post more comments on this patch in the future, but it is safe to assume that if you are running Oracle Application Server versions 9 or 10, you need to patch ASAP! Why Oracle has given this a base score of 6.4 is a bit of a mystery (see more on Oracle’s scoring), but that is neither here nor there. I assume that word about a remote SQL injection attack that does not require authentication will spread quickly. Patch your app servers.
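
For anyone unfamiliar with this class of flaw, the sketch below shows what “parameters passed through unchecked” means in practice. It uses Python and SQLite rather than PL/SQL, and it is purely illustrative- this is not the WWV_RENDER_REPORT code or the actual exploit, just the same injection pattern in miniature.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

    def report_unchecked(name):
        # The caller's input is concatenated straight into the statement,
        # so a crafted value rewrites the query itself.
        return conn.execute(
            "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

    def report_bound(name):
        # A bind variable keeps the input as data, never as SQL.
        return conn.execute(
            "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

    payload = "x' UNION SELECT ssn FROM users --"
    print(report_unchecked(payload))  # leaks the SSN column: [('123-45-6789',)]
    print(report_bound(payload))      # returns nothing: []

Run this pattern under a privileged account- as with a package owned by the web server user- and the injected statement inherits every privilege that account holds.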

ADMP: A Policy Driven Example

A friend of mine and I were working on a project recently to feed the results of vulnerability assessment or discovery scans into a behavioral monitoring tool. He was working on a series of policies that would scan database tables for specific metadata and content signatures with a high probability of being personally identifiable information. The goal was to scan databases for content types, and send back a list of objects that looked important or had a high probability of being sensitive information. I was working on a generalized policy format for the assessment. My goal was to include not only the text and report information on what the policy had found and possible remediation steps, but more importantly a set of instructions that could be sent out as a result of the policy scan. Not for a workflow system, but rather instructions on how another security application should react if a policy scan found sensitive data.

As an example, let’s say we wrote a query to scan databases for social security numbers. If we ran the policy and found a 9 digit field whose contents were all numbers, or an 11 character field with numbers and dashes, we would characterize that as a high probability that we had discovered a social security number. And when you have a few sizable SAP installations around, with some 40K tables, casual checking does not cut it. As I have found a tendency for QA people to push production data onto test servers, this has been a handy tool for basic security and detection of rogue data and database installations.

The part I was working on was the reactive portion. Rather than just generating the report/trouble ticket for someone in IT or Security to review the database column and determine whether it was in fact sensitive information, I would automatically instruct the DAM tools to instantiate a policy that records all activity against that column. Obviously issues around previously scanned and accepted tables, “white lists”, and such needed to be worked out. Still, the prototype was basically working, and I wanted to begin addressing a long-standing criticism of DAM- that knowing what to monitor can take quite a bit of research and development, or a lot of money in professional services. This is one of the reasons why I have a vision of ADMP being a top-down, policy-driven aggregation of existing security solutions.

Where I am driving with this is that I should be able to manage a number of security applications through policies. Say I write a PCI-DSS policy regarding the security of credit card numbers. That generic policy would have specific components that are enforced at different locations within the organization. The policy could propagate a subset of instructions down to the assessment tool to check the security settings and access controls around credit card information. It could simultaneously seed the discovery application so that it is checking for credit card numbers in unregistered locations. It could simultaneously instruct DAM applications to automatically track the use of these database fields. It could instruct the WAF to block anything that references the triggering objects directly. And so on. The enforcement of the rules is performed by the application best suited to it, and at the location most suitable for responding. I have hinted at this in the past, but never really discussed fully what I meant. The policy becomes the link.
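
To make the scan-and-react idea concrete, here is a minimal sketch in Python. The column heuristics mirror the ones described above; seed_dam_policy is a hypothetical stand-in for whatever API a real DAM product would expose.

    import re
    import sqlite3

    SSN_DIGITS = re.compile(r"^\d{9}$")              # 9 digit, all-numeric field
    SSN_DASHED = re.compile(r"^\d{3}-\d{2}-\d{4}$")  # 11 characters with dashes

    def find_ssn_columns(conn, table):
        # Sample each column; flag it when every sampled value fits an SSN pattern.
        suspects = []
        columns = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
        for col in columns:
            sample = [r[0] for r in conn.execute(
                f"SELECT {col} FROM {table} LIMIT 100") if r[0] is not None]
            if sample and all(SSN_DIGITS.match(str(v)) or SSN_DASHED.match(str(v))
                              for v in sample):
                suspects.append(col)
        return suspects

    def seed_dam_policy(table, column):
        # Hypothetical reactive step: instead of opening a trouble ticket,
        # instruct the DAM tool to record all activity against the column.
        print(f"MONITOR: all activity against {table}.{column} (suspected SSN)")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (name TEXT, ssn TEXT)")
    conn.execute("INSERT INTO employees VALUES ('alice', '123-45-6789')")
    for col in find_ssn_columns(conn, "employees"):
        seed_dam_policy("employees", col)

The white-list question mentioned above would slot in just before the reactive step: previously reviewed and accepted columns would presumably be excluded before any monitoring policy is instantiated.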
Use the business policy to wrap specific actions in a set of actionable rules for disparate applications. The policy represents the business driver, and it is mapped down to specific applications or components that enforce the individual rules constituting the policy. A simple policy management interface can now control and maintain corporate standards, and individual stakeholders can have a say in the implementation and realization of those policies “behind the scenes”, if you will. Add or subtract security widgets as you wish, and add a rule to the policy to direct those widgets how to behave. My examples are solely around the interaction between the assessment/discovery phase and the database activity monitoring software, but much more is possible if you link WAF, web app assessment, DLP, DAM, and other products into the fold.

Clearly there are a lot of people thinking along these lines, if not exactly this scenario, and many are reaching into the database to help secure it. We are seeing SIM/SEM products do more with databases, albeit usually with logs. The database vendors are moving into the security space as well, and are beginning to leverage content inspection and multi-application support. We are seeing the DLP vendors do more with databases, as evidenced by the recent Symantec press release, which I think is a very cool addition to their functionality; the DLP providers tend to be truly content aware. We are even seeing the UTM vendors reach for the database, but the jury is still out on how well this will be leveraged. I don’t think it is a stretch to say we will be seeing more and more of these services linked together. Who adopts a policy-driven model will be interesting to see, but I have heard of a couple firms that approach the problem this way.

You can probably tell I like the policy angle as the glue for security applications. It does not require too much change to any given product- mostly an API and some form of trust validation for the cooperating applications. I started to research policy formats like OVAL, AVDL, and others to see if I could leverage them as a communication medium. There has been a lot of work done in this area by the assessment vendors, and while these formats are based on XML and probably inherently extensible, I did not see anything I was confident in, and was thinking I would have to define a different template to take advantage of this model. Food for thought, anyway.
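
To illustrate the “policy as glue” idea, here is a hedged sketch of one business-level policy fanned out as application-specific rules. The component names and the dispatch routine are hypothetical- a real implementation would replace the print call with an API call plus some form of trust validation for each cooperating application.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        component: str  # which security application enforces the rule
        action: str     # what that application is instructed to do

    # One generic PCI-DSS policy for credit card data, decomposed per component.
    PCI_CARD_POLICY = [
        Rule("assessment", "verify access controls on known card-number columns"),
        Rule("discovery",  "scan unregistered databases for card-number patterns"),
        Rule("dam",        "record all activity against card-number columns"),
        Rule("waf",        "block requests that reference card columns directly"),
    ]

    def dispatch(policy):
        # Route each rule to the application best suited to enforce it.
        for rule in policy:
            print(f"[{rule.component}] {rule.action}")

    dispatch(PCI_CARD_POLICY)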

Google AdWords

This is not a ‘security’ post. Has anyone had a problem with Google AdWords continuing to bill their credit cards after their account is terminated? Within the last two months, four people have complained to me that their credit cards continued to be charged even though they cancelled their accounts. In fact, the charges were slightly higher than normal. In a couple of cases they had to cancel their credit cards to get the charges to stop, resulting in letters from “The Google AdWords Team” threatening to pursue the matter with the issuing bank… and no, I am not talking about the current spam floating around out there, but a legitimate email. All this despite having email acknowledgement that the AdWords account had been cancelled. I did a quick web search (without Google) and only found a few old complaints online about this, but in my small circle of friends this is a pretty high number of complaints, considering how few use Google for their small businesses. I was wondering if anyone else out there has experienced this issue? Okay- maybe it is a security post after all…

ADMP and Assessment

Application and Database Monitoring and Protection. ADMP for short. In Rich’s previous post, under “Enter ADMP”, he discussed the coordination of security applications to help address security issues. They may gather data in different ways, from different segments within the IT infrastructure, and cooperate with other applications based upon the information they have gathered or gleaned from analysis. What is being described is not shoving every service into an appliance for one-stop shopping; that is decidedly not what we are getting at. Conceptually it is far closer to DLP ‘suites’ that offer endpoint and network security with consolidated policy management. Rich has been driving this discussion for some time, but the concept is not yet fully evolved. We are both advocates, and see this as a natural evolution of application security products.

Oddly, Rich and I very seldom discuss the details prior to posting, and this topic is no exception. I wanted to discuss a couple items I believe should be included under the ADMP umbrella, namely Assessment and Discovery. Assessment and Discovery can automatically seed monitoring products with what to monitor, and cooperate with their policy set.

Thus far the focus of the majority of our posts has been monitoring and protection- as in active protection- for ADMP. This reflects a primary area of interest for us, as well as what we perceive as the core value for customers. The cooperation between monitored points within the infrastructure, both on collected data and the resulting analysis, represents a step forward and can increase the effectiveness of each monitoring point. Vendors such as Imperva are taking steps into this type of strategy, specifically for tracking how a user’s web activity maps to the back end infrastructure, and I imagine they will come up with more creative uses for this deployment topology in the future.

Here I am driving at the cooperation between preventative (assessment and discovery, in this context) and detective (monitoring) controls. Or more precisely, how monitoring and various types of assessment and discovery can cooperate to make the entire offering more efficient and effective. When I talk about assessment, I am not talking about a network port scan that guesses which applications and versions are running, but rather active interrogation and/or inspection of the application. And for discovery, not just the location of servers and applications, but a more thorough investigation of content, configuration, and function.

Over the last four years I have advocated discovery, assessment, and then monitoring, in that order. Discover what assets I have, assess what my known weaknesses are, and then fix what I can. I would then turn on monitoring for the generic threats that concern me, but also tune my monitoring policies to accommodate weaknesses in my configuration. My assumption is that there will always be vulnerabilities, which monitoring will assist in controlling. But with application platforms- particularly databases- most firms are not, and cannot be, fully compliant with best practices and still provide the business processing functions the database is intended for. Typically, the weaknesses that are going to remain part of the daily operation of the applications and databases involve some specific setting or module that is just not that secure. I know that there are some who disagree with this; Bruce Schneier has advocated for a long time that “Monitor First” is the correct approach.
My feeling is that IT is a little different, and (adapting his analogy) I may not know where all of the valuables are stored, and I may not know what type of alarm is needed to protect the safe. I can discover a lot from monitoring, and it allows me to witness both behavior and method during an attack and use that to my advantage in the future. But assessment can provide tremendous value in terms of knowing what and how to protect, and it can do so prior to an attack. Most assessment and discovery tools are run periodically; while they are not continuous, nor designed to find threats in real time, they are still not a “set and forget” part of security. They are best run periodically to account for the fluid nature of IT systems. I would add assessment of web applications, databases, and traditional enterprise applications into this equation.

Some of the web application assessment vendors have announced their ability to cooperate with WAF solutions, as WhiteHat Security has done with F5. Augmenting monitoring/WAF this way is a very good idea IMO, both for coping with the limitations inherent to assessing live web applications without causing disaster, and for the impossibility of getting complete coverage of all possible generated content. Being able to shield known limitations of the application, due either to design or patching delay, is a good example of the value here.

In the same way, many back-end application platforms provide functionality that is relied upon for business processing but is less than secure. These might be things like database links or insecure network ‘listener’ configurations, which cannot be immediately resolved due to business continuity or timing constraints. An assessment platform (or even a policy management tool, but more on that later), or a rummage through database tables looking for personally identifiable information, feeding its results to a database monitoring solution, can help deal with such difficult situations (a sketch follows at the end of this post). Interrogation of the database reveals the weakness or sensitive information, and the result set is fed to the monitoring tool to check for inappropriate use of the feature or access to the data. I have covered many of these business drivers in a previous post on Database Vulnerability Assessment. And it is very much because of drivers like PCI that I believe the coupling of assessment with monitoring and auditing is so powerful- the applications compensate for one another, enabling each to do what it is best at and passing off coverage of areas where they are less effective.

Next up, I want to talk about policy formats, and the ability to construct policies that apply across multiple cooperating applications.
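
As promised above, a minimal sketch of assessment findings being compensated for by monitoring. The findings list and rule strings are illustrative assumptions, not any product’s actual format; the point is simply that whatever cannot be fixed without breaking business processing gets shielded by a watch rule instead.

    # Hypothetical assessment output: each finding notes whether it can be
    # remediated now, or must remain in place for business reasons.
    findings = [
        {"object": "dblink_to_legacy", "issue": "insecure database link",
         "fixable_now": False},
        {"object": "listener_config", "issue": "weak listener authentication",
         "fixable_now": True},
    ]

    def compensating_rules(findings):
        for f in findings:
            if f["fixable_now"]:
                # Straightforward remediation: open a ticket and fix it.
                yield f"TICKET: remediate {f['object']} ({f['issue']})"
            else:
                # Cannot fix without breaking business processing, so the
                # monitoring tool is seeded to watch the weak spot instead.
                yield f"MONITOR: alert on any use of {f['object']} ({f['issue']})"

    for rule in compensating_rules(findings):
        print(rule)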

Comments on Security Breach Statistics

I still have not quite reached complete apathy regarding breach statistics, but I am really close. The Identity Theft Resource Center statistics made their way into the Washington Post last week, and were reposted on the front page of The Arizona Republic business section this morning. In a nutshell, they say the number of breaches was up 69% for the first half of 2008 over the first half of 2007.

I am certain no one is surprised. As a security blogging community we have been talking about how the custodians of the information fail to address security, how security products are not all that effective, how the ‘bad guys’ are creative, opportunistic, and committed to finding new exploits, and- my personal favorite- how the people who set up the (financial, banking, health care, government, insert your favorite here) systems have a serious financial stake in things being quick and easy rather than secure. Ultimately, I would have been surprised if the number had gone down.

I used to do a presentation called “Dr. Strangelog or: How I Stopped Worrying and Loved the Breach”. No, I was not advocating building subterranean caverns to wait this out; rather, a mental adjustment in how to approach security. For the corporate IT audience, the premise is that you are never going to be 100% secure, so plan to do the best you can, and be prepared to react when a breach happens. And I try to point out some of the idiocy in certain policies that invite unnecessary risk… like storing credit card numbers when it is unnecessary, not encrypting backup tapes, and allowing all your customer records to ever be on a laptop outside the company. While we have gone well beyond these basics, I still think that contrarian thinking is in order to find new solutions, or to redefine the problem itself, as it seems impossible to stop the breaches at this point.

As an individual, as opposed to a security practitioner, is there anything meaningful in these numbers? Is there any value whatsoever? Is it going to be easier to quantify the records that have not been breached? Are we getting close to having every personal record compromised at least once? The numbers are so large that they start to lose their meaning. Breaches are so common that they have spawned several secondary markets, in areas such as tools and techniques for fraudulently gaining additional personal information, partial personal information useful for the same purpose, and of course various anti-fraud tools and services. I start to wonder if the corporations and public entities of the world have already effectively wiped out personal privacy.

What To Buy?

This is a non-security post…

I did not get a lot of work done Thursday afternoon. I was shopping. Specifically, I am shopping for a new laptop. I have a four year old Fujitsu running XP. The MTBF on this machine is about 20 months, so I am a little beyond laptop shelf life. A friend lent me a nice laptop with Vista for a week, and I must say, I really do not like it. Don’t like the performance. Don’t like the DRM. Don’t like the new arrangement of the UI. Don’t like the lowest-common-denominator approach to design. Don’t like an OS that thinks it knows what I want and shoves the wrong things at me. The entire direction it’s heading seems to be the antithesis of fast, efficient, & friendly.

So what to buy? If you do not choose Windows, there really are not a lot of options for business laptops. Do you really have a choice? I was reading this story that said Intel has no plans to adopt Windows Vista for its employees. Interesting that this comes out now; technically speaking, the Microsoft “End of Life” date for Windows XP was June 30th. I sympathize with IT departments, as this makes things difficult for them. I am just curious what departments such as Intel’s will be buying employees as their laptops croak. With some 80,000 employees, I am assuming this is a daily occurrence, so I wonder how closely their decision-making process resembles mine. I wonder what they are going to do. Reuse XP keys?

I have used, and continue to use, a lot of OSes. I started my career with CTOS, and I worked on and with UNIX for more than a decade. I have used various flavors of Linux & BSD since 1995. I have had Microsoft’s OSes and Linux dual-booting on my home machines for the last decade. I am really not an OS bigot, as there are things about each that I like. For example, I like Ubuntu and the cube desktop interface, but I am not sure I want that for my primary operating system. I could buy a basic box and install XP with an older key, but I worry I might have trouble finding XP drivers and updates.

Being an engineer, I figured I would approach this logically. I sat down and wrote out all the applications, features, and services I use on a weekly basis and mapped out what I needed. Several Linux variants would work, and I could put XP in a virtual partition to catch anything that was not available, but the more I look, the more I like the MacBook. While I have never owned a Mac, I am beginning to think it is time to buy one. And really, the engineer in me got thrown under the bus when I visited the Mac store (http://store.apple.com/). %!&$! logic, now I just kind of want one. If I am going through this thought process, I just wonder how many companies are as well. MS has a serious problem.

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.