
NitroSecurity’s Acquisition of RippleTech

I was reading through the NitroSecurity press release last week, thinking about the implications of their RippleTech purchase. This is an interesting move, and not one of the Database Activity Monitoring acquisitions I was predicting. So what do we have here? IPS, DAM, SIM, and log management under one umbrella. Some real-time solutions, some forensic solutions. They are certainly casting a broad net of offerings for compliance and security. Will the unified product provide greater customer value? Difficult to say at this point. Conceptually I like the combination of network and agent-based data collectors working together, I like what is possible with integrated IPS and DAM, and I am personally rather fond of offering real-time monitoring alongside forensic analysis and audits. And those who know me are aware I tend to bash IPS as lacking enough application ‘context’ to make meaningful inspections of business transactions. A combined solution may help rectify this deficiency.

Still, there is probably considerable distance between reality and the ideal. Rich and I were talking about this the other day, and I think he captured the essence very succinctly: “DAM isn’t necessarily a good match to integrate into intrusion prevention systems- they meet different business requirements, they are usually sold to a different buying center, and it’s not a problem you can solve on the network alone.”

I do not know a lot about NitroSecurity, and I have not really been paying them much attention, as they have been outside the scope of firms I typically follow. I know that they offer an intrusion prevention appliance, and that they have marketed it for compliance, security, and systems management. They also have a SIM/SEM product, which should have some overlapping capabilities with RippleTech’s log management solution.

RippleTech I have been paying attention to since the Incache LLC acquisition back in 2006. I had seen Incache’s DBProbe and later DBProbeSec, but I did not perceive much value to the consumer over and above the raw data acquisition and generic reports for the purpose of database security. It really seemed to have evolved little from its roots as a performance monitoring tool, and was missing much in the way of policies, reporting, and workflow integration needed for security and compliance.

I was interested in seeing which technology RippleTech chose to grow- the network sniffer or the agent- for several reasons. First, we were watching a major change in the Database Activity Monitoring (DAM) space at that time, from security to compliance as the primary sales driver. Second, the pure network solutions missed some of the critical need for console-based activity and controls, and we saw most of the pure network vendors move to a hybrid model for data collection. I guessed that the agent would become their primary data collector, as it fit well with a SEM architecture and addressed the console activity issue. It appears that I guessed wrong, as RippleTech seems to offer primarily a network collector with Informant, their database activity monitoring product. I am unsure if LogCaster actually collects database audit logs, but if memory serves it does not. Someone in the know, please correct me if I am wrong on this one. Regardless, if I read the thrust of this press release correctly, NitroSecurity bought RippleTech primarily for the DAM offering.

Getting back to Rich’s point, it appears that some good pieces are in place. It will come down to how they stitch all of these together, and what features are offered to which buyers. If they remain loosely coupled data collectors with basic reporting, then this is a security mish-mash. If all of the real-time database analytics come from network data, they will miss many of the market requirements. Still, this could be very interesting depending upon where they are heading, so NitroSecurity is clearly on my radar from this point forward.


Move to New Zealand, Get Out Of Jail Free

New Zealand is absolutely my favorite place on the face of the planet. I’ve made it down there twice, once for a month before I met my wife, and once for just under 3 weeks with her as we drove thousands of kilometers exploring as much of both islands as we could. As much as I love it, I don’t think I’d want to live there full time (I kind of like the US, despite our current administration). But the latest news from New Zealand does give me a bit of an itch to head back down and “experiment” with the law. Seems a young fellow made about $31K giving some bad guys software they used to rake in something like $20M. Bad stuff:

Mr Walker was detained in the North Island city of Hamilton last November as part of an investigation with US and Dutch police into global networks of hijacked PCs, known as botnets.

He’s 18, so odds are jail time, right? Like serious jail time? Nope.

Judge Judith Potter dismissed the charges, relating to a 2006 attack on a computer system at a US university, saying a conviction could jeopardise a potentially bright career.

Nice. Hey, I think I might want to be a security guard at a convenience store- okay if we drop that little assault and robbery thing? I made way less than $31K. Heck, I didn’t steal the cash, I just drove the car, gave someone the gun and ski mask, and…


Best Practices for Endpoint DLP: Part 5, Deployment

In our last post we talked about prepping for deployment- setting expectations, prioritizing, integrating with the infrastructure, and defining workflow. Now it’s time to get out of the lab and get our hands dirty. Today we’re going to move beyond planning into deployment.

  • Integrate with your infrastructure: Endpoint DLP tools require integration with a few different infrastructure elements. First, if you are using a full DLP suite, figure out if you need to perform any extra integration before moving to endpoint deployments. Some suites OEM the endpoint agent, and you may need additional components to get up and running. In other cases, you’ll need to plan capacity and possibly deploy additional servers to handle the endpoint load. Next, integrate with your directory infrastructure if you haven’t already. Determine if you need any additional information to tie users to devices (in most cases, this is built into the tool and its directory integration components).
  • Integrate on the endpoint: In your preparatory steps you should have performed testing to be comfortable that the agent is compatible with your standard images and other workstation configurations. Now you need to add the agent to the production images and prepare deployment packages. Don’t forget to configure the agent before deployment, especially the home server location and how much space and resources to use on the endpoint. Depending on your tool, this may be managed after initial deployment by your management server.
  • Deploy agents to initial workgroups: You’ll want to start with a limited deployment before rolling out to the larger enterprise. Pick a workgroup where you can test your initial policies.
  • Build initial policies: For your first deployment, you should start with a small subset of policies, or even a single policy, in alert or content classification/discovery mode (where the tool reports on sensitive data, but doesn’t generate policy violations).
  • Baseline, then expand deployment: Deploy your initial policies to the starting workgroup. Try to roll the policies out one monitoring/enforcement mode at a time, e.g., start with endpoint discovery, then move to USB blocking, then add network alerting, then blocking, and so on. Once you have a good feel for the effectiveness of the policies, performance, and enterprise integration, you can expand into a wider deployment, covering more of the enterprise. After the first few you’ll have a good understanding of how quickly, and how widely, you can roll out new policies.
  • Tune policies: Even stable policies may require tuning over time. In some cases it’s to improve effectiveness, in others to reduce false positives, and in still other cases to adapt to evolving business needs. You’ll want to initially tune policies during baselining, but continue to tune them as the deployment expands. Most DLP clients report that they don’t spend much time tuning policies after baselining, but it’s always a good idea to keep your policies current with enterprise needs.
  • Add enforcement/protection: By this point you should understand the effectiveness of your policies, and have educated users where you’ve found policy violations. You can now start switching to enforcement or protective actions, such as blocking, network filtering, or encryption of files. It’s important to notify users of enforcement actions as they occur, otherwise you might frustrate them unnecessarily. If you’re making a major change to an established business process, consider scaling out enforcement options on a business unit by business unit basis (e.g., restricting access to a common content type to meet a new compliance need).

Deploying endpoint DLP isn’t really very difficult; the most common mistake enterprises make is deploying agents and policies too widely, too quickly. When you combine a new endpoint agent with intrusive enforcement actions that interfere (positively or negatively) with people’s work habits, you risk grumpy employees and political backlash. Most organizations find the most effective approach is a staged rollout of agents, followed by a staged rollout of policies, with each policy deployed in monitoring mode before moving into enforcement.
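To make the staged rollout more concrete, here is a minimal sketch of how a single policy might be represented and advanced through enforcement modes one at a time, scoped to a pilot workgroup before the wider enterprise. The class, mode names, and field values are hypothetical illustrations, not any particular vendor’s agent API:

    # A hypothetical sketch - mode names, class, and defaults are illustrative only.
    from dataclasses import dataclass

    # Order mirrors the progression suggested above: discovery first, blocking last.
    MODES = ["endpoint_discovery", "usb_blocking", "network_alerting", "network_blocking"]

    @dataclass
    class EndpointDLPPolicy:
        name: str
        content_type: str                 # e.g., "credit_card" or "customer_pii"
        scope: str = "pilot_workgroup"    # start small before enterprise-wide rollout
        mode_index: int = 0               # begin in discovery/alert-only mode

        @property
        def mode(self):
            return MODES[self.mode_index]

        def advance_mode(self):
            """Move to the next enforcement mode only after the current one is baselined."""
            if self.mode_index < len(MODES) - 1:
                self.mode_index += 1

        def expand_scope(self):
            """Widen coverage once performance and false positive rates look acceptable."""
            self.scope = "enterprise"

    policy = EndpointDLPPolicy(name="pci_card_numbers", content_type="credit_card")
    print(policy.mode, policy.scope)   # endpoint_discovery pilot_workgroup
    policy.advance_mode()              # baselined, so enable USB blocking next
    print(policy.mode)                 # usb_blocking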


Upcoming: Database Encryption Whitepaper

We are going to be working on another paper with SANS- this time on database encryption. This is a technology that offers consumers considerable advantages in meeting security and compliance challenges, and we have been getting customer inquiries on what the available options are. As encryption products have continued to mature over the last few years, we think it is a good time to delve into this subject. If you’re on the vendor side and interested in sponsorship, drop us a line. You don’t get to influence the content, but we get really good exposure with these SANS papers.


San Francisco Needs A Really Good Pen Tester

Direct from the “you can’t make this up” department, this news started floating around a couple days ago:

JULY 15, 2008 | 11:55 AM – Right now, San Francisco computer experts are frantically trying to crack an exclusive administrative password of one of their former computer engineers who’s sitting in jail for basically holding the city’s new multimillion-dollar network hostage. Terry Childs, 43, is cooling his heels in the slammer on charges of computer tampering for configuring sole admin control of the city’s new FiberWAN network so that no other IT officials can have administrative rights to the network, which contains email, payroll, law enforcement, and inmate booking files’ apps and data, according to a published report.

Childs apparently gave some passwords to police that didn’t work, and refused to give up his magic credentials when they threatened to arrest him. Seems he set up the password lockout to ensure he didn’t get fired after he was cited for poor performance on the job. There really isn’t much to say, but if you are a kick-ass pen tester in the Bay Area (perhaps someone booked for a lewd offense you wouldn’t like to see plastered on the Internet) I suspect there’s a potential gig out there for you.


Stolen Data Cheaper

It’s rare I laugh out loud when reading the paper, but I did on this story. It is a great angle on a moribund topic: there is such a glut of stolen finance and credit data for sale that it is driving prices down.

LONDON (Reuters) – Prices charged by cybercriminals selling hacked bank and credit card details have fallen sharply as the volume of data on offer has soared, forcing them to look elsewhere to boost profit margins, a new report says.

The thieves are true capitalists, and now they are experiencing one of the downsides of their success. What do you know, “supply and demand” works. And what exactly are they going to do to boost profit margins? Sell extended warranties? Maybe it is just the latent marketeer in me coming to the fore, but could you just imagine if hackers made television commercials to sell their wares? Cal Hackington? Crazy Eddie’s Datamart? It’s time to short your investments in Cybercriminals, Inc.


After Action Report: What Fortinet Should Do With IPLocks

When Fortinet acquired parts of IPLocks it was a bit of a bittersweet moment. When I started my career as an analyst, IPLocks was the first vendor client I worked with. I was tasked with covering database security and spent a fair bit of time walking clients through methods of improving their database monitoring; mostly for security in those days, since auditors hadn’t yet invaded the data center. It was all really manual, using things like triggers and stored procedures, since native auditing sucked on every platform. After a few months of this I was connected with IPLocks- a small database security vendor with a tool to do exactly what I was trying to figure out how to do manually. They’d been around for a few years, but since everyone at this time thought database security was “encryption”, they bounced around the market more than usual.

Over the next few years I watched as the Database Activity Monitoring market started to take off, with more clients and more vendors jumping into the mix. IPLocks always struggled, but I felt it was more business issues than technology issues. Needless to say, they had some leadership issues at the top. Since I hired Adrian, their CTO until the sale to Fortinet, it isn’t appropriate for me to comment on the acquisition itself. Rather, I want to talk about what this means to the DAM/ADMP market.

First up is that according to this press release, Fortinet acquired the vulnerability assessment technology, and is only licensing the activity monitoring technology. As we dig in, this is an important distinction. IPLocks is one of only two companies (the other being Application Security Inc.) with a dedicated database VA product. (Imperva and Guardium have VA capabilities, but not stand-alone commercial products.) From that release, it looks like Fortinet has a broad license to use the monitoring technology, but doesn’t own that IP.

Was this a smart acquisition? Maybe- it all depends on what Fortinet wants to do. On the surface, the Fortinet/IPLocks deal doesn’t make sense. The products are not well aligned, address different business problems, and Fortinet only owns part of the IP, with a license for the rest. But this is also an opportunity for Fortinet to grow their market and align themselves for future security needs. Should they use this as the catalyst to develop an ADMP product line, they will get value out of the acquisition. But if they fail to advance, either through further acquisitions or internal development (with significant resources, and assuming their monitoring license allows), they have just wasted their money. Sorry guys, now you need a WAF.

In the short term they need to learn the new market they just jumped into and refine/align the product to sell to their existing base. A lot of this will be positioning, sales training, and learning a new buying cycle. Threat management sales folks are generally unsuccessful at selling to the combined buying center focused on database security. Then they need to build a long term strategy and extend the product into the ADMP space. There is a fair bit in their existing gateway technology base they can leverage as they add additional capabilities, but this is not just another blade on the UTM. It’s all in their hands. This isn’t a slam dunk, but it is definitely a good opportunity if they handle it right.


Best Practices For Endpoint DLP: Part 4, Best Practices for Deployment

We started this series with an overview of endpoint DLP, and then dug into endpoint agent technology. We closed out our discussion of the technology with agent deployment, management, policy creation, enforcement workflow, and overall integration. Today I’d like to spend a little time talking about best practices for initial deployment. The process is extremely similar to that used for the rest of DLP, so don’t be surprised if this looks familiar. Remember, it’s not plagiarism when you copy yourself. For initial deployment of endpoint DLP, our main concerns are setting expectations and working out infrastructure integration issues.

Setting Expectations

The single most important requirement for any successful DLP deployment is properly setting expectations at the start of the project. DLP tools are powerful, but far from a magic bullet or black box that makes all data completely secure. When setting expectations you need to pull key stakeholders together in a single room and define what’s achievable with your solution. All discussion at this point assumes you’ve already selected a tool. Some of these practices deliberately overlap steps during the selection process, since at this point you’ll have a much clearer understanding of the capabilities of your chosen tool. In this phase, you discuss and define the following:

  • What kinds of content you can protect, based on the content analysis capabilities of your endpoint agent, and how these compare to your network and discovery content analysis capabilities.
  • Which policies you can enforce at the endpoint, including when disconnected from the corporate network.
  • Expected accuracy rates for those different kinds of content- for example, you’ll have a much higher false positive rate with statistical/conceptual techniques than with partial document or database matching.
  • Protection options: Can you block USB? Move files? Monitor network activity from the endpoint?
  • Performance, taking into account differences based on content analysis policies.
  • How much of the infrastructure you’d like to cover.
  • Scanning frequency (days? hours? near continuous?).
  • Reporting and workflow capabilities.
  • What enforcement actions you’d like to take on the endpoint, and which are possible with your current agent capabilities.

It’s extremely important to start defining a phased implementation. It’s completely unrealistic to expect to monitor every last endpoint in your infrastructure with an initial rollout. Nearly every organization finds they are more successful with a controlled, staged rollout that slowly expands breadth of coverage and types of content to protect.

Prioritization

If you haven’t already prioritized your information during the selection process, you need to pull all major stakeholders together (business units, legal, compliance, security, IT, HR, etc.) and determine which kinds of information are more important, and which to protect first. I recommend you first rank major information types (e.g., customer PII, employee PII, engineering plans, corporate financials), then re-order them by priority for monitoring/protecting within your DLP content discovery tool. In an ideal world your prioritization would directly align with the order of protection, but while some data might be more important to the organization (engineering plans), other data may need to be protected first due to exposure or regulatory requirements (PII). You’ll also need to tweak the order based on the capabilities of your tool.
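As a purely illustrative sketch of that two-step ranking, the snippet below orders the example information types from above by business importance, then re-orders the protection rollout by regulatory exposure. The numeric scores are hypothetical placeholders, not recommendations:

    # A minimal sketch of the two-step ranking described above. The importance and
    # exposure scores are illustrative placeholders, not recommendations.
    info_types = [
        # (information type, business importance 1-10, regulatory exposure 1-10)
        ("engineering_plans",    10, 3),
        ("corporate_financials",  8, 6),
        ("customer_pii",          7, 9),
        ("employee_pii",          6, 8),
    ]

    # Step 1: rank by what matters most to the organization.
    by_importance = [name for name, importance, _ in
                     sorted(info_types, key=lambda t: t[1], reverse=True)]
    print("importance ranking:", by_importance)

    # Step 2: re-order the protection rollout so high-exposure data comes first,
    # using business importance as the tie-breaker.
    protection_order = sorted(info_types, key=lambda t: (t[2], t[1]), reverse=True)
    for name, importance, exposure in protection_order:
        print(f"protect next: {name} (importance={importance}, exposure={exposure})")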
After you prioritize the information types to protect, run through and determine approximate timelines for deploying content policies for each type. Be realistic, and understand that you’ll need to both tune new policies and leave time for the organization to become comfortable with any required business changes. Not all policies work on endpoints, and you need to determine how you’d like to balance endpoint with network enforcement. We’ll look further at how to roll out policies, and what to expect in terms of deployment times, later in this series.

Workstation and Infrastructure Integration and Testing

Despite constant processor and memory improvements, our endpoints are always in a delicate balance between maintenance tools and a user’s productivity applications. Before beginning the rollout process you need to perform basic testing with the DLP endpoint agent under different circumstances on your standard images. If you don’t use standard images, you’ll need to perform more in-depth testing with common profiles. During the first stage, deploy the agent to test systems with no active policies and see if there are any conflicts with other applications or configurations. Then deploy some representative policies, perhaps taken from your network policies. You’re not testing these policies for actual deployment, but rather looking to test a range of potential policies and enforcement actions so you have a better understanding of how future production policies will perform. Your goal in this stage is to test as many options as possible to ensure the endpoint agent is properly integrated, performs satisfactorily, enforces policies effectively, and is compatible with existing images and other workstation applications. Make sure you test any network monitoring/blocking, portable storage control, and local discovery performance. Also test the agent’s ability to monitor activity when the endpoint is remote, and to properly report policy violations when it reconnects to the enterprise network.

Next (or concurrently), begin integrating the endpoint DLP into your larger infrastructure. If you’ve deployed other DLP components you might not need much additional integration, but you’ll want to confirm that users, groups, and systems from your directory services match which users are really on which endpoints. While with network DLP we focus on capturing users based on DHCP address, with endpoint DLP we concentrate on identifying the user during authentication. Make sure that, if multiple users share a system, you properly identify each one so policies are applied appropriately.

Define Process

DLP tools are, by their very nature, intrusive. Not in terms of breaking things, but in terms of the depth and breadth of what they find. Organizations are strongly advised to define their business processes for dealing with DLP policy creation and violations before turning on the tools. Here’s a sample process for defining new policies:

  • Business unit requests a policy from the DLP team to protect a particular content type.
  • DLP team meets with the business unit to determine goals and protection requirements.
  • DLP team engages with legal/compliance to


Oracle Critical Patch Update- Patch OAS Now!!!

I was just in the process of reviewing the details of the latest Oracle Critical Patch Update advisory, for July 2008, and found something a bit frightening. As in could-let-any-random-person-own-your-database frightening. I am still sifting through the database patches to see what is interesting. I did not see much in the database section, but while reading through the document something looked troubling. When I see language that says “vulnerabilities may be remotely exploitable without authentication” I get very nervous. CVE-2008-2589 does not show up on cve.mitre.org, but a quick Google search turns up Nate McFeters’ comments on David Litchfield’s disclosure of the details of the vulnerability. Basically, it allows a remote attacker without a user account to slice through your Oracle Application Server and directly modify the database. If you have any externally facing OAS instance, you probably don’t have long to get it patched.

I am not completely familiar with the WWV_RENDER_REPORT package, but its use is not uncommon. It appears that the web server is allowing parameters to pass through unchecked. As the package is owned by the web server user, whatever is injected will be able to perform any action that the web server account is authorized to do. Remotely. Yikes!

I will post more comments on this patch in the future, but it is safe to assume that if you are running Oracle Application Server versions 9 or 10, you need to patch ASAP. Why Oracle has given this a base score of 6.4 is a bit of a mystery (see more on Oracle’s scoring), but that is neither here nor there. I assume that word about a remote SQL injection attack that does not require authentication will spread quickly. Patch your app servers.
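For anyone wondering what “parameters passing through unchecked” looks like in practice, here is a generic illustration- not Oracle’s actual PL/SQL, just the classic injectable pattern and its bind-variable fix sketched in Python with cx_Oracle, against a hypothetical reports table and made-up credentials:

    # Generic illustration of the anti-pattern only - not the OAS code itself.
    # The connection details and the "reports" table are made up for this sketch.
    import cx_Oracle

    conn = cx_Oracle.connect("app_user", "app_password", "dbhost/ORCL")
    cur = conn.cursor()

    def lookup_report_unsafe(report_name):
        # DANGEROUS: user input is concatenated straight into the statement, so
        # anything after a quote in report_name executes as SQL, with whatever
        # privileges the application account holds.
        cur.execute("SELECT id FROM reports WHERE name = '" + report_name + "'")
        return cur.fetchall()

    def lookup_report_safe(report_name):
        # Bind variables keep the input as data, never as SQL text.
        cur.execute("SELECT id FROM reports WHERE name = :name", name=report_name)
        return cur.fetchall()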


ADMP: A Policy Driven Example

A friend of mine and I were working on a project recently to feed the results of vulnerability assessment or discovery scans into a behavioral monitoring tool. He was working on a series of policies that would scan database tables for specific metadata signatures and content signatures that had a high probability of being personally identifiable information. The goal was to scan databases for content types, and send back a list of objects that looked important or had a high probability of being sensitive information. I was working on a generalized policy format for the assessment. My goal was not only to include the text and report information on what the policy had found and possible remediation steps, but more importantly, a set of instructions that could be sent out as a result of the policy scan. Not for a workflow system, but rather instructions on how another security application should react if a policy scan found sensitive data.

As an example, let’s say we wrote a query to scan databases for Social Security numbers. If we ran the policy and found a 9 digit field, verifying the contents were all numbers, or an 11 character field with numbers and dashes, we would characterize that as a high probability that we had discovered a Social Security number. And when you have a few sizable SAP installations around, with some 40K tables, casual checking does not cut it. As I have found a tendency for QA people to push production data into test servers, this has been a handy tool for basic security and detection of rogue data and database installations.

The part I was working on was the reactive portion. Rather than just generating the report/trouble ticket for someone in IT or Security to review the database column and determine whether it was in fact sensitive information, I would automatically instruct the DAM tools to instantiate a policy that records all activity against that column. Obviously, issues about previously scanned and accepted tables, “white lists”, and such needed to be worked out. Still, the prototype was basically working, and I wanted to begin addressing a long-standing criticism of DAM- that knowing what to monitor can take quite a bit of research and development, or a lot of money in professional services. This is one of the reasons why I have a vision of ADMP being a top-down, policy-driven aggregation of existing security solutions.

Where I am driving with this is that I should be able to manage a number of security applications through policies. Say I write a PCI-DSS policy regarding the security of credit card numbers. That generic policy would have specific components that are enforced at different locations within the organization. The policy could propagate a subset of instructions down to the assessment tool to check the security settings and access controls around credit card information. It could simultaneously seed the discovery application so that it is checking for credit card numbers in unregistered locations. It could simultaneously instruct DAM applications to automatically track the use of these database fields. It could instruct the WAF to block anything that references the triggering objects directly. And so on. The enforcement of the rules is performed by the application best suited for it, and at the location that is most suitable for responding. I have hinted at this in the past, but never really discussed fully what I meant. The policy becomes the link.
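To make the discovery-to-DAM handoff concrete, here is a minimal sketch of the Social Security number example above. The sampled columns, the regular expressions, and the policy dictionary format are all hypothetical- a real implementation would sample values through each database’s catalog and push policies through whatever interface the DAM product exposes:

    # A hypothetical sketch only: sampled column values, regexes, and the policy
    # format are illustrative, not any particular scanner's or DAM product's API.
    import re

    SSN_PLAIN = re.compile(r"^\d{9}$")               # 9 digits, all numbers
    SSN_DASHED = re.compile(r"^\d{3}-\d{2}-\d{4}$")  # 11 characters with dashes

    def looks_like_ssn(sample_values):
        """Flag a column when most sampled values match either SSN shape."""
        if not sample_values:
            return False
        hits = sum(1 for v in sample_values
                   if SSN_PLAIN.match(v.strip()) or SSN_DASHED.match(v.strip()))
        return hits / len(sample_values) > 0.8       # "high probability" threshold

    def dam_monitoring_policy(table, column):
        """The instruction handed to the DAM tool: record all activity on this column."""
        return {
            "action": "monitor",
            "object": f"{table}.{column}",
            "rule": "record_all_activity",
            "source": "pii_discovery_scan",
        }

    # Values sampled by the discovery scan, keyed by (table, column) - made up here.
    sampled = {
        ("HR.EMPLOYEES", "NATIONAL_ID"): ["123-45-6789", "987-65-4321"],
        ("SALES.ORDERS", "ORDER_NO"): ["A10023", "A10024"],
    }

    policies = [dam_monitoring_policy(t, c) for (t, c), values in sampled.items()
                if looks_like_ssn(values)]
    print(policies)   # only HR.EMPLOYEES.NATIONAL_ID triggers a monitoring policy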
Use the business policy to wrap specific actions in a specific set of actionable rules for disparate applications. The policy represents the business driver, and it is mapped down to specific applications or components to enforce the individual rules that constitute the policy. A simple policy management interface can now control and maintain corporate standards, and individual stakeholders can have a say in the implementation and realization of those policies “behind the scenes”, if you will. Add or subtract security widgets as you wish, and add a rule to the policy to direct said widgets how to behave. My examples are solely around the interaction between the assessment/discovery phase and the database activity monitoring software, but much more is possible if you link WAF, web application assessment, DLP, DAM, and other products into the fold.

Clearly there are a lot of people thinking along these lines, if not exactly this scenario, and many are reaching into the database to help secure it. We are seeing SIM/SEM products do more with databases, albeit usually with logs. The database vendors are moving into the security space as well, and are beginning to leverage content inspection and multi-application support. We are seeing the DLP vendors do more with databases, as evidenced by the recent Symantec press release, which I think is a very cool addition to their functionality. The DLP providers tend to be truly content aware. We are even seeing the UTM vendors reach for the database, but the jury is still out on how well this will be leveraged. I don’t think it is a stretch to say we will be seeing more and more of these services linked together. Who adopts a policy-driven model will be interesting to see, but I have heard of a couple firms that approach the problem this way.

You can probably tell I like the policy angle as the glue for security applications. It does not require too much change to any given product- mostly an API and some form of trust validation for the cooperating applications. I started to research policy formats like OVAL, AVDL, and others to see if I could leverage them as a communication medium. There has been a lot of work done in this area by the assessment vendors, but while those formats are based on XML and probably inherently extensible, I did not see anything I was confident in, and was thinking I would have to define a different template to take advantage of this model. Food for thought, anyway.
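And here is an equally rough sketch of the “one business policy, many enforcement points” idea- a single PCI-DSS policy fanned out to the assessment, discovery, DAM, and WAF components described above. The component names, rule fields, and dispatch function are hypothetical placeholders; the point is only that each application receives the subset of the policy it is best suited to enforce:

    # Hypothetical sketch: component names, rule fields, and dispatch() are placeholders.
    PCI_CARD_POLICY = {
        "policy": "pci_dss_cardholder_data",
        "rules": {
            "assessment": {"check": "access_controls_and_configuration",
                           "target": "credit_card_columns"},
            "discovery":  {"scan_for": "credit_card_numbers",
                           "scope": "unregistered_locations"},
            "dam":        {"monitor": "credit_card_columns",
                           "record": "all_activity"},
            "waf":        {"block": "direct_references_to_monitored_objects"},
        },
    }

    def dispatch(policy):
        """Hand each cooperating application its subset of the business policy."""
        for component, rule in policy["rules"].items():
            # A real implementation would call each product's API (after some form
            # of trust validation); here we just show the fan-out.
            print(f"[{policy['policy']}] -> {component}: {rule}")

    dispatch(PCI_CARD_POLICY)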


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.