Securosis Research

DLP Acquisitions: The Good, The Bad, And The Whatever

I’ve been covering the Data Loss Prevention/Content Monitoring and Filtering space pretty much since before it existed, and it’s been pretty wild to watch a market grow from its inception to early mainstream. It’s also a weird experience to stand on the sidelines and watch as all the incredibly hard work of contacts at various vendors finally pays off. As complex and not-quite-mature as the market is, I’m still a fan of DLP. If you go in with the right intentions, especially understanding that it’s great for limiting accidents and bad processes, but not all that good against malicious threats, you’ll be able to reduce your risk in a business-friendly way. It helps solve a business problem, does it reasonably well despite needing some maturity, and has the added benefit of giving you good insight into where your data is and how it’s used.

I’m predicting the core DLP market will do somewhere around $100M this year. No lower than $80M and not higher than $120M, but probably closer to $90-$100M. If we add in products with DLP features that aren’t pure plays, this grows to no more than $180M. In other words, the entire DLP market is, at most, about half of what Symantec paid for Vontu. I’ll talk more about the future of DLP at some point, but the big vendors that win will be those that see DLP as a strategic acquisition for a future platform base around content-aware security (and maybe more than security). The losers will be the ones that buy just to get into the game or add a feature to an existing product line. We’ve hit the point where I don’t expect to see more than one or two acquisitions before the end of the year, and I doubt either of those will be as big as even the PortAuthority/Websense deal ($80M), never mind Vontu/Symantec. It’s possible we’ll see one more near the $100M range, but I suspect nothing until next year. As such, it’s a good time to reflect on the acquisitions over the past eighteen months and figure out which ones might be more successful than others.

Disclaimer: Although I currently have business relationships with a few DLP vendors, none of those relationships precludes me from giving my honest opinions. My position is that even if I lose some business in the short term (which I don’t expect), in the long run it’s far more important for me to retain my reputation for accuracy and objectivity.

I’ll discuss these in roughly chronological order, but I’m too lazy to look up the exact dates:

McAfee/Onigma: McAfee acquired a small Israeli startup that specialized in endpoint DLP fairly early on. Onigma was unproven in the market, and pre-acquisition I didn’t talk to any production references. Some of my Israeli contacts considered the technology interesting. McAfee now offers DLP as a combined network/endpoint solution, but based on the customers I’ve talked with it’s not very competitive as a stand-alone solution. It seems to be reasonable at protecting basic data like credit card numbers, and might be a good add-on if you just want basic DLP and already use the McAfee product line. It lacks content discovery or all-channel network protection, limiting its usefulness if you want a complete solution. I need to admit that this is the product I am least familiar with, and I welcome additional information or criticism of this analysis. Overall, McAfee has a long way to go to be really competitive in DLP. Onigma got them into the game, but that’s about it. Rating- thumb slightly down.
Websense/PortAuthority: Before the Vontu deal, PortAuthority was the one raising eyebrows when Websense acquired them for $80M. When they were still Vidius, I didn’t consider the product competitive, but a year after they injected some cash and changed the name, the product became very solid with a couple of unique features and good unstructured data capabilities. My initial evaluation was a thumbs up- Websense had the channels and existing market for some good upsell, and their endpoint agent could be a good platform for the PortAuthority technology to extend DLP onto workstations (they do use technology from Safend, but some of the features of the Websense agent make it potentially a better option). The challenge, as you’ll see in some of these other deals, is that DLP is a different sell, to a different buying center, and a different way of looking at security. Nearly one year later I think Websense is still struggling a bit, and Q4 numbers, when released, will be extremely telling. The Content Protection Suite is an opportunity for Websense to move away from a more commoditized market (web filtering) and build a strong base for long-term growth, but we have yet to see them fully execute in that direction. I’ve always considered this one a smart acquisition, but I worry a bit that the execution is faltering. Q4 will be a critical one for Websense, and 2008 an even more critical year since the initial integration pains should be over. Rating- thumb slightly up, able to go in either direction based on Q4.

EMC/Tablus: Tablus was an early visionary in the market and, with PortAuthority, one of the top two technologies for working with unstructured data (as opposed to credit card/Social Security numbers). Despite a good core technology (and one of the first endpoint agents, via early acquisition) they faltered significantly on execution. The product suffered from integration and UI issues, and we didn’t see them in as many evaluations as some of the others. That said, the EMC acquisition (undisclosed numbers, but rumored in the $40M range) is one of the smarter ones in the market. EMC/RSA is the biggest threat in the data security market today- they have more components, ranging from database encryption to DRM to DLP, than anyone else. Because of Tablus’s stronger abilities in unstructured data, it’s well positioned to integrate across the EMC product line. The biggest challenge is


(Updated) DLP Acquisitions: The Good, The Bad, And The Whatever

Updated- based on a challenge in email, and redoing some math, I’m going out on a limb and revising my market projections down. My best guess is the market will do closer to $80M this year, unless Q4 is unusually strong.

I’ve been covering the Data Loss Prevention/Content Monitoring and Filtering space pretty much since before it existed, and it’s been pretty wild to watch a market grow from its inception to early mainstream. It’s also a weird experience to stand on the sidelines and watch as all the incredibly hard work of contacts at various vendors finally pays off. As complex and not-quite-mature as the market is, I’m still a fan of DLP. If you go in with the right intentions, especially understanding that it’s great for limiting accidents and bad processes, but not all that good against malicious threats, you’ll be able to reduce your risk in a business-friendly way. It helps solve a business problem, does it reasonably well despite needing some maturity, and has the added benefit of giving you good insight into where your data is and how it’s used.

I’m predicting the core DLP market will do somewhere around $60M-$80M this year (revised down from my original $100M estimate). No lower than $55M (was $80M) and not higher than $100M (was $120M), but probably closer to $60-$70M (was $90-$100M). If we add in products with DLP features that aren’t pure plays, this grows to no more than $180M. In other words, the entire DLP market is, at most, about half of what Symantec paid for Vontu. I’ll talk more about the future of DLP at some point, but the big vendors that win will be those which see DLP as a strategic acquisition for a future platform base around content-aware security (and maybe more than security). The losers will be the ones which buy just to get into the game or add a feature to an existing product line. We’ve hit the point where I don’t expect to see more than one or two acquisitions before the end of the year, and I doubt either of those will be as big as even the PortAuthority/Websense deal ($80M), never mind Vontu/Symantec. It’s possible we’ll see one more near the $100M range, but I suspect nothing until next year. As such, it’s a good time to reflect on the acquisitions over the past eighteen months and figure out which ones might be more successful than others.

Disclaimer: Although I currently have business relationships with a few DLP vendors, none of those relationships precludes me from giving my honest opinions. My position is that even if I lose some business in the short term (which I don’t expect), in the long run it’s far more important for me to retain my reputation for accuracy and objectivity.

I’ll discuss these in roughly chronological order, but I’m too lazy to look up the exact dates:

McAfee/Onigma: McAfee acquired a small Israeli startup that specialized in endpoint DLP fairly early on. Onigma was unproven in the market, and pre-acquisition I didn’t talk to any production references. Some of my Israeli contacts considered the technology interesting. McAfee now offers DLP as a combined network/endpoint solution, but based on the customers I’ve talked with it’s not very competitive as a stand-alone solution. It seems to be reasonable at protecting basic data like credit card numbers, and might be a good add-on if you just want basic DLP and already use the McAfee product line. It lacks content discovery or all-channel network protection, limiting its usefulness if you want a complete solution.
I need to admit that this is the product I am least familiar with, and I welcome additional information or criticism of this analysis. Overall, McAfee has a long way to go to be really competitive in DLP. Onigma got them into the game, but that’s about it. Rating: thumb slightly down.

Websense/PortAuthority: Before the Vontu deal, PortAuthority was the one raising eyebrows when Websense acquired them for $80M. When they were still Vidius, I didn’t consider the product competitive, but a year after they injected some cash and changed the name, the product became very solid with a couple of unique features and good unstructured data capabilities. My initial evaluation was a thumbs up- Websense had the channels and existing market for some good upsell, and their endpoint agent could be a good platform for the PortAuthority technology to extend DLP onto workstations (they do use technology from Safend, but some of the features of the Websense agent make it potentially a better option). The challenge, as you’ll see in some of these other deals, is that DLP is a different sell, to a different buying center, and a different way of looking at security. Nearly one year later I think Websense is still struggling a bit, and Q4 numbers, when released, will be extremely telling. The Content Protection Suite is an opportunity for Websense to move away from a more commoditized market (web filtering) and build a strong base for long-term growth, but we have yet to see them fully execute in that direction. I’ve always considered this one a smart acquisition, but I worry a bit that the execution is faltering. Q4 will be a critical one for Websense, and 2008 an even more critical year since the initial integration pains should be over. Rating: thumb slightly up, able to go in either direction based on Q4.

EMC/Tablus: Tablus was an early visionary in the market and, with PortAuthority, one of the top two technologies for working with unstructured data (as opposed to credit card/Social Security numbers). Despite a good core technology (and one of the first endpoint agents, via early acquisition) they faltered significantly on execution. The product suffered from integration and UI issues, and we didn’t see them in as many evaluations as some of the others. That said, the EMC acquisition (undisclosed numbers, but rumored in the $40M range) is one of the smarter ones in the market. EMC/RSA is the biggest threat in


Help Build The Best IPFW Firewall Rules Sets Ever

Updated: See https://securosis.com/wp-content/uploads/2007/11/ipfw-securosis.txt. I need to completely thank and acknowledge windexh8er for suggesting this post in the comments on the Leopard firewall post, and providing the starting content. In his (or her) own words:

So, for everyone constantly complaining about the crap-tastic new implementation of the Leopard firewall, how about we baseline a good IPFW config? Here’s for starters:

00100 allow ip from any to any via lo*
00110 deny ip from 127.0.0.0/8 to any in
00120 deny ip from any to 127.0.0.0/8 in
00500 check-state
00501 deny log ip from any to any frag
00502 deny log tcp from any to any established in
01500 allow udp from 10.100.0.0/24 5353 to any dst-port 1024-65535 in
01700 allow icmp from any to any icmptypes 3
01701 allow icmp from any to any icmptypes 4
01702 allow icmp from any to any icmptypes 8 out
01703 allow icmp from any to any icmptypes 0 in
01704 allow icmp from any to any icmptypes 11 in
65500 allow tcp from me to any keep-state
65501 allow udp from me to any keep-state
65534 deny log ip from any to any
65535 allow ip from any to any

This firewall configuration will do a number of things. First of all, line 500 is key to checking the state table before we block any poser incoming connections. Line 502 blocks connections coming in that pretend they were established, but really weren’t. Line 501 is pretty self-explanatory, blocking fragmented packets in. I know nothing I’m using is fragmenting, so YMMV. Line 1500 is an example. Since Bonjour services cannot be tracked correctly in the state table we need to allow things back to 5353/UDP on the box (that is, if you want to use it). But my example shows that I’m only allowing those services on my local network. Anytime I head to Panera or Starbucks I don’t have to worry about 5353 being ‘open’, unless of course those networks are using 10.100.0.0/24. Most of the time they’re not. But if I noticed that I would disable that rule for the time being. Next we get to ICMP. What do these let us do? ICMP type 3 lets Path MTU Discovery (PMTU) work in and out. Many people don’t realize the advantages of PMTU, because they think ICMP is inherently evil. Try doing some performance engineering and PMTU becomes a great resource. Anyway, type 3 is not evil. Next, type 4 is source quench. It will tell my upstream gateway to “slow down” if need be. Again, not evil for the most part. The pros outweigh the cons anyway. Types 8 and 0 rely on each other. 8 lets me ping out and 0 lets that back in. BUT – people will not be able to ping me. Sneaky sneaky. The last one, type 11, will let me run traceroute. So now 65500 and 65501 basically let my computer open any port out. In the interest of keeping this ruleset “set it and forget it” style this can be done better, like specifying everything you need to let out and blocking everything else. But I can’t delve into that for ‘every’ user, so this makes it a little more convenient. 65534 is our deny. Notice all the denies I set up have logging statements. I always have a terminal running, tailing my firewall log. Then again, for those who don’t know how to respond maybe just keep that on the down low – you might get sick if you saw all of the traffic hitting your box depending on the network you’re connected to.
Rich – you should start a thread for whittling down the best default ruleset for IPFW on Tiger/Leopard and let’s do a writeup on how to implement it.

Ask and ye shall receive- I’ll be putting together some of my own suggestions, but this is a heck of a great start and I’m having trouble thinking of any good additions right now. Let’s all pile on- once we get consensus I’ll do another post with the results.
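For anyone who wants to try the ruleset without typing each rule by hand, here’s a rough sketch, in Python, of one way to load a numbered ipfw rules file on a Tiger/Leopard-era Mac. To be clear, this is an illustration I’m adding for convenience, not part of windexh8er’s submission: it assumes ipfw lives at /sbin/ipfw, that you run the script with sudo, and that your rules file (for example the ipfw-securosis.txt linked above) uses the same “number action …” format shown in the ruleset.

#!/usr/bin/env python
# Illustrative only: load a numbered ipfw ruleset from a plain-text file.
# Assumes /sbin/ipfw (Tiger/Leopard-era Mac OS X) and root privileges (run with sudo).
import subprocess
import sys

IPFW = "/sbin/ipfw"

def apply_ruleset(path):
    # Start from a known state; flush removes everything except the default rule 65535.
    subprocess.check_call([IPFW, "-q", "flush"])
    for line in open(path):
        rule = line.strip()
        if not rule or rule.startswith("#"):
            continue  # skip blank lines and comments
        if rule.split()[0] == "65535":
            continue  # 65535 is the built-in default rule, so leave it out
        # Each line already starts with its rule number, so "add" accepts it as-is.
        subprocess.check_call([IPFW, "-q", "add"] + rule.split())

if __name__ == "__main__":
    apply_ruleset(sys.argv[1] if len(sys.argv) > 1 else "ipfw-securosis.txt")

Treat it as a starting point; the same thing can be done with a couple of lines of shell, and you’ll want to wire whatever you settle on into your startup items so the rules survive a reboot.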


Leopard Firewall- Apple Documents And Potentially Good News

Updated: See http://securosis.com/2007/11/15/ipfw-rules/. Thanks to an email from John Baxter via MacInTouch, it looks like Apple posted some documentation on the new firewall that contains some really good news:

The Application Firewall does not overrule rules set with ipfw, because ipfw operates on packets at a lower level in the networking stack.

If true, this is some seriously good news. It means we can run ipfw rule sets in conjunction with the new firewall. Why would you want to do that? I plan on writing an ipfw rule set that allows file sharing, web, and ssh through, and will use the GUI in the application firewall to allow or deny those services I sometimes want to open up without manually changing firewall rule sets. Sigh, if only I’d known this earlier! I won’t have a chance to test today, so please let me know in the comments if the application firewall overrides your ipfw rule sets. This should help us create the best Leopard ipfw rule set…
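Until I can test it myself, here’s a quick, unpolished sketch (mine, and purely illustrative) of a check you could run after turning on the Application Firewall: it lists the active ipfw rules and tells you whether your custom rules are still loaded. It assumes ipfw is at /sbin/ipfw and that you run it with sudo.

#!/usr/bin/env python
# Rough, illustrative check: are custom ipfw rules still loaded?
# Assumes /sbin/ipfw and root privileges (run with sudo).
import subprocess

def loaded_rule_numbers():
    # "ipfw list" prints one rule per line, beginning with its rule number.
    output = subprocess.check_output(["/sbin/ipfw", "list"])
    numbers = []
    for line in output.decode().splitlines():
        parts = line.split()
        if parts and parts[0].isdigit():
            numbers.append(int(parts[0]))
    return numbers

if __name__ == "__main__":
    custom = [n for n in loaded_rule_numbers() if n < 65535]  # 65535 is the default rule
    if custom:
        print("Custom ipfw rules still active:", custom)
    else:
        print("Only the default rule is loaded; your ipfw ruleset is not active.")

If your custom rules show up while the new firewall is enabled, that’s consistent with Apple’s statement that the two operate at different layers.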


Network Security Podcast, Episode 83

Martin returns in this episode as we discuss a bunch of totally unrelated security news, from security camera screen savers to breaking into data centers with movie-style techniques.

Show Notes:

  • DIY WiFi antenna reception boosters – I really built a pair of these, and they really do work.
  • Masked thieves storm into Chicago colocation (again!)
  • I’m vulnerable to Identity Theft – Thanks a lot HMRC – Fellow security blogger David Whitelegg was a victim of his own government.
  • Vontu purchased by Symantec for $350 million – No link, but I’m sure you’ll get several in your morning security newsletters.
  • SalesForce.com compromised by phishing attacks – Brian Krebs has two very good articles on this (so far): Deconstructing the Fake FTC e-mail virus attack, and SalesForce.com acknowledges data loss.
  • Screensaver displays security cam images – SurveillanceSaver Alpha; http://www.google.com/search?q=inurl:%22jpg/image.jpg%3Fr%3D%22
  • WabiSabiLabi founder arrested in Italy
  • Tonight’s music: Reverse Engineering by +nurse

Network Security Podcast, Episode 83
Time: 46:52


Dark Reading Column Up- Tylenol As A Breach Disclosure

My first semi-regular column is up over at Dark Reading, “Learning From Tylenol”. Ignore the background stuff on me in the beginning that I had to write; the meat starts about a third of the way through.

If you don’t remember it, about 25 years ago, seven people lay dead or dying in the Chicago area from some sort of poisoning, and law enforcement investigations indicated that Tylenol was involved, perhaps through contamination in the manufacturing process or through bottle-tampering in retail stores. If Johnson & Johnson followed today’s breach disclosure practices, the company would perform an internal investigation of their factories, while urging the police to avoid making any public statements. The lawyers would begin combing through business contracts and regulations to see if they had any legal obligation to disclose the tampering.

It’s even more fun after that…


Understanding And Selecting A Database Activity Monitoring Solution: Part 2, Technical Architecture

In Part 1 of our series we introduced Database Activity Monitoring (DAM) and discussed some of its use cases. In this post we’ll discuss current technical architectures.

Author’s Note: Although I call this product category Database Activity Monitoring, I don’t believe that name sufficiently describes where the market is headed. Over time we will migrate towards Application and Database Monitoring and Protection as products combine application and database monitoring with more options for active blocking, but it’s still too early to rebrand the market with that definition. Some tools do already offer those options, but both product and customer maturity still need to advance before we can migrate the definition.

Base Architecture

One of the key values of DAM is the ability to monitor multiple databases running on multiple database management systems (DBMS) across multiple platforms (Windows vs. Unix vs. …). The DAM tool aggregates collected information from multiple collectors to a central, secure server. In some cases the central server/management console also collects information, while in other cases it serves merely as a repository for collectors to drop data. This creates three potential options for deployment, depending on the solution you choose:

Single Server/Appliance: A single server or appliance serves as both the sensor/collection point and management console. This mode is typically used for smaller deployments.

Two-tiered Architecture: This option consists of a central management server and remote collection points/sensors. The central server does no direct monitoring; it aggregates information from remote systems, manages policies, and generates alerts. The remote collectors can use any of the collection techniques and feed data back to the central server.

Hierarchical Architecture: Collection points/sensors aggregate to business- or geographical-level management servers, which in turn report to an enterprise-wide management server. Hierarchical deployments are best suited for large enterprises which may have different business unit or geographic needs. They can also be configured to only pass certain kinds of data between the tiers, to manage large volumes of information or maintain unit/geographic privacy and policy needs.

Whatever deployment architecture you choose, the central server aggregates all collected data, performs policy-based alerting, and manages reporting and workflow. I’ve focused this description on typical DAM deployments for database monitoring and alerting; as we delve into the technology we’ll see additional deployment options for more advanced features like blocking.

Collection Techniques

At the core of all DAM solutions are the collectors that monitor database traffic and either collect it locally or send it to the central management server. These collectors are, at a minimum, capable of monitoring SQL traffic. This is one of the defining characteristics of DAM and what differentiates it from log management, Security Information and Event Management, or other tools that also offer some level of database monitoring. As usual, I’m going to simplify a bit, but there are three major categories of collection techniques.

Network Monitoring: This technique monitors network traffic for SQL, parses the SQL, and stores it in the collector’s internal database. Most tools monitor bidirectionally, but some early tools only monitored inbound SQL requests.
The advantages of network monitoring are that it has zero overhead on the monitored databases, can monitor independently of platform, requires no modification to the databases, and can monitor multiple, heterogeneous database management systems at once. The disadvantages are that it has no knowledge of the internal state of the database and will miss any database activity that doesn’t cross the network, such as logging in locally or remote console connections. For this last reason, I only recommend network monitoring when it’s used in conjunction with another monitoring technique that can capture local activity. Network monitoring can still be used if connections to the databases are encrypted via SSL or IPSec, by placing a VPN appliance in front of the databases and positioning the DAM collector between the VPN/SSL appliance and the database, where the traffic is unencrypted.

Audit Log Monitoring: When using this technique, the collector is given administrative access to the target database and native database auditing is turned on. The collector externally monitors the DBMS and collects activity recorded by the native auditing or other internal database features that can output activity data. The overhead on the monitored system is thus the overhead introduced by turning on the native logging/auditing. In some cases this is completely acceptable- e.g., Microsoft SQL Server is designed to provide low-overhead remote monitoring. In other cases, particularly Oracle before version 10g, the overhead is material and may not be acceptable for performance reasons. Advantages include the ability (depending on DBMS platform) to monitor all database activity, including local and internal activity regardless of client connection method, with performance equal to that of the native logging/monitoring. The big disadvantage is potential performance issues depending on the database platform, especially older versions of Oracle. This technique also requires opening an administrative account on the database, and possibly some configuration changes.

Local Agent: This technique requires the installation of a software agent on the database server to collect activity. Individual agents vary widely in performance and techniques used, even within a product line, due to the requirements of DBMS and host platform support. Some early agents relied on locally sniffing a network loopback, which misses some types of client connections. The latest round of agents hooks into the host kernel to audit activity without modification to the DBMS and with minimal performance impact. Leading agents typically impact performance by no more than 3-5%, which seems to be the arbitrary limit database administrators are willing to accept. Advantages include collection of all activity without turning on native auditing, the ability to monitor internal database activity like stored procedures, and potentially low overhead. Disadvantages include limited platform support (a new agent needs to be built for every platform) and the requirement to install an agent on every monitored database.

The Future is a Hybrid

I’m often asked which collection technique is best, and the answer is, “all of them”. Different collection techniques have different advantages and
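To make the collector-to-central-server flow more concrete, here’s a deliberately simplified sketch of a local-agent-style collector. This is my own illustration, not any vendor’s implementation: it tails a hypothetical database audit log, applies one trivial policy, and forwards matching events to a central management server over HTTP. The file path, policy, and console URL are all made up, and real agents hook much deeper into the DBMS or host kernel.

#!/usr/bin/env python
# Hypothetical, simplified DAM collector sketch - illustrative only.
# Tails a (made-up) database audit log and forwards policy matches to a
# central management server. Real agents hook the DBMS or kernel instead.
import json
import re
import time
import urllib.request

AUDIT_LOG = "/var/log/exampledb/audit.log"                  # hypothetical audit log
CENTRAL_SERVER = "https://dam-console.example.com/events"   # hypothetical console
POLICY = re.compile(r"\b(SELECT|UPDATE|DELETE)\b.*\bcredit_card\b", re.IGNORECASE)

def forward(event):
    # Ship one event to the central server as JSON.
    req = urllib.request.Request(
        CENTRAL_SERVER,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def tail_and_monitor(path):
    with open(path) as log:
        log.seek(0, 2)  # start at the end of the file, like tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            if POLICY.search(line):
                forward({"raw": line.strip(), "policy": "credit_card_access"})

if __name__ == "__main__":
    tail_and_monitor(AUDIT_LOG)

In a real two-tiered or hierarchical deployment, the central server would then handle aggregation, policy management, alerting, and workflow on top of events like these.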


Heading to San Francisco

It’s a bit last minute, but I’ll be out in San Francisco next week for a panel at Oracle OpenWorld. I’m still working on my plans, but the panel is on Wednesday the 14th. I’m trying to decide how long to stay, so if you’re interested in meeting, drop me a line…


Understanding And Selecting A DLP Solution: Part 7, The Selection Process

Welcome to the last part of our series on understanding and selecting a data loss prevention/content monitoring and filtering solution. Over the past 6 entries we’ve focused on the different components of solutions and the technologies that underlie them. Today, we’ll close the series with recommendations on how to run the selection process and pick the right solution for your organization. (Part 1, Part 2, Part 3, Part 4, Part 5, Part 6)

As we’ve discussed, DLP solutions can protect a wide range of data under a wide variety of circumstances, which makes DLP a particularly dangerous technology to acquire without the proper preparation. While there’s somewhat of a feature consensus among major players, how these features are implemented varies widely from vendor to vendor. I’ve also seen some organizations jump on the DLP bandwagon without having any idea how they’d like to use their new solution. In other cases, I’ve talked to clients complaining about high false positives while failing to turn on features in the product they’ve already bought that could materially improve accuracy. I’ve probably talked to over 100 organizations that have evaluated and deployed DLP, and based on their experiences I recommend a three-phase selection process. Most of this is no different than the average procurement process, but there are a few extra recommendations specific to DLP- especially in the first phase. This process is skewed for larger organizations, so small to mid-size enterprises will need to scale back and adjust it to match their resources.

Define Needs and Prepare Your Organization

Before you start looking at any tools, you need to understand why you need DLP, how you plan on using it, and the business processes around creating policies and managing incidents.

Identify business units that need to be involved and create a selection committee: We tend to include two kinds of business units in the DLP selection process- content owners with sensitive data to protect, and content protectors with the responsibility for enforcing controls over the data. Content owners include those business units that hold and use the data. Content protectors tend to include departments like human resources, IT security, corporate legal, compliance, and risk management. Once you identify the major stakeholders, you’ll want to bring them together for the next few steps.

Define what you want to protect: Start by listing out the kinds of data, as specifically as possible, that you plan on using DLP to protect. We typically break content out into three categories- personally identifiable information (PII, including healthcare, financial, and other data), corporate financial data, and intellectual property. The first two tend to be more structured and will drive you towards certain solutions, while IP tends to be less structured, bringing different content analysis requirements. Even if you want to protect all kinds of content, use this process to specify and prioritize, preferably on paper.

Decide how you want to protect it and set expectations: In this step you will answer two key questions. First, in what channels/phases do you want to protect the data? This is where you decide if you just want basic email monitoring, or if you want comprehensive data-in-motion, data-at-rest, and data-in-use protection. I suggest you be extremely specific, listing out major network channels, data stores, and endpoint requirements. The second question is what kind of enforcement do you plan on implementing? Monitoring and alerting only?
Email filtering? Automatic encryption? You’ll get a little more specific in the formalized requirements later, but you should have a good idea of your expectations at this point. Also, don’t forget that needs may change over time, so I recommend you break requirements into short-term (within 6 months of deployment), mid-term (12-18 months after deployment), and long-term (up to 3 years after deployment).

Roughly outline process workflow: One of the biggest stumbling blocks for a successful DLP deployment is failure to prepare the enterprise. In this stage you define your expected workflows for creating new protection policies and handling incidents involving insiders and external attackers. Which business units are allowed to request data to protect? Who is responsible for building the policies? When a policy is violated, what’s the workflow to remediate? When is HR notified? Corporate legal? Who handles day-to-day policy violations? Is it a technical security role, or non-technical, like a compliance officer? The answers to these kinds of questions will guide you towards different solutions that best meet your workflow needs.

By the completion of this phase you will have defined key stakeholders, convened a selection team, prioritized the data you want to protect, determined where you want to protect it, and roughed out workflow requirements for building policies and remediating incidents.

Formalize Requirements

This phase can be performed by a smaller team working under a mandate from the selection committee. Here, the generic requirements determined in phase 1 are translated into specific technical requirements, and any additional requirements are considered- for example, requirements for directory integration, gateway integration, data storage, hierarchical deployments, endpoint integration, and so on. Hopefully this series gives you a good idea of what to look for, and you can always refine these requirements after you proceed in the selection process and get a better feel for how the products work. At the conclusion of this stage you develop a formal RFI (Request For Information) to release to the vendors, and a rough RFP (Request For Proposals) that you’ll clean up and formally issue in the evaluation phase.

Evaluate Products

As with any product, it’s sometimes difficult to cut through the marketing hype to figure out if it really meets your needs. The following steps should minimize your risk and help you feel fully confident in your final decision:

Issue the RFI: This is the procurement equivalent of pulling the trigger on a starting gun at the start of an Olympic 100 meter dash. Be prepared for all the sales calls. If you’re a smaller organization, start by sending your RFI to a trusted VAR and email


It’s Official- Symantec Really Buying Vontu

From the press release:

CUPERTINO, Calif. – November 5, 2007 – Symantec Corp. (Nasdaq: SYMC) today announced it has signed a definitive agreement to acquire Vontu, the leader in Data Loss Prevention (DLP) solutions, for $350 million, which will be paid in cash and assumed options. The acquisition is expected to close in the fourth calendar quarter of 2007, subject to receiving regulatory approvals and satisfaction of other customary closing conditions.

I’ll post some analysis of all the M&A activity in DLP tomorrow; for now, I need to go off and finish my last post in the DLP series. Congrats to the Vontu team, and it will be really interesting to see if all the recent acquisitions finally give the DLP market a boost.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.