Securosis

Research

Data Protection Isn’t A Network Security Or Endpoint Problem

I woke up in a pretty good mood this morning. First of all, it’s Friday and I can just feel the weekend oozing around the corners of the neighborhood. Sure, every day is either a Friday or a Monday when you’re self-employed, but there’s still something special about the official weekend. I also woke up a little more alert than I have been lately, and even decided to skip my morning routine of getting totally ready before slipping downstairs for some morning coffee and news (via RSS, of course).

That’s when my day took a sad turn. As soon as I dug into my news feeds I saw that my friend Amrit’s blog had been pwned by some attacker looking to damage his reputation. That’s the only thing that can explain this post on how DLP is just a feature of either network security or endpoint security. Either that, or Amrit was intentionally goading me, something all of us security bloggers are a little prone to doing. I’ll just point out a few internal inconsistencies in the hax0r-pretending-to-be-Amrit’s position.

“However I need to call out several things that are being missed in all the DLP analysis. First DLP is a future feature of either network or host-based security, just as all other security technologies whether they be AV, IDS/IPS, firewall, etc are segmented by network and host so shall DLP follow. The never ending explosion of crap and bloatware that must be deployed at the network and host is becoming increasingly difficult to manage. So why would an organization want a separate infrastructure, team, and set of processes to deal with data security differently than information security?”

Um, last time I checked, data security was part of information security. But this is a great example of what Hoff and I have been ranting about- the loss of the term information security, which in some circles only represents AV and firewalls. If that’s your definition of information security, Amrit is correct. Also, we see more divisions than just host and network in real-world operational environments. Is application security a network or host issue? Incident response? SIEM? NAC? Even web filtering today has both host and network components. Those lines were drawn when it made sense to draw them; now we’re drawing new ones. We need a different infrastructure, team, and set of processes because you can’t solve the problem with network-only or host-only infrastructure, teams, and processes. We’re solving a business problem (“protect my data”), not just thinking of this as a collection of tools.

“Second thing to note is that organizations segment administration responsibility between the network and desktop and servers, that is network security technologies are generally purchased, deployed, and administered by a different group (usually network operations) than the group that is responsible for desktop security (usually desktop operations/support). It is common for an organization to deploy one AV vendor at the email gateways and a separate vendor at the desktop, just like it is common to deploy different firewalls at the network vs. the host, same with anti-spam, intrusion detection/prevention and pretty much anything else that can run on the network or the host – so why would it be any different for DLP?”

It won’t be- as I’ve discussed before, DLP will be all over the place and will integrate with these existing investments, while being managed someplace else. Why the heck should the guy managing AV be dealing with highly sensitive policy violations around the use of intellectual property?
Besides, the different gateway vs. desktop AV argument is spurious- most organizations do that for defense in depth. The management of deploying and integrating DLP will be the responsibility of the network and host teams Amrit is so enamored with. The management of DLP policies and violations will be a separate group (in a big enough organization) with a data/content/compliance focus. There are two kinds of administrative responsibilities- one to solve the problem, and one to keep the stuff running. The latter is a throwaway, and can be handled by whoever is “in charge” of the platform where the sensor is being deployed.

“So if one believes, as I do, that DLP will converge with adjacent security and eventually systems management functions and one believes, as I do, that there is a pretty clear separation of duties between the network and host operations folks in an organization then one would have to question analysis that called for a converged network/host solution.”

Or, if you believe, as I do, that we’re here to solve business problems and not just support the organizational momentum of the past, the only way to solve DLP is through a hybrid solution. It makes absolutely no sense to have to build different data protection policies for the network, host, and storage; rather, we’ll build one policy based on the content and distribute it to all the necessary enforcement points. That’s where DLP is headed. We’ll let BigFix distribute the agents and keep them running, just as we let the mail and web gateways keep their DLP engines running. Don’t worry Amrit, you guys will continue to see success as an endpoint management tool. But you’ll be distributing agents for something that connects back to a DLP tool that also talks to network gateways, storage, and a bunch of other stuff.

“That is not to say that there shouldn’t be an ideal of integration, but an ideal is a far cry from reality and the reality is that network focused tool vendors are terrible, absolutely abysmal at providing central management of desktop technologies (can anyone say Cisco CSA?) so why would an organization deploy an agent from a network focused company? And for that matter why would an organization deploy a network device from a desktop focused vendor – they wouldn’t, unless the vendor had mastered both, and there were no organizational politics between the network and desktop teams, and there was good collaboration between the security and operations teams…”


DLP Acquisitions: The Good, The Bad, And The Whatever

I’ve been covering the Data Loss Prevention/Content Monitoring and Filtering space pretty much since before it existed, and it’s been pretty wild to watch a market grow from its inception to early mainstream. It’s also a weird experience to stand on the sidelines and watch as all the incredibly hard work of contacts at various vendors finally pays off. As complex and not-quite-mature as the market is, I’m still a fan of DLP. If you go in with the right intentions, especially understanding that it’s great for limiting accidents and bad processes but not all that good against malicious threats, you’ll be able to reduce your risk in a business-friendly way. It helps solve a business problem, does it reasonably well despite needing some maturity, and has the added benefit of giving you good insight into where your data is and how it’s used.

I’m predicting the core DLP market will do somewhere around $100M this year. No lower than $80M and no higher than $120M, but probably closer to $90-$100M. If we add in products with DLP features that aren’t pure plays, this grows to no more than $180M. In other words, the entire DLP market is, at most, about half of what Symantec paid for Vontu. I’ll talk more about the future of DLP at some point, but the big vendors that win will be those that see DLP as a strategic acquisition for a future platform based around content-aware security (and maybe more than security). The losers will be the ones that buy just to get into the game or add a feature to an existing product line. We’ve hit the point where I don’t expect to see more than one or two acquisitions before the end of the year, and I doubt either of those will be as big as even the PortAuthority/Websense deal ($80M), never mind Vontu/Symantec. It’s possible we’ll see one more near the $100M range, but I suspect nothing until next year. As such, it’s a good time to reflect on the acquisitions over the past eighteen months and figure out which ones might be more successful than others.

Disclaimer: Although I currently have business relationships with a few DLP vendors, none of those relationships precludes me from giving my honest opinions. My position is that even if I lose some business in the short term (which I don’t expect), in the long run it’s far more important for me to retain my reputation for accuracy and objectivity.

I’ll discuss these in roughly chronological order, but I’m too lazy to look up the exact dates:

McAfee/Onigma: McAfee acquired a small Israeli startup that specialized in endpoint DLP fairly early on. Onigma was unproven in the market, and pre-acquisition I didn’t talk to any production references. Some of my Israeli contacts considered the technology interesting. McAfee now offers DLP as a combined network/endpoint solution, but based on the customers I’ve talked with it’s not very competitive as a stand-alone solution. It seems to be reasonable at protecting basic data like credit card numbers, and might be a good add-on if you just want basic DLP and already use the McAfee product line. It lacks content discovery and all-channel network protection, limiting its usefulness if you want a complete solution. I need to admit that this is the product I am least familiar with, and I welcome additional information or criticism of this analysis. Overall, McAfee has a long way to go to be really competitive in DLP. Onigma got them into the game, but that’s about it. Rating: thumb slightly down.
Websense/PortAuthority: Before the Vontu deal, PortAuthority was the one raising eyebrows when Websense acquired them for $80M. When they were still Vidius, I didn’t consider the product competitive, but a year after they injected some cash and changed the name, the product became very solid, with a couple of unique features and good unstructured data capabilities. My initial evaluation was a thumbs up- Websense had the channels and existing market for some good upsell, and their endpoint agent could be a good platform for the PortAuthority technology to extend DLP onto workstations (they do use technology from Safend, but some of the features of the Websense agent make it potentially a better option). The challenge, as you’ll see in some of these other deals, is that DLP is a different sell, to a different buying center, and a different way of looking at security. Nearly one year later I think Websense is still struggling a bit, and Q4 numbers, when released, will be extremely telling. The Content Protection Suite is an opportunity for Websense to move away from a more commoditized market (web filtering) and build a strong base for long-term growth, but we have yet to see them fully execute in that direction. I’ve always considered this a smart acquisition, but I worry a bit that the execution is faltering. Q4 will be critical for Websense, and 2008 an even more critical year, since the initial integration pains should be over. Rating: thumb slightly up, able to go in either direction based on Q4.

EMC/Tablus: Tablus was an early visionary in the market and, with PortAuthority, one of the top two technologies for working with unstructured data (as opposed to credit card/Social Security numbers). Despite a good core technology (and one of the first endpoint agents, via an early acquisition) they faltered significantly on execution. The product suffered from integration and UI issues, and we didn’t see them in as many evaluations as some of the others. That said, the EMC acquisition (undisclosed numbers, but rumored in the $40M range) is one of the smarter ones in the market. EMC/RSA is the biggest threat in the data security market today- they have more components, ranging from database encryption to DRM to DLP, than anyone else. Because of Tablus’s stronger abilities with unstructured data, it’s well positioned to integrate across the EMC product line. The biggest challenge is


(Updated) DLP Acquisitions: The Good, The Bad, And The Whatever

Updated: Based on a challenge in email, and after redoing some math, I’m going out on a limb and revising my market projections down. My best guess is the market will do closer to $80M this year, unless Q4 is unusually strong.

I’ve been covering the Data Loss Prevention/Content Monitoring and Filtering space pretty much since before it existed, and it’s been pretty wild to watch a market grow from its inception to early mainstream. It’s also a weird experience to stand on the sidelines and watch as all the incredibly hard work of contacts at various vendors finally pays off. As complex and not-quite-mature as the market is, I’m still a fan of DLP. If you go in with the right intentions, especially understanding that it’s great for limiting accidents and bad processes but not all that good against malicious threats, you’ll be able to reduce your risk in a business-friendly way. It helps solve a business problem, does it reasonably well despite needing some maturity, and has the added benefit of giving you good insight into where your data is and how it’s used.

I’m predicting the core DLP market will do somewhere around $60M-$80M this year (revised down from my original $100M estimate). No lower than $55M (was $80M) and no higher than $100M (was $120M), but probably closer to $60-$70M (was $90-$100M). If we add in products with DLP features that aren’t pure plays, this grows to no more than $180M. In other words, the entire DLP market is, at most, about half of what Symantec paid for Vontu. I’ll talk more about the future of DLP at some point, but the big vendors that win will be those that see DLP as a strategic acquisition for a future platform based around content-aware security (and maybe more than security). The losers will be the ones that buy just to get into the game or add a feature to an existing product line. We’ve hit the point where I don’t expect to see more than one or two acquisitions before the end of the year, and I doubt either of those will be as big as even the PortAuthority/Websense deal ($80M), never mind Vontu/Symantec. It’s possible we’ll see one more near the $100M range, but I suspect nothing until next year. As such, it’s a good time to reflect on the acquisitions over the past eighteen months and figure out which ones might be more successful than others.

Disclaimer: Although I currently have business relationships with a few DLP vendors, none of those relationships precludes me from giving my honest opinions. My position is that even if I lose some business in the short term (which I don’t expect), in the long run it’s far more important for me to retain my reputation for accuracy and objectivity.

I’ll discuss these in roughly chronological order, but I’m too lazy to look up the exact dates:

McAfee/Onigma: McAfee acquired a small Israeli startup that specialized in endpoint DLP fairly early on. Onigma was unproven in the market, and pre-acquisition I didn’t talk to any production references. Some of my Israeli contacts considered the technology interesting. McAfee now offers DLP as a combined network/endpoint solution, but based on the customers I’ve talked with it’s not very competitive as a stand-alone solution. It seems to be reasonable at protecting basic data like credit card numbers, and might be a good add-on if you just want basic DLP and already use the McAfee product line. It lacks content discovery and all-channel network protection, limiting its usefulness if you want a complete solution.
I need to admit that this is the product I am least familiar with, and I welcome additional information or criticism of this analysis. Overall, McAfee has a long way to go to be really competitive in DLP. Onigma got them into the game, but that’s about it. Rating: thumb slightly down.

Websense/PortAuthority: Before the Vontu deal, PortAuthority was the one raising eyebrows when Websense acquired them for $80M. When they were still Vidius, I didn’t consider the product competitive, but a year after they injected some cash and changed the name, the product became very solid, with a couple of unique features and good unstructured data capabilities. My initial evaluation was a thumbs up- Websense had the channels and existing market for some good upsell, and their endpoint agent could be a good platform for the PortAuthority technology to extend DLP onto workstations (they do use technology from Safend, but some of the features of the Websense agent make it potentially a better option). The challenge, as you’ll see in some of these other deals, is that DLP is a different sell, to a different buying center, and a different way of looking at security. Nearly one year later I think Websense is still struggling a bit, and Q4 numbers, when released, will be extremely telling. The Content Protection Suite is an opportunity for Websense to move away from a more commoditized market (web filtering) and build a strong base for long-term growth, but we have yet to see them fully execute in that direction. I’ve always considered this a smart acquisition, but I worry a bit that the execution is faltering. Q4 will be critical for Websense, and 2008 an even more critical year, since the initial integration pains should be over. Rating: thumb slightly up, able to go in either direction based on Q4.

EMC/Tablus: Tablus was an early visionary in the market and, with PortAuthority, one of the top two technologies for working with unstructured data (as opposed to credit card/Social Security numbers). Despite a good core technology (and one of the first endpoint agents, via an early acquisition) they faltered significantly on execution. The product suffered from integration and UI issues, and we didn’t see them in as many evaluations as some of the others. That said, the EMC acquisition (undisclosed numbers, but rumored in the $40M range) is one of the smarter ones in the market. EMC/RSA is the biggest threat in


Help Build The Best IPFW Firewall Rule Sets Ever

Updated: See https://securosis.com/wp-content/uploads/2007/11/ipfw-securosis.txt.

I need to completely thank and acknowledge windexh8er for suggesting this post in the comments on the Leopard firewall post, and for providing the starting content. In his (or her) own words:

So how about, for everyone constantly complaining about the crap-tastic new implementation of the Leopard firewall, we baseline a good IPFW config? Here’s one for starters:

00100 allow ip from any to any via lo*
00110 deny ip from 127.0.0.0/8 to any in
00120 deny ip from any to 127.0.0.0/8 in
00500 check-state
00501 deny log ip from any to any frag
00502 deny log tcp from any to any established in
01500 allow udp from 10.100.0.0/24 5353 to any dst-port 1024-65535 in
01700 allow icmp from any to any icmptypes 3
01701 allow icmp from any to any icmptypes 4
01702 allow icmp from any to any icmptypes 8 out
01703 allow icmp from any to any icmptypes 0 in
01704 allow icmp from any to any icmptypes 11 in
65500 allow tcp from me to any keep-state
65501 allow udp from me to any keep-state
65534 deny log ip from any to any
65535 allow ip from any to any

This firewall configuration will do a number of things. First of all, line 500 is key to checking the state table before we block any poser incoming connections. Line 502 blocks connections coming in that pretend they were established, but really weren’t. Line 501 is pretty self-explanatory, blocking fragmented packets in. I know nothing I’m using is fragmenting, so YMMV.

Line 1500 is an example. Since Bonjour services cannot be tracked correctly in the state table, we need to allow things back to 5353/UDP on the box (that is, if you want to use it). But my example shows that I’m only allowing those services on my local network. Anytime I head to Panera or Starbucks I don’t have to worry about 5353 being ‘open’, unless of course those networks are using 10.100.0.0/24. Most of the time they’re not. But if I noticed that, I would disable that rule for the time being.

Next we get to ICMP. What do these let us do? ICMP type 3 lets Path MTU Discovery (PMTU) work in and out. Many people don’t realize the advantages of PMTU, because they think ICMP is inherently evil. Try doing some performance engineering and PMTU becomes a great resource. Anyway, type 3 is not evil. Next, type 4 is source quench. It will tell my upstream gateway to “slow down” if need be. Again, not evil for the most part; the pros outweigh the cons anyway. Types 8 and 0 rely on each other: 8 lets me ping out, and 0 lets the replies back in. BUT - people will not be able to ping me. Sneaky sneaky. The last one, type 11, will let me run traceroute.

So now 65500 and 65501 basically let my computer open any port out. In the interest of keeping this ruleset “set it and forget it” style, this can be done better- like specifying everything you need to let out and blocking everything else- but I can’t delve into that for ‘every’ user, so this makes it a little more convenient. 65534 is our deny. Notice all the denies I set up have logging statements. I always have a terminal running tailing my firewall log. Then again, for those who don’t know how to respond, maybe just keep that on the down low - you might get sick if you saw all of the traffic hitting your box, depending on the network you’re connected to.
Rich – you should start a thread for whittling down the best default ruleset for IPFW on Tiger/Leopard, and let’s do a writeup on how to implement it.

Ask and ye shall receive- I’ll be putting together some of my own suggestions, but this is a heck of a great start and I’m having trouble thinking of any good additions right now. Let’s all pile on- once we get consensus I’ll do another post with the results.
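In the meantime, here’s a minimal sketch of loading the ruleset by hand so you can play with it- the /etc/ipfw.rules path is just an assumption, and making the rules persist across reboots is a topic for the follow-up post:

# Start clean; -q keeps ipfw from asking for confirmation on the flush.
sudo ipfw -q flush
# "deny log" rules only show up in the logs if the firewall verbose sysctl is on.
sudo sysctl -w net.inet.ip.fw.verbose=1
# ipfw can read a file of sub-commands, so save the rules above with each line
# prefixed by "add" (e.g. "add 00100 allow ip from any to any via lo*"), and
# leave out 65535, which is the built-in default rule.
sudo ipfw -q /etc/ipfw.rules
# Confirm what loaded, and keep an eye on the denies (on OS X the kernel's
# ipfw messages generally land in system.log).
sudo ipfw list
sudo tail -f /var/log/system.log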


Leopard Firewall- Apple Documents And Potentially Good News

Updated: See http://securosis.com/2007/11/15/ipfw-rules/.

Thanks to an email from John Baxter via MacInTouch, it looks like Apple posted some documentation on the new firewall that contains some really good news:

“The Application Firewall does not overrule rules set with ipfw, because ipfw operates on packets at a lower level in the networking stack.”

If true, this is some seriously good news. It means we can run ipfw rule sets in conjunction with the new firewall. Why would you want to do that? I plan on writing an ipfw rule set that allows file sharing, web, and ssh through, and will use the GUI in the application firewall to allow or deny those services I sometimes want to open up, without manually changing firewall rule sets. Sigh- if only I’d known this earlier! I won’t have a chance to test today, so please let me know in the comments if the application firewall overrides your ipfw rule sets. This should help us create the best Leopard ipfw rule set…
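Something along these lines is what I have in mind- untested, and the ports are assumptions that depend on which services you actually run (22 for ssh, 80 for the built-in web sharing, 548 for AFP file sharing). These stateful rules would slot in ahead of the final deny in a ruleset like the one in the ipfw post linked above:

# ssh
add 02000 allow tcp from any to me dst-port 22 in setup keep-state
# web (the built-in Apache web sharing)
add 02010 allow tcp from any to me dst-port 80 in setup keep-state
# AFP file sharing (use 139/445 instead if you share to Windows boxes)
add 02020 allow tcp from any to me dst-port 548 in setup keep-state

The GUI application firewall would then handle per-application allow/deny on top of that.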


Network Security Podcast, Episode 83

Martin returns in this episode as we discuss a bunch of totally unrelated security news, from security camera screen savers to breaking into data centers with movie-style techniques.

Show Notes:

  • DIY WiFi antenna reception boosters – I really built a pair of these, and they really do work
  • Masked thieves storm into Chicago colocation (again!)
  • I’m vulnerable to Identity Theft – Thanks a lot HMRC – Fellow security blogger David Whitelegg was a victim of his own government.
  • Vontu purchased by Symantec for $350 million – No link, but I’m sure you’ll get several in your morning security newsletters.
  • SalesForce.com compromised by phishing attacks – Brian Krebs has two very good articles on this (so far): Deconstructing the Fake FTC e-mail virus attack, and SalesForce.com acknowledges data loss
  • Screensaver displays security cam images – SurveillanceSaver Alpha; http://www.google.com/search?q=inurl:%22jpg/image.jpg%3Fr%3D%22
  • WabiSabiLabi founder arrested in Italy

Tonight’s music: Reverse Engineering by +nurse

Network Security Podcast, Episode 83
Time: 46:52


Dark Reading Column Up- Tylenol As A Breach Disclosure

My first semi-regular column is up over at Dark Reading, “Learning From Tylenol”. Ignore the background stuff about me at the beginning that I had to write; the meat starts about a third of the way through.

If you don’t remember it, about 25 years ago seven people lay dead or dying in the Chicago area from some sort of poisoning, and law enforcement investigations indicated that Tylenol was involved, perhaps through contamination in the manufacturing process or through bottle-tampering in retail stores. If Johnson & Johnson followed today’s breach disclosure practices, the company would perform an internal investigation of their factories, while urging the police to avoid making any public statements. The lawyers would begin combing through business contracts and regulations to see if they had any legal obligation to disclose the tampering.

It’s even more fun after that…


Understanding And Selecting A Database Activity Monitoring Solution: Part 2, Technical Architecture

In Part 1 of our series we introduced Database Activity Monitoring (DAM) and discussed some of its use cases. In this post we’ll discuss current technical architectures.

Author’s Note: Although I call this product category Database Activity Monitoring, I don’t believe that name sufficiently describes where the market is headed. Over time we will migrate towards Application and Database Monitoring and Protection as products combine application and database monitoring with more options for active blocking, but it’s still too early to rebrand the market with that definition. Some tools do already offer those options, but both product and customer maturity still need to advance before we can migrate the definition.

Base Architecture

One of the key values of DAM is the ability to monitor multiple databases running on multiple database management systems (DBMSs) across multiple platforms (Windows vs. Unix vs. …). The DAM tool aggregates collected information from multiple collectors to a central, secure server. In some cases the central server/management console also collects information, while in other cases it serves merely as a repository for collectors to drop data. This creates three potential options for deployment, depending on the solution you choose:

  • Single Server/Appliance: A single server or appliance serves as both the sensor/collection point and the management console. This mode is typically used for smaller deployments.
  • Two-tiered Architecture: This option consists of a central management server and remote collection points/sensors. The central server does no direct monitoring; it aggregates information from remote systems, manages policies, and generates alerts. The remote collectors can use any of the collection techniques and feed data back to the central server.
  • Hierarchical Architecture: Collection points/sensors aggregate to business- or geographic-level management servers, which in turn report to an enterprise-wide management server. Hierarchical deployments are best suited for large enterprises which may have different business unit or geographic needs. They can also be configured to only pass certain kinds of data between the tiers, to manage large volumes of information or to maintain unit/geographic privacy and policy needs.

Whatever deployment architecture you choose, the central server aggregates all collected data, performs policy-based alerting, and manages reporting and workflow. I’ve focused this description on typical DAM deployments for database monitoring and alerting; as we delve into the technology we’ll see additional deployment options for more advanced features like blocking.

Collection Techniques

At the core of all DAM solutions are the collectors that monitor database traffic and either collect it locally or send it to the central management server. These collectors are, at a minimum, capable of monitoring SQL traffic. This is one of the defining characteristics of DAM, and what differentiates it from log management, Security Information and Event Management, or other tools that also offer some level of database monitoring. As usual, I’m going to simplify a bit, but there are three major categories of collection techniques.

Network Monitoring: This technique monitors network traffic for SQL, parses the SQL, and stores it in the collector’s internal database. Most tools monitor bidirectionally, but some early tools only monitored inbound SQL requests.
The advantages of network monitoring are that it imposes zero overhead on the monitored databases, can monitor independently of platform, requires no modification to the databases, and can monitor multiple, heterogeneous database management systems at once. The disadvantages are that it has no knowledge of the internal state of the database and will miss any database activity that doesn’t cross the network, such as local logins or remote console connections. For this last reason, I only recommend network monitoring when used in conjunction with another monitoring technique that can capture local activity. Network monitoring can still be used if connections to the databases are encrypted via SSL or IPSec, by placing a VPN appliance in front of the databases and positioning the DAM collector between the VPN/SSL appliance and the database, where the traffic is unencrypted.

Audit Log Monitoring: When using this technique, the collector is given administrative access to the target database and native database auditing is turned on. The collector externally monitors the DBMS and collects activity recorded by the native auditing or other internal database features that can output activity data. The overhead on the monitored system is thus the overhead introduced by turning on the native logging/auditing. In some cases this is completely acceptable- e.g., Microsoft SQL Server is designed to provide low-overhead remote monitoring. In other cases, particularly Oracle before version 10g, the overhead is material and may not be acceptable for performance reasons. Advantages include the ability (depending on the DBMS platform) to monitor all database activity including local activity, performance equal to that of the native logging/monitoring, and coverage of all database activity, including internal activity, regardless of client connection method. The big disadvantage is potential performance issues, depending on the database platform, especially older versions of Oracle. This technique also requires opening an administrative account on the database and possibly some configuration changes.

Local Agent: This technique requires the installation of a software agent on the database server to collect activity. Individual agents vary widely in performance and the techniques used, even within a product line, due to the requirements of DBMS and host platform support. Some early agents relied on locally sniffing a network loopback, which misses some types of client connections. The latest round of agents hooks into the host kernel to audit activity without modifying the DBMS and with minimal performance impact. Leading agents typically impact performance by no more than 3-5%, which seems to be the arbitrary limit database administrators are willing to accept. Advantages include collection of all activity without turning on native auditing, the ability to monitor internal database activity like stored procedures, and potentially low overhead. Disadvantages include limited platform support (a new agent needs to be built for every platform) and the requirement to install an agent on every monitored database.

The Future is a Hybrid

I’m often asked which collection technique is best, and the answer is, “all of them”. Different collection techniques have different advantages and
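To make the network monitoring technique a little more concrete, here’s a minimal sketch of the underlying idea using stock tools rather than a commercial collector- the interface name and ports are assumptions, and a real DAM product parses and normalizes the SQL in real time instead of just capturing packets:

# Capture database client/server traffic off a span/mirror port or tap in front
# of the database servers; -s 0 grabs full packets, -w writes a capture file.
# 1433 = SQL Server, 1521 = Oracle, 3306 = MySQL - adjust for your environment.
sudo tcpdump -i en0 -s 0 -w db-traffic.pcap 'tcp port 1433 or tcp port 1521 or tcp port 3306'

# A DAM collector reconstructs the SQL statements in this stream and matches
# them against policy; for a quick manual look you can just dump the payloads.
sudo tcpdump -A -r db-traffic.pcap | grep -i -E 'select|insert|update|delete' | head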


Heading to San Francisco

It’s a bit last minute, but I’ll be out in San Francisco next week for a panel at Oracle OpenWorld. I’m still working on my plans, but the panel is on Wednesday the 14th. I’m trying to decide how long to stay, so if you’re interested in meeting, drop me a line…


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.