The Rights Management Dilemma

Over the past few months I've seen a major uptick in the number of user inquiries I'm taking on enterprise digital rights management (or enterprise rights management, but I hate that term). Having covered EDRM for something like 8 years now, I'm only slightly surprised. I wouldn't say there's a massive new groundswell of desperate motivation to protect corporate intellectual assets. Rather, it seems like a string of knee-jerk reactions to specific events. What concerns me is that I've noticed two consistent trends throughout these discussions:

  • EDRM is being mandated from someplace in management. Not "protect our data", but EDRM specifically.
  • There is no interest in discussing how best to protect the content in question, especially other technologies or process changes. People are being told to get EDRM, get it now, and nothing else matters.

This is problematic on multiple levels. While rights management is one of the most powerful technologies for protecting information assets, it's also one of the most difficult to manage and implement once you hit a certain scale. It's also far from a panacea, and in many of these organizations it either needs to be combined with other technologies and processes, or should be considered only after more basic steps are taken. For example, most of these clients haven't performed any content discovery (manual or with DLP) to find out where the information they want to protect is located in the first place.

Rights management is typically most effective when:

  • It's deployed at a workgroup level.
  • The users involved are willing and able to adjust their workflow to incorporate EDRM.
  • There is minimal need to exchange the working files with external organizations.
  • The content to protect is easy to identify, and centrally concentrated at the start of the project.

Where EDRM tends to fail is in enterprise-wide deployments, or when the culture of the user population doesn't value their content enough to justify the necessary process changes.

I do think EDRM will play a very large role in the future of information-centric security, but only once its inevitable merger with data loss prevention is complete. The dilemma of rights management is that its very power and flexibility is also its greatest liability (sort of like some epic comic book thing). It's just too much to ask users to keep track of which user populations map to which rights on which documents. This is changing, especially with the emerging DRM/DLP partnerships, but it's been the primary reason EDRM deployments have been so self-limiting.

Thus I find myself frequently cautioning EDRM prospects to carefully scope and manage their projects, or to look at other technologies first, at the same time I'm telling them EDRM is the future of information-centric security. Anyone seen my lithium?
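To make that mapping burden concrete, here is a minimal sketch, with hypothetical group, right, and document names (not any EDRM vendor's actual API), of the policy table users are effectively asked to populate and keep current:

```python
# Toy model of the rights-mapping problem: every (group, document) pair
# needs an explicit grant of specific rights, and the combinations
# multiply fast. All names here are hypothetical.

from itertools import product

groups = ["engineering", "legal", "finance", "contractors"]
rights = ["view", "edit", "print", "forward"]
documents = ["roadmap.docx", "merger-memo.docx", "q3-forecast.xlsx"]

# The policy someone has to maintain as people and projects change.
policy = {
    ("legal", "merger-memo.docx"): {"view", "edit"},
    ("finance", "q3-forecast.xlsx"): {"view", "edit", "print"},
    ("engineering", "roadmap.docx"): {"view", "edit", "forward"},
}

def allowed(group: str, document: str, right: str) -> bool:
    """Default-deny: a right exists only if it was granted explicitly."""
    return right in policy.get((group, document), set())

print(allowed("legal", "merger-memo.docx", "edit"))    # True
print(allowed("contractors", "roadmap.docx", "view"))  # False: never granted

# Even this toy case implies 4 groups x 3 documents x 4 rights = 48
# decisions; real deployments have thousands of users and documents.
undecided = [(g, d) for g, d in product(groups, documents)
             if (g, d) not in policy]
print(f"{len(undecided)} group/document pairs still need rights decisions")
```

The default-deny lookup is the easy part; the self-limiting factor the post describes is the human effort of filling in, and maintaining, that table at enterprise scale.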


Pragmatic Data Security: Groundwork

Back in Part 1 of our series on Pragmatic Data Security, we covered some guiding concepts. Before we actually dig in, there's some more groundwork to cover: two important fundamentals that provide context for the rest of the process.

The Data Breach Triangle

In May 2009 I published a piece on the Data Breach Triangle, which is based on the fire triangle every Boy Scout and firefighter is intimately familiar with. For a fire to burn you need fuel, oxygen, and heat; take any single element away and there's no combustion. Extending that idea: to experience a data breach you need an exploit, data, and an egress route. If you block the attacker from getting in, don't leave them data to steal, or block the stolen data's outbound path, you can't have a successful breach.

To date, the vast majority of information security spending is directed purely at preventing exploits, including everything from vulnerability management, to firewalls, to antivirus. But when it comes to data security, in many cases it's far cheaper and easier to block the outbound path, or to make the data harder to access in the first place. That's why, as we detail the process, you'll notice we spend a lot of time finding and removing data from where it shouldn't be, and locking down outbound egress channels.

The Two Domains of Data Security

We're going to be talking about a lot of technologies through this series. Data security is a pretty big area, and takes the right collection of tools to accomplish. Think about network security: we use everything from firewalls, to IDS/IPS, to vulnerability assessment and monitoring tools. Data security is no different, but I like to divide both the technologies and the processes into two major buckets, based on how we access and use the information:

  • The Data Center and Enterprise Applications: when a user accesses content through an enterprise application (client/server or web), often backed by a database.
  • Productivity Tools: when a user works with information using desktop tools, as opposed to connecting to something in the data center. This bucket also includes our communications applications; if you are creating or accessing the content in Microsoft Office, or exchanging it over email/IM, it's in this category.

To provide a little more context, our web application and database security tools fall into the first domain, while DLP and rights management generally fall into the second.

Now I bet some of you thought I was going to talk about structured and unstructured data, but that distinction isn't nearly as applicable as data center vs. productivity applications. Not all structured data is in a database, and not all unstructured data is on a workstation or file server. Practically speaking, we need to focus on the business workflow of how users work with data, not where the data might have come from. You can have structured data in anything from a database to a spreadsheet or a PDF file, or unstructured data stored in a database, so that's no longer an effective division when designing and implementing appropriate security controls.

The distinction is important because we need slightly different approaches based on how a user works with the information, taking into account its transitions between the two domains. We have one set of potential controls when a user comes through a controlled application, and another when a user is creating or manipulating content on their desktop and exchanging it through email.
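As a minimal sketch of the triangle's logic (the names are mine, purely illustrative): a breach requires all three elements at once, so any control that removes one is sufficient, which is why data removal and egress filtering can be cheaper than chasing every exploit.

```python
# Toy model of the Data Breach Triangle: a successful breach requires
# an exploit, reachable data, AND an egress route. Removing any one
# element prevents the breach. Hypothetical names, for illustration only.

from dataclasses import dataclass

@dataclass
class Exposure:
    exploitable: bool    # can the attacker get in?
    data_present: bool   # is there data worth stealing where they land?
    egress_open: bool    # can the stolen data get back out?

def breach_possible(e: Exposure) -> bool:
    return e.exploitable and e.data_present and e.egress_open

# Traditional spending attacks only the first term...
patched = Exposure(exploitable=False, data_present=True, egress_open=True)

# ...but removing stale data or blocking the outbound path works as well.
cleaned  = Exposure(exploitable=True, data_present=False, egress_open=True)
filtered = Exposure(exploitable=True, data_present=True, egress_open=False)

for name, e in [("patched", patched), ("cleaned", cleaned), ("filtered", filtered)]:
    print(name, "-> breach possible?", breach_possible(e))
```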
As we introduce and explore the Pragmatic Data Security process, you'll see that we rely heavily on the concepts of the Data Breach Triangle and these two domains of data security to focus our efforts and design the right business processes and control schemes without introducing unneeded complexity.


Data Discovery and Databases

I periodically write for Dark Reading, contributing to their Database Security blog. Today I posted What Data Discovery Tools Really Do, introducing how data discovery works within relational database environments. As with many of the posts I write for them, I try not to preface every description with the word 'database', as it gets repetitive. But sometimes that context is really important.

Ben Tomhave was kind enough to let me know that the post was referenced on the eDiscovery and Digital Evidence mailing list. One comment there was: "One recurring issue has been this: If enterprise search is so advanced and so capable of excellent granularity (and so touted), why is ESI search still in the boondocks?" I want to add a little color to the Dark Reading post, and touch on an issue with data discovery for ESI.

Automated data discovery is a relatively new feature for data management, compliance, and security tools. Specifically in regard to relational databases, the limitations of these products have only become an issue in the last couple of years due to growing need, particularly for accuracy of analysis. The methodologies for rummaging around and finding stuff are effective, but the analysis methods have a way to go. That's why we are beginning to see labeling and content inspection. With growing use of flat file and quasi-relational databases, look for labeling and Google-style search to become commonplace.

In my experience, metadata-based data discovery was about 85% effective. Having said that, the number is totally bogus. Why? Most of the stuff I was looking for was easy to find, because the databases were constructed by someone who was good at database design, using good naming conventions and accurate column definitions. In reality you can throw the 85% number out, because if a web application developer is naming columns "Col1, Col2, Col3, … Col56", and defining them all as text fields up to 50 characters long, your effectiveness will be 0%. If you do not have labeling or content analysis to support the discovery process, you are wasting your time. Further, with some of the ISAM and flat file databases, the discovery tools do not crawl the database content properly, forcing some vendors to upgrade to support other forms of data management and storage. Given the complexity of environments and the mixture of data and database types, both the discovery and analysis components must continue to evolve.

Remember that a relational database is highly structured, with columns and tables fully defined at creation. Data that is inserted goes through integrity checks, and in some cases must conform to referential integrity constraints as well. Your odds of automated tools finding useful information in such databases are far higher, because you have definitive descriptions. In flat files or scanned documents? All bets are off.

As part of a project I conducted in early 2009, I spoke with a number of attorneys in California and Arizona about legal document discovery and management. In that market, document discovery is a huge business, and there is a lot of contention in legal circles regarding its use. In terms of legal document and data discovery, the process and tools are very different from database data discovery. From what I have witnessed, and from explanations by people who sit on steering committees for legal ESI issues, very little of the data is ever in a relational database.
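To make the metadata-vs-content distinction concrete, here is a minimal sketch (hypothetical column names, sample values, and patterns; not any vendor's product) showing why "Col1, Col2, …" defeats metadata-only discovery while content inspection still has a chance:

```python
# Toy contrast between metadata-based discovery (match column names)
# and content inspection (match sampled values). Hypothetical schemas.

import re

# A well-named schema vs. a lazily-named one holding the same SSNs.
well_named = {"customer_ssn": ["123-45-6789"], "email": ["a@example.com"]}
lazy_named = {"Col1": ["123-45-6789"], "Col2": ["a@example.com"]}

NAME_HINTS = re.compile(r"ssn|social|tax_?id", re.IGNORECASE)
SSN_VALUE  = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def metadata_discovery(schema):
    """Flag columns whose *names* look sensitive."""
    return [col for col in schema if NAME_HINTS.search(col)]

def content_discovery(schema):
    """Flag columns whose sampled *values* look like SSNs."""
    return [col for col, rows in schema.items()
            if any(SSN_VALUE.match(v) for v in rows)]

print(metadata_discovery(well_named))  # ['customer_ssn']
print(metadata_discovery(lazy_named))  # [] -- 0% effective, as described above
print(content_discovery(lazy_named))   # ['Col1'] -- content analysis still works
```

Real products sample rows rather than scanning every value, and combine both signals; the point is only that naming conventions are the entire signal for the metadata approach.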
The tools I saw were pure keyword and string pattern matching on flat files. Some of the large firms may have document management software that is a little more sophisticated, but much of it is pure flat file server scanning with reports, because of the sheer volume of data.

What surprised me during these discussions was that document management is becoming a huge issue, as large legal firms attempt to win cases by flooding smaller firms with so many documents that they cannot even process the results of the discovery tools. They simply do not have adequate manpower, and it undermines their ability to work their casefiles. The fire around this market has to do with politics, not technology. The technology sucks too, but that's secondary suckage.
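For a rough sense of what those flat-file tools amount to, here is a minimal sketch (hypothetical directory and search terms, assuming plain-text files): the core is little more than keyword matching with a ranked report at the end.

```python
# Minimal sketch of flat-file keyword/pattern scanning, roughly the level
# of sophistication described above. Hypothetical paths and search terms.

import re
from pathlib import Path

KEYWORDS = re.compile(r"merger|indemnif\w*|settlement", re.IGNORECASE)

def scan_tree(root: str) -> dict:
    """Walk a directory of text files and count keyword hits per file."""
    hits = {}
    for path in Path(root).rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; real tools would log and move on
        matches = KEYWORDS.findall(text)
        if matches:
            hits[str(path)] = len(matches)
    return hits

# The "report" is just a ranked list. The volume problem described above
# comes from how many files land in it, not from the algorithm.
for path, count in sorted(scan_tree("./documents").items(),
                          key=lambda kv: -kv[1]):
    print(f"{count:5d}  {path}")
```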


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote before it appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.