DLP in the Cloud

It’s been quite a while since we updated our Data Loss Prevention (DLP) research. It’s not that DLP hasn’t continued to be an area of focus (it has), but a bunch of other shiny things have been demanding our attention lately. Yeah, like the cloud. Well, it turns out a lot of organizations are using this cloud thing now, so they inevitably have questions about whether and how their existing controls (including DLP) map into the new world. As we update our Understanding and Selecting DLP paper, we’d be remiss if we didn’t discuss how to handle potential leakage in cloud-based environments.

But let’s not put the cart ahead of the horse. First we need to define what we mean by cloud, with applicable use cases for DLP. We could bust out the Cloud Security Alliance guidance and hit you over the head with a bunch of cloud definitions, but for our purposes it’s sufficient to say that in terms of data access you are most likely dealing with:

  • SaaS: Software as a Service (SaaS) is the new back office. That means whether you know about it or not, you have critical data in a SaaS environment, and it must be protected.
  • Cloud File Storage: These services enable you to extend a device’s file system to the cloud, replicating and syncing between devices and facilitating data sharing. Yes, these services are a specific subtype of SaaS (and PaaS, Platform as a Service), but the amount of critical data they hold, along with how differently they work than a typical SaaS application, demands that we treat them differently.
  • IaaS: Infrastructure as a Service (IaaS) is the new data center. That means many of your critical applications (and data) will be moving to a cloud service provider – most likely Amazon Web Services, Microsoft Azure, or Google Cloud Platform.

Inspection of data traversing a cloud-based application is, well… different, which means protecting that data is also… different. DLP is predicated on scanning data at rest and inspecting and enforcing policies on data in motion, which is a poor fit for IaaS. You don’t really have endpoints suitable for DLP agent installation. Data lives in either structured (like a database) or unstructured (filesystem) datastores. Data protection for structured datastores defaults to application-centric methods, while unstructured cloud file systems are really just cloud file storage (which we will address later). So inserting DLP agents into an application stack isn’t the most efficient or effective way to protect an application.

Compounding the problem, traditional network DLP doesn’t fit IaaS well either. You have limited visibility into the cloud network; to inspect traffic, you would need to route it through an inspection point, which is likely to be expensive and/or lose key cloud advantages – particularly elasticity and anywhere access. Further, cloud network traffic is encrypted more often, so even with access to full traffic, inspection at scale presents serious implementation challenges. So we will focus our cloud DLP discussion on SaaS and cloud file storage.

Cloud Versus Traditional Data Protection

The cloud is clearly different, but what exactly does that mean? If we boil it down to its fundamental core, you still need to perform the same underlying functions – whether the data resides in a 20-year-old mainframe or the ether of a multi-cloud SaaS environment. To protect data you need to know where it is (discover), understand how it’s being used (monitor), and then enforce policies to govern what is allowed and by whom – along with any additional necessary security controls (protect).
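To make the discover step concrete, here is a minimal, hypothetical sketch of pattern-based content discovery: walk a file share and flag files that look like they contain sensitive data. The patterns, paths, and detectors are illustrative placeholders, not anything a shipping DLP engine would use as-is; real products add fingerprinting, exact data matching, and far more accurate validation.

```python
# Hypothetical content-discovery sketch: walk a directory tree and report files
# that appear to contain sensitive data. The patterns and scan root are
# illustrative assumptions, not taken from any specific DLP product.
import re
from pathlib import Path

# Toy detectors: a credit-card-like digit run and a US SSN-like pattern.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_file(path: Path) -> dict:
    """Count pattern hits in a single file, skipping anything unreadable."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return {}
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items() if rx.search(text)}

def discover(root: str) -> None:
    """Scan everything under a file share and print files with potential sensitive data."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                print(f"{path}: {hits}")

if __name__ == "__main__":
    discover("/mnt/fileshare")  # assumed mount point for an on-premise file share
```

The point is not the regexes; it is that on-premise discovery assumes you can walk the filesystem at all, which is exactly the assumption that breaks down in SaaS and cloud file storage.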
When looking at cloud DLP many users equate protection with encryption, but that’s a massive topic with a lot of complexity, especially in SaaS. A good start is our recent research on Multi-Cloud Key Management. There is considerable detail in that paper, but the short version is that managing keys across cloud and on-premise environments is significantly more complicated than managing them in your own data center; you’ll need to rely more heavily on your provider, and architect data protection and encryption directly into your cloud technology stack.

Thinking about discovery, do you remember the olden days – back as far as 7 years ago – when your critical data was either in your data centers or on devices you controlled? To be fair, even then it wasn’t easy to find all your critical data, but at least you knew where to look. You could search all your file servers and databases for critical data, profile and/or fingerprint it, and then look for it across your devices and your network’s egress points. But as critical data started moving to SaaS applications and cloud file storage (sometimes embedded within SaaS apps), controlling data loss became more challenging, because data need not always traverse a monitored egress point. So we saw the emergence of Cloud Access Security Brokers (CASB) to figure out which cloud services were in use, so you could understand (kind of) where your critical data might be. At least you had a place to look, right?

Enforcement of data usage policies is also a bit different in the cloud – you don’t completely control SaaS apps, nor do you have an inspection/enforcement point on the network where you can look for sensitive data and block it from leaving. We keep hearing about lack of visibility in the cloud, and this is another case where it breaks the way we used to do security. So what’s the answer? It’s found in 3 letters you should be familiar with. A. P. I.

APIs Are Your Friends

Fortunately many SaaS apps and cloud file storage services provide APIs which allow you to interact with their environments, providing visibility and some degree of enforcement for your data protection policies. Many DLP offerings have integrated with the leading SaaS and cloud file storage vendors to offer you the ability to:

  • Know when files are uploaded to the cloud and analyze them.
  • Know who is doing what with the files.
  • Encrypt or otherwise protect the files.

With this access you don’t need to see the data pass through an egress point you control; the provider’s API gives you the visibility and enforcement hooks you need.
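As a hedged illustration of that API-driven model, the sketch below polls a made-up cloud file storage API for recent uploads, analyzes each file’s content, and restricts sharing when a policy violation is found. The base URL, endpoints, token, and field names are all hypothetical stand-ins; the actual calls depend entirely on your provider (and most DLP and CASB products wrap them for you).

```python
# Hypothetical API-based cloud DLP monitoring. Every endpoint and field below is
# an assumed placeholder for whatever your SaaS or cloud file storage provider
# actually exposes; the pattern is what matters: list uploads via the API, pull
# content, analyze it, and enforce policy through the API instead of the network.
import os
import re
import requests

API_BASE = "https://api.example-filestore.com/v1"   # hypothetical provider API
HEADERS = {"Authorization": f"Bearer {os.environ['FILESTORE_TOKEN']}"}  # assumed token

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15,16}\b")  # same toy detector as above

def recent_uploads(since):
    """List files uploaded after the given timestamp (hypothetical endpoint)."""
    resp = requests.get(f"{API_BASE}/files",
                        params={"uploaded_after": since}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["files"]

def analyze_and_protect(file_meta):
    """Fetch a file's content, check it, and restrict sharing if it looks sensitive."""
    content = requests.get(f"{API_BASE}/files/{file_meta['id']}/content",
                           headers=HEADERS).text
    if CARD_PATTERN.search(content):
        # Enforcement is also API-driven: tighten sharing rather than block traffic.
        requests.post(f"{API_BASE}/files/{file_meta['id']}/permissions",
                      json={"sharing": "internal_only"}, headers=HEADERS)
        print(f"Restricted {file_meta['name']} (uploaded by {file_meta['owner']})")

if __name__ == "__main__":
    for f in recent_uploads("2018-01-01T00:00:00Z"):
        analyze_and_protect(f)
```

Many providers also offer webhooks or activity streams, so a production integration would typically react to upload events rather than poll, but the flow is the same: visibility and enforcement come from the provider’s API rather than from sitting inline on the network.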

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.