Cracking the Confusion: Encryption Layers

Picture enterprise applications as a layer cake: applications sit on databases, databases sit on files, and files are mapped onto storage volumes. You can encrypt at each of these layers in your application stack: within the application, in the database, on files, or on storage volumes. Where in the stack you deploy the encryption engine largely determines both security and performance. Encrypting higher in the stack can offer more security, at the cost of greater complexity and performance overhead. There is a similar tradeoff with encryption engine and key manager deployments: more tightly coupled systems are less complex, but also less secure and less reliable. Building an encryption system requires balancing security, complexity, and performance. Let’s take a closer look at each layer and its tradeoffs.

Application Encryption

One of the more secure ways to encrypt application data is to collect it in the application, send it to an encryption server or appliance (or use an encryption library embedded in the application), and then store the encrypted data in a separate database. The application has full control over who sees what, and can secure data without depending on the security of the underlying database, file system, or storage volumes. The keys themselves might be on the encryption server, or could even be stored in yet another system. A separate key store increases security, simplifies management of multiple encryption appliances, and helps keep keys safe during data movement – backup, restore, and migration/synchronization to other data centers.

Database Encryption

Relational database management systems (RDBMS) typically offer two encryption options: transparent and column. In our layer cake above, column encryption occurs as applications insert data into the database, whereas transparent encryption occurs as the database writes data out. Transparent encryption is applied automatically to data before it is stored at the file or disk layer. In this model encryption and key management happen behind the scenes, without the user’s knowledge and without any application programming. The database management system handles encryption and decryption operations as data is read (or written), ensuring all data is secured and offering very good performance. When you need finer control over data access, you can encrypt individual columns, or tables, within the database. This approach offers the advantage that only authenticated users of encrypted data can gain access, but it requires changes to database or application code to manage encryption operations. With either approach there is less burden on application developers to build a crypto system, but slightly less control over who can access sensitive data. Some third-party tools also offer transparent database encryption by automatically encrypting data as it is stored in files. These tools are not part of the database management system itself, so they can work with databases that do not support transparent data encryption (TDE) natively, and they provide greater separation of duties for database administrators.
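To make the application-layer and column-level approaches concrete, here is a minimal, hypothetical sketch (not from the original paper) of encrypting a sensitive field in application code before it ever reaches the database. It assumes Python’s `cryptography` package, and `load_key_from_key_manager()` is a stand-in we invented for a call to an external key manager or HSM.

```python
# Minimal illustrative sketch: the application encrypts a sensitive field
# before inserting it, so the database and lower layers only see ciphertext.
# Assumes the `cryptography` package; the key-manager helper is hypothetical.
import os
import sqlite3
from cryptography.fernet import Fernet

def load_key_from_key_manager() -> bytes:
    # Hypothetical stand-in: fetch the data-encryption key from a store that
    # is separate from the database (an environment variable here, purely
    # for illustration; generate a throwaway key if none is configured).
    key = os.environ.get("PAYMENT_DATA_KEY")
    return key.encode() if key else Fernet.generate_key()

fernet = Fernet(load_key_from_key_manager())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, card_number BLOB)")

# Encrypt in the application; store only ciphertext in the column.
ciphertext = fernet.encrypt(b"4111111111111111")
conn.execute("INSERT INTO payments (card_number) VALUES (?)", (ciphertext,))

# Only code holding the key (ideally after its own authorization checks)
# can recover the plaintext.
row = conn.execute("SELECT card_number FROM payments").fetchone()
assert fernet.decrypt(row[0]) == b"4111111111111111"
```

The point is architectural rather than the specific library: the key lives outside the database, and decryption happens only where the application decides it should.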
File Encryption

Some applications, such as payment systems and web applications, do not use databases, and instead store sensitive data in files. Encryption is applied transparently as data is written to files. This type of encryption is offered as a third-party add-on to the file system, or embedded within the operating system. Encryption and decryption are transparent to both users and applications: data is decrypted when a user requests a file, after they have authenticated to the system. If the user does not have permission to read the file, or has not provided proper credentials, they get only encrypted data. File encryption is commonly used to protect “data at rest” in applications that do not include encryption capabilities, including legacy enterprise applications and many big data platforms.

Disk/Volume Encryption

Many off-the-shelf disk drives and Storage Area Network (SAN) arrays include automatic data encryption. Encryption is applied as data is written to disk, and data is decrypted for authenticated users and applications when they request it. Most enterprise-class systems hold encryption keys locally to support encryption operations, but rely on external key management services to manage keys and provide advanced services such as key rotation. Volume encryption protects data in case drives are physically stolen; authenticated users and applications are still provided unencrypted copies of files and data.

Tradeoffs

In general, the further “up the stack” you deploy encryption, the more secure your data is. The price of that extra security is more difficult integration, usually in the form of application code changes. Ideally we would encrypt all data at the application layer and fully leverage user authentication, authorization, and business context to determine who can see sensitive data. In the real world the code changes required for that level of control are often an insurmountable engineering challenge, or cost prohibitive. Surprisingly, transparent encryption often performs faster than application-layer encryption, even with larger data sets. The tradeoff is moving high enough “up the stack” to address the relevant threats while minimizing the pain of integration and management. Later in this series we will walk you through the selection process in detail. Next up in this series: key management options.
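Before moving on, a minimal sketch may help make the file-layer flow described above concrete. It is illustrative only and again assumes Python’s `cryptography` package; in a real deployment the encrypt-on-write and decrypt-on-read steps are performed transparently by the operating system or a third-party file-encryption agent, not by application code like this.

```python
# Illustrative only: the encrypt-on-write / decrypt-on-read flow that
# file-layer encryption performs transparently beneath applications.
# Assumes the `cryptography` package; keys would normally come from an
# external key manager rather than being generated in place.
from pathlib import Path
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())

def write_encrypted(path: Path, plaintext: bytes) -> None:
    # Data is encrypted as it is written to the file...
    path.write_bytes(fernet.encrypt(plaintext))

def read_decrypted(path: Path) -> bytes:
    # ...and decrypted when an authorized user or application reads it back.
    # Without the key, the bytes on disk are just ciphertext.
    return fernet.decrypt(path.read_bytes())

record = Path("record.dat")
write_encrypted(record, b"sensitive application data")
assert read_decrypted(record) == b"sensitive application data"
```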


Friday Summary: February 13, 2015

Welcome to the Friday the 13th edition of the Friday Summary! It has been a while since I wrote the Summary, so there is a lot to cover…

My favorite external post this week is a research paper, Mongo Databases At Risk, outlining a very common trend among MongoDB users: not using basic user authentication to secure their databases. Well, that, and putting them on the Internet. On the default port. Does this sound like SQL Server circa 2003 to anyone else? One angle I found important was the number of instances discovered: nearly 40k databases. That is a freakin’ lot! Remember, this is MongoDB – and just the instances running on the Internet at the default port. Yes, it is one of the top NoSQL platforms, but during our inquiries we spoke with 4 Hadoop users for every MongoDB user, and MongoDB trailed Cassandra as well. I don’t know whether anyone publishes download or usage numbers for the various platforms, but extrapolating from those numbers, there are a lot of NoSQL databases in use. Someone with more time on their hands might decide to scan the Internet for instances of the other platforms – Hadoop, Cassandra, CouchDB, Redis, and Riak all listen on well-known default ports. I would love to know what you find.

Back to security… I have had conversations with several firms trying to figure out how to monitor NoSQL usage; we know how to apply Database Activity Monitoring (DAM) principles to SQL, but MapReduce and other types of queries are much more difficult to parse. I expect several vendors to introduce basic monitoring for Hadoop in the next year, but it will take time to mature, and even more time to cover other platforms. What I haven’t heard discussed is the easier – and no less pressing – issue of configuration and vulnerability assessment. The enterprise distributions are providing best practices, but automated scans are scarce – and usually custom. That is a free hint for any security vendors looking for a quick way to help big data customers get secure.

Mobile security consumes much more of my time than it should. I geek out on it, often engaging Gunnar in conversation on everything from the inner workings of secure elements to the apps that make payments happen. And I read everything I can find. This week I ran across Why Banks Will Win the Battle for the Mobile Wallet, by John Gunn – the guy who runs the wonderfully helpful Twitter feed @AuthNews. But on this I think he has missed the point. Banks are not battling to win mobile wallets. In fact, those I have spoken with don’t care about wallets. They care about transactions. And moving more transactions from cash to credit means a growing stream of revenue for merchant banks and payment processors, which makes them very happy. Wallets in and of themselves don’t foster adoption – as Google is well aware – and in fact many users don’t really trust wallets at all. What gets people to move from a plastic card or cash, at least in the US, is a combination of convenience and trust. Starbucks leveraged their brand affinity into seven million subscribers for their app and an impressive 2.1 million transactions per week. Banks benefit directly when more transactions move away from cash, and they are happy to let others own the user experience. But things get really interesting in overseas markets, which make US adoption of mobile payments look like a payments backwater. Nations without traditional banking or payment infrastructure can now move money in ways they previously could not, so adoption rates have been huge.
Leveraging cellular infrastructure makes it faster and safer to move money, with fewer worries about carrying cash. Kenya – not often considered on the cutting edge of technology – had 25 million mobile payment users and moved $26 billion via mobile payments and mobile money subscriptions in 2014. Sometimes technology really does make the world a better place. The banks don’t care which wallets, apps, technologies, or carriers win – they just want someone to make progress.

In January I normally publish my research calendar for the coming year, but Rich has been hogging the Friday Summary for weeks now, so I finally get a chance to talk about what I am seeing and doing.

Tokenization: I am – finally – going to publish some thoughts on the latest trends in tokenization. I want to talk about changes in the technology, adoption on mobile platforms, how the latest PCI specification is changing compliance, and some customer use cases.

Risk-Based Authentication and Authorization: We see many more organizations looking at risk-based approaches to augment the security of web-based applications. Rather than rewrite applications, they use metadata, behavioral information, business context, and… wait for it… big data analytics to better determine the acceptability of a request. It is often cheaper and easier to bolt this on externally than to change applications. Gunnar and I have wanted to write this paper for a year, and now we finally have the time.

Building a Security Analytics Platform: I have been briefed by many of the security analytics startups, and each is putting together some basic security analysis capabilities, usually built on big data databases. Over the same period I have also spoken with many large enterprises who decided not to wait for the industry to innovate and are building their own in-house systems. The last couple even asked me what I thought of certain architectural choices, and which data elements they should use as hash keys! So there is considerable demand for consumer education; I will cover both off-the-shelf and DIY options.

I am still on the fence about some secure code development ideas, so if you have an idea, let’s talk. Even the security vendors I have visited in the last year have pulled me aside to ask about transitioning to Agile, or how to fix a failed transition to Agile. Most want to know what


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.