
Database Encryption

Thursday, May 31, 2012

Pragmatic Key Management: Understanding Data Encryption Systems

By Rich

One of the common problems in working with encryption is getting caught up with the intimate details of things like encryption algorithms, key lengths, cipher modes, and other minutiae. Not that these details aren’t important – depending on what you’re doing they might be critical – but in the larger scheme of things these aren’t the aspects most likely to trip up your implementation. Before we get into different key management strategies, let’s take a moment to look at crypto systems at the macro level. We will stick to data encryption for this paper, but these principles apply to other types of cryptosystems as well.

Note: For simplicity I will often use “encryption” instead of “cryptographic operation” throughout this series. If you’re a crypto geek, don’t get too hung up… I know the difference – it’s for readability.

The three components of a data encryption system

Three major components define the overall structure of an encryption system:

  • The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system are influenced by the nature of the payload, as well as by where it is located or collected.
  • The encryption engine: The component that handles the actual encryption (and decryption) operations.
  • The key manager: The component that manages keys and passes them to the encryption engine.

In a basic encryption system all three components are likely located on the same system. Take personal full disk encryption (the default you might use on your home Windows PC or Mac) – the encryption key, data, and engine are all kept and run on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn’t normally relevant.

But once we get into SMB and the enterprise we tend to split out the components for security, management, reliability, and compliance.
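
To make the split concrete, here is a minimal sketch of the three components as separable pieces, in Python. The class names are my own invention, and the cryptography package stands in for whatever engine you actually use:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class KeyManager:
        """Component 3: creates, stores, and hands out keys."""
        def __init__(self):
            self._keys = {}  # in production: a key management server or HSM

        def create_key(self, key_id):
            self._keys[key_id] = AESGCM.generate_key(bit_length=256)

        def get_key(self, key_id):
            return self._keys[key_id]

    class EncryptionEngine:
        """Component 2: performs the actual cryptographic operations."""
        def encrypt(self, key, plaintext):
            nonce = os.urandom(12)  # unique per message
            return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

        def decrypt(self, key, blob):
            nonce, ciphertext = blob[:12], blob[12:]
            return AESGCM(key).decrypt(nonce, ciphertext, None)

    # Component 1 is the data itself -- where it lives shapes the design.
    km, engine = KeyManager(), EncryptionEngine()
    km.create_key("disk-key")
    blob = engine.encrypt(km.get_key("disk-key"), b"sensitive record")
    assert engine.decrypt(km.get_key("disk-key"), blob) == b"sensitive record"

In the personal FDE case all three of these run on one machine; the rest of this post is about what changes when you pull them apart.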

Building a data encryption system

Where you place these components defines the structure, security, and manageability of your encryption system:

Full Disk Encryption

Our full disk encryption example above isn’t the sort of approach you would want to take for an organization of any size greater than 1. All major FDE systems do a good job of protecting the key if the device is lost, so security isn’t our main worry from that perspective, but keeping the key on the local system makes the deployment much less manageable and reliable than storing all the FDE keys together.

Enterprise-class FDE manages the keys centrally – even if they are also stored locally – to enable a host of more advanced functions, including better recovery options, audit and compliance, and the ability to manage hundreds of thousands of systems.

Database encryption

Let’s consider another example: database encryption. By default, all database management systems (DBMS) that support encryption do so with the data, the key, and the encryption engine all within the DBMS. But you can mix and match those components to satisfy different requirements.

The most common alternative is to pull the key out of the DBMS and store it in an external key manager. This can protect the key from compromise of the DBMS itself, and increases separation of duties and security. It also reduces the likelihood of lost keys and enables extensive management capabilities – including easier key rotation, expiration, and auditing.

But the key could be exposed to someone on the DBMS host itself, because it must be loaded into memory before it can be used to encrypt or decrypt. One way to protect against this is to pull both the encryption engine and the key out of the DBMS. This could be handled through an external proxy, but more often custom code is developed to send the data to an external encryption server or appliance. Of course this adds complexity and latency…
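
As a rough sketch of that last pattern, the custom code ships the sensitive field to an external encryption service and stores only the result. The endpoint, key identifier, and JSON API here are hypothetical, just to show the shape of the integration:

    import requests

    ENCRYPT_SVC = "https://crypto-appliance.internal/v1/encrypt"  # hypothetical

    def store_card(cursor, customer_id, card_number):
        # Plaintext leaves the DBMS host; the key never arrives on it.
        resp = requests.post(
            ENCRYPT_SVC,
            json={"key_id": "cards-key", "plaintext": card_number},
            timeout=2)
        resp.raise_for_status()
        cursor.execute(
            "INSERT INTO cards (customer_id, card_ct) VALUES (?, ?)",
            (customer_id, resp.json()["ciphertext"]))

The timeout is a reminder that the encryption service now sits in the critical path of every read and write.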

Cloud encryption

Cloud computing has given rise to a couple additional scenarios.

  • To protect an Infrastructure as a Service (IaaS) storage volume hosted at an external cloud provider, you can place the encryption engine in a running instance, store the data in a separate volume, and use an external key manager, which could be a hardware appliance connected through VPN and managed in your own data center.
  • To protect enterprise files in an object storage service like Amazon S3 or Rackspace Cloud Files, you can encrypt them on a local system before storing them in the cloud – managing keys either on the local system or with a centralized key manager. While some of these services support built-in encryption, they typically store and manage the key themselves, which means the provider has the (hopefully purely theoretical) ability to access your data. But if you control the key and the encryption engine, the provider cannot read your files (a sketch follows this list).
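
Here is a sketch of that encrypt-before-upload pattern, using Python’s cryptography package with boto3 as the S3 client (any S3 client works; where the key lives is up to you):

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def put_encrypted(bucket, name, path, key):
        """Encrypt locally, then upload -- the provider only sees ciphertext."""
        with open(path, "rb") as f:
            plaintext = f.read()
        nonce = os.urandom(12)
        blob = nonce + AESGCM(key).encrypt(nonce, plaintext, None)
        boto3.client("s3").put_object(Bucket=bucket, Key=name, Body=blob)

    key = AESGCM.generate_key(bit_length=256)  # held locally or by your key manager
    put_encrypted("backups", "finance.tar", "/tmp/finance.tar", key)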

Backup and storage encryption

Many backup systems today include some sort of encryption option, but the implementations typically offer only the most basic key management. Backing up in one location and restoring in another can be difficult if the key is stored only in the backup system.

Additionally, backup and storage systems themselves might place the encryption engine in any of a wide variety of locations – from individual disk and tape drives, to backup controllers, to server software, to inline proxies.

Some systems store the key with the data – sometimes in special hardware added to the tape or drive – while others place it with the engine, and still others keep it in an external key management server.
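
One common way to reconcile these placements (a generic technique, not any particular vendor’s) is envelope encryption: a per-backup data key travels with the data, but only in wrapped form, encrypted under a master key that never leaves the external key manager. A sketch:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_backup(master_key, backup_bytes):
        data_key = AESGCM.generate_key(bit_length=256)  # one key per backup set
        n1, n2 = os.urandom(12), os.urandom(12)
        ciphertext = n1 + AESGCM(data_key).encrypt(n1, backup_bytes, None)
        wrapped_key = n2 + AESGCM(master_key).encrypt(n2, data_key, None)
        # wrapped_key can sit on the tape next to the ciphertext; restoring
        # at another site only requires reaching the key manager for master_key.
        return wrapped_key, ciphertext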

Between all this complexity and poor vendor implementations, I tend to see external key management used for backup and storage more than for just about any other data encryption usage.

Application encryption

Our last example is application encryption. One of the more secure ways to encrypt application data is to collect it in the application, send it to an encryption server or appliance, and then store the encrypted data in a separate database. The keys themselves might be on the encryption server, or could even be stored in yet another system. The separate key store offers greater security, simplifies management for multiple encryption appliances, and helps keep keys safe for data movement – backup, restore, and migration/synchronization to other data centers.

In each of these examples we see multiple options for where to place the components – with the corresponding tradeoffs in security, manageability, and other aspects.

This list isn’t comprehensive but should give you a taste of the different ways these bits and pieces can be mixed and matched for different data encryption systems.

Next we will build on this to define the four major key management strategies, followed by recommendations for picking the right options to suit your needs.

–Rich

Monday, August 22, 2011

Cloud Security Q&A from the Field: Questions and Answers from the DC CCSK Class

By Rich

One of the great things about running around teaching classes is all the feedback and questions we get from people actively working on all sorts of different initiatives. With the CCSK (cloud security) class, we find that a ton of people are grappling with these issues – some in active projects, others in various stages of deep planning.

We don’t want to lose this info, so we will blog some of the more interesting questions and answers we get in the field. I’ll skip general impressions and trends today to focus on some specific questions people in last week’s class in Washington, DC, were grappling with:

  • We currently use the XXX Database Activity Monitoring appliance; is there any way to keep using it in Amazon EC2?

This is a tough one, because it depends completely on your vendor. With the exception of Oracle (last time I checked – this might have changed), all the major Database Activity Monitoring vendors support server agents as well as inline or passive appliances. Adrian covered most of the major issues between the two in his Database Activity Monitoring: Software vs. Appliance paper. The main question for cloud (especially public cloud) deployments is whether the agent will work in a virtual machine/instance. Most agents use special kernel hooks that need to be validated as compatible with your provider’s virtual machine hypervisor. In other words: yes, you can do it, but I can’t promise it will work with your current DAM product and cloud provider. If your cloud service supports multiple network interfaces per instance, you can also consider deploying a virtual DAM appliance to monitor traffic that way, but I’d be careful with this approach and don’t generally recommend it. Finally, there are more options for internal/private clouds, where you can even route the traffic back to a dedicated appliance if necessary – but watch performance if you do.

  • How can we monitor users connecting to cloud services over SSL?

This is an easy problem to solve – you just need a web gateway with SSL decoding capabilities. In practice, this means the gateway essentially performs a man-in-the-middle attack against your users: to make it work, you install the gateway appliance’s certificate as a trusted root on all your endpoints. This doesn’t help with remote users who aren’t going through your gateway. It’s a fairly standard approach for both web content security and Data Loss Prevention, but those of you just using URL filtering may not be familiar with it.
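
From the endpoint’s side, “trusting the gateway” just means TLS validation is pinned to your gateway’s root certificate rather than a public CA. A sketch of what a script on a managed endpoint effectively does (the certificate path is hypothetical):

    import requests

    # The gateway terminates TLS, inspects the traffic, and re-encrypts it with
    # a certificate minted under this corporate root, which endpoints must trust.
    resp = requests.get("https://webmail.example.com/",
                        verify="/etc/ssl/certs/corp-gateway-root.pem")
    print(resp.status_code)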

  • Can I use identity management to keep users out of my cloud services if they aren’t on the corporate network?

Absolutely. If you use federated identity (probably SAML), you can configure things so users can only log into the cloud service if they are logged into your network. For example, you can configure Active Directory to use SAML extensions, then require SAML-based authentication for your cloud service. The SAML token/assertion will only be issued when the user logs into the local network, so they can’t ever log in from another location. You can screw up this configuration by allowing persistent assertions (I’m sure Gunnar will correct my probably-wrong IAM vernacular). This approach also works for VPN access (don’t forget to disable split tunnels if you want to monitor activity).
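
The persistent-assertion pitfall comes down to validity windows. Here is a toy check of a SAML assertion’s Conditions element, only to show what the relying party should enforce; use a real SAML library in production:

    import datetime
    import xml.etree.ElementTree as ET

    NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

    def assertion_window_ok(assertion_xml, max_minutes=5):
        cond = ET.fromstring(assertion_xml).find(".//saml:Conditions", NS)
        not_before = datetime.datetime.fromisoformat(
            cond.get("NotBefore").rstrip("Z"))
        not_after = datetime.datetime.fromisoformat(
            cond.get("NotOnOrAfter").rstrip("Z"))
        now = datetime.datetime.utcnow()
        # A long-lived assertion can be replayed from outside the network,
        # defeating the "must be logged in locally" control described above.
        lifetime_min = (not_after - not_before).total_seconds() / 60
        return not_before <= now < not_after and lifetime_min <= max_minutes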

  • What’s the CSA STAR project?

STAR (Security, Trust & Assurance Registry) is a Cloud Security Alliance program where cloud providers perform and submit self assessments of their security practices.

  • How can we encrypt big data sets without changing our applications?

This isn’t a cloud-specific problem, but does come up a lot in the encryption section. First, I suggest you check out our paper on encryption: Understanding and Selecting a Database Encryption or Tokenization Solution. The best cloud option is usually volume encryption for IaaS. You may also be able to use some other form of transparent encryption, depending on the various specifics of your database and application. Some proxy-based in-the-cloud encryption solutions are starting to appear.

That’s it from this class… we had a ton of other questions, but these stood out. As we teach more we’ll keep posting more, and I should get input from other instructors as they start teaching their own classes.

–Rich

Wednesday, May 05, 2010

Download Our Kick-Ass Database Encryption and Tokenization Paper

By Rich

It’s kind of weird, but our first white paper to remain unsponsored is also the one I consider our best yet. Adrian and I have spent nearly two years pulling this one together – with more writes, re-writes, and do-overs than I care to contemplate.

We started with a straight description of encryption options, before figuring out that it’s all too complex, and what people really need is a better way to make sense of the options and figure out which will work best in their environments. So we completely changed our terminology, and came up with an original way to describe and approach the encryption problem – we realized that deciding how to best encrypt a database really comes down to managing credentialed vs. non-credentialed users.

Then, based on talking with users & customers, we noticed that tokenization was being thrown into the mix, so we added it to the “decision tree” and technology description sections. And to help it all make sense, we added a bunch of use cases (including a really weird one based on an actual situation Adrian found himself in).

We are (finally) pretty darn happy with this report, and don’t want to leave it in a drawer until someone decides to sponsor it.

On the landing page you can leave comments, or you can just download the paper.

We could definitely use some feedback – we expect to update this material fairly frequently – and feel free to spread the word…

–Rich

Monday, July 13, 2009

Database Encryption, Part 6: Use Cases

By Adrian Lane

Encrypting data within a database doesn’t always present a clear-cut value proposition. Many of the features and functions of database encryption are also available through external tools, creating confusion as to why (or even whether) database encryption is needed. In many cases, past implementations have left DBAs and IT staff with fears of degraded performance and broken applications – creating legitimate wariness the moment some security manager mentions encryption. Finally, there is often a blanket assumption that database encryption disrupts business processes and mandates costly changes to applications (which isn’t necessarily the case). To make good database encryption decisions, you first need to drill down into the details of what threats you want to address and how your data is used. Going back to our decision tree from Part 2, look at the two basic options for database encryption, as well as the value of each variation, and apply that to your situation to see what you need. Only then can you make an educated decision on which database encryption best suits your situation – if you need it at all.

The following use cases illustrate where and how problems are addressed with database encryption, and walk through the decision-making process.

Use Case 1: Real Data, Virtual Database

Company B is a telephony provider with several million customers, and services user accounts through their web site. The company is considering virtualizing their server environment to reduce maintenance costs, adapt to fluctuations in peak usage, and provide more options for disaster recovery. The database is used directly by customers through a web application portal, as well as by customer support representatives through a customer care application; it is periodically updated by the billing department through weekend batch jobs.

Company B is worried that if virtual images of the database are exported to other sites within the company or to partner sites, those images could be copied and restored outside the company’s environment and control. The principal threat they are worried about is off-site data inspection or tampering with the virtual images. As secondary goals they would like to keep key management simple, avoid introducing additional complexity into the disaster recovery process, and avoid an increased burden for day-to-day database management.

In this scenario, a variant of transparent encryption is appropriate. Since the threat is non-database users accessing data by examining backups or virtual images, transparent encryption protects against viewing or altering data through the OS, file system, or image recovery tools. Which variant to choose – external or internal – depends on how the customer would like to deploy the database. The deciding factors in this case are two-fold: Company B wants separation of duties between the OS administrative user and the database users, and in the virtualized environment the availability of disk encryption cannot be ensured. Native database encryption is the best fit: it inherently protects data from non-credentialed users, and removes any reliance on the underlying OS or hardware. Further, the additional computational overhead of encryption can be mitigated by allocating more virtual resources.

While the data would not be retrievable simply by examining the media, a determined attacker in control of the virtual machine images could launch many copies of the database and take an indefinite amount of time to guess DBA passwords for the decryption keys stored within it. Using current techniques this is not a significant risk (assuming no one uses default or easily guessed passwords). Regardless, native transparent encryption is a cost-effective way to address the company’s primary concerns without interfering with IT operations.

Use Case 2: Near Miss

Company A is a very large technology vendor, concerned about the loss of sensitive company information. During an investigation of test equipment missing from one of their QA labs, a scan of public auction sites revealed that not only had their stolen equipment recently been auctioned off, but several servers from the lab were actively listed for sale. With the help of law enforcement they discovered and arrested the responsible employee, but that was just the beginning of their concern. Because the quality assurance teams habitually restored production data, provided to them by DBAs and IT admins, onto test servers to make their test scenarios more realistic, a forensic investigation showed that most of their customer data was on the QA servers up for auction. The data in this case was not leaked to the public, but the executive team was shocked to learn they had very narrowly avoided a major data breach, and decided to take proactive steps against sensitive data escaping the company.

Company A has a standing policy regarding the use of sensitive information, but understands the difficulty of enforcing this policy across the entire organization, indefinitely. The misuse of the data was not malicious – the QA staff were working to improve the quality of their simulations, indirectly benefiting end users by projecting demand – but had the data been leaked, this fine distinction would have been irrelevant. To help secure data at rest in the event of accidental or intentional disregard for data security policy, the management team decided to encrypt sensitive content within these databases. The question becomes which option is appropriate: user or transparent encryption.

The primary goal here is to protect data at rest, and the secondary goal is to provide some protection against misuse by internal users. In this particular case, the company decided to use user-based encryption with key management internal to the database. Encrypted tables protect against a data breach should servers, backup tapes, or disks leave the company; they also address the concern of internal groups importing and using data in non-secured databases.

At the time this analysis took place, the customer’s databases were older versions that did not support separation of roles for database admin accounts. Further, the databases were installed under domain administration accounts – providing full access to both application developers and IT personnel; this access is integral to the data backup and archiving system. At the time, tying the encryption keys to the user and service accounts was considered an effective way to address the threats, and performance was superior to full database encryption because sensitive data was constrained to a few columns. This use case reflects a real customer, and how they chose to deal with the issue at the time. Made today, this would be the wrong choice: transparent encryption, properly deployed, along with modified access control settings, would be sufficient to remedy both problems, and would be the optimal choice.

Use Case 3: PCI Compliance Strategy

Company C is a Level 2 merchant that needs to comply with the PCI-DSS guidelines. They store customer billing information, credit card information, and some password recovery information in the customer database. Some of the transactional information with customer name and address information is also propagated to other databases, with foreign key references from the customer table into the billing department’s database. The customer wants to comply with the PCI-DSS standard and would like to keep the data segmented from all users except a single administrative account and the lone service account that processes requests from the application server.

The customer decided to break this into a two-phase effort. As they were behind the curve, the first goal was to attain PCI compliance before migrating to a more secure, more sustainable solution. To get compliant quickly, they chose a file-based encryption product that secured database contents externally. This was implemented without altering the application or database logic. Company C decoupled access control from the native OS platform by leveraging an existing centralized service. This is a stop-gap measure, providing sufficient time to move to a different architecture.

As the long-term solution, Company C is removing credit card numbers from business processing applications and moving to a tokenized model. Every credit card transaction will generate a token that is used in lieu of the credit card number. As this number is nothing more than an internal reference, it cannot be used as a credit card if stolen. The company will continue to collect and store credit card numbers from customers, but these numbers will be stored in a single, highly secured database. The primary reason is remediation without breaking prior transactions, but a homegrown solution will reduce costs and external dependencies as well. All other internal operations that reference credit card numbers will use tokens provided by the internal software instead. As the token is similar in size and data type to a credit card number, it will require few changes to supporting applications, and none to the database structure.

All credit card details will be moved into this single data repository, reducing the scope of the problem, but Company C still needed to decide how to encrypt the existing credit card data. The choice came down to encryption at the application level through an external hardware appliance vs. database encryption with external key management. Company C examined factors including ease of integration, quality of security, ease of use, and scope of changes to the application and development platform, but these considerations were more or less a wash. The decision was made to go with native database encryption with external key management. The deciding factors came down to price, reliability, and performance. The database included the encryption engine and required no additional purchase of software or hardware. Additionally, while the hardware appliance offered hardware acceleration of cryptographic operations, it was slowed by network latency and occasional network availability hiccups.

While this use case reflects a single company’s strategic decision, many of the smaller firms we have spoken with plan to deploy a similar token replacement strategy. But smaller firms are opting for total removal of credit card data: rather than keeping PAN and related information in even a single database, it is more efficient for them to eliminate most of the liability by moving web-based collection of credit card numbers to a third-party product.

–Adrian Lane

Wednesday, July 01, 2009

Database Encryption, Part 5: Key Management

By Adrian Lane

This is Part 5 of our Database Encryption Series. Part 1, Part 2, Part 3, Part 4, and the supporting posts on Database vs. Application Encryption, & Database Encryption: Fact or Fiction are online.

I think key management scares people.

Application developers, IT managers, and database administrators all know effective key management support for encryption is critical, but it remains scary for most practitioners. Despite the incredible mathematical complexity of the ciphers, and the finesse required to implement them securely, practitioners don’t have to understand the gears and cogs inside the machine. Encryption is provided as libraries or fully functional services, so you send out clear text and get back encrypted data – easy. Key management worries people because if you don’t get that piece right the whole system fails, and you are the responsible party. To illustrate what I mean, I want to share a couple of stories about developers and IT practitioners who manage these systems.

Building database applications from scratch, developers have access to good crypto libraries, but generally little understanding of key management practices and few key management resources. The application developers I know took great pride in securing database fields through encryption, but when I asked them how they stored the key, the answer was usually “in the properties file”. That meant the key was stored on disk, unencrypted, in a directory readable by anyone who could access the application. When I pressed the point, I was assured that the key ‘needed’ to be there, otherwise the application would not be able to get the key, and would fail to restart. I have even had developers tell me this is a “chicken vs. egg” conundrum: if you encrypt the key you cannot access it, therefore a key must be kept in clear text somewhere. I kid you not – at my last employer (which, by the way, developed security products), this was the reason the ‘senior’ programmer implemented key management this way, and why he didn’t see a problem with it. The argument always ends the same: a key kept in the clear as a tangible object is considered fine, while one obfuscated and hidden is not. The unspoken reason is something every programmer knows: code has bugs, and a key management bug could be devastating and unrecoverable.
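
To make the anti-pattern concrete, here is what the “properties file” design looks like next to two less fragile alternatives (the property name and environment variable are made up for illustration):

    import os

    # Anti-pattern: the key sits unencrypted on the same disk as the data,
    # readable by anyone who can read the application's install directory.
    with open("app.properties") as f:
        props = dict(line.strip().split("=", 1) for line in f if "=" in line)
    key = bytes.fromhex(props["encryption.key"])

    # Less bad: the key is injected into the process environment at launch,
    # so a copied install directory or backup no longer includes it.
    key = bytes.fromhex(os.environ["APP_ENC_KEY"])

    # Better: an authenticated startup call to an external key manager or
    # HSM, which also buys you rotation, expiration, and an audit trail.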

On the IT side, the administrators I know have a different, equally frightening, set of problems with key management. Every IT manager I have spoken with has one or more of these questions: What happens if/when I lose keys? How do I back keys up securely? How do I replicate keys across multiple key servers for redundancy? I have 1,000 users reliant on public key cryptography, so how do I share these keys for all these users? If I expire and rotate keys, do I lose access to data archives? If I try to recover data from a tape, how do I get the right key? If I am using specialized key management hardware, how do I recover from fire or other disasters? All of these risks weigh on the minds of IT professionals every day.

Lose your key, lose your data. Lose your data, lose your job. And that scares the heck out of people!

Our goal in this section is to discuss key management options for database encryption. This introduction is meant to underscore the state of key management services today, and help illustrate why key management products are deployed the way they are. Yes, you can download excellent encryption tools for free, you can mix and match best-of-breed features, and you can develop your own operational key management process that is application agnostic, but this approach is becoming a rarity. And that’s a good thing, because key management needs to be performed by people who know what they are doing. Centralized, automated, embedded, pre-packaged, and available as a complete service is the common choice. This removes the complexity and responsibility of key management – and much of the worry about reliability – from your developers and IT administrators. Make no mistake, this trade-off comes at a price. Let’s dig into some key management practices for databases and how they are used.

Internally Managed

For database encryption, we define “internal key management” as key services within the database, provided by the database vendor. All of the relational database management platforms provide encryption packages, and these packages include key management functions. Typical services include key creation, storage, retrieval, and security; most systems can handle both symmetric and public key encryption options. Usage of the keys can be handled by proxy, as with the transparent encryption options, or through direct API calls to the database package. The keys are stored within the database, usually within a special table that resides in the administrative database or schema. Each vendor’s approach to securing keys varies significantly: some rely upon simple access controls tied to administrative accounts, some encrypt individual users’ keys with a single master key, while others do not allow any direct access to keys at all, and perform all key functions within a proxied service.

If you are using transparent encryption options (see Part 2: Selection Process Overview for terminology definitions) provided by the database vendor, all key operations are performed on the users’ behalf. For example, when a query for data within an encrypted column is made, the database performs the typical authorization checks, and once they succeed, automatically decrypts the data for the user. Neither the users nor the application need be aware that the data is encrypted, make a specific request to decrypt it, or supply a decryption key to the database. It is all handled on their behalf, and often performed without their knowledge.
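
From the application’s point of view, “transparent” means the code is identical to the unencrypted case. A sketch using pyodbc against a database with an encrypted column (the connection details and schema are hypothetical):

    import pyodbc

    conn = pyodbc.connect("DSN=customers;UID=app_user;PWD=example")
    cur = conn.cursor()
    # The ssn column is encrypted on disk, yet no key, decrypt call, or special
    # syntax appears here -- the database decrypts after its authorization checks.
    cur.execute("SELECT name, ssn FROM customers WHERE customer_id = ?", 42)
    print(cur.fetchone())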

Transparent key management’s greatest asset is just that: its transparency. The storage, management, security, sharing, and backup of the keys are handled by the database. With internally managed encryption keys, there is not a lot for the application to do, or even care about, since all use and management of the keys is performed inside the database. The service runs autonomously and is turned on by a simple alteration of database parameters, so the DBA need only worry about backing up the system and running the occasional disaster recovery drill, so there are no unexpected ‘gotchas’. The database vendors have worked hard to perfect these operations, making them fast, efficient, and reliable; most vendors bundle the functions with the core database and price them attractively. Finally, this flavor has the flexibility to support API calls as well as transparent operation.

On the downside, these keys are typically only as secure as your least trustworthy DBA. On every platform that stores the keys in a database table, they are accessible to database administrators. Despite vendor advertisements to the contrary, there are attacks that can sniff key operations within the database, or even scan through the database memory structures where keys reside during use. When you boil it down, transparent encryption with internal key management is a very simple and cost-effective way to secure data at rest, but not particularly effective at securing keys from database administrators.

Externally Managed

With external key management, all key operations are performed outside the database in a dedicated key management server. External key management servers are either dedicated software or hardware appliances (aka Hardware Security Modules, or HSMs), accessed through network or application procedure calls. Both provide a full range of functions, including key creation, storage, and retrieval, and both support symmetric and public key cryptosystems. They usually have a lot more to offer as well, with administrative support for access management, sharing keys across multiple key managers, and help with policies for key expiration and rotation. They provide additional advantages over internally managed keys: because the platform is dedicated to this function, it performs cryptographic operations faster, produces higher-quality keys, and stores them more securely than is possible within the database. They can also perform the encryption and decryption operations on the users’ behalf, outside the database.

External key management can be used in two ways: by calling the key management server directly, or by having the database make the requests on behalf of users and applications. All relational databases now provide their own cryptographic API support. By calling the database through an established database communication port, application developers get external key management services within the same ‘database’ connection. These API calls are performed in the same context and the same programmatic language as other database queries. This approach offers the security advantage of enforcing separation of duties, keeping the encryption keys out of the hands of database administrators and putting them into the hands of application developers. It allows application developers to choose just how granular they want to get with data encryption, and provides a great deal of flexibility in access control enforcement and key usage. This approach still leverages the database itself to perform the encryption, and can still take advantage of database features to address many structural and systems management challenges.

The other external key management option is calling the key manager directly and having it perform all of the cryptographic functions on your behalf. In this case you’re outside the realm of database encryption and have moved to external, or application-layer, encryption. By performing all encryption outside the database, leveraging none of the built-in functions, the database becomes a simple repository for encrypted data. The advantages are that the database does not suffer the performance impact associated with encryption, the network latency of a database API call is not incurred, remote key storage enables separation of administrative duties, and these facilities can be shared across many different systems. It also means that all cryptographic operations are now the responsibility of the application developer, all user and key management must be performed within the application logic, and your database design must be altered to accommodate the required changes.

External key management picks up the key sharing, backup, and other services that you lose by not using the database internals. Most customers we speak with now are opting for dedicated hardware to support key management operations. Performance is one advantage of HSMs, as hardware acceleration can generally handle encryption tasks an order of magnitude faster than software routines on general-purpose (database) servers. Another is high-quality entropy to seed cryptographic ciphers. Additionally, sharing of keys across servers is well secured, and the APIs can be accessed by multiple applications. Finally, HSMs are well protected from physical tampering and electronic eavesdropping, making them the most secure key storage option discussed here.

That is not to say that dedicated software key management does not have its own advantages. Simplicity is a major factor, as is support for multiple platforms, which provides some flexibility and conformity in hardware requisition and maintenance. The biggest advantage of dedicated software key management is the flexibility to run anywhere: running on generic hardware helps you recover rapidly from disasters, and is far easier in virtual environments than dedicated hardware. The downsides are that the quality of the cryptography depends on the supporting hardware for entropy, and the keys are not as strongly secured as on dedicated devices.

HSMs tend to be very expensive compared to software (either DBMS features or software encryption applications). With any of these architectures, it’s important to confirm that the key management functions you need are supported by the database and encryption APIs.

One final word about key management database APIs: each database vendor supports the basic requirements, but not all available functions, so you need to verify that what is available through the API is what you need.

In our next post we will discuss common use cases that customers are trying to solve, and how each variant addresses these problems.

–Adrian Lane

Monday, June 22, 2009

Database Encryption: Fact vs. Fiction

By Adrian Lane

A good friend of mine has said for many years, “Don’t let the facts get in the way of a good story.” She has led a very interesting life and has thousands of funny anecdotes, but is known to embellish a bit. She always describes real life events, but uses some imagination and injects a few spurious details to spice things up a little. Not false statements, but tweaked facts that make a more engaging story. Several of the comments on the blog regarding our series on Database Encryption, as well as some made during product briefings, fall into the latter category: not completely false, but true only from a limited perspective, so I am calling them ‘fiction’.

It’s ironic that I am working on a piece called “Truth, Lies, and Fiction in Encryption” that will be published later this summer or early fall. I am getting a lot of good material that will go into that project, but there are a couple fictional claims that I want to raise in this series to highlight some of the benefits, weaknesses, and practical realities that come into play with database encryption.

One of the private comments made in response to Part 4: Credentialed User Protection was: “Remember that in both cases (Re: general users and database administrators), encryption is worthless if an authorized user account itself is compromised.” I classify this as fiction because it is not totally correct. Why? I can compromise a database account – say, the account an application uses to connect to the database. But that does not mean I have the credentials to obtain the key and decrypt data. I would have to compromise both the database account and the key/application user credentials.

For example, when I create a key in Microsoft SQL Server, I protect that key with a password or encrypt it with a different key. MSDN shows the SQL Server calls. If someone compromises the database account “SAP_Conn_Pool_User” with the password “Password1”, they still have not obtained the decryption keys: the key must be opened with its own password before ‘EncryptByKey’ or ‘DecryptByKey’ will operate (or a passphrase supplied directly to ‘EncryptByPassPhrase’/‘DecryptByPassPhrase’). A hacker would need to guess that password, or gain access to the key that encrypted the user’s key. And with connection pooling, many users’ keys are in play within the query operations, meaning the hacker may need to compromise several keys before obtaining the correct one. A DBA can gain access to this key if it is internal to the database, and I believe can intercept it if the value is passed through the database to an external HSM via database API (I say ‘believe’ because I have not personally written exploit code to do so). With the latest release of SQL Server, you can segregate the DBA role to limit access to stored key data, but not eliminate it altogether.
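
A sketch of that separation in SQL Server’s T-SQL, issued here through pyodbc; the key name, second password, and schema are made up, and the point is that the second secret is independent of the login:

    import pyodbc

    conn = pyodbc.connect("DSN=sales;UID=SAP_Conn_Pool_User;PWD=Password1")
    cur = conn.cursor()
    # Compromising the login above is not enough: the symmetric key was created
    # with ENCRYPTION BY PASSWORD, and must be opened with that second secret.
    cur.execute("OPEN SYMMETRIC KEY CardKey "
                "DECRYPTION BY PASSWORD = 'S3cond!Secret'")
    cur.execute("SELECT CONVERT(varchar(40), DecryptByKey(card_ct)) "
                "FROM cards WHERE id = ?", 1)
    print(cur.fetchone())
    cur.execute("CLOSE SYMMETRIC KEY CardKey")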

Another example: with IBM DB2, the user connection to the database is one set of credentials, while access to encryption keys requires a second set. To get at encrypted data you need both. Here is a reference for Encrypting Data Values in DB2 Universal Database.

Where this statement is true is with Transparent Encryption, such as the various derivatives of Oracle Transparent Encryption. Once a database user is validated to the database, the user session is supplied with an encryption key, and encryption operations are automatically mapped to the issued queries, so the user automatically has access to the key and does not need separate credentials. Transparent Encryption from every vendor is similar in this regard. You can use the API of the DBMS_Crypto package to provide this additional layer of protection, but as with the other platforms you must break the implicit binding of database user to encryption key, and this means altering your program to some degree. As with SQL Server, an Oracle DBA may or may not be able to obtain keys, depending on whether the DBA role has been segregated.

We have also received a comment on the blog stating that “encrypting data in any layer but the application layer leaves your data insecure.” Once again, a bit of fiction. If you view the problem as protecting data when database accounts have been compromised, then this is a true statement: encryption credentials in the application layer are safe. But applications provide application users the same type of transparency that Transparent Encryption provides database users, so a breached application account will likewise bypass the encryption and expose some portion of the data stored in the database. Same problem, different layer.

–Adrian Lane

Thursday, June 18, 2009

Database Encryption, Part 4: Credentialed User Protection

By Adrian Lane

In this post we will detail the other half of the decision tree for selecting a database encryption strategy: securing data from credentialed database users. Specifically, we are concerned with preventing misuse of data through individual or group accounts that provide access to data, either directly or through another application. For this discussion, we are most interested in differentiating accounts assigned to users who use the data stored within the database from accounts assigned to users who administer the database system itself. These are the two primary types of credentialed database users, and each needs to be treated differently because their access to database functions is radically different. As administrative accounts have far more capabilities and tools at their disposal, those threats are more varied and complex, making it much more difficult to insulate sensitive data. Also keep in mind that a ‘user’ in the context of database accounts may be a single person, a group account associated with a number of users, or an account utilized by a service or program.

With User Encryption, we assign access rights to the data we want secured on a user-by-user basis, and provide decryption keys only to the specified users who own that information, typically through a secondary authentication and authorization process. We call this User Encryption because we are both protecting sensitive data associated with each user account, and responding to threats by type of user. This differs from Transparent Encryption in two important ways. First, we are now protecting data accessed through the normal database communication protocols, as opposed to methods that bypass the database engine. Second, we are no longer encrypting everything in the database; quite the opposite – we want to encrypt as little as possible, so non-sensitive information remains available to the rest of the database community. Conceptually this is very similar to the functionality provided by database groups, roles, and user authorization features. In practice it provides an additional layer of security and authentication where, in the event of a mistake or account compromise, exposed data remains encrypted and unreadable. As you can probably tell, since most regular users can be restricted using access controls, encryption at this level is mostly used to restrict administrative users.

They say if all you have is a hammer, everything begins to look like a nail. That statement is relevant to this discussion of database encryption because the database vendors begin the conversation with their capabilities for column, table, row, and even cell level encryption. But these are simply tools – and for what we want to accomplish, they may be the wrong tools. We need to fully understand the threat first, in this case credentialed users, and build our tool set and deployment model based upon that, not the other way around. We will discuss the encryption options themselves in our next post on Implementation and Deployment. Interestingly enough, in the case of credentialed user threat analysis, we proceed from the assumption that something will go wrong, and someone will attempt to leverage credentials to gain access to sensitive information within the database. In Part 2 of this series, we posed the questions “What do you want to protect?” and “What threat do you want to protect the data from?” Here, we add one more: “Who do you want to protect the data from?” General users of the data, or administrators of the system? Let’s look at these two user groups in detail:

  • Users: This is the general class of users who call upon the database to store, retrieve, report on, and analyze data. They may do this directly through queries, but far more likely they connect to the database through another application. There are several common threats companies look to address for this class of user: inadvertent disclosure through sloppy privilege management or inherited trust relationships, meeting a basic compliance requirement for encrypting sensitive data, or providing finer-grained access control than is otherwise available through the application or database engine. Applications commonly use service accounts to connect to the database; those accounts are shared by multiple users, so the permissions may not be sufficiently granular to protect sensitive data. Users do not have the same privileges and access to the underlying infrastructure that administrators do, so the threat is exploitation of lax access controls. If protecting against this is our goal, we need to identify the sensitive information, determine who may use it, and encrypt it so that only the appropriate users have access. In these cases deployment options are flexible: you can choose key management internal or external to the database, leverage the internal database encryption engine, and gain some latitude as to how much of the encryption and authentication is performed outside the database. Keep in mind that access controls are highly effective with much less performance impact, and they should be your first choice. Only encrypt when encryption really buys you additional security.
  • Administrators: The most common concern we hear companies discuss is their desire to mitigate damage in the event that a database administrator (DBA) account is compromised or misused by an employee. This is the single most difficult database security challenge to solve. The DBA role has rights to perform just about every function in the database, but no legitimate need to examine or use most of the data stored there. For example, the DBA has no need to examine Social Security Numbers, credit card data, or any customer data to maintain the database itself. This threat model dictates many of the deployment options. When the requirement is to protect data from highly privileged administrators – enforcing separation of duties and providing a last line of defense for breached DBA accounts – then at the very least external key management is required. By encrypting the data and removing key management functions from DBA control, sensitive information can be kept secure. External key management provides separation of duties between the management of the database and the use of the data within it. The orchestration of the encryption/decryption functions is typically performed in the application, not inside the database, requiring modification to application code. Use of the database engine’s built-in encryption capabilities may be possible, depending upon the vendor implementation, and most vendors provide API calls for third-party encryption support while maintaining a single database ‘conversation’. This design addresses user-level threats as well, and should be considered a superset.

Once we’ve decided which of these threats to address, we select tools, technologies, and deployment options that support our goals. We have established that, at the very least, the two drivers are distinguished by calling for internal vs. external key management. Depending upon your answers to the question “What data do you want to protect?”, we can now decide at what level to encrypt (table, column, row, or cell), the type of key management we need, whether we will leverage the internal database encryption engine or use external services, and what changes need to be made to the business processing logic. We will cover these topics in our next post, and will follow up with several common customer use cases.

One parting point we want to make about user encryption strategies, as it is a question that comes up over and over: “How is this different from access controls?” It’s all in how you use it. Encryption’s key value is providing a level of granularity beyond what’s possible with access controls. This almost always translates to restricting administrators, since access controls are very effective for all other kinds of users. Another, more complex, option is to tie encryption to digital certificates outside the database, adding (essentially) another authentication factor. This increases security, because simply compromising a username and password isn’t sufficient to read the data, and so is particularly useful for protecting data used by service accounts.

For the most part, as you’ll see in our use cases, we only recommend user-level database encryption under extremely limited circumstances. It’s a complex topic, and we haven’t even dug into the technology yet, but please don’t assume that because we are spending so much time on it, it’s your best option. Just because it’s complicated and takes a long time to describe doesn’t mean it’s what you should look at first.

–Adrian Lane

Monday, June 15, 2009

Database Encryption, Part 3: Transparent Encryption

By Adrian Lane

In our previous posts in this Database Encryption series (Introduction, Part 2) we provided a decision tree for selecting a database encryption strategy. Our goal in this process is to map the encryption selection process to the security threats to protect against. That sounds simple enough, but it is tough to wade through vendor claims, especially when everyone from network storage to database vendors claims to provide the same value. We need to understand how to deal with the threats conceptually before we jump into the more complex technical and operational issues that can confuse your choices. In this post we dig into the first branch of the tree: non-credentialed threats – protecting against attacks from the outside, rather than from authenticated database users. We call this “Transparent/External Encryption”, since we don’t have to muck with database user accounts, and the encryption can sometimes occur outside the database. Transparent Encryption won’t protect sensitive content in the database if someone has access to it through legitimate credentials, but it will protect the information on storage and in archives, and provides a significant advantage in that it is deployed independently of your business applications. If you need to protect things like credit card numbers, where you must restrict even an administrator’s ability to see them, this option isn’t for you. If you are only worried about lost media, stolen files, a compromised host platform, or insecure storage, then Transparent Encryption is a good option. By not having to muck around with the internal database structures and application logic, it often provides huge savings in time and investment over more involved techniques.

We have chosen the term Transparent Encryption, as many of the database vendors have, to describe the capability to encrypt data stored in the database without modification to the applications using that database. We’ve also added “External” to distinguish this from external encryption at the file or media level. If you have a database, then you already have access controls that protect that data from unwanted viewing through database communications: the database itself screens queries and applications to make sure that only appropriate users or groups are permitted to examine and use data. The threat we want to address here is protecting data from physical loss or theft (including some forms of virtual theft) by means outside the scope of access controls. Keep in mind that even though the data is “in” a database, that database keeps permanent records on disk drives, with data archived to many different types of low-cost, long-term storage. There are many ways for data to be accessed without credentials being supplied at all – cases where the database engine is bypassed altogether – for example, examination of data on backup tapes, disks, offline redo log files, transaction logs, or anywhere else data resides on storage media.

Transparent/External Encryption for protecting database data uses the following techniques & technologies:

  • Native Database Object (Transparent) Encryption: Database management systems, such as Oracle, Microsoft SQL Server, and IBM DB2, include capabilities to encrypt either internal database objects (tables and other structures) or the data stores (files). These encryption operations are managed from within the database, using native encryption functions built into the database, with keys stored internally by default. This is a good overall option in many scenarios, as long as performance meets requirements. Depending on the platform, you may be able to offload key management to an external key management solution. The disadvantage is that it is specific to each database platform, and isn’t always available.
  • External File/Folder Encryption: The database files are encrypted using an external (third-party) file/folder encryption tool. Assuming the encryption is configured properly, this protects the database files from unauthorized access on the server, and those files are typically still protected as they are backed up, copied, or moved. Keys should be stored off the server, with no access granted to local accounts, to protect the data if the server is compromised by an external attacker. Some file encryption tools, such as Vormetric and BitArmor, can also restrict access to the protected files by application. Then only the database processes can access the file, and even if an attacker compromises the database’s user account, they will only be able to access the decrypted data through the database itself. File/folder encryption of the database files is a good option as long as performance is acceptable and keys can be managed externally. Any file/folder encryption tool supports this option (including Microsoft EFS), but performance needs to be tested because there is wide variation among the tools. Remember that any replication or distribution of data handled from within the database won’t be protected unless you also encrypt the destinations.
  • Media Encryption: This includes full drive encryption and SAN encryption; the entire storage medium is encrypted, so the database files are protected. Depending on the method used and the specifics of your environment, this may or may not protect the data as it moves to other data stores, including archival (tape) storage. For example, depending on your backup agent, you may be backing up the unencrypted files or the encrypted storage blocks. This is best suited for high-performance databases where the primary concern is physical loss of the media (e.g., a database on a managed SAN where the service provider handles failed drives potentially containing sensitive data). Any media encryption product supports this option.

Which option to choose depends on your performance requirements, threat model, existing architecture, and security requirements. Unless you have a high-performance system that exceeds the capabilities of file/folder encryption, we recommend you look there first. If you are managing heterogeneous databases, you will likely prefer a third-party product to native encryption. In both cases, it’s very important to use external key management and not allow access by any local accounts. We will outline selection criteria and use cases to support the decision process in a future post.

    You will notice that, depending upon which technique you choose, the initiation of the encryption, the engine that performs the encryption, and the key management server may all reside in different places. In fact, the latter two techniques encrypt the database but are not in the database at all: the engine that performs the encryption, as well as the processes responsible for managing the encryption operations, live outside the database. In all three cases encryption remains transparent to business processing functions. Using the three technology variables we introduced in Part Two of this series (engine location, key management location, and who orchestrates the operations) will help you differentiate vendor offerings and understand the strengths and weaknesses of each variation.

    Next up we will discuss the techniques for securing data against credentialed users and some implementation considerations for user encryption.

    –Adrian Lane

    Thursday, June 11, 2009

    Application vs. Database Encryption

    By Rich

    There’s a bit of debate brewing in the comments on the latest post in our database encryption series. That series is meant to focus only on database encryption, so we weren’t planning to talk much about other options, but it’s an important issue.

    Here’s an old diagram I use a lot in presentations to describe potential encryption layers. What we find is that the higher up the stack you encrypt, the greater the overall protection (since the data stays encrypted through the rest of the layers), but this comes at the cost of increased complexity. It’s far easier to encrypt an entire hard drive than a single field in an application, at least in real-world implementations. By giving up granularity, you gain simplicity. For example, to encrypt the drive you don’t have to worry about access controls, tying in database or application users, and so on.

    [Figure: potential encryption layers in the application stack, from application-level encryption at the top down to media encryption at the bottom]

    In an ideal world, encrypting sensitive data at the application layer is likely your best choice. Practically speaking, it’s not always possible, or may be implemented entirely wrong. It’s really freaking hard to design appropriate application level encryption, even when you’re using crypto libraries and other adjuncts like external key management. Go read this post over at Matasano, or anything by Nate Lawson, if you want to start digging into the complexity of application encryption.

    Database encryption is also really hard to get right, but is sometimes slightly more practical than application encryption. When you have a complex, multi-tiered application with batch jobs, OLTP connections, and other components, it may be easier to encrypt at the DB level and manage access based on user accounts (including service accounts). That’s why we call this “user encryption” in our model.

    Keep in mind that if someone compromises user accounts with access, any encryption is worthless. Additional controls like application-level logic or database activity monitoring might be able to mitigate a portion of that risk, but once you lose the account you’re at least partially hosed.

    For retail/PCI kinds of transactions I prefer application encryption (done properly). For many users I work with that’s not an immediate option, and they at least need to start with some sort of database encryption (usually transparent/external) to deal with compliance and risk requirements.

    Application encryption isn’t a panacea – it can work well, but brings additional complexities and is really easy to screw up. Use with caution.

    –Rich

    Wednesday, June 10, 2009

    Database Encryption, Part 2: Selection Process Overview

    By Adrian Lane

    In the selection process for database encryption solutions, too often the discussion devolves straight into the encryption technologies: the algorithms, computational complexity, key lengths, merits of public vs. private key cryptography, key management, and the like.

    In the big picture, none of these topics matter.

    While these nuances may be worth considering, that conversation sidesteps the primary business driver of the entire effort: what threat do you want to protect the data from? In this second post in our series on database encryption, we’ll provide a simple decision tree to guide you in selecting the right database encryption option based on the threat you’re trying to protect against. Once we’ve identified the business problem, we will then map that to the underlying technologies to achieve that goal. We think it’s safe to say that if you are looking at database encryption as an option, you have already come to the decision that you need to protect your data in some way. Since there’s always some expense and/or potential performance impact on the database, there must be some driving force to even consider encryption. We will also make the assumption that, at the very least, protecting data at rest is a concern. Let’s start the process by asking the following questions:

    What do you want to protect? The entire contents of the database, a specific table, or a data field?

    What do you want to protect the data from? Accidental disclosure? Data theft?

    Once you understand these requirements, we can boil the decision process into the following diagram:

    [Figure: database encryption decision tree]

    Whether your primary driver is security or compliance, the breakdown will be the same. If you need to provide separation of duties for Sarbanes-Oxley, or protect against account hijacking, or keep credit card data from being viewed for PCI compliance, you are worried about credentialed users. In this case you need a more granular approach to encryption and possibly external key management. In our model, we call this user encryption. If you are worried about missing tapes, physical server theft, copying/theft of the database files via storage compromise, or un-scrubbed hard drives being sold on eBay, the threat is outside of the bounds of access control. In these cases use of transparent/external encryption through native database methods, OS support, file/folder encryption, or disk drive encryption is appropriate.
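
    If it helps to see the branch logic spelled out, here is a minimal sketch of that decision tree in Python; the threat names are illustrative shorthand rather than a formal taxonomy.

```python
# Hedged sketch of the decision tree: map the threat you are worried
# about to one of the two encryption strategies described above.
def choose_encryption(threat: str) -> str:
    credentialed_user_threats = {
        "separation of duties", "account hijacking",
        "credentialed viewing of cardholder data",
    }
    media_threats = {
        "lost backup tape", "stolen server", "storage compromise",
        "unscrubbed drive sold on ebay",
    }
    if threat in credentialed_user_threats:
        return "user encryption (granular, possibly external key management)"
    if threat in media_threats:
        return "transparent/external encryption (native, file/folder, or media)"
    raise ValueError(f"unmodeled threat: {threat!r} -- define it first")

print(choose_encryption("lost backup tape"))   # transparent/external
print(choose_encryption("account hijacking"))  # user encryption
```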

    Once you have decided which method is appropriate, we need to examine the basic technology variables that affect your database system and operations. Which one you select determines how much impact encryption will have on applications, database performance, and so on. With any form of database encryption there are many technology variables to consider for your deployment, but for the purpose of selecting which strategy is right for you, there are only three to worry about. These three affect the performance you can expect and the types of threats you can address. In each case we will want to investigate whether the function is performed inside the database or outside it. They are:

    1. Where does the encryption engine reside? [inside/outside]
    2. Where is the key management performed? [inside/outside]
    3. Who/what performs the encryption operations? [inside/outside]

    In a nutshell, the more secure you want to be and the more you need separation of duties, the more you will need granular enforcement and changes to your applications. Each option that is moved outside the database means more complexity and less application transparency. We hate to phrase it like this because it somehow implies that what the database provides is less secure, when that is absolutely not the case. But it does mean that the more we manage inside the database, the greater the vulnerability in the event of a database or DBA account compromise. It’s called “putting all your eggs in one basket”. Throughout the remainder of the week we will discuss the major branches of this tree, and how they map to threats. We will follow that up with a set of use case discussions to contrast the models and set realistic expectations about the security these options will and will not provide, as well as some comments on the operational impact of using these technologies.
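
    When you start comparing vendor offerings against these three variables, it can help to jot each candidate down as a simple structure. The sketch below is a throwaway illustration; the option names and the inside/outside assignments are assumptions made up for the example, not claims about any particular product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EncryptionOption:
    engine_inside_db: bool        # 1. where the encryption engine resides
    keys_inside_db: bool          # 2. where key management is performed
    orchestrated_inside_db: bool  # 3. who/what performs the operations

    def components_outside(self) -> int:
        # More components outside the database generally means stronger
        # separation of duties, but more complexity and less transparency.
        return [self.engine_inside_db, self.keys_inside_db,
                self.orchestrated_inside_db].count(False)

# Illustrative data points, not product claims.
native_default = EncryptionOption(True, True, True)
native_external_keys = EncryptionOption(True, False, True)
external_file_encryption = EncryptionOption(False, False, False)

for option in (native_default, native_external_keys, external_file_encryption):
    print(option, "->", option.components_outside(), "component(s) outside")
```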

    By the end you’ll be able to walk through our decision tree and pick the best encryption option based on what threat you’re trying to manage, and operational criteria ranging from what database platform you’re on to management requirements.

    –Adrian Lane

    Thursday, June 04, 2009

    Introduction To Database Encryption - The Reboot!

    By Adrian Lane

    Updated June 4th to reflect terminology change.

    This is the Re-Introduction to our Database Encryption series. Why are we re-introducing this series? I’m glad you asked. The more we worked on the separation of duties and key management sections, the more dissatisfied we became. Rich and I got some really good feedback from vendors and end users, and we felt we were missing the mark with this series. It’s not just that the stuff I drafted when I was sick completely lacked clarity of thought; there are three specific reasons we were unhappy. The advice we were giving was not particularly pragmatic, the terminology we thought worked didn’t, and we were doing a poor job of aligning end-user goals with available options. So yeah, this is an apology to our audience, as the series was not up to our expectations and we failed to achieve some of our own Totally Transparent Research concepts. But we’re ‘fessing up to the problem and starting from scratch.

    So we want to fix these things in two ways. First, we want to change some of the terminology we have been using to describe database encryption. Using ‘media encryption’ and ‘separation of duties’ confuses the issues, and we want to differentiate between the threat we are trying to protect against and what is being encrypted. And as we are talking to IT, developers, DBAs, and other audiences, we want to reduce confusion as much as possible. Second, we will create a simple guide for people to select a database encryption strategy that addresses their goals. Basically, we are going to outline a decision tree of user requirements and map those to the available database encryption choices. Rich and I think that will help end users both clarify their goals and determine the correct implementation strategy.

    In our original introduction we provided a clear idea of where we wanted to go with this series, but we did adopt our own terminology in order to better encapsulate the database encryption options vendors provide. We chose “Encryption for Separation of Duties” and “Encryption for Media Protection”. This is a bit of an oversimplification, and mapped to the threat rather than to the feature. Plus, if you asked your RDBMS vendor for ‘media encryption’, they would not know what the heck you were talking about. We are going to change the terminology back to the following:

    1. Database Transparent/External Encryption: Encryption of the entire database. This is provided by native encryption functions within the database. The goal is to prevent exposure of information due to loss of the physical media. This can also be done through drive or OS/file system encryption, although they lack some of the protections of native database encryption. The encryption is invisible to the application and does not require alterations to the code or schema.

    2. Data User Encryption: Encrypting specific columns, tables, or even data elements in the database. The classic example is credit card numbers. The goal is to provide protection against inadvertent disclosure, or to enforce separation of duties. How this is accomplished will depend upon how key management and (internal/external) encryption services are utilized; it will affect the way the application uses the database, but provides more granular access control.

    While we’re confident we’ve described the two options accurately, we’re not convinced the specific terms “database encryption” and “data encryption” are necessarily the best, so please suggest any better options.

    Blanket encryption of all database content for media protection is much easier than encrypting specific columns & tables for separation of duties, but it doesn’t offer the same security benefits. Knowing which to choose will depend upon three things:

    • What do you want to protect?
    • What do you want to protect it from?
    • What application changes and management tasks will you tolerate?

    Thus, the first thing we need to decide when looking at database encryption is what we are trying to protect and why. If we’re just going after the ‘PCI checkbox’, or are worried about losing data from swapping out hard drives, someone stealing the files off the server, or misplacing backup tapes, then database encryption (for media protection) is our answer. If the goal is to protect data in the event of compromised accounts, rogue DBAs, or inadvertent disclosure, then things get a lot more complicated. We will go into the details of ‘why’ and ‘how’ in a future post, as well as the issues of application alterations, after we have introduced the decision tree overview. If you have any comments, good, bad, or indifferent, please share. As always, we want the discussion to be as open as possible.

    –Adrian Lane

    Thursday, May 14, 2009

    Database Encryption: Option 2, Enforcing Separation of Duties

    By Adrian Lane

    This is the next installment in what is now officially the longest running blog series in Securosis history: Database Encryption. In case you have forgotten, Rich provided the Introduction and the first section on Media Protection, and I covered the threat analysis portion to help you determine which threats to consider when developing a database encryption strategy. You may want to peek back at those posts as a refresher if this is a subject that interests you, as we like to use our own terminology. It’s for clarity, not because we’re arrogant. Really!

    For what we are calling “database media protection” as described in Part 1, we covered the automatic encryption of the data files or database objects through native encryption built into the database engine. Most of the major relational database platforms provide this option, which can be “seamlessly” deployed without modification to applications and infrastructure that use the database. This is a very effective way to prevent recovery of data stored on lost or stolen media. And it is handy when you have renegade IT personnel who hate managing separate encryption solutions. Simple. Effective. Invisible. And only a moderate performance penalty. What more could you want?

    If you have to meet compliance requirements, probably a lot more. You need to secure credit card data within the database to comply with the PCI Data Security Standard. You are unable to catalog all of the applications that use sensitive data stored in your database, so you want to stop data leakage at the source. Your DBAs want to be ‘helpful’, but their ad-hoc adjustments break the accounting system. Your quality assurance team exports production data into unsecured test systems. Medical records need to be kept private. While database media protection is effective in addressing problems with data at rest, it does not help enforce proper data usage. Requirements to prevent misuse by credentialed users or compromised user accounts, or to enforce separation of duties, are outside the scope of basic database encryption. For these reasons and many others, you decide that you need to protect the data within the database through more granular forms of database encryption: table, column, or row level security.

    This is where the fun starts! Encrypting for separation of duties is far more complex than encrypting for media protection; it involves protecting data from legitimate database users, requiring more changes to the database itself. It’s still native database encryption, but this simple conceptual change creates exceptional implementation issues. It will be harder to configure, your performance will suffer, and you will break your applications along the way. Following our earlier analogy, this is where we transition from hanging picture hooks to a full home remodeling project. In this section we will examine how to employ granular encryption to support separation of duties within the database itself, and the problems this addresses. Then we will delve into the problems you will run into and what you need to consider before taking the plunge.

    Before we jump in, note that each of these options is commonly referred to as a ‘level’ of encryption; this does not mean they offer more or less security, but rather identifies where encryption is applied within the database storage hierarchy (element, row, column, table, tablespace, database, etc.). There are three major encryption options that support separation of duties within the database. Not every database vendor supports all of these options, but most support at least two of the three, and that is enough to accomplish the goals above. The common options are:

    1. Column Level Encryption: As the name suggests, column level encryption applies to all data in a single, specific column in a table. The column is encrypted using a single key that supports one or more database users. Queries that examine or modify encrypted columns must not only come from users with the correct database privileges, but must additionally provide credentials to access the encryption/decryption key. This could be as simple as passing a different user ID and password to the key manager, or as sophisticated as a full cryptographic certificate exchange, depending upon the implementation. By instructing the database to encrypt all data stored in a column, you focus on the specific data that needs to be protected. Column level encryption is the popular choice for compliance with PCI-DSS, as it restricts access to a very small group. The downside is that the column is encrypted as a whole, so every select requires the entire column to be decrypted, and every modification requires the entire column to be re-encrypted. This is the most commonly available option in relational database platforms, but has the poorest performance. (See the sketch following this list for the basic flow.)
    2. Table / Tablespace Encryption: Table level encryption is where the entire contents of a table or group of tables are encrypted as one element. Much like full database encryption, this method protects all the data within the table, and is a good option when more than one column in the table contains sensitive information. While it does not offer fine-grained access control to specific data elements, it is a more efficient option than column encryption when multiple columns contain sensitive data, and requires fewer application and query modifications. Examples of when to use this technique include personally identifiable information grouped together (like medical records or financial transactions), and this is an appropriate approach for HIPAA compliance. Performance is manageable, and is best when the sensitive tables can be fully segregated into their own tablespace or database.
    3. Field/Cell/Row Level Encryption, Label Security: Row level encryption is where a single row in a table is encrypted, and field or cell level encryption is where individual data elements within a database table are encrypted. These offer very fine-grained control over data access, but can be a management and performance nightmare. Depending upon the implementation, there might be one key used for all elements or a key for each row. The performance penalty is a sharp limitation, especially when selecting or modifying multiple rows. More commonly, separation of duties is supported by label security. This strategy involves structural modifications to the database to support “labeling” each row with attributes corresponding to access rights or user groups. Additionally, each user is assigned access rights that map to one or more of these labels. When a user makes a request, they are only allowed to retrieve/view the subset of rows with matching label attributes; the query is applied to that subset of the database independent of any action on the user’s part or query modifications. This offers much higher performance and works well with large databases. Label security can be used in conjunction with field/cell level encryption to provide high security, but as it is often sufficient on its own to address separation of duties, it is more typically paired with transparent forms of database encryption.
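
    As promised above, here is a minimal sketch of the column-level flow: the caller fetches a per-column key using credentials separate from the database login, then encrypts or decrypts individual cell values with it. The key lookup function, column name, and credentials are hypothetical placeholders, and the sketch uses Python’s cryptography package rather than any vendor’s native facility.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Demo key store. In practice this lives in an external key manager,
# and releasing a key requires authentication separate from the DB login.
_COLUMN_KEYS = {"customers.card_number": AESGCM.generate_key(bit_length=256)}

def get_column_key(column: str, user: str, credential: str) -> bytes:
    # Hypothetical: a real key manager authenticates (user, credential)
    # before releasing anything; here we just look the key up for the demo.
    return _COLUMN_KEYS[column]

def encrypt_cell(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_cell(key: bytes, blob: bytes) -> bytes:
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

key = get_column_key("customers.card_number", "app_user", "app_secret")
stored = encrypt_cell(key, b"4111111111111111")  # what lands in the column
print(decrypt_cell(key, stored))
```

    The design point is the second credential check: database privileges get a user to the ciphertext, but only key manager credentials yield the key.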

    These advantages come at a cost, and one of those costs is the re-engineering effort required for the applications that rely upon the data being encrypted. Most database queries rely on functions to format the results or derive information, and these fail when referencing encrypted data. For example, grouping functions like ‘summation’ or ‘average’, and more advanced comparisons such as ‘like’ and ‘range’, no longer work. Indices on encrypted columns fail because they end up trying to arrange randomized ciphertext rather than meaningful values. Foreign key relationships and compound keys cause errors and unintended side effects in both application and database functions. Reporting applications and batch jobs that run under generic accounts lack the permissions to perform their intended functions. The full effect of retrofitting tables and queries designed under a different set of assumptions cannot be adequately estimated, and requires complete regression testing and data verification.
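
    The index problem in particular is easy to demonstrate. With a semantically secure mode (AES-GCM under a random nonce, in this sketch), identical plaintexts produce different ciphertexts, so a B-tree index has nothing stable to sort or match on.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)

def enc(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)  # fresh nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

a, b = enc(b"4111111111111111"), enc(b"4111111111111111")
print(a == b)          # False: equality predicates and unique indexes break
print(sorted([a, b]))  # ciphertext order bears no relation to plaintext order
```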

    The single biggest complaint we hear from companies implementing granular encryption regards the performance impact. Depending upon the specific vendor implementation, column level encryption may require anything from several blocks to the entire column of data to be decrypted before the query results can be returned. In cases where there are millions of rows scattered across millions of data blocks, the processing overhead is staggering. Encryption also precludes several standard performance optimizations, further reducing performance and throughput. For example, establishing a database connection is a time-consuming effort for the database, often far exceeding the time needed to execute the user’s query. “Connection pooling” is a common database feature where connections are pre-established under a generic application user account and remain idle until a user makes a request. But when access to encrypted data requires a specific user ID and credentials, generic service accounts cannot reach the encrypted data: each request must be established with a credentialed user account, or the connection modified so that the credentials are passed along and authenticated. Another example is data caching, where the database fetches commonly accessed information and stores it in memory. With encryption and label security, what each user sees may be different, so caching is less effective.
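
    To illustrate the connection pooling conflict, here is a toy sketch; the account names are invented for the example. The pool hands back connections opened under a generic service account, which by design carries none of the per-user credentials the key manager requires.

```python
from dataclasses import dataclass, field

@dataclass
class Connection:
    account: str  # the identity the connection was authenticated as

@dataclass
class ConnectionPool:
    size: int = 4
    _idle: list = field(default_factory=list)

    def __post_init__(self) -> None:
        # All connections pre-established under one generic account,
        # which is exactly what makes pooling cheap.
        self._idle = [Connection("app_service") for _ in range(self.size)]

    def acquire(self) -> Connection:
        return self._idle.pop()

pool = ConnectionPool()
conn = pool.acquire()
# Encrypted columns need the end user's key credentials, which the pooled
# connection does not carry; you either re-authenticate per request
# (forfeiting the pooling benefit) or pass credentials with each query.
assert conn.account == "app_service"
```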

    Many of these issues can be mitigated or completely addressed, but only when designing encryption into the application and database structures from scratch. If you are moving forward with an encryption project, it is far better to implement these changes into new tables and functions rather than attempt to retrofit new functions into tables and applications designed under a different set of assumptions.

    In our next post we will take a closer look at key management options. There are several variants available to support encryption functions, performance, and even separation of duties.

    –Adrian Lane

    Wednesday, October 15, 2008

    My Take On The Database Security Market Challenges

    By Rich

    Yesterday, Adrian posted his take on a conversation we had last week. We were headed over to happy hour, talking about the usual drivel us analyst types get all hot and bothered about, when he dropped the bombshell that one of our favorite groups of products could be in serious trouble.

    For the record, we hadn’t started happy hour yet.

    Although everyone on the vendor side is challenged with such a screwed up economy, I believe the forces affecting the database security market place it in particular jeopardy. This bothers me, because I consider these to be some of the highest value tools in our information-centric security arsenal.

    Since I’m about to head off to San Diego for a Jimmy Buffett concert, I’ll try and keep this concise.

    • Database security is more a collection of markets and tools than a single market. We have encryption, Database Activity Monitoring, vulnerability assessment, data masking, and a few other pieces. Each of these bits has different buying cycles, and in some cases, different buying centers. Users aren’t happy with the complexity, yet when they go shopping they tend to want to put their own car together (due to internal issues) rather than buy the full product.
    • Buying cycles are long and complex due to the mix of database and security. Average cycles are 9-12 months for many products, unless there’s a short term compliance mandate. Long cycles are hard to manage in a tight economy.
    • It isn’t a threat driven market. Sure, the threats are bad, but as I’ve talked about before they don’t keep people from checking their email or playing solitaire, so they are perceived as less serious.
    • The tools are too technical. I’m sorry to my friends on the vendor side, but most of the tools are very technical and take a lot of training. These aren’t drop-in boxes, and that’s another reason buying cycles are long. I’ve been talking with people who have gone through vendor product training in the last 6 months, and they all said the tools required DBA skills, which not many on the security side have.
    • They are compliance driven, but not compliance mandated. These tools can seriously help with a plethora of compliance initiatives, but there is rarely a checkbox requiring them. Going back to my economics post, if you don’t hit that checkbox or clearly save money, getting a sale will be rough.
    • Big vendors want to own the market, and think they have the pieces. Oracle and IBM have clearly stepped into the space, even when their products aren’t as directly competitive (or capable) as the smaller vendors’. Better or not, as we continue to drive towards “good enough”, many clients will stop with their big vendor (especially since the DBAs are so familiar with the product line).
    • There are more short-term acquisition targets than acquirers. The Symantecs and McAfees of the world aren’t looking too strongly at the database security market, mostly leaving the database vendors themselves. Only IBM seems to be pursuing any sort of acquisition strategy. Oracle is building their own, and we haven’t heard much in this area out of Microsoft. Sybase is partnered with a company that seems to be exiting the market, and none of the other database companies are worth talking about. The database tools vendors have hovered around this area, but outside of data masking (which they do themselves) don’t seem overly interested.
    • It’s all down to the numbers and investor patience. Few of the startups are in the black yet, and some have fairly large amounts of investment behind them. If run rates are too high and sales cycles too long, I won’t be surprised to see some companies dumped below their value. IPLocks, for example, didn’t sell for nearly its value (based on the numbers alone; I’m not even talking product).

    There are a few ways to navigate through this, and the companies that haven’t aggressively adjusted their strategies in the past few weeks are headed for trouble.

    I’m not kidding, I really hated writing this post. This isn’t an “X is Dead” stir-the-pot kind of thing, but a concern that one of the most important linchpins of information-centric security is in probable trouble. To use Adrian’s words:

    But the evolutionary cycle coincides with a very nasty economic downturn, which will be long enough that venture investment will probably not be available to bail out those who cannot maintain profitability. Those that earn most of their revenue from other products or services may be immune, but the DB Security vendors who are not yet profitable are candidates for acquisition under semi-controlled circumstances, fire-sale or bankruptcy, depending upon how and when they act.

    –Rich