It is possible that 2014 will be the death of data security. Not only because we analysts can’t go long without proclaiming a vibrant market dead, but also thanks to cloud and mobile devices. You see, data security is far from dead, but it is increasingly difficult to talk about outside the context of cloud, mobile, or… er… Snowden. Oh yeah, and the NSA – we cannot forget them.
Organizations have always been worried about protecting their data, kind of like the way everyone worries about flossing. You get motivated for a few days after the most recent root canal, but you somehow forget to buy new floss after you use up the free sample from the dentist. But if you get 80 cavities per year, and all your friends get cavities and walk around complaining of severe pain, it might be time for a change.
Buy us or the NSA will sniff all your Snowden
We covered this under key themes, but the biggest data security push on the marketing side is going after one headline from two different angles:
- Protect your stuff from the NSA.
- Protect your stuff from the guy who leaked all that stuff about the NSA.
Before you get wrapped up in this spin cycle, ask yourself whether your threat model really includes defending yourself from a nation-state with an infinite budget, or if you want to consider the kind of internal lockdown that the NSA and other intelligence agencies skew towards. Some of you seriously need to consider these scenarios, but those folks are definitely rare.
If you care about these things, start with defenses against advanced malware, encrypt everything on the network, and look heavily at File Activity Monitoring, Database Activity Monitoring, and other server-side tools to audit data usage. Endpoint tools can help but will miss huge swaths of attacks.
Really, most of what you will see on this topic at the show is hype. Especially DRM (with the exception of some of the mobile stuff) and “encrypt all your files” because, you know, your employees have access to them already.
Mobile isn’t all bad
We talked about BYOD last year, and it is still clearly a big trend this year. But a funny thing is happening – Apple now provides rather extensive (but definitely not perfect) data security. Unfortunately Android is still a complete disaster. The key is to understand that iOS is more secure, even though you have less direct control. Android you can control more visibly, but its data security is years behind iOS, and Android device fragmentation makes it even worse. (For more on iOS, check out our deep dive on iOS 7 data security.) I suppose some of you Canadians are still on BlackBerry, and those are pretty solid.
For data security on mobile, split your thinking into MDM as the hook, and something else as the answer. MDM allows you to get what you need on the device. What exactly that is depends on your needs, but for now container apps are popular – especially cross-platform ones. Focus on container systems as close to the native device experience as possible, and match your employee workflows. If you make it hard on employees, or force them into apps that look like they were programmed in Atari BASIC (yep, I used it), they will quickly find a way around you. And keep a close eye on iOS 7 – we expect Apple to close its last couple of holes soon, and then you will be able to use nearly any app in the App Store securely.
Cloud cloud cloud cloud cloud… and a Coke!
Yes, we talk about cloud a lot. And yes, data security concerns are one of the biggest obstacles to cloud deployments. On the upside, there are a lot of legitimate options now.
For Infrastructure as a Service look at volume encryption. For Platform as a Service, either encrypt before you send it to the cloud (again, you will see products on the show floor for this) or go with a provider who supports management of your own keys (only a couple of those, for now). For Software as a Service you can encrypt some of what you send these services, but you really need to keep it granular and ask hard questions about how they work. If they ask you to sign an NDA first, our usual warnings apply.
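To make “keep it granular” concrete, here is a toy sketch of field-level encryption before a record leaves for a SaaS or PaaS provider. The field names and the StandInCipher are hypothetical; a real deployment would use a vetted encryption engine and proper key management.

```python
# Toy sketch: encrypt only the sensitive fields of a record before it
# goes to a SaaS/PaaS provider, leaving fields the service must process
# in the clear. StandInCipher is a placeholder, NOT real encryption.
SENSITIVE_FIELDS = {"ssn", "card_number"}

class StandInCipher:
    """Placeholder for a real engine (e.g. AES-GCM via a vetted library)."""
    def encrypt(self, value: str) -> str:
        return "enc:" + value.encode().hex()

def encrypt_record(record: dict, cipher: StandInCipher) -> dict:
    """Encrypt sensitive fields; pass everything else through untouched."""
    return {k: cipher.encrypt(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

out = encrypt_record({"name": "Alice", "ssn": "078-05-1120"},
                     StandInCipher())
print(out)  # name stays usable by the service; ssn is opaque
```

The point of the pattern is that the provider can still search and sort on the cleartext fields while the sensitive ones are opaque, which is exactly the “hard question” you should ask vendors about: which fields stay usable, and where the keys live.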
We have looked hard at some of these tools, and used correctly they can really help wipe out compliance issues. Because we all know compliance is the reason you need to encrypt in the cloud.
Big data, big budget
Expect to see much more discussion of big data security. Big data is a very useful tool when the technology fits, but the base platforms include almost no security. Look for encryption tools that work in distributed nodes, good access management and auditing tools for the application/analysis layer, and data masking. We have seen some tools that look like they can help but they aren’t necessarily cheap, and we are on the early edge of deployment. In other words it looks good on paper but we don’t yet have enough data points to know how effective it is.
Posted at Tuesday 18th February 2014 8:00 am
As anyone reading this site knows, I have been spending a ton of time looking at practical approaches to cloud security. An area of particular interest is infrastructure encryption. The cloud is actually spurring a resurgence in interest in data encryption (well, that and the NSA, but I won’t go there).
This paper is the culmination of over 2 years of research, including hands-on testing. Encrypting object and volume storage is a very effective way of protecting data in both public and private clouds. I use it myself.
From the paper:
Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of traditional infrastructure. On the surface we still manage things that look like traditional virtualized networks, computers, and storage. We ‘boot’ computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is decidedly not business as usual.
For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud – as well as the connectivity and abstraction components used to provide it – dramatically alter how we need to manage security. The cloud is not inherently more or less secure than traditional infrastructure, but it is very different.
Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds pose the same risks, even if they don’t trigger the same gut reaction as outsourcing.
This paper will dig into ways to protect data stored in and used with Infrastructure as a Service. There are a few options, but we will show why the answer almost always comes down to encryption in the end – with a few twists.
The permanent home of the paper is here, and you can download the PDF directly.
We would like to thank SafeNet and Thales e-Security for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you without cost, without companies supporting our research.
Posted at Monday 22nd July 2013 12:58 pm
From Declan McCullagh at CNet:
Google has begun experimenting with encrypting Google Drive files, a privacy-protective move that could curb attempts by the U.S. and other governments to gain access to users’ stored files.
Two sources told CNET that the Mountain View, Calif.-based company is actively testing encryption to armor files on its cloud-based file storage and synchronization service. One source who is familiar with the project said a small percentage of Google Drive files is currently encrypted.
Tough technical problem for usability, but very positive if Google rolls this out to consumers. I am uncomfortable with Google’s privacy policies but their security team is top-notch, and when ad tracking isn’t in the equation they do some excellent work. Chrome will encrypt all your sync data – the only downside is that you need to be logged into Google, so ad tracking is enabled while browsing.
Posted at Wednesday 17th July 2013 2:27 pm
I am doing some work on FDE (if you are using the Securosis Nexus, I just added a small section on it), and during my research one of our readers sent in some great advice.
Here are some suggestions from Guillaume Ross @gepeto42:
Things to Check before Deploying FDE
- Ensure the support staff available during business days can troubleshoot any type of issue and view any type of logs. If the main development team for the product is in a different timezone, make sure that will have no impact on support. I have witnessed situations where logs were in binary formats that support staff could not read, so they had to be sent to developers on a different continent. The back and forth for a simple issue can quickly turn into weeks when you can only send and receive one message per day.
- If you are planning a massive deployment, ensure the vendor has customers with similar types of deployments using similar methods of authentication.
- Look for a vendor who makes documentation available easily. This is no different than for any enterprise software, but due to the nature of encryption and the impact software with storage related drivers can have on your endpoint deployments and support, this is critical.
(Rich: Make sure the documentation is up to date and accurate. We had another reader report on a critical feature removed from a product but still in the documentation – which led to every laptop being encrypted with the same key. Oops.)
Local and remote recovery
- Some solutions offer local recovery, which allows the user to resolve a forgotten password without calling support to obtain a one-time password. Think about what this means for security if it is based on “secret questions/answers”.
- Test the remote recovery process, and ensure support staff have the proper training on recovery.
- If you have to support users in multiple languages and/or keyboard configurations, ensure the solution you are purchasing has a method for detecting which keyboard layout should be used. It can be frustrating for users and support staff to discover that a symbol isn’t in the same place on a Canadian French keyboard as on the default US keyboard. Test this.
(Rich: Some tools have on-screen keyboards now to deal with this. Multiple users have reported this as a major problem.)
Password complexity and expiration
- If you sync with an external source such as Active Directory, consider the fact that most solutions offer offline pre-boot authentication only. This means that expired passwords combined with remote access solutions such as webmail, terminal services, etc. could create support issues.
The user goes home and brings his laptop. From home, on his own computer or tablet, he uses an application published in Citrix, which prompts him to change his expired Active Directory password.
The company laptop still has the old password cached.
- Consider making passwords expire less often if you can afford it, and consider trading complexity for length – longer, simpler passwords also help avoid issues with minor keyboard mapping differences.
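A quick back-of-the-envelope calculation shows why trading complexity for length works, assuming randomly chosen characters:

```python
# Rough entropy comparison: a longer, simpler password vs. a shorter,
# complex one. Illustrates why trading complexity for length can work.
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Approximate entropy of a randomly generated password, in bits."""
    return length * math.log2(charset_size)

# 8 chars drawn from ~94 printable ASCII symbols (full complexity)
complex_short = entropy_bits(8, 94)    # ~52.4 bits
# 16 chars drawn from lowercase letters only (layout-safe on any keyboard)
simple_long = entropy_bits(16, 26)     # ~75.2 bits

print(f"short/complex: {complex_short:.1f} bits")
print(f"long/simple:   {simple_long:.1f} bits")
```

The 16-character lowercase password is both stronger and immune to the “where is @ on this keyboard?” support call.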
Consider the management features offered by each vendor and see how they can be tied to your current endpoint management strategy. Most vendors offer easy ways to configure machines for automatic booting for a certain period or number of boots to help with patch management, but is that enough for you to perform an OS refresh?
Does the vendor provide all the information you need to build images with the proper drivers in them to refresh over an OS that has FDE enabled?
If you never perform OS refreshes and provide users with new computers that have the new OS, this could be a lesser concern. Otherwise, ask your vendor how you will upgrade encrypted workstations to the next big release of the OS.
There are countless ways to deal with FDE authentication. It is very possible that multiple solutions need to be used in order to meet the security requirements of different types of workstations.
TPM: Some vendors support TPMs combined with a second factor (PIN or password) to store keys, and some do not. Determine what your strategy will be for authentication. If you decide you want to use a TPM, be aware that the same computer, sold in different parts of the world, could have a different configuration of cryptographic components. For example, some computers sold in China do not include a TPM.
Apple computers do not include a TPM any more, so a hybrid solution might be required if you require cross-platform support.
USB Storage Key: A USB storage key is another method of storing the key separately from the hard drive. Users will leave these USB storage keys in their laptop bags. Ensure your second factor is secure enough. Assume USB storage will be easier to copy than a TPM or a smart card.
Password sync or just a password: This avoids having users carry a USB stick or a smart card – and, with password sync, juggle two different sets of credentials to get up and running. However, it brings synchronization and keyboard mapping issues, and with sync a simple phishing attack on a user’s domain account could allow a stolen laptop to be booted.
Smart cards: More computers now include smart card readers than ever before. As with USB and TPM, this is a neat way of keeping the keys separate from the hard drive. Ensure you have a second factor such as a PIN in case someone loses the whole bundle together.
Automatic booting: Most FDE solutions allow automatic booting for patch management purposes. While using it is often necessary, turning it on permanently would mean that everything needed to boot the computer is just one press of the power button away.
Depending on your environment, FDE on desktops can have value. However, do not rush to deploy it on workstations used by multiple users (meeting rooms, training, workstations used by multiple shifts) until you have decided on the authentication method.
Test your recovery process often.
If you will be deploying Windows 8 tablets in the near future, the availability of an on-screen keyboard that can work with a touchscreen could be important.
Standby and hibernation: Do not go through all the trouble of deploying FDE and then allow everyone to leave their laptop in standby for extended periods of time. On a Mac, set the standby delay to something shorter than the default. On Windows, disable standby completely. Prefer hibernation, and test that your FDE solution properly handles hibernation and authentication when booting back up.
On the other hand, if you were clearing temp drives and pagefiles/swap for security or compliance reasons before encrypting, ask yourself whether that is still required. If you were wiping the Windows pagefile on shutdown to protect against offline attacks, it is probably no longer needed because the drive is encrypted. This can speed up shutdown considerably, especially on machines with a lot of RAM and a big pagefile.
Posted at Thursday 20th June 2013 4:37 pm
There is no single right way to pick the best encryption option. Which is ‘best’ depends on a ton of factors including the specifics of the cloud deployment, what you already have for key management or encryption, the nature of the data, and so on. That said, here are some guidelines that should work in most cases.
- Always use external key management. Instance-managed encryption is only acceptable for test/development systems you know will never go into production.
- For sensitive data in public cloud computing choose a system with protection for keys in volatile memory (RAM). Don’t use a cloud’s native encryption capabilities if you have any concern that a cloud administrator is a risk.
- In private clouds you may also need a product that protects keys in memory if sensitive data is encrypted in instances sharing physical hosts with untrusted instances that could perform a memory attack.
- Pick a product designed to handle the more dynamic cloud computing environment. Specifically one with workflow for rapidly provisioning keys to cloud instances and API support for the cloud platform you use.
- If you need to encrypt boot volumes and not just attached storage volumes, select a product with a client that includes that capability, but make sure it works for the operating systems you use for your instances. On the other hand, don’t assume you need boot volume support – it all depends on how you architect cloud applications.
- The two key features to look for, after platform/topology support, are granular key management (role-based with good isolation/segregation) and good reporting.
- Know your compliance requirements and use hardware (such as an HSM) if needed for root key storage.
- Key management services may reduce the overhead of building your own key infrastructure if you are comfortable with how they handle key security. As cloud natives they may also offer other performance and management advantages, but this varies widely between products and cloud platforms/services.
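To illustrate the key segregation and isolation these guidelines call for, here is a toy sketch of an external key manager handing out derived per-tenant data keys without ever exposing the root key. This is illustrative only (use a real key manager or HSM, not homegrown code), and all class, tenant, and volume names are made up.

```python
# Sketch of the key-segregation idea behind an external key manager:
# the root key never leaves the manager; per-tenant data keys are
# derived on demand. Illustrative only -- NOT production crypto.
import hashlib
import hmac
import secrets

class ToyKeyManager:
    """Holds a root key; hands out derived data keys, never the root."""
    def __init__(self) -> None:
        self._root = secrets.token_bytes(32)  # would live in an HSM

    def data_key(self, tenant: str, volume_id: str) -> bytes:
        # HMAC-based derivation: same (tenant, volume) -> same key;
        # a different context yields an unrelated key.
        context = f"{tenant}/{volume_id}".encode()
        return hmac.new(self._root, context, hashlib.sha256).digest()

kms = ToyKeyManager()
k1 = kms.data_key("team-a", "vol-001")
k2 = kms.data_key("team-b", "vol-001")
assert k1 != k2                                  # tenants are isolated
assert k1 == kms.data_key("team-a", "vol-001")   # deterministic provisioning
```

The property to look for in a real product is the same one the toy shows: one team’s keys are useless for another team’s volumes, and nothing the agents receive lets them reconstruct the root.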
It is hard to be more specific without knowing more about the cloud deployment, but these questions should get you moving in the right direction. The main things to understand before you start looking for a product are:
- What cloud platform(s) are we on?
- Are we using public or private cloud, or both? Does our encryption need to be standardized between the two?
- What operating systems will our instances run?
- What are our compliance and reporting requirements?
- Do we need boot volume encryption for instances? (Don’t assume this – it isn’t always a requirement).
- Do root keys need to be stored in hardware? (Generally a compliance requirement, because virtual appliances or software servers are actually quite secure).
- What is our cloud and application topology? How often (and where) will we be provisioning keys?
- For server-based object storage, such as you use to back an application, a cloud encryption gateway is likely your best option. Use a system where you manage the keys – not your cloud provider – and don’t store those keys in the cloud.
- For supporting users on services like Dropbox, use a software client/agent with centralized key management. If you want to support mobile devices make sure the product you select has apps for the mobile platforms you support.
As you can see, figuring out object storage encryption is usually much easier than volume storage.
Encryption is our best tool for protecting cloud data. It allows us to separate security from the cloud infrastructure without losing the advantages of cloud computing. By splitting key management from the data storage and encryption engines, it supports a wide array of deployment options and use cases. We can now store data in multi-tenant systems and services without compromising security.
In this series we focused on protecting data in IaaS (Infrastructure as a Service) environments but keep in mind that alternate encryption options, including encrypting data when you collect it in an application, might be a better choice or an additional option for greater granularity.
Encrypting cloud data can be more complex than on traditional infrastructure, but once you understand the basics adapting your approach shouldn’t be too difficult. The key is to remember that you shouldn’t try to merely replicate how you encrypt and manage keys (assuming you even do) in your traditional infrastructure. Understand how you use the cloud and adapt your approach so encryption becomes an enabler – not an obstacle to moving forward with cloud computing.
Posted at Thursday 9th May 2013 12:26 pm
Sorry, but the title is a bit of a bait and switch. Before we get into object storage encryption we need to cover using proxies for volume encryption.
The last encryption option uses an inline software encryption proxy to encrypt and decrypt data. This option doesn’t work for boot volumes, but may allow you to encrypt a wider range of storage types, and offers an alternate technical architecture for connecting to external volumes.
The proxy is a virtual appliance running in the same zone as the instance accessing the data and the storage volume. We are talking about IaaS volumes in this section, so that will be our focus.
The storage volume attaches to the proxy, which performs all cryptographic operations. Keys can be managed in the proxy or extended to external key management using the options we already discussed. The proxy uses memory protection techniques to resist memory parsing attacks, and never stores unencrypted keys in its own persistent storage.
The instance accessing the data then connects to the proxy over a network storage protocol such as iSCSI. Depending on the pieces used this could, for example, allow multiple instances to connect to a single encrypted storage volume.
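Here is a toy sketch of that proxy pattern: the instance reads and writes through the proxy, which holds the key and performs all cryptographic operations, so the backing volume only ever sees ciphertext. The XOR keystream below is strictly for illustration; a real proxy would use a vetted cipher such as AES plus the memory protection described above.

```python
# Minimal sketch of the encryption proxy pattern. The 'cipher' is a toy
# HMAC-derived XOR keystream for illustration only -- NOT real crypto.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Expand key+nonce into n bytes of keystream (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

class EncryptionProxy:
    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)   # never written to disk
        self._volume = {}                     # stands in for raw storage

    def write(self, block_id: str, plaintext: bytes) -> None:
        nonce = secrets.token_bytes(16)
        ks = _keystream(self._key, nonce, len(plaintext))
        ct = bytes(a ^ b for a, b in zip(plaintext, ks))
        self._volume[block_id] = nonce + ct   # storage sees only ciphertext

    def read(self, block_id: str) -> bytes:
        nonce, ct = self._volume[block_id][:16], self._volume[block_id][16:]
        ks = _keystream(self._key, nonce, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

proxy = EncryptionProxy()
proxy.write("blk-0", b"cardholder data")
assert proxy.read("blk-0") == b"cardholder data"
```

Note what the architecture buys you: the storage back end, and anyone who copies it, never holds a key or a byte of plaintext.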
Protecting object storage
Object storage, such as Amazon S3, OpenStack Swift, and Rackspace Cloud Files, is fairly straightforward to encrypt, with three options:
- Server-side encryption
- Client/agent encryption
- Proxy encryption
As with our earlier examples, overall security depends on where you place the encryption engine, key management, and data. Before we describe these options we need to address the two types of object storage. Object storage itself, like our examples above, is accessed and managed only via APIs, and forms the foundation of cloud data storage (although it might use traditional SAN/NAS underneath).
There are also a number of popular cloud storage services including Dropbox, Box.com, and Copy.com – as well as applications to build private internal systems – which include basic object storage but layer on PaaS and SaaS features. Some of these even rely on Amazon, Rackspace, or another “root” service to handle the actual storage. The main difference is that these services tend to add their own APIs and web interfaces, and offer clients for different operating systems – including mobile platforms.
With server-side encryption, all data is encrypted in storage by the cloud platform itself. The encryption engine, keys, and data all run within the cloud platform and are managed by the cloud administrators. This option is extremely common at many public cloud object storage providers, sometimes at no additional cost.
Server-side encryption really only protects against a single threat: lost media. It is more of a compliance tool than an actual security tool because the cloud administrators have the keys. It may offer minimal additional security in private cloud storage but still fails to disrupt most of the dangerous attack paths for access to the data.
So server-side encryption is good for compliance and may be useful in private clouds; but it offers no protection against cloud administrators, and depending on configuration it may provide little protection for your data in case of management plane compromise.
If you don’t trust the storage environment your best option is to encrypt the data before sending it up. We call this Virtual Private Storage because, as with a Virtual Private Network, we turn a shared public resource into a private one by encrypting the information on it while retaining the keys. The first way to do this is with an encryption agent on the host connecting to the cloud service.
This is architecturally equivalent to externally-managed encryption for storage volumes. You install a local agent to encrypt/decrypt the data before it moves to the cloud, but manage the keys in an external appliance, service, or server. Technically you could manage keys locally, as with instance-managed encryption, but that is even less useful here than for volume encryption because object storage is normally accessed by multiple systems, so we always need to manage keys in multiple locations.
The minimum architecture is comprised of encryption agents and a key management server. Agents implement the cloud’s native object storage API, and provide logical volumes or directories with decrypted access to the encrypted volume, so applications do not need to handle cloud storage or encryption APIs. This option is most often used with cloud storage and backup services rather than for direct access to root object storage.
Some agents are evolutions of file/folder encryption, especially for tools like Dropbox or Box.com, which are accessed as a normal directory on client systems. But stock agents need to be tuned to work with the specific platform in question – which is outside our object storage focus.
One of the best options for business-scale use of object storage, especially public object storage, is an inline or cloud-hosted proxy.
There are two main topologies:
- The proxy resides on your network, and all data access runs through it for encryption and decryption. The proxy uses the cloud’s native object storage APIs.
- The proxy runs as a virtual appliance in either a public or private cloud.
You also have two key management options: internal to the proxy or external; and the usual deployment options: hardware appliance, virtual appliance, or software.
Proxies are especially useful for object storage because they are a very easy way to implement Virtual Private Storage. You route all approved connections through the proxy, which encrypts the data and then passes it on to the object storage service.
Object storage encryption proxies are evolving very quickly to meet user needs. For example, some tie into the Amazon Web Services Storage Gateway to keep some data local and some in the cloud for faster performance. Others not only proxy to the cloud storage service, but function as a normal network file share for local users.
Posted at Tuesday 7th May 2013 1:13 pm
Deployment and topology options
The first thing to consider is how you want to deploy external key management. There are four options:
- An HSM or other hardware key management appliance. This provides the highest level of physical security but the appliance will need to be deployed outside the cloud. When using a public cloud this means running the key manager internally, relying on a virtual private cloud, and connecting the two with a VPN. In private clouds you run it somewhere on the network near your cloud, which is much easier.
- A key management virtual appliance. Your vendor provides a pre-configured virtual appliance (instance) for you to run in your private cloud. We do not recommend you run this in a public cloud because – even if the instance is encrypted – there is significantly more exposure to live memory exploitation and loss of keys. If you decide to go this route anyway, use a vendor that takes exceptional memory protection precautions. A virtual appliance doesn’t offer the same physical security as a physical server, but it comes hardened and supports more flexible deployment options – you can run it within your cloud.
- Key management software, which can run either on a dedicated server or within the cloud on an instance. The difference between software and a virtual appliance is that you install the software yourself rather than receiving a configured and hardened image. Otherwise it offers the same risks and benefits as a virtual appliance, assuming you harden the server (instance) as well as the virtual appliance.
- Key management Software as a Service (SaaS). Multiple vendors now offer key management as a service specifically to support public cloud encryption. This also works for other kinds of encryption, including private clouds, but most usage is for public clouds. There are a few different deployment topologies, which we will discuss in a moment.
When deploying a key manager in a cloud there are a few wrinkles to consider. The first is that if you have hardware security requirements, your only option is to deploy an HSM or encryption/key management appliance compatible with the demands of cloud computing – where you may have many more dynamic network connections than in a traditional network (note that raw key operations per second is rarely the limiting factor). This can be on-premise with your private cloud, or remote with a VPN connection to the virtual private cloud. It could also be provided by your cloud provider in their data center, offered as a service, with native cloud API support for management. Another option is to store the root key on your own hardware, but deploy a bastion provisioning and management server as a cloud instance. This server handles communications with encryption clients/agents and orchestrates key exchanges, but the root key database is maintained outside the cloud on secure hardware.
If you don’t have hardware security requirements a number of additional options open up. Hardware is often required for compliance reasons, but isn’t always necessary.
Virtual appliances and software servers are fairly self-explanatory. The key issue (no pun intended) is that you are likely to need additional synchronization and orchestration to handle multiple virtual appliances in different zones and clouds. We will talk about this more in a moment, when we get to features.
As with hardware appliances, some key management service providers deploy a local instance to assist with key provisioning (this is provider dependent and not always needed). In other cases the agents communicate directly with the cloud provider over the Internet. A final option is for the security provider to partner with the cloud provider and install some components within the cloud to improve performance, enhance resilience, and/or reduce Internet traffic – which cloud providers charge for.
To choose an appropriate topology answer the following questions:
- Do you need hardware-level key security?
- How many instances and key operations will you need to support?
- What is the topology of your cloud deployment? Public or private? Zones?
- What degree of separation of duties and keys do you need?
- Are you willing to work with a key management service provider?
For a full overview of key management servers, see our paper Understanding and Selecting a Key Management Solution. Rather than copying and pasting an 18-page paper we will focus on a few cloud-specific requirements we haven’t otherwise covered yet.
- If you use any kind of key management service, pay particular attention to how keys are segregated and isolated between cloud consumers and from service administrators. Different providers have different architectures and technologies to manage this, and you should map your security requirements against how they manage keys. In some cases you might be okay with a provider having the technical ability to get your keys, but this is often completely unacceptable. Ask for technical details of how they manage key isolation and the root of trust.
- Even if you deploy your own encryption system you will need granular isolation and segregation of keys to support cloud automation. For example if a business unit or development team is spinning up and shutting down instances dynamically, you will likely want to provide the capability to manage some of their own keys without exposing the rest of the organization.
- Cloud infrastructure is more dynamic than traditional infrastructure, and relies more on Application Programming Interfaces (APIs) and network connectivity – you are likely to have more network connections from a greater number of instances (virtual machines). Any cloud encryption tool should support APIs and a high number of concurrent network connections for key provisioning.
- For volume encryption look for native clients/agents designed to work with your specific cloud platform. These are often able to provide information above and beyond standard encryption agents to ensure only acceptable instances access keys. For example they might provide instance identifiers, location information, and other indicators which do not exist on a non-cloud encryption agent. When they are available you might use them to only allow an instance to access encrypted storage if it is located in the correct availability zone, to verify that an authorized user launched the instance, etc. They may also work more effectively with the peculiarities of IaaS storage. For boot volume encryption this is mandatory.
- Cloud-specific management and reporting enables you to better manage keys for the cloud and manually provision keys as needed. The encryption tool should report instance-level details of key provisioning, such as instance and zone identifiers. This information is critical for manual provisioning or approval of key releases to make sure someone doesn’t just clone an instance, modify it, and then use it to steal keys.
- Cloud encryption agents should pay particular attention to minimizing key exposure in volatile memory (RAM). This is essential to reduce exposure of keys to cloud administrators and other tenants on the same physical server, depending on the memory protection features of the cloud platform.
These are merely the cloud-specific features to look for, in addition to all the standard key management and encryption features.
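To make the key segregation point concrete, here is a minimal sketch (not any specific product's design – the function and identifiers are illustrative) of how a key manager can derive a distinct key per cloud consumer from a root of trust, so no tenant's key material is ever shared with another:

```python
import hmac
import hashlib

def derive_tenant_key(root_key: bytes, tenant_id: str, key_version: int = 1) -> bytes:
    """Derive a per-tenant key from the root of trust.

    The root key never leaves the key manager (ideally an HSM); each
    tenant only ever sees its own derived keys, giving cryptographic
    segregation between cloud consumers. Illustrative sketch only,
    not a production key derivation scheme.
    """
    context = f"tenant:{tenant_id}:v{key_version}".encode()
    return hmac.new(root_key, context, hashlib.sha256).digest()

root = b"\x00" * 32  # in practice, generated and held inside the key manager/HSM

key_a = derive_tenant_key(root, "dev-team")
key_b = derive_tenant_key(root, "accounting")

assert key_a != key_b      # tenants get distinct keys
assert len(key_a) == 32    # 256-bit key material
```

The same structure supports the business-unit scenario above: a development team can be given authority over its own `tenant_id` namespace without ever touching the root key or other tenants' keys.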
Posted at Wednesday 1st May 2013 2:18 pm
(0) Comments •
As we mentioned in our last post, there are three options for encrypting entire storage volumes: instance-managed encryption, externally-managed encryption, and proxy encryption.
We will start with the first two today, then cover proxy encryption and some deeper details on cloud key managers (including SaaS options) next.
Instance-managed encryption is the least secure and manageable option; it is generally only suitable for development environments, test instances, or other situations where long-term manageability isn't a concern.
Here is how it works:
- The encryption engine runs inside the instance. Examples include TrueCrypt and the volume encryption capabilities built into Linux.
- You connect a second new storage volume.
- You log into your instance, and using the encryption engine you encrypt the new storage volume. Everything is inside the instance except the raw storage, so you use a passphrase, file-based key, or digital certificate for the key.
- You can also use this technique with a tool like TrueCrypt to create and mount a storage volume that is really just a large encrypted file stored on your boot volume.
Any data stored on the encrypted volume is protected from being read directly from the cloud storage (for instance if a physical drive is lost or a cloud administrator tries to access your files using their native API), but is accessible from the logged-in instance while the encrypted volume is mounted. This protects you from many cloud administrators, because only someone with actual access to log into your instance can see the data, which is something even a cloud administrator can’t do without the right credentials.
This option also protects data in snapshots. Better yet, you can snapshot a volume and then connect it to a different instance so long as you have the key or passphrase. Instance-managed encryption also works well for public and private cloud.
The downside is that this approach is completely unmanageable. The only moderately secure option is to use a passphrase when you mount the encrypted volume, which requires manual intervention every time you reboot the instance or connect it (or a snapshot) to a different instance. For security reasons you can’t store the key (or passphrase) in a file in the instance, or use a stored digital certificate, because anything stored on the unencrypted boot volume of the instance is exposed. Especially since, as of this writing, we know of no options to use this to encrypt a bootable instance – it only works for ‘external’ storage volumes.
In other words, this is fine for test and development, or for exchanging whole volumes of data with someone else, but should otherwise be avoided.
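To show why the passphrase is the only workable secret here: the volume key is derived from the passphrase at mount time, so nothing secret ever needs to sit on the unencrypted boot volume – which is also exactly why every mount requires manual intervention. A hedged sketch (function name is illustrative, not from any particular tool):

```python
import hashlib
import os

def volume_key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive the volume encryption key from a passphrase at mount time.

    The derived key lives only in RAM while the volume is mounted;
    nothing secret is written to the unencrypted boot volume. The
    trade-off is that someone must type the passphrase on every
    reboot or re-attach, which is why this doesn't scale.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

salt = os.urandom(16)  # the salt is not secret; it can live in the volume header
key = volume_key_from_passphrase("correct horse battery staple", salt)
assert len(key) == 32
```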
Externally-managed encryption is similar to instance-managed, but the keys are handled outside the instance in a key management server or Hardware Security Module (HSM). This is an excellent option for most cloud deployments.
With this option the encryption engine (typically a client/agent for whatever key management tool you are using) connects to an external key manager or HSM. The key is provided subject to the key manager's security checks, and then used by the engine or client to access the storage volume. The key is never stored on disk in the instance, so the only exposure is in RAM (or snapshots of RAM). Many products further reduce this exposure by overwriting keys' memory when the keys aren't in active use.
As with instance-managed encryption, storage volumes and snapshots are protected from cloud administrators. But using an external key manager offers a wealth of new benefits, such as:
- This option supports reboots, autoscaling, and other cloud operations that instance-managed encryption simply cannot. The key manager can perform additional security checks, which can be quite in-depth, to ensure only approved instances access keys. It can then provide keys automatically or alert a security administrator for quick approval.
- Auditing and reporting are centralized, which is essential for security and compliance.
- Keys are centrally managed and stored, which dramatically improves manageability and resiliency at enterprise scale.
- Externally-managed encryption supports a wide range of deployment options, such as hybrid clouds and even managing keys for multiple clouds.
- This approach works well for both public and private clouds.
- A new feature just becoming available enables you to encrypt a boot volume, similar to laptop full disk encryption (FDE). This isn't currently possible with any other volume encryption option, and it is only available in some products.
There are a few downsides, including:
- The capital investment is greater – you need a key management server or HSM, and a compatible encryption engine.
- You must install and maintain a key management server or HSM that is accessible to your cloud infrastructure.
- You need to ensure your key manager/HSM will scale with your cloud usage. This is less an issue of how many keys it can store than of how well it performs in the cloud, or when connecting to one (network latency can be a factor).
This is often the best option for encrypting volume storage, but our next post will dig into the details a bit more – there are many deployment and feature options.
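The shape of the exchange – instance presents metadata, key manager applies policy before releasing anything – can be sketched in a few lines. This is a toy stand-in, not any vendor's API; real products perform far richer checks (signed attestation, approval workflows, alerting):

```python
import os

class KeyManager:
    """Toy stand-in for an external key manager or HSM (illustrative only)."""

    def __init__(self):
        self._keys = {}      # volume_id -> key material
        self._policy = {}    # volume_id -> required availability zone

    def register_volume(self, volume_id: str, required_zone: str) -> None:
        self._keys[volume_id] = os.urandom(32)
        self._policy[volume_id] = required_zone

    def request_key(self, volume_id: str, instance_zone: str) -> bytes:
        # Security check: only release the key to instances in the approved
        # availability zone. The key goes to RAM in the instance; it never
        # touches the instance's disk.
        if self._policy.get(volume_id) != instance_zone:
            raise PermissionError("instance failed key-release policy check")
        return self._keys[volume_id]

km = KeyManager()
km.register_volume("vol-123", required_zone="us-east-1a")

key = km.request_key("vol-123", instance_zone="us-east-1a")  # allowed

denied = False
try:
    km.request_key("vol-123", instance_zone="eu-west-1b")    # denied
except PermissionError:
    denied = True
```

This is also where the cloud-specific checks from earlier in this series (instance identifiers, who launched the instance) would plug in: they are just additional conditions in `request_key`.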
Posted at Tuesday 30th April 2013 2:36 pm
(1) Comments •
I have been learning a lot lately about password hashing since we realized our own site used an inadequate mechanism (SHA256). I am also a major fan of 1Password for password generation and management. So I held my breath while reading how to use Hashcat on 1Password data:
The reason for the high speed is what I think might be a design flaw. Here is why:
But if you take a close look now you see these both mechanisms do not match in combination. To find out if the masterkey is correct, all we need is to match the padding, so all we need to satisfy the CBC is the previous 16 byte of data of the 1040 byte block. This 16 byte data is provided in the keychain! In other words, there is no need to calculate the IV at all.
I have an insanely long random master password, so this isn’t a risk for me (it sucks to type on my iPhone), but it’s darn creative and interesting. The folks at AgileBits posted a great response in the comments. Rather than denying the issue, they discussed the risk around it and how they already have an alternative because they recognized issues with their implementation:
I could plead that we were in reasonably good company in making that kind of error, but as I’ve since learned, research in academic cryptography had been telling people not to use unauthenticated encryption for more than a decade. This is why today we aren’t just looking at the kinds of attacks that seem practical, but we are also paying attention to security theorems.
In other words, they owned up and didn’t deny it, which is what we should all do.
For more details, read this deeper response on the AgileBits site. It’s worth it for a sense of these password hashing issues, which are something all security pros need to start absorbing.
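To see why a bare SHA256 is inadequate for password storage, compare it with a salted, deliberately slow derivation – roughly the property that schemes like PBKDF2 provide. A minimal sketch using only the Python standard library:

```python
import hashlib
import os

password = b"hunter2"

# Inadequate: unsalted and fast. GPUs can brute-force plain SHA256
# hashes at billions of guesses per second.
weak = hashlib.sha256(password).hexdigest()

# Better: salted and deliberately slow. The iteration count tunes the
# attacker's cost; the per-user salt defeats precomputed (rainbow table)
# attacks because identical passwords produce different hashes.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# Same password, different salt -> different hash.
strong2 = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000)
assert strong != strong2
```

The 1Password attack above is a separate issue (unauthenticated encryption), but the slow-hashing principle is the part every site storing passwords needs to absorb.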
Posted at Thursday 18th April 2013 8:26 am
(0) Comments •
Now that we have covered all the pesky background information, we can start delving into the best ways to actually protect data.
Securing the Storage Infrastructure and Management Plane
Your first step is to lock down the management plane and the infrastructure of your cloud storage. Encryption can compensate for many configuration errors and defend against many management plane attacks, but that doesn’t mean you can afford to skip the basics. Also, depending on which encryption architecture you select, a poorly-secured cloud deployment could obviate all those nice crypto benefits by giving away too much access to portions of your encryption implementation.
We are focused on data protection so we don’t have space to cover all the ins and outs of management plane security, but here are some data-specific pieces to be aware of:
- Limit administrative access: Even if you trust all your developers and administrators completely, all it takes is one vulnerability on one workstation to compromise everything you have in the cloud. Use access controls and tiered accounts to limit administrative access, as you do for most other systems. For example, restrict snapshot privileges to a few designated accounts, and then restrict those accounts from otherwise managing instances. Integrate all this into your privileged user management.
- Compartmentalize: You know where flat networks get you, and the same goes for flat clouds. Except that here we aren’t talking about having everything on one network, but about segregation at the management plane level. Group systems and servers, and limit cloud-level access to those resources. So an admin account for development systems shouldn’t also be able to spin up or terminate instances in the production accounting systems.
- Lock down the storage architecture: Remember, all clouds still run on physical systems. If you are running a private cloud, make sure you keep everything up to date and configured securely.
- Audit: Keep audit logs, if your platform or provider supports them, of management-plane activities including starting instances, creating snapshots, and altering security groups.
- Secure snapshot repositories: Snapshots normally end up in object storage, so follow all the object storage rules we will offer later to keep them safe. In private clouds, snapshot storage should be separate from the object storage used to support users and applications.
- Alerts: For highly sensitive applications, and depending on your cloud platform, you may be able to generate alerts when snapshots are created, new instances are launched from particular instances, etc. This isn’t typically available out of the box but shouldn’t be hard to script, and may be provided by an intermediary cloud broker service or platform if you use one.
There is a whole lot more to locking down a management plane, but focusing on limiting admin access, segregating your environment at the cloud level with groups and good account privileges, and locking down the back-end storage architecture, together make a great start.
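As a sketch of the alerting idea above – the event names here are invented for illustration and are not any provider's actual log format – flagging sensitive management-plane actions really can be a few lines of scripting:

```python
# Management-plane actions worth an alert (hypothetical event names).
SENSITIVE_ACTIONS = {
    "CreateSnapshot",
    "ModifySnapshotAttribute",
    "AuthorizeSecurityGroupIngress",
}

def find_alerts(audit_log):
    """Return management-plane events that warrant an alert."""
    return [event for event in audit_log if event["action"] in SENSITIVE_ACTIONS]

log = [
    {"action": "RunInstances", "user": "dev-ci"},
    {"action": "CreateSnapshot", "user": "dev-ci", "volume": "vol-prod-db"},
    {"action": "DescribeInstances", "user": "ops"},
]

alerts = find_alerts(log)
assert len(alerts) == 1 and alerts[0]["volume"] == "vol-prod-db"
```

The hard part isn't the script – it's making sure your platform or provider actually emits the audit events in the first place.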
As a reminder, volume encryption protects from the following risks:
- Protects volumes from snapshot cloning/exposure
- Protects volumes from being explored by the cloud provider, including cloud administrators
- Protects volumes from being exposed by physical drive loss (more for compliance than a real-world security issue)
IaaS volumes can be encrypted three ways:
- Instance-managed encryption: The encryption engine runs within the instance and the key is stored in the volume but protected by a passphrase or keypair.
- Externally managed encryption: The encryption engine runs in the instance but keys are managed externally and issued to instances on request.
- Proxy encryption: In this model you connect the volume to a special instance or appliance/software, and then connect the application instance to the encryption instance. The proxy handles all crypto operations and may keep keys either onboard or external.
We will dig into these scenarios next week.
Posted at Thursday 4th April 2013 4:17 pm
(1) Comments •
Now that we have covered the basics of how IaaS platforms store data, we need to spend a moment reviewing the parts of an encryption system that are relevant for protecting cloud data. Encryption isn’t our only security tool, as we mentioned in our last post, but it is one of the only practical data-specific tools at our disposal in cloud computing.
The three components of a data encryption system
Cryptographic algorithms and implementation specifics are important at the micro level, but when designing encryption for cloud computing or anything else, the overall structure of the cryptographic system is just as important. There are many resources on which algorithm to select and how to use it, but far less on how to piece together an overall system.
When encrypting data in the cloud, knowing how and where to place these pieces is incredibly important, and one of the most common causes of failure. In a multi-tenant environment – even in a private cloud – with almost zero barriers to portability, we need to pay particular attention to where we manage keys.
Three major components define the overall structure of an encryption system:
- The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system are influenced by the nature of the payload, as well as where it is located or collected.
- The encryption engine: The component that handles the actual encryption (and decryption) operations.
- The key manager: The component that manages keys and passes them to the encryption engine.
In a basic encryption system all three components are likely to be located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn't normally relevant. (Neither is the key, usually, because it is protected with another key that is not stored on the system – but if the system is lost while running, with the key in memory, that becomes a problem.)
In a traditional application we would more likely break out the components – with the encryption engine in an application server, the data in a database, and key management in an external service or appliance.
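That separation can be sketched in a few lines: the key manager never sees data, the data store never sees keys, and only the engine touches both, transiently. This is a structural illustration only – the XOR "cipher" is a stand-in, not something to ever use for real:

```python
import os

class KeyManagerSvc:
    """Holds keys; never touches data."""
    def __init__(self):
        self._keys = {}
    def create_key(self, key_id):
        self._keys[key_id] = os.urandom(32)
    def get_key(self, key_id):
        return self._keys[key_id]

class DataStore:
    """Holds (encrypted) data; never holds keys."""
    def __init__(self):
        self.blobs = {}

class EncryptionEngine:
    """Fetches the key, transforms the data, and discards the key."""
    def __init__(self, km, store):
        self.km, self.store = km, store
    def _xor(self, key, data):
        # Stand-in cipher for illustration only; a real engine would use
        # an authenticated mode such as AES-GCM.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    def encrypt(self, key_id, name, plaintext):
        self.store.blobs[name] = self._xor(self.km.get_key(key_id), plaintext)
    def decrypt(self, key_id, name):
        return self._xor(self.km.get_key(key_id), self.store.blobs[name])

km, store = KeyManagerSvc(), DataStore()
engine = EncryptionEngine(km, store)
km.create_key("k1")
engine.encrypt("k1", "record", b"sensitive customer data")

assert store.blobs["record"] != b"sensitive customer data"  # store sees only ciphertext
assert engine.decrypt("k1", "record") == b"sensitive customer data"
```

Compromise the data store alone and you get ciphertext; compromise the key manager alone and you get keys with nothing to decrypt. That is the whole point of breaking the components apart.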
In cloud computing some interesting limitations force certain architectural models:
- As of this writing, we cannot typically encrypt boot instances the way we can encrypt the full disk of a server or workstation. So we have fewer options for where to put and how to secure our data.
- One risk to protect against is a rogue cloud administrator, or anyone with administrative access to the infrastructure, seeing your data. So we have fewer options for where to securely manage keys.
- Data is much more portable than in traditional infrastructure, thanks to native storage redundancy and data management tools such as snapshots.
- Encryption engines may run on shared resources with other tenants. So your engine may need special techniques to protect keys in live memory, or you may need to alter your architecture to reduce risk.
- Automation dramatically impacts your architecture, because you might have 20 instances of a server spin up at the same time, then go away. Provisioning of storage and keys must be as dynamic and elastic as the underlying cloud application itself.
- Automation also means you may manage many more keys than in a more static, traditional application environment.
As you will see in the next sections when we get into details, we will leverage the separation of these components in a few different ways to compensate for many of the different security risks in the cloud. Honestly, the end result is likely to be more secure than what you use in your traditional infrastructure and application architectures.
Posted at Monday 1st April 2013 1:30 pm
(0) Comments •
Infrastructure as a Service storage can be insanely complex when you include operational and performance requirements. First you need to create a resource pool, which might itself be a pool of virtualized and abstracted storage, and then you need to tie it all together with orchestration to support the dynamic requirements of the cloud – such as moving running virtual machines between servers, instantly snapshotting multi-terabyte virtual drives, and other insanity.
For security we don’t need to know all the ins and outs of cloud storage, but we do need to understand the high-level architecture and how it affects our security controls. And keep in mind that the implementations of our generic architecture vary widely between different public and private cloud platforms.
Public clouds are roughly equivalent to provider-class private clouds, except that they are designed to support multiple external tenants. We will focus on private cloud storage, with the understanding that public clouds are about the same except that customers have less control.
IaaS Storage Overview
Here’s a diagram to work through:
- At the lowest level is physical storage. This can be nearly anything that satisfies the cloud’s performance and storage requirements. It might be commodity hard drives in commodity rack servers. It could be high-performance SSD drives in high-end specialized datacenter servers. But really it could be nearly any storage appliance/system you can think of.
- Some physical storage is generally pooled by a virtual storage controller, like a SAN. This is extremely common in production clouds but isn’t limited to traditional SAN. Basically, as long as you can connect it to the cloud storage manager, you can use it. You could even dedicate certain LUNs from a larger shared SAN to cloud, while using other LUNs for non-cloud applications. If you aren’t a storage person just remember there might be some sort of controller/server above the hard drives, outside your cloud servers, that needs to be secured.
That’s the base storage. On top of that we then build out:
- Object storage controllers (also called managers) connect to assigned physical or virtual storage and manage orchestration and connectivity. Above this level they communicate using APIs. Some deployments include object storage connectivity software running on distributed commodity servers to tie the servers’ hard drives into the storage pool.
- Object storage controllers create virtual containers (also called buckets) which are assigned to cloud users. A container is a pool of storage in which you can place objects (files). Every container stores each bit in multiple locations. This is called data dispersion, and we will talk more about it in a moment.
Object storage is something of a cross between a database and a file share. You move files into and out of it; but instead of being managed by a file system you manage it with APIs, at an abstracted layer above whatever file systems actually store the data. Object storage is accessed via APIs (almost always RESTful HTTP APIs) rather than classic network file protocols, which offers tremendous flexibility for integration into different applications and services. Object storage includes logic below the user-accessible layer for features such as quotas, access control, and redundancy management.
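The API-driven access model is easiest to see in miniature. Here is a toy in-memory object store whose methods mirror the REST verbs (PUT/GET) rather than file-system calls – real services such as S3 or Swift add quotas, access control, and replication below this layer:

```python
class ObjectStore:
    """Toy object store: containers (buckets) of named objects, API access only."""

    def __init__(self):
        self._containers = {}

    def create_container(self, name):
        self._containers[name] = {}

    def put_object(self, container, key, data: bytes):
        # In a real service this is an HTTP PUT; replication and
        # access-control logic run below this call.
        self._containers[container][key] = data

    def get_object(self, container, key) -> bytes:
        return self._containers[container][key]

store = ObjectStore()
store.create_container("backups")
store.put_object("backups", "db-dump-1", b"...contents...")
assert store.get_object("backups", "db-dump-1") == b"...contents..."
```

Because every operation is an API call, anything that can be done to an object – including making it public – is one authenticated request away, which matters a great deal for the snapshot risks discussed below.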
- Volume storage controllers (also called managers) connect to assigned physical (or virtual) storage and manage orchestration and connectivity. Above this level they communicate using APIs. The volume controller creates volumes on request and assigns them to specific cloud instances. To use traditional virtualization language, it creates a virtual hard drive and connects it to a virtual machine. Data dispersion is often used to provide redundancy and robustness.
- A volume is essentially a persistent virtual hard drive. It can be of any size supported by the cloud platform and underlying resources, and a volume assigned to a virtual machine exists until it is destroyed (note that tearing down an instance often automatically also returns the associated volume storage back to the free storage pool).
- Physical servers run hypervisors and cloud connectivity software to tie them into the compute resource pool. This is where instances (virtual machines) run. These servers typically have local hard drives which can be assigned to the volume controller to expand the storage pool, or used locally for non-persistent storage. We call this ‘ephemeral’ storage, and it’s great for swap files and other higher-performance operations that don’t require the resiliency of a full storage volume. If your cloud uses this model, the cloud management software places swap on these local drives. When you move or shut down your instance this data is always lost, although it might be recoverable until overwritten.
We like to discuss volumes as if they were virtual hard drives, but they are a bit more complex: a volume may be distributed, with its data dispersed across multiple physical drives. There are also implications, which we will cover later, for how volumes behave in the context of your cloud and how they interact with object storage and features like snapshots and live migrations.
How object and volume storage interact
Most clouds include both object and volume storage, even if object storage isn’t available directly to users. Here are the key examples:
- A snapshot is a near-instant backup of a volume that is moved into object storage. The underlying technology varies widely and is too complex for my feeble analyst brain, but a snapshot effectively copies a complete set of the storage blocks in your volume into a file stored in an object container which has been assigned to snapshots. Since every block in your volume is likely stored in multiple physical locations, typically 3 or more times, taking a snapshot tells the volume controller to copy a complete set of blocks over to object storage. The operation can take a while, but it looks instantaneous because the snapshot accurately reflects the state of the volume at that point in time, while the volume is still fully usable – running on another set of blocks while the snapshot is moved over (this is a major oversimplification of something that makes my head hurt).
- Images are pre-defined storage volumes in object storage, which contain operating systems or other virtual hard drives used to launch instances. An image might be a base version of Windows, or a completely configured server in an n-tier application stack. When you launch an instance the volume controller creates a volume of the required size, then pulls the requested image from the object controller and loads it up into the virtual machine.
- Because snapshots and images are no different than any other objects or files in object storage, they are very portable and (in public clouds) can be made available to the Internet with a single API call or mouse click.
- You can quickly create images from running instances. These images contain everything stored “on disk” unless you deliberately exclude particular locations such as swap files.
Understanding these components is essential for securing cloud resources. A snapshot is a near-instant backup of a (virtual) hard drive that is incredibly portable, and easily made public. A few years ago I co-wrote a script that, if run on a cloud administrator's computer, would snapshot every single volume that administrator could access and make the snapshots public. With a nice metadata tag to make them easy to find. A few API calls from an unprotected developer or administrator system could expose all the data in your cloud.
Also, if you allow instances to store data in local ephemeral storage, sensitive data such as encryption keys may be left behind when you move or terminate an instance.
- Data dispersion is equivalent to RAID protection, but implemented differently. Any storage block is replicated in multiple physical locations across your cloud. In private clouds you configure this yourself, but in public clouds it is likely an opaque feature. Dispersion is great for resiliency and valuable for security – any given file might be broken up and stored on multiple hard drives. So losing one drive might not matter much, but you can rarely figure out exactly what data is stored on which drives.
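A toy model of dispersion makes the security property obvious: if every block is placed on several drives, losing (or seizing) any one drive neither destroys data nor yields a coherent copy of it. Placement here is simple round-robin; real systems use placement rings and consistent hashing, but the property is the same:

```python
def disperse(blocks, drives, copies=3):
    """Place each storage block on `copies` distinct drives (round-robin sketch)."""
    placement = {d: [] for d in range(drives)}
    for i, block in enumerate(blocks):
        for c in range(copies):
            placement[(i + c) % drives].append(block)
    return placement

layout = disperse(["blk0", "blk1", "blk2", "blk3"], drives=5, copies=3)

# Losing one drive loses no data: every block still exists elsewhere.
surviving = set()
for drive, blocks in layout.items():
    if drive != 0:  # simulate losing drive 0
        surviving.update(blocks)
assert surviving == {"blk0", "blk1", "blk2", "blk3"}
```

The flip side, noted above, is that no single drive holds your whole file either – which frustrates both attackers with one stolen disk and your own attempts to know exactly where data physically lives.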
Cloud storage networks
All this runs on multiple networks (at least, if you built your cloud for performance and reliability). Some of them might be:
- If you use virtual storage (e.g., SAN) this likely runs over its own storage network.
- A management network ties together the cloud controller components, particularly object and volume managers and agents.
- A data/storage network for connecting volumes to instances, to improve performance. This may also connect object and volume storage.
- The external public network for managing cloud controllers via API.
- A service network for communicating between outside clients and instances, as well as between instances – typically the Internet.
You will likely have at least one network to the outside world, one for storage (between volumes and instances), and another for management.
Some or all of these might be the same physical network, segregated with VLANs – but consider how much you trust VLANs when unknown parties run their own operating systems adjacent to your equipment. Lastly, these networks might not behave the way you expect: newer physical platforms for cloud hosting may run storage and communications traffic over the same physical connections.
This isn’t an attempt to scare you – the ins and outs of designing and securing these networks are fodder for another day – but you need to be aware of what is under the surface.
The architecture and resiliency of cloud storage models create new and interesting risks:
- Cloud managers, either in your environment or your cloud provider's, can access any data stored in the cloud over the network. This is very different from traditional infrastructure, where storage access typically requires physical connectivity.
- Snapshots become ubiquitous because they are effectively instantaneous, highly portable, and accessible over the network. They pose a significantly increased risk of exposure compared to traditional infrastructure, where snapshots are less common, less portable, and less exposed.
- Images of instances may contain and expose sensitive data.
- All this is managed with networks and APIs, which remove some of our traditional security controls and conceptions. Someone accessing a cloud administrator or developer's system could, depending on how things are set up, access literally an entire datacenter.
- Cloud data can be incredibly resilient, with any given bit stored in multiple places across the cloud.
- You may have 3 or more networks to secure (for storage) and segregate. Don’t trust VLANs.
- You have far less visibility into where things are actually stored, although some cloud platforms are beginning to offer more transparency – this is an evolving area.
- You still have physical and virtual storage to keep secure, underneath everything else.
Due to all this complexity and portability, encryption is the best tool available for most cloud data security. Encryption, implemented properly, protects data as it moves through your environment. It doesn’t matter if there are 3 versions of a particular block exposed on multiple hard drives, because without the key the data is meaningless. It doesn’t matter if someone makes a snapshot of an encrypted volume public. Only exposure of the keys and data would be problematic.
Of course encryption cannot wipe all security issues away. As we will discuss, you cannot use it for certain applications such as boot volumes, and data on unencrypted volumes is still exposed. But in combination with our other recommendations, encryption enables you to store and process even sensitive data in the cloud.
Posted at Thursday 28th March 2013 5:09 pm
(0) Comments •
Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of our traditional infrastructure. On the surface you still manage things that look like simple virtualized networks, computers, and storage. You 'boot' computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is anything but business as usual.
For both public and private clouds, the architecture of the physical infrastructure that comprises the cloud, as well as the connectivity and abstraction components used to provide it, dramatically alter how we need to manage our security. It isn’t that the cloud is more or less secure than traditional infrastructure, but it is very different.
Protecting data in the cloud is a top priority for most organizations as they adopt cloud computing. In some cases this is due to moving onto a public cloud, with the standard concerns any time you allow someone else to access or hold your data. But private clouds come with the same changes in risk, even if they don't trigger the same gut reaction as outsourcing.
This series will dig into protecting data stored in and used with Infrastructure as a Service. There are a few options, but we will show why in the end the answer almost always comes down to encryption … with some twists.
What Is IaaS Storage?
Infrastructure as a Service includes two primary storage models:
- Object storage is a file repository. This is higher-latency storage with lower performance requirements, which stores individual files (‘objects’). Examples include Amazon S3 and RackSpace Cloud Files for public clouds, and OpenStack Swift for private clouds. Object storage is accessed using an API, rather than a network file share, which opens up a wealth of new uses – but you can layer a file browsing interface on top of the API.
- Volume storage is effectively a virtual hard drive. These higher-performing volumes attach to virtual machines and are used just like a physical hard drive or array. Examples include VMWare VMFS, Amazon EBS, RackSpace RAID, and OpenStack Cinder.
To (over)simplify, object storage replaces file servers and volume storage is a substitute for hard drives. In both cases you take a storage pool – which could be anything from a SAN to hard drives on individual servers – and add abstraction and management layers. There are other kinds of cloud storage such as cloud databases, but they fall under either Platform as a Service (PaaS) or Software as a Service (SaaS). For this IaaS series, we will stick to object and volume storage.
Due to the design of Infrastructure as a Service, data storage is very different than keeping it in ‘regular’ file repositories and databases. There are substantial advantages such as resilience, elasticity, and flexibility; as well as new risks in areas such as management, transparency, segregation, and isolation.
How IaaS Is Different
We will cover details in the next post, but at a high level:
In private cloud infrastructure our data is co-mingled extensively, and the physical locations of data are rarely as transparent as before. You can’t point to a single server and say, “there are my credit card numbers” any more. Often you can set things up that way, at the cost of all the normal benefits of cloud computing.
Any given piece of data may be located in multiple physical systems or even storage types. Part of the file might be on a server, some of it in a SAN, and the rest in a NAS, but it all looks like it’s in a single place. Your sensitive customer data might be on the same hard drive that, through layers of abstraction, also supports an unsecured development system. Plan incorrectly and your entire infrastructure can land in your PCI assessment scope – all mixed together at a physical level.
To top it off, your infrastructure is now managed by a web-based API that, if not properly secured, could allow someone on the other side of the planet unfettered access to your (virtual) data center.
We are huge proponents of cloud computing, but we are also security guys. It is our job to help you identify and mitigate risks, and we’ll let infrastructure experts tell you why you should use IaaS in the first place.
Public cloud infrastructure brings the same risks with additional complications because you no longer control ‘your’ infrastructure, your data might be mingled with anyone else’s on the Internet, and you lose most or all visibility into who (at your provider) can access your data.
Whether private or public, you need to adjust security controls to manage the full abstraction of resources. You cannot rely on knowing where network cables plug into boxes anymore.
Here are a few examples of how life changes:
- In private clouds, any virtual system that connects to any physical system holding credit card account numbers is within the scope of a PCI assessment. So if you run an application that collects credit cards in the same cloud as one that holds unsecured internal business systems, both are within assessment scope – unless you take precautions we will discuss later.
- In public clouds an administrator at your cloud provider could access your virtual hard drives. This would violate all sorts of policies and contracts, but it is still technically quite possible.
- In most IaaS clouds a single command or API call can make an instant copy (snapshot) of an entire virtual hard drive, and then move it around your environment or make it public on the Internet.
- If your data is on the same hard drive as a criminal organization using the same cloud provider, and ‘their’ hardware is seized as part of an investigation, your data may be exposed. Yes, this has happened.
It comes down to less visibility below the abstraction layer, and data from multiple tenants mixed on the same physical infrastructure. This is all manageable – it’s just different.
Most of what we want to do, from a security standpoint, is use encryption and other techniques to either restore this visibility, or eliminate the need for it entirely.
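As a sketch of that encryption approach, the pattern below encrypts data client-side before it ever reaches the storage pool, so whatever sits below the abstraction layer – shared disks, snapshots, a seized drive – only ever holds ciphertext. The XOR keystream here is a deliberately toy stand-in for a real cipher like AES; it is for illustration only, and the variable names are invented.

```python
# Toy sketch of client-side encryption: data written to a cloud volume is
# ciphertext, and only the key (held in a separate key manager) restores it.
# The XOR "cipher" below is a stand-in for real AES -- illustration only.
import hashlib, secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream from the key in counter mode
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

toy_decrypt = toy_encrypt  # XOR against the same keystream is symmetric

data_key = secrets.token_bytes(32)          # generated and held by the key manager
record = b"4111-1111-1111-1111"             # sensitive data bound for shared storage
ciphertext = toy_encrypt(data_key, record)  # what actually lands on the disk

assert ciphertext != record
assert toy_decrypt(data_key, ciphertext) == record
```

The point is the separation: the storage layer never needs to be trusted, because the key lives somewhere you still control.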
Our next post will dig into a generalized model for how data is stored in IaaS, followed by detailed security recommendations.
Posted at Wednesday 27th March 2013 4:52 pm
Yep – we are doing our very best to overload you with research this year. Here’s my latest. From the paper’s home page:
Between new initiatives such as cloud computing, and new mandates driven by the continuous onslaught of compliance, managing encryption keys is evolving from something only big banks worry about into something which pops up at organizations of all sizes and shapes. Whether it is to protect customer data in a new web application, or to ensure that a lost backup tape doesn’t force you to file a breach report, more and more organizations are encrypting more data in more places than ever before. And behind all of this is the ever-present shadow of managing all those keys.
Data encryption can be a tricky problem, especially at scale. Actually all cryptographic operations can be tricky; but we will limit ourselves to encrypting data rather than digital signing, certificate management, or other uses of cryptography. The more diverse your keys, the better your security and granularity, but the greater the complexity. While rudimentary key management is built into a variety of products – including full disk encryption, backup tools, and databases – at some point many security professionals find they need a little more power than what’s embedded in the application stack.
This paper digs into the features, functions, and a selection process for key managers.
Understanding and Selecting a Key Manager (PDF)
Special thanks to Thales for licensing the content.
Posted at Tuesday 5th February 2013 1:38 pm
It’s one thing to collect, secure, and track a wide range of keys; but doing so in a useful, manageable manner demonstrates the differences between key management products.
Managing disparate keys from distributed applications and systems, for multiple business units, technical silos, and IT management teams, is more than a little complicated. It involves careful segregation and management of keys; multiple administrative roles; abilities to organize and group keys, users, systems, & administrators; appropriate reporting; and an effective user interface to tie it all together.
Role management and separation of duties
If you are managing more than a single set of keys for a single application or system you need a robust role-based access control system (RBAC) – not only for client access, but for the administrators managing the system. It needs to support ironclad separation of duties, and multiple levels of access and administration.
An enterprise key manager should support multiple roles, especially multiple administrative roles. Regular users never directly access the key manager, but system and application admins, auditors, and security personnel may all need some level of access at different points of the key management lifecycle. For instance:
- A super-admin role for administration of the key manager itself, with no access to the actual keys.
- Limited administrator roles that allow access to subsets of administrative functions such as backup and restore, creating new key groups, and so on.
- An audit and reporting role for viewing reports and audit logs. This may be further restricted to allow access only to certain audit logs (e.g., a specific application).
- System/application manager roles for individual systems and application administrators who need to generate and manage keys for their respective responsibilities.
- Sub-application manager roles which only have access to a subset of the rights of a system or application manager (e.g., create new keys only but not view keys).
- System/application roles for the actual technical components that need access to keys.
Any of these roles may need access to a subset of functionality, and be restricted to groups or individual key sets. For example, a database security administrator for a particular system gains full access to create and manage keys only for the databases associated with those systems, but not to manage audit logs, and no ability to create or access keys for any other applications or systems.
Ideally you can build an entitlement matrix, where you take a particular role and assign it to a specific user for a specific group of keys – such as granting the “application manager” role to user “bob” for the “CRM keys” group.
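That entitlement matrix can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual model; the role labels, users, and key groups are all made up.

```python
# Sketch of an entitlement matrix: a role is granted to a specific user
# for a specific key group. All names and role labels are illustrative.
ROLE_RIGHTS = {
    "application manager": {"create_key", "rotate_key", "read_key"},
    "auditor":             {"view_audit_log"},
}

# (user, key group) -> granted role
ENTITLEMENTS = {
    ("bob", "CRM keys"):   "application manager",
    ("alice", "CRM keys"): "auditor",
}

def is_allowed(user: str, group: str, action: str) -> bool:
    role = ENTITLEMENTS.get((user, group))
    return role is not None and action in ROLE_RIGHTS[role]

assert is_allowed("bob", "CRM keys", "create_key")
assert not is_allowed("bob", "CRM keys", "view_audit_log")  # no audit rights
assert not is_allowed("bob", "ERP keys", "create_key")      # wrong key group
```

Note that the check fails closed: no entitlement row means no access, which is the behavior you want from a key manager.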
Split administrative rights
There almost always comes a time where administrators need deeper access to perform highly-sensitive functions or even directly access keys. Restoring from backup, replication, rotating keys, revoking keys, or accessing keys directly are some functions with major security implications which you may not want to trust to a single administrator.
Most key managers allow you to require approval from multiple administrators for these functions, to limit the ability of any one administrator to compromise security. This is especially important when working with the master keys for the key manager, which are needed for tasks including replication and restoration from backup.
Such functions which involve the master keys are often handled through a split key. Key splitting provides each administrator with a portion of a key, all or some of which are required. This is often called “m of n” since you need m sub-keys out of a total of n in existence to perform an operation (e.g., 3 of 5 admin keys).
These keys or certificates can be stored on a smart card or similar security device for better security.
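To make “m of n” concrete, here is a bare-bones Shamir secret sharing sketch: any m shares reconstruct the master key, while fewer reveal nothing about it. This is illustration only – real key managers use vetted implementations and hardware – and the parameters are chosen for readability, not production use.

```python
# Minimal Shamir "m of n" key splitting sketch. Any m of the n shares
# reconstruct the secret; fewer than m reveal nothing. Illustration only.
import secrets

P = 2**127 - 1  # a Mersenne prime defining the finite field

def split(secret: int, m: int, n: int):
    # Random degree-(m-1) polynomial with the secret as the constant term
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(m - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

master = secrets.randbelow(P)
shares = split(master, m=3, n=5)      # e.g. 3 of 5 admin keys
assert combine(shares[:3]) == master  # any 3 shares recover the key
assert combine(shares[2:]) == master  # a different 3 work just as well
```

Each administrator holds one `(x, y)` share – often on a smart card – and no single share, or pair of shares, is enough to rebuild the master key.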
Key grouping and segregation
Role management covers users and their access to the system, while key groups and segregation manage the objects (keys) themselves. No one assigns roles to individual keys – you assign keys to groups, and then parcel out rights from there (as we described in some examples above).
Assigning keys and collections of keys to groups allows you to group keys not only by system or application (such as a single database server), but for entire collections or even business units (such as all the databases in accounting). These groups are then segregated from each other, and rights are assigned per group. Ideally groups are hierarchical so you can group all application keys, then subset application keys by application group, and then by individual application.
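The hierarchical grouping described above can be sketched as a simple parent-pointer tree, where a right granted on a parent group flows down to every group beneath it. The group names here are invented for illustration.

```python
# Sketch of hierarchical key groups: rights granted on a parent group apply
# to every group beneath it. Group and team names are illustrative.
PARENT = {
    "accounting/erp-db": "accounting",
    "accounting/crm-db": "accounting",
    "accounting":        "all-databases",
    "all-databases":     None,
}

# Role granted at the business-unit level, not per database
GRANTS = {("dba-team", "accounting")}

def has_access(principal: str, group: str) -> bool:
    # Walk up the tree: a grant anywhere on the path allows access
    while group is not None:
        if (principal, group) in GRANTS:
            return True
        group = PARENT.get(group)
    return False

assert has_access("dba-team", "accounting/erp-db")  # inherited from parent
assert has_access("dba-team", "accounting/crm-db")
assert not has_access("dba-team", "all-databases")  # grants do not flow upward
```

One grant at the “accounting” level covers every database beneath it, which is exactly the administrative saving hierarchical groups are meant to provide.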
Auditing and reporting
In our compliance-driven security society, it isn’t enough to merely audit activity. You need fine-grained auditing that is then accessible with customized reports for different compliance and security needs.
Types of activity to audit include:
- All access to keys
- All administrative functions on the key manager
- All key operations – including generating or rotating keys
A key manager is about as security-sensitive as it gets, so everything that happens to it should be auditable. That doesn’t mean you will want to track every time a key is sent to an authorized application, but you should have that capability for when you need it.
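A common way to get that coverage is to wrap every key operation so the audit record is written before the operation runs. The decorator below is a conceptual sketch with invented function names, not any product’s API.

```python
# Sketch of fine-grained audit logging around key operations: every access
# is recorded with who, what, and when. Function names are illustrative.
import datetime, functools

AUDIT_LOG = []

def audited(operation):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, key_id, *args, **kwargs):
            AUDIT_LOG.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "operation": operation,
                "key": key_id,
            })
            return fn(user, key_id, *args, **kwargs)
        return inner
    return wrap

@audited("read_key")
def read_key(user, key_id):
    return f"<key material for {key_id}>"  # placeholder for the real lookup

read_key("crm-app", "crm-master-1")  # routine application access
read_key("bob", "crm-master-1")      # an administrator touching the same key

assert len(AUDIT_LOG) == 2
assert AUDIT_LOG[1]["user"] == "bob"
assert AUDIT_LOG[1]["operation"] == "read_key"
```

Because the log entry is written inside the wrapper, there is no code path that touches a key without leaving a record.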
Raw audit logs aren’t overly useful on a day to day basis, but a good reporting infrastructure helps keep the auditors off your back while highlighting potential security issues.
Key managers may include a variety of pre-set reports and support creation of custom reports. For example, you could generate a report of all administrator access (as opposed to application access) to a particular key group, or one covering all administrative activity in the system.
Reports might be run on a preset schedule, emailing summaries of activity out on a regular basis to the appropriate stakeholders.
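A custom report like the administrator-access example above is essentially a filter and a rollup over raw audit entries. The sketch below uses invented entry fields to show the shape of that query.

```python
# Sketch of a custom report over raw audit entries: administrator access
# to a particular key group. Data and field names are invented.
from collections import Counter

audit_entries = [
    {"user": "bob",     "role": "admin",       "group": "CRM keys", "op": "rotate_key"},
    {"user": "crm-app", "role": "application", "group": "CRM keys", "op": "read_key"},
    {"user": "bob",     "role": "admin",       "group": "CRM keys", "op": "read_key"},
]

def admin_access_report(entries, group):
    # Keep only administrator activity against the requested key group,
    # then roll it up by (user, operation) for the summary
    hits = [e for e in entries if e["role"] == "admin" and e["group"] == group]
    return Counter((e["user"], e["op"]) for e in hits)

report = admin_access_report(audit_entries, "CRM keys")
assert report[("bob", "rotate_key")] == 1
assert sum(report.values()) == 2  # routine application access is excluded
```

The same rollup, run on a schedule and mailed out, is the kind of recurring summary report described above.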
User interface
In the early days of key management everything was handled using command line interfaces. Most current systems implement graphical user interfaces (often browser based) to improve usability. There are massive differences in look and feel across products, and a GUI that fits the workflow of your staff can save a great deal of time and cost.
Common user interface components include:
- A systems administration section for managing the key manager configuration, backup/restore, and other system settings.
- A user management section for defining users, roles, and entitlements.
- A key management section for creating key collections and groups, and defining access to objects.
- A system/application manager section to allow system and application managers access to the key manager functions under their control.
- A section for managing which systems and applications access keys, and how those are configured and authenticated.
- Filtering of user interface elements so people only see what they have access to.
Depending on how you use the key manager you might never allow access to anyone other than security administrators – with everyone else submitting requests or managing keys through the API, client, or command line interface. But if other people need access, you will want to ensure the user interface supports whatever limited access you want to grant, with an appropriate interface.
Posted at Monday 3rd December 2012 8:07 am