As much as we enjoy being the masters of the obvious, we don’t really need to discuss the move to cloud computing. It’s happening. It’s disruptive. Blah blah blah. People love to quibble about the details but it’s obvious to everyone. And of course, when the computation and storage behind your essential IT services might not reside in a facility under your control, things change a bit. The idea of a privileged user morphs in the cloud context, which adds another layer of abstraction via the cloud management environment. So regardless of your current level of cloud computing adoption, you need to factor the cloud into your PUM (privileged user management) initiative.
Or do you? Let’s play a little Devil’s advocate here. When you think about it, isn’t cloud computing just more happening faster? You still have the same operating systems running as guests in public and/or private clouds, but with a greatly improved ability to spin up machines, faster than ever before. If you can provision and manage the entitlements of these new servers, it’s all good, right? In the abstract, yes. But the same old same old doesn’t work nearly as well in the new regime. Though we do respect the ostrich, burying your head in the sand doesn’t remove the need to think about cloud privileged users. So let’s walk through some ways cloud computing differs fundamentally from the classical world of on-premises physical servers.
Cloud Risks
First of all, any cloud initiative adds another layer of management abstraction. You manage cloud resources through either a virtualization console (such as vCenter or XenCenter) or a public cloud management interface. This means a new set of privileged users and entitlements which require management. Additionally, this cloud stuff is (relatively) new, so management capability lags well behind a traditional data center. It’s evolving rapidly but hasn’t yet caught up with tools and processes for management of physical servers on a local physical network – and that immaturity poses a risk.
For example, without entitlements properly configured, anyone with access to the cloud console can create and tear down any instance in the account. Or they can change access keys, add access or entitlements, change permissions, etc. – for the entire virtual data center. Again, this doesn’t mean you shouldn’t proceed and take full advantage of cloud initiatives. But take care to avoid unintended consequences stemming from the flexibility and abstraction of the cloud.
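To make the risk concrete, here is a minimal sketch of what “properly configured entitlements” means at the console level: each user gets an explicit allowlist of console actions rather than blanket access. The user names and action names are hypothetical, chosen only for illustration:

```python
# Minimal sketch of per-user cloud console entitlements.
# User names and action names are hypothetical, for illustration only.

# Without entitlements, every console user can effectively do all of these.
ALL_ACTIONS = {"create_instance", "terminate_instance",
               "rotate_access_keys", "modify_permissions"}

# With entitlements, each user gets only the actions they actually need.
ENTITLEMENTS = {
    "ops-admin": ALL_ACTIONS,
    "developer": {"create_instance"},
    "auditor": set(),  # read-only: no mutating actions at all
}

def is_authorized(user: str, action: str) -> bool:
    """Return True only if the user is explicitly entitled to the action."""
    return action in ENTITLEMENTS.get(user, set())
```

The key property is the default: an unknown user or unlisted action is denied, instead of the default-allow behavior of an unconfigured console account.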
We also face a number of new risks driven by the flexibility of provisioning new computing resources. Any privileged user can spin up a new instance, which might not include proper agentry & instrumentation to plug into the cloud management environment. You don’t have the same coarse control of network access we had before, so it’s easier for new (virtual) servers to pop up, which means it’s also easier to be exposed accidentally. Management and security largely need to be implemented within the instances – you cannot rely on the cloud infrastructure to provide them. So cloud consoles absolutely demand suitable protection – at least as much as the most important server under their control.
You will want to take a similar lifecycle approach to protecting the cloud console as you do with traditional devices.
The Lifecycle in the Clouds
To revisit our earlier research, the Privileged User Lifecycle involves restricting access, protecting credentials, enforcing entitlements, and monitoring P-user activity – but what does that look like in a cloud context?
Restrict Access (Cloud)
As in the physical world, you have a few options for restricting access to sensitive devices, which vary dramatically between private and public clouds. To recap: you can implement access controls within the network, on the devices themselves (via agents), or by running all connections through a proxy and only allowing management connections from the proxy.
- Private cloud console: The tactics we described in Restrict Access generally work, but there are a few caveats. Network access control gets a lot more complicated due to the inherent abstraction of the cloud. Agentry requires pre-authorized instances which include properly configured software. A proxy requires an additional agent of some kind on each instance, to restrict management connections to the proxy. That is the same as in the traditional data center – but now it must be tightly integrated with the cloud console. As instances come and go, the challenge becomes knowing which instances are running and which policy groups each instance requires. To fill this gap, third-party cloud management software providers are emerging to add finer-grained access control in private clouds.
- Public cloud console: Restricting network access is an obvious non-starter in a public cloud. Fortunately you can set up specific security groups to restrict traffic and have some granularity on which IP addresses and protocols can access the instances, which would be fine in a shared administrator context. But you aren’t able to restrict access to specific users on specific devices (as required by most compliance mandates) at the network layer, because you have little control over the network in a public cloud. That leaves agentry on the instances, but with little ability to stop unauthorized parties from accessing instances. Another less viable option is a proxy, but you can’t really restrict access per se – the console literally lives on the Internet. To protect instances in a public cloud environment, you need to insert protections into other segments of the lifecycle.
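The granularity (and the limitation) of security groups is easiest to see in a sketch. The rules below are hypothetical, but they show the shape of the control: you can match protocol, port, and source IP range – never an individual administrator:

```python
import ipaddress

# Hypothetical security group: SSH only from a corporate range,
# HTTPS from anywhere. Rules are illustrative, not a real configuration.
RULES = [
    {"protocol": "tcp", "port": 22, "cidr": "203.0.113.0/24"},
    {"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"},
]

def connection_allowed(protocol: str, port: int, source_ip: str) -> bool:
    """Check an inbound connection against the security group rules.

    Note the limitation discussed above: rules match IP ranges, not
    specific users, so this cannot tie access to an individual admin
    the way most compliance mandates require.
    """
    src = ipaddress.ip_address(source_ip)
    return any(
        rule["protocol"] == protocol
        and rule["port"] == port
        and src in ipaddress.ip_network(rule["cidr"])
        for rule in RULES
    )
```

This is why security groups are fine for a shared administrator context but cannot substitute for per-user access restriction.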
Fortunately we are seeing some innovation in cloud management, including the ability to manage on demand. This means access to manage instances (usually via ssh on Linux instances) is off by default. Only when management is required does the cloud console open up management ports via policy, and only for authorized users at specified times. That approach addresses a number of the challenges of always-on, always-accessible cloud instances, so it’s a promising model for cloud management.
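The policy logic behind this on-demand model can be sketched in a few lines: a management port is closed unless an authorized user has opened a time-bounded window for it. The authorized user list and the 15-minute default are assumptions for illustration:

```python
import time

# Sketch of "management on demand": ports stay closed unless an
# authorized user has opened a time-limited window via policy.
# The user names and 900-second default are illustrative assumptions.

AUTHORIZED_OPENERS = {"ops-admin"}
_open_windows = {}  # (instance_id, port) -> expiry timestamp

def open_management_window(user, instance_id, port=22, duration=900):
    """Open a management port for `duration` seconds, policy permitting."""
    if user not in AUTHORIZED_OPENERS:
        raise PermissionError(f"{user} may not open management ports")
    _open_windows[(instance_id, port)] = time.time() + duration

def port_open(instance_id, port=22):
    """A port is reachable only while an unexpired window exists."""
    expiry = _open_windows.get((instance_id, port))
    return expiry is not None and time.time() < expiry
```

The default-closed posture is the point: an instance that nobody is actively managing presents no open management port to attack.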
Protect Credentials (Cloud)
When we think about protecting credentials for cloud computing resources, we need an expanded concept of credentials. We now need to worry about three types of credentials:
- Credentials for the cloud console(s)
- Credentials for instances
- Credentials for API access
The real question is which of these groups can (and should) be stored in a password vault as described in Protect Credentials. Optimally the answer is yes: everything goes in the vault. But it’s rarely so simple. The most straightforward credential to store in the vault is the console credential. In a private cloud access to the vault is no different than access to traditional data center devices, so that’s not an issue.
It’s a little more complicated with a public cloud, as you would need to log into the proxy or have the credentials transferred to an agent on the device if you opt for a device-based approach. Either way it’s achievable and recommended, given the capabilities of the cloud console. Another option is to rely on federation to allow existing trusted credentials to be used for federated access to the cloud console. For example, the Identity Management features of Amazon AWS support federation, so another option is to use existing credentials for access to the cloud console.
Accessing cloud instances through a vault is possible but requires some work. Basically, to start an instance, you need to have the credentials sent to and stored in the vault. So the instance needs a way to bootstrap the credential generation and storage process – clearly an area that demands automation. The proxy (or agent on an administrator’s device) manages the keys for access, as well as the administrative credentials. This may add some latency – especially in a public cloud context – and so needs to be weighed against the security risks of sharing root access, as we discussed in the introduction.
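The bootstrap process is easier to picture with a sketch. At launch, a fresh credential is generated and pushed straight into the vault, so no human ever handles it; administrators later check it out from the vault rather than from the instance. The vault here is a stub dictionary standing in for a real password vault, and in practice the transfer would be encrypted and authenticated:

```python
import secrets

# Sketch of credential bootstrapping at instance launch. The vault is
# a stub standing in for a real password vault product; the stored
# secret would really be an SSH key or admin password.

vault = {}  # stub: maps instance id -> credential

def launch_instance(instance_id: str) -> None:
    """Generate a fresh credential and store it directly in the vault,
    so the credential never passes through human hands."""
    credential = secrets.token_urlsafe(32)
    vault[instance_id] = credential

def admin_checkout(instance_id: str) -> str:
    """Administrators retrieve credentials from the vault, not the instance."""
    return vault[instance_id]
```

Because each instance gets a unique generated credential, there is no shared root password to leak – which is exactly the risk this automation is meant to eliminate.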
Finally, what about API access? This would require a similar capability to current application support in password vaulting products. Basically, credentials are stored in the vault, and the application calls to the vault to get the credential, which is then securely transferred to the application, which uses the credential. In the case of the cloud management API, the automation would need to call the vault to get the API credentials and then use them accordingly. So this requires a bunch of integration by either you or the vault vendor. We are not currently aware of any vault vendor that has done this integration with a cloud services API yet, but the popularity and adoption curve of the cloud lead us to expect this capability soon.
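The application-integration pattern described above looks roughly like this sketch: automation fetches the API credential from the vault at call time instead of embedding it in code or configuration. Both the vault and the cloud API are stubs here, and the credential value is an obvious fake:

```python
# Sketch of vault-mediated API access: the automation obtains the cloud
# API credential just-in-time from the vault, uses it, and never
# persists it. The vault and cloud API below are stubs for illustration.

VAULT = {"cloud-api": "FAKE-EXAMPLE-KEY"}  # stand-in for a real vault

def get_api_credential(name: str) -> str:
    """Stand-in for a vault client call; a real vault would authenticate
    the calling application before releasing the credential."""
    return VAULT[name]

def list_instances():
    """Automation task: fetch the credential at call time and use it."""
    credential = get_api_credential("cloud-api")
    # ...here the real code would call the cloud management API,
    # authenticating with `credential`...
    return {"authenticated": credential == VAULT["cloud-api"],
            "instances": []}
```

The benefit is the same as with application password support in today’s vaults: no hardcoded API key sitting in a script or config file waiting to be stolen.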
Enforce Entitlements (Cloud)
Once the administrator gains access to the instance, how can you enforce finer-grained controls? Until you refine entitlements, anyone with access to the cloud console can do pretty much anything. Some of the more mature and robust public cloud providers do offer finer-grained control, but it’s largely a manual process. Private clouds are still rapidly maturing, and many of these capabilities simply aren’t there yet – partly due to limitations in what the virtualization platform vendors offer. Thus the emergence of third-party offerings to fill this gap for both private and public clouds. Of course we expect these finer-grained controls to gradually be integrated into cloud management consoles. But for now, to get real control over who can do what within your cloud environment, you need a third-party offering.
In terms of command blocking features to control what administrators can do to instances – as described in Enforce Entitlements – the approach is the same as in the physical world. You need to either install an agent on the instance to pull the policy from the management console or route management traffic through a proxy that blocks unauthorized commands. Of course the cloud complicates things a bit – routing management traffic through a central point can add latency – but again, that is the price of security.
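A command-blocking proxy boils down to logic like the following sketch: each management command is parsed and checked against a per-role blocklist before being relayed to the instance. The roles and blocked commands are hypothetical:

```python
import shlex

# Sketch of command blocking at a proxy: parse each management command
# and check it against a per-role blocklist before relaying it.
# Roles and blocked commands are hypothetical examples.

BLOCKED = {
    "operator": {"rm", "shutdown", "useradd"},
    "dba": {"shutdown", "useradd"},
    "admin": set(),  # unrestricted
}

def relay_command(role: str, command_line: str) -> bool:
    """Return True if the proxy would forward the command to the instance."""
    if role not in BLOCKED:
        return False  # unknown roles get nothing relayed
    executable = shlex.split(command_line)[0]
    return executable not in BLOCKED[role]
```

An agent-based implementation enforces the same policy on the instance itself; the tradeoff between the two is the routing latency versus per-instance deployment effort discussed above.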
Monitor Privileged Users (Cloud)
As with enforcing entitlements, privileged user monitoring in the cloud involves the same decision points as in the physical world. Routing through the proxy to record sessions entails the same tradeoffs: latency and possible inconvenience, weighed against security. And recording via a device agent can exact a performance toll on the instance.
But the real tradeoff is about storing logs and sessions. Do you aggregate in the cloud or send back to a central location, perhaps increasing latency or generating additional network traffic? Of course there are storage security and integrity requirements for any cloud-based repository, which may favor a centralized, on-premises option that can be more easily controlled.
The Need for Consistency
Regardless of how you decide to manage privileged users for cloud-based instances, we cannot stress the importance of consistency enough. The eventual goal is to set one policy for privileged users and enforce it consistently everywhere. Right now, that requires multiple tools for the various steps in the PUM lifecycle, which is unsurprising given the immaturity of cloud computing. Over time we expect better integration between PUM offerings focused on traditional data centers and those built for the cloud. But beware the vendor tendency to cloud-wash their offerings: claiming superior support for cloud computing, when actually offering the exact same product running on cloud instances.
You will want to look for integration with cloud consoles and APIs as a start. Management tools without such support miss an entire area of exposure.
Speaking of integration, no management capability can stand alone. So we will wrap up this series with a look at enterprise integration and what other system management functions PUM tools need in order to be useful in an enterprise context.