As we discussed in the Privileged User Lifecycle post, there are a number of aspects to Watching the Watchers. We start today with Restricting Access, mostly because it reduces your attack surface. We want controls to ensure administrators only access devices they are authorized to manage.
There are a few ways to handle restriction:
- Device-centricity (Status Quo): Far too many organizations rely on their existing controls, which include authentication and other server-based access control mechanisms.
- Network-based Isolation: Tried and true network segmentation approaches enable you to isolate devices (typically by group) and only allow authorized administrators access to the networks on which they live.
- PUM Proxy: This entails routing all management communications through a privileged user management proxy server or service which enforces access policies. The devices only accept management connections from the proxy server, and do not allow direct management access.
There are benefits and issues to each approach, so ultimately you’ll be making some kind of compromise. Let’s dig into each approach and highlight what’s good and what’s not so good.
Device-centricity (Status Quo)
There are really two levels of status quo. The first is common authentication, which in this context doesn’t really restrict access effectively. Obviously you could do a bit to make authentication more difficult, including strong passwords and/or multi-factor authentication. You would also integrate with an existing identity management (IDM) platform to keep entitlements current. But ultimately you are relying on credentials to keep unauthorized folks from managing your critical devices, and basic credentials can be defeated.
Other organizations use server access control capabilities, which are fairly mature. This involves loading an agent onto each managed device and enforcing the access policy on the device itself. The agent-based approach offers rather solid security – the main risk becomes compromise of the (security) agent. Of course there is management overhead to distribute and manage the agents, as well as the additional computational load imposed by each agent.
But any device-based approach runs counter to one of our core philosophies: “If you can’t see it, it’s much harder to compromise.” Device-centric access approaches do nothing to reduce the device’s visibility to attackers. This is suboptimal because in the real world new vulnerabilities appear every month on every operating system – and many can be exploited via zero-day attacks. Those attacks provide a “back door” into servers, giving attackers control without requiring legitimate credentials – regardless of any agentry on the device. So any device-based method fails if the device is rooted somehow.
Network Segmentation
This entails using network-layer technologies such as virtual LANs (VLANs) and network access control (NAC) to isolate devices and restrict access based on who can connect to specific protected networks. The good news is that many organizations (especially those subject to PCI) have already implemented some level of segmentation. It’s just a matter of building another enclave, or trust zone, for each group of servers to protect.
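To make this concrete, here is a minimal sketch of what per-enclave restrictions might look like: iptables-style rules generated from a simple policy table, allowing management traffic only from an admin subnet. The admin subnet, enclave subnets, and management ports are hypothetical placeholders, not a recommendation for any particular topology or product.

```python
# Minimal sketch: generate firewall rules that allow management access to each
# enclave only from the admin subnet. All subnets and ports are hypothetical.

ADMIN_SUBNET = "10.10.50.0/24"   # where authorized administrators connect from

ENCLAVES = {
    "pci-db":   {"subnet": "10.20.1.0/24", "mgmt_ports": [22, 3389]},
    "web-tier": {"subnet": "10.20.2.0/24", "mgmt_ports": [22]},
}

def rules_for(enclave):
    """Emit iptables-style rules for one enclave's management ports."""
    rules = []
    for port in enclave["mgmt_ports"]:
        # Allow management traffic from the admin subnet only...
        rules.append(
            f"iptables -A FORWARD -s {ADMIN_SUBNET} -d {enclave['subnet']} "
            f"-p tcp --dport {port} -j ACCEPT"
        )
        # ...and drop management attempts from everywhere else.
        rules.append(
            f"iptables -A FORWARD -d {enclave['subnet']} -p tcp --dport {port} -j DROP"
        )
    return rules

if __name__ == "__main__":
    for name, enclave in ENCLAVES.items():
        print(f"# enclave: {name}")
        print("\n".join(rules_for(enclave)))
```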
As mentioned, it’s much harder to break something you can’t see. Segmentation requires the attacker to know exactly what they are looking for and where it resides, and to have a mechanism for gaining access to the protected segment. Of course this is possible – there have been ways to defeat VLANs for years – but vendors have closed most of the very easy loopholes.
More problematic to us is that this approach relies on the network operations team. Managing entitlements and keeping devices on the proper segments in a dynamic environment, such as your data center, can be challenging. It is definitely possible, but it puts direct responsibility for access restriction in the hands of network ops. That can and does work for some organizations, but organizationally it is complicated and somewhat fragile.
The other serious complication for this approach is cloud computing – both private and public. Everybody is jumping on the cloud bandwagon, but unfortunately it largely removes visibility at the physical layer. If you don’t really know where specific instances are running, this approach becomes difficult or completely unworkable. We will cover this in detail later in the series, when we discuss the cloud in general.
PUM Proxy
This approach routes all management traffic through a proxy server. Administrators authenticate to the PUM proxy, presumably using strong authentication. The authenticated administrator gets a view of the devices they can manage, and establishes management sessions to those devices via the proxy. Another possible layer of security involves loading a lightweight agent on every managed device to handle the handshake & mutual authentication with the PUM proxy, and to block management connections from unauthorized sources.
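To illustrate, here is a minimal sketch (in Python) of the entitlement check a PUM proxy might perform once an administrator authenticates. The administrator names and device lists are hypothetical assumptions; a real product would pull entitlements from its own policy store or your IDM platform rather than a hard-coded table.

```python
# Minimal sketch of a PUM proxy's entitlement check after an administrator has
# authenticated. Admins and devices are hypothetical; a real deployment would
# pull entitlements from the proxy's policy store or an IDM platform.

ENTITLEMENTS = {
    "alice": {"linux-web-01", "linux-web-02"},
    "bob":   {"win-dc-01"},
}

def visible_devices(admin):
    """Return only the devices this administrator is entitled to manage."""
    return ENTITLEMENTS.get(admin, set())

def authorize_session(admin, device):
    """The proxy brokers a session only if the device appears in the admin's view."""
    return device in visible_devices(admin)

print(authorize_session("alice", "linux-web-01"))  # True  - session is brokered
print(authorize_session("bob", "linux-web-01"))    # False - device isn't in bob's view
```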
This approach is familiar to anyone who has managed cloud computing resources via vCenter (in VMware land) or a cloud console such as Amazon Web Services. You log in and see the devices/instances you can manage, and proceed accordingly.
This fits our preference for providing visibility only into devices an administrator can legitimately manage. It also provides significant control over granular administrative functions, as commands can be blocked in real time (it is a man in the middle, after all). Another side benefit is what we call the deterrent effect: administrators know all their activity runs through a central device and is typically heavily monitored – as we will discuss in depth later.
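As a simple illustration of that granular control, the following sketch shows how a proxy might filter commands in real time against a block list. The blocked patterns here are purely illustrative assumptions; real policies are far richer and typically scoped per role or per device rather than applied globally.

```python
# Minimal sketch of real-time command filtering at the proxy. The blocked
# patterns are illustrative assumptions only; real policies are far richer and
# usually defined per role or per device.

import re

BLOCKED_PATTERNS = [
    re.compile(r"^\s*rm\s+-rf\s+/"),   # destructive filesystem wipe
    re.compile(r"^\s*userdel\b"),      # account removal outside the IDM process
]

def allow_command(command):
    """Return False (block) if the command matches any blocked pattern."""
    return not any(p.search(command) for p in BLOCKED_PATTERNS)

print(allow_command("tail -f /var/log/messages"))  # True  - passed through
print(allow_command("rm -rf /"))                   # False - blocked in real time
```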
But any proxy presents issues, including a possible single point of failure, and additional latency for management sessions. Some additional design & architecture work is required to ensure high availability and reasonable efficiency. It’s a bad day for the security team if ops can’t do their jobs. And periodic latency testing is called for, to make sure the proxy doesn’t impair productivity. And finally: as with virtualization and cloud consoles, if you own the proxy server, you own everything in the environment. So the security of the proxy is paramount.
Each of these approaches works best in different environments, and each entails its own compromises. For organizations just starting to experiment with privileged user management, a PUM proxy is typically the path of least resistance. Ultimately it’s a question of what works best for you, based on the sophistication of the controls you require and the culture of your IT organization.
Next we will talk about protecting credentials – regardless of the approach you choose, you need to be sure management credentials are protected.