A recent post by Daniel Miessler tying segmented web browsing to DMZs got me thinking more about the network segmentation that is lacking in most organizations. The concept behind that article is to establish a browser network in a DMZ, wherein nothing is trusted. When a user wants to browse the web, the article implies that the user fires up a connection into the browser network for some kind of proxy out onto the big, bad Internet. The transport for this connection is left to the user’s imagination, but it’s easy to envision something along the lines of Citrix XenApp filling this gap. Fundamentally this may offset some risk, but don’t get too excited just yet.

First let’s clear up what a DMZ should look like conceptually. From the perspective of nearly every organization, you don’t want end users going directly into a DMZ. This is because, by definition, a DMZ should be as independent and self-contained as possible. It is a segment of your network that is Internet facing and allows specific traffic to (presumably) external, untrusted parties. If something goes wrong in this environment, such as a server being compromised, the breach shouldn’t expose the internal organization’s networks, servers, and data, nor should it hand over the gold mine of free rein most end users enjoy on the inside.
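
To make that default-deny posture concrete, here is a minimal sketch of the kind of zone policy a DMZ implies. This is my own illustration rather than anything from Daniel’s post, and the zone names and ports are made up:

    # Hypothetical zone policy illustrating the DMZ principle: every permitted
    # flow is listed explicitly, and nothing defaults to "DMZ may talk to internal".
    ALLOWED_FLOWS = {
        ("internet", "dmz"): {80, 443},   # untrusted parties reach public services only
        ("internal", "dmz"): {443},       # admins reach bastion/management hosts over TLS
        ("dmz", "internet"): {80, 443},   # outbound web from the DMZ proxies
        # Note: there is no ("dmz", "internal") entry -- a compromised DMZ host
        # has no sanctioned path back into the enterprise.
    }

    def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
        """Default deny: a flow is permitted only if explicitly listed."""
        return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

    print(is_allowed("internet", "dmz", 443))   # True
    print(is_allowed("dmz", "internal", 445))   # False -- which is the whole point

The only interesting design choice is the default: if a flow isn’t enumerated, it doesn’t happen.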

The major risk of the DMZ network architecture is an attacker poking (or finding) holes and building paths back into your enterprise environment. Access from the inside into the primary DMZ should be restricted solely to some level of bastion hosts, user access control, and/or screened transport regimens. While Daniel’s conceptual diagram might be straightforward, it leaves out a considerable amount of magic required to scale in the enterprise, given that the browser network needs to be segregated from the production environments. This entails pre-authenticating a user before he/she reaches the browser network, which requires a repository of internal user credentials outside the protected network. There must also be some level of implied trust between the browser network and that pre-authentication point, because passing production credentials from the trusted to the untrusted network would be self-defeating. Conversely, maintaining a completely separate user database (necessarily equivalent to the main non-DMZ directory, and possibly padded with additional accounts the main system lacks) is out of the question in terms of scalability and cost (why build it once when you can build it twice, at twice the price?). So at this point we’re stuck in an odd place: either assume more risk from untrusted users or create more complexity. Neither is a good answer.
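
To see where that implied trust creeps in, consider a rough sketch of the hand-off the architecture forces: the trusted side verifies the user against the internal directory, then passes the untrusted browser network nothing more than a short-lived, signed ticket. The shared secret, ticket format, and function names here are all hypothetical illustration, not anything prescribed in the original post:

    # Sketch of a pre-auth hand-off: the trusted side issues a short-lived signed
    # ticket so production credentials never cross into the untrusted browser
    # network. The shared secret below is exactly the "implied trust" problem.
    import hashlib, hmac, json, time

    SHARED_SECRET = b"example-only-secret"   # must exist on BOTH sides of the trust boundary

    def issue_ticket(username: str, ttl_seconds: int = 300) -> str:
        """Runs on the trusted side, after the internal directory approves the user."""
        payload = json.dumps({"user": username, "expires": time.time() + ttl_seconds})
        sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "." + sig

    def verify_ticket(ticket: str) -> bool:
        """Runs inside the browser network, which only ever sees the ticket."""
        payload, _, sig = ticket.rpartition(".")
        expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and json.loads(payload)["expires"] > time.time()

    print(verify_ticket(issue_ticket("alice")))   # True -- but only because both sides share the secret

Even in this toy form the problem is obvious: whatever replaces that shared secret has to live on both sides of the boundary, which is precisely the trust relationship the untrusted browser network was supposed to avoid.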

Assuming we can get past the architecture, let’s poke at the browser network itself. Organizations like technologies they can support, and support costs money. Since we assume the end user already has a supported computer and operating system, the organization now has to take on another virtual system just to let that user browse the Internet. Assuming a similar endpoint configuration, this roughly doubles the number of supported systems and required software licenses. It’s possible we could deploy this for free (as in beer) using an array of open source software, but that brings us back to square one for supportability. Knock knock. Who’s there? Mr. Economic Reality, and he says this dog won’t hunt.

What about the idea of a service provider basically building this kind of browser network and offering access to it as part of a managed security service, or perhaps as part of an Internet connectivity service? Does that make this any more feasible? If an email or web security service is already in place, the user management trust issue is eliminated since the service provider already has a list of authorized users. This also could address the licensing/supportability issue from the enterprise’s perspective, since they wouldn’t be licensing the extra machines in the browser network. But what’s in it for the service provider? Is this something an enterprise would pay for? I think not. It’s hard to make the economics work, given the proliferation of browsers already on each desktop and the clear lack of understanding in the broad market of how a proxy infrastructure can increase security.

Supportability and licensing aside, what about the environment itself? Going back to the original post, we find the following:

  • Browsers are banks of virtual machines
  • Sandboxed
  • Constantly patched
  • Constantly rebooted
  • No access to the Internet except through the browser network
  • Untrusted, just like any other DMZ

Here’s where things start to fall apart even further. The first two don’t really mean much at this point. Sandboxing doesn’t buy us anything because we’re only using the system for web browsing anyway, so protecting the browser from the rest of the system (which does nothing but support the browser) is a moot point. Maintaining a patch set makes sense, and one could argue that since the only thing on these systems is web browsing, patching cycles could be considerably more flexible. But since this patching is completely different from our normal environment’s, we need a new process and a new means of patch deployment. Since we all know that patching is a trivial slam-dunk which everyone easily gets right these days, we don’t have to worry, right? Yes, I’m kidding. After all, it’s one more environment to deal with. Check out Project Quant for Patch Management to understand the issues inherent to patching anything.
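
To put a finer point on “constantly patched, constantly rebooted”: in practice that means a separate rebuild pipeline for the browser pool. Here’s a back-of-the-napkin sketch; the four-hour cycle, pool size, and VM operations are all invented stand-ins for tooling someone would now have to build, monitor, and support:

    # Invented sketch of the extra patch-and-rebuild loop the browser network
    # demands. The "VM operations" are print-statement stubs standing in for
    # whatever image and virtualization tooling would actually be required.
    import time

    PATCH_INTERVAL = 4 * 60 * 60   # assume a four-hour cycle -- far tighter than desktop patching

    def patch_golden_image() -> str:
        print("applying OS and browser patches to the golden image")
        return "browser-image-" + time.strftime("%Y%m%d%H%M")

    def recycle_vm(vm_id: int, image: str) -> None:
        print(f"destroying vm {vm_id} and recloning it from {image}")

    def rebuild_browser_pool(pool_size: int = 50) -> None:
        image = patch_golden_image()
        for vm_id in range(pool_size):
            recycle_vm(vm_id, image)   # every VM is assumed compromised, so none survives a cycle

    rebuild_browser_pool(pool_size=3)  # one iteration; in reality this runs on a schedule, forever

None of this is hard individually; the issue is that it’s a second patching program, with its own process, metrics, and failure modes, layered on top of the one most shops already struggle with.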

But we’re not done yet. You could mandate access to the Internet only through the browser network for the general population of end users, but what about admins and mobile users? How do we enforce any kind of browser network use when they are at Starbucks? You cannot escape exceptions, and exceptions are the weakest link in the chain. My personal favorite aspect of the architecture is that the browser network should be considered untrusted. Basically this means anything inside the browser network can’t be secured, so we assume something in the browser network is always compromised. That reminds me of those containment units the Ghostbusters used to hold Slimer and other nefarious ghosts. Why would I, by architectural construct, force my users to work in an untrusted environment? Yes, you can probably make a case that internal networks are untrusted as well, but at least they aren’t untrustworthy by design.
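
For the general population, the enforcement mechanism is straightforward enough to sketch: only the browser network’s address space gets direct egress, and everyone else has to bounce through it. The subnets below are made up for illustration, and the sketch also shows why the mobile case breaks the model:

    # Hypothetical egress rule: only the browser-network subnet gets direct
    # Internet access; internal hosts must go through it. Subnets are invented.
    from ipaddress import ip_address, ip_network

    BROWSER_NET = ip_network("10.99.0.0/24")      # assumed browser-network subnet
    CORPORATE_NETS = [ip_network("10.0.0.0/8")]   # assumed internal address space

    def direct_egress_allowed(src_ip: str) -> bool:
        src = ip_address(src_ip)
        if src in BROWSER_NET:
            return True    # the sanctioned path out
        if any(src in net for net in CORPORATE_NETS):
            return False   # internal hosts must bounce through the browser network
        return True        # off-network endpoints (the laptop at Starbucks) never hit this control at all

    print(direct_egress_allowed("10.99.0.17"))    # True  -- browser network
    print(direct_egress_allowed("10.20.30.40"))   # False -- internal desktop
    print(direct_egress_allowed("203.0.113.5"))   # True  -- mobile user, the exception case

That last case is the point: for the roaming user the control simply doesn’t exist, so the exception swallows the rule.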

In the end, the browser network won’t work in today’s enterprise, and Lord knows a mid-sized business doesn’t have the economic incentive to embrace such an architecture. The concept of moving applications to a virtualized context to improve control has merit – but this deployment model isn’t feasible, doesn’t increase the security posture much (if at all), creates considerable rework, and entails significant economic impact. Not a recipe for success, eh? I recommend you allocate resources to more thorough network segmentation, accelerated patch management analysis, and true minimal-permission user access controls. I’ll be addressing these issues in upcoming research.
