This post focuses on the ‘runtime’ aspects of container security. Unlike the tools and processes discussed in previous sections, here we look at containers in production systems: which images are moved into production repositories, security around selecting and running containers, and the security of the underlying host systems.

Runtime Security

  • The Control Plane: Our first order of business is ensuring the security of the control plane – the platforms for managing host operating systems, the scheduler, the container engine(s), the repository, and any additional deployment tools. Again, as we advised for build environment security, we recommend limiting access to specific administrative accounts: one responsible for operating and orchestrating containers, and another for system administration (including patching and configuration management). We also recommend network segregation, with physical separation for on-premise systems and logical separation for cloud and virtual systems.
  • Running the Right Container: We recommend establishing a trusted image repository and ensuring that your production environment can only pull containers from that trusted source. Ad hoc container management makes it easy to bypass security controls, so we recommend scripting the process to avoid manual intervention and ensure that the latest certified container is always selected (a minimal pull-script sketch follows this list). Second, you will want to check application signatures before putting containers into the repository. Trusted repository and registry services can help by rejecting containers which are not properly signed. Fortunately many options are available, so find one you like. Keep in mind that if you build many containers each day, a manual process will quickly break down, so you will need to automate the work and enforce security policies in your scripts. Remember, it is okay to have more than one image repository – if you are running across multiple cloud environments, there are advantages to leveraging the native registry in each. But beware the discrepancies between platforms, which can create security gaps.
  • Container Validation and BOM: What’s in the container? What code is running in your production environment? How long ago did we build this container image? These are common questions when something goes awry. In case of container compromise, a very practical question is: how many containers are currently running this software bundle? One recommendation – especially for teams which don’t perform much code validation during the build process – is to leverage scanning tools to check pre-built containers for common vulnerabilities, malware, root account usage, bad libraries, and so on. If you keep containers around for weeks or months, it is entirely possible a new vulnerability has since been discovered, and the container is now suspect. Second, we recommend using the Bill of Materials capabilities available in some scanning tools to catalog container contents; this helps you identify other potentially vulnerable containers and scope remediation efforts (a scanning and BOM sketch follows this list).
  • Input Validation: At startup containers accept parameters, configuration files, credentials, JSON, and scripts. In some more aggressive scenarios, ‘agile’ teams shove new code segments into a container as input variables, making existing containers behave in fun new ways. Either through manual review or by leveraging a third-party security tool, you should check container inputs to ensure they meet policy (a simple validation sketch follows this list). This helps prevent someone from forcing a container to misbehave, and keeps developers from making dumb mistakes.
  • Container Group Segmentation: By default, Docker does not restrict which containers can communicate with other containers, systems, hosts, IPs, and so on. Basic network security is insufficient to prevent one container from attacking another, calling out to a Command and Control botnet, or other malicious behavior. If you are using a cloud services provider you can leverage their security zones and virtual network capabilities to segregate containers and specify what they are allowed to communicate with, over which ports. If you are working on-premise, we recommend you investigate products which enable you to define equivalent security restrictions (a simple network segmentation sketch follows this list). In this way each application has an analogue to a security group, which lets you specify which inbound and outbound ports are accessible to and from which IPs, and protects containers from unwanted access.
  • Blast Radius: A good option when running containers in cloud services, particularly IaaS clouds, is to run different containers under different cloud user accounts. This limits the resources available to any given container, and if an account or container set is compromised, the same cloud service restrictions which prevent tenants from interfering with each other limit the possible damage between accounts and projects. For more information see our post on limiting blast radius with user accounts.
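
As an illustration of the ‘pull only from a trusted source’ advice above, here is a minimal sketch which wraps the Docker CLI from Python. The registry hostname and image tag are placeholders, and it assumes Docker Content Trust (DOCKER_CONTENT_TRUST=1) is how you reject unsigned images – adapt it to whatever signing and registry services you actually run.

```python
#!/usr/bin/env python3
"""Pull production images only from a trusted registry, with signature checks enforced.

Sketch only: the registry hostname and image tags below are hypothetical placeholders.
"""
import os
import subprocess
import sys

TRUSTED_REGISTRY = "registry.example.com"                      # hypothetical trusted registry
APPROVED_IMAGES = ["registry.example.com/payments/api:1.4.2"]  # latest certified tags

def pull_trusted(image: str) -> None:
    # Policy check: only images from the trusted registry may reach production hosts
    if not image.startswith(TRUSTED_REGISTRY + "/"):
        sys.exit(f"Refusing to pull {image}: not from the trusted registry")
    # Docker Content Trust makes the pull fail if the image is not properly signed
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    subprocess.run(["docker", "pull", image], env=env, check=True)

if __name__ == "__main__":
    for img in APPROVED_IMAGES:
        pull_trusted(img)
```

Running a script like this from your scheduler or deployment tooling, rather than pulling by hand, removes the manual step where the wrong (or unsigned) image tends to sneak in.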
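
For container validation and a Bill of Materials, the sketch below shows one way to automate the checks. It assumes the open-source syft (SBOM generator) and trivy (vulnerability scanner) CLIs are installed, and that syft’s JSON output exposes an ‘artifacts’ list – substitute whichever scanner or registry-integrated service you actually use.

```python
#!/usr/bin/env python3
"""Catalog a container image's contents (BOM) and fail on known high-severity CVEs.

Sketch only: assumes the `syft` and `trivy` CLIs are installed; the image name is a placeholder.
"""
import json
import subprocess

def generate_bom(image: str) -> dict:
    # Ask syft for a JSON inventory of the packages baked into the image
    result = subprocess.run(["syft", image, "-o", "json"],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def scan_image(image: str) -> None:
    # Non-zero exit code if the scanner finds HIGH or CRITICAL vulnerabilities
    subprocess.run(["trivy", "image", "--exit-code", "1",
                    "--severity", "HIGH,CRITICAL", image], check=True)

if __name__ == "__main__":
    image = "registry.example.com/payments/api:1.4.2"  # hypothetical image
    bom = generate_bom(image)
    print(f"{image}: {len(bom.get('artifacts', []))} packages cataloged")
    scan_image(image)
```

Storing the BOM output alongside the image lets you answer “which containers are running this library?” later, without re-scanning everything.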
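
Input validation can be as simple as an allowlist check before launch. The sketch below is illustrative only – the permitted variables and patterns are hypothetical, and a real policy would also cover configuration files, credentials, and scripts handed to the container.

```python
#!/usr/bin/env python3
"""Validate container startup inputs against policy before launching the container.

Sketch only: the allowed environment variables, patterns, and image name are hypothetical.
"""
import re
import subprocess
import sys

# Policy: which environment variables a container may receive, and what values they may hold
ALLOWED_ENV = {
    "LOG_LEVEL": re.compile(r"^(debug|info|warn|error)$"),
    "DB_HOST":   re.compile(r"^[a-z0-9.-]{1,253}$"),
    "WORKERS":   re.compile(r"^[1-9][0-9]?$"),
}

def validate_inputs(env: dict) -> None:
    for key, value in env.items():
        pattern = ALLOWED_ENV.get(key)
        if pattern is None:
            sys.exit(f"Rejected: environment variable {key} is not in policy")
        if not pattern.fullmatch(value):
            sys.exit(f"Rejected: value for {key} fails validation")

def run_container(image: str, env: dict) -> None:
    validate_inputs(env)
    cmd = ["docker", "run", "-d"]
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]
    subprocess.run(cmd + [image], check=True)

if __name__ == "__main__":
    run_container("registry.example.com/payments/api:1.4.2",
                  {"LOG_LEVEL": "info", "WORKERS": "4"})
```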
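
For container group segmentation on a single Docker host, user-defined networks give you a rough analogue to security groups. The network and image names below are hypothetical; on a cloud provider you would express the same policy with security zones, VPCs, or security groups instead.

```python
#!/usr/bin/env python3
"""Segment groups of containers onto separate Docker networks.

Sketch only: network names, container names, and images are hypothetical placeholders.
"""
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# An internal network (no outbound access) for the database tier,
# and a separate network for the web tier.
sh("docker", "network", "create", "--internal", "db_net")
sh("docker", "network", "create", "web_net")

# The database container is only reachable from containers attached to db_net
sh("docker", "run", "-d", "--name", "db", "--network", "db_net",
   "registry.example.com/payments/db:1.4.2")

# The web container lives on web_net, and is also attached to db_net so it can reach the database
sh("docker", "run", "-d", "--name", "web", "--network", "web_net",
   "registry.example.com/payments/api:1.4.2")
sh("docker", "network", "connect", "db_net", "web")
```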

Platform Security

In Docker’s early years, when people talked about ‘container’ security, they were really talking about how to secure the Linux operating system underneath Docker. Security was more about the platform and traditional OS security measures. If an attacker gained control of the host OS, they could take control of pretty much anything they wanted in the containers above it. The problem was that security of containers, their contents, and even the Docker engine itself was largely overlooked. This is one reason we focused our research on the things that make containers – and the tools that build them – secure.

That said, no discussion of container security is complete without at least some mention of host, OS, and engine security. Here we cover the basics, but we will not go into depth on securing the underlying OS. We could not do that justice within this research; there is already a huge amount of quality documentation available for the operating system of your choice, and far more knowledgeable sources to address your questions on OS security.

  • Kernel Hardening: Docker security depends fundamentally on the underlying operating system to limit access between ‘users’ (containers) on the system. This resource isolation model is built atop a virtual map called Namespaces, which maps specific users or groups of users to a subset of resources (networks, files, IPC, etc.) within their Namespace. Containers should run under a specified user ID. Hardening starts with a secure kernel, stripping out unwanted services and features, then configuring Namespaces to limit (segregate) resource access. It is essential to select an OS platform which supports Namespaces, both to constrain which kernel resources a container can access and to control user/group resource utilization. Don’t mix Docker and non-Docker services on the same host – the trust models don’t align. Script the setup and configuration of your kernel deployments to ensure consistency, and periodically review your settings as operating system security capabilities evolve (a hardened launch sketch follows this list).
  • Docker Engine: Docker security has come a long way, and the Docker engine now performs much of the “heavy lifting” for containers. Docker has full support for Linux kernel features including Namespaces and Control Groups (cgroups) to isolate containers and container types. We recommend advanced isolation via Linux security modules such as SELinux or AppArmor, on top of GRSEC-compatible kernels. Docker exposes these kernel capabilities at either the Docker daemon level or the container level, so you have some flexibility in resource allocation, but there is still work to do to properly configure your Docker deployment (a per-container example follows this list).
  • Container Isolation: We have discussed resource isolation at the kernel level, but you should also isolate Docker engine/OS groups – and their containers – at the network layer. For container isolation we recommend mapping groups of mutually trusted containers to separate machines and/or network security groups. For containers running critical services or management tools, consider running one container per VM or physical server for on-premise applications, or grouping them into a dedicated cloud VPC, to limit attack surface and minimize an attacker’s ability to pivot should a service or container be compromised.
  • Cloud Container Services: Several cloud service providers offer to tackle platform security issues on your behalf, typically abstracting away the lower-level implementation layers by offering Containers as a Service. By delegating underlying platform-level security to your cloud provider, you can focus on application-layer issues and realize the benefits of containers without worrying about platform security or scalability.
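
To make the kernel hardening advice concrete, the sketch below launches a container under a dedicated unprivileged user ID, with a read-only root filesystem and a minimal set of Linux capabilities. The UID/GID and image name are placeholders, and the exact flag set should come from your own hardening baseline rather than this example.

```python
#!/usr/bin/env python3
"""Launch a container with a restricted identity and minimal kernel privileges.

Sketch only: the UID/GID, capability set, and image name are hypothetical placeholders.
"""
import subprocess

subprocess.run([
    "docker", "run", "-d",
    "--user", "10001:10001",          # run as a dedicated, unprivileged UID/GID
    "--read-only",                    # immutable root filesystem
    "--cap-drop", "ALL",              # drop every Linux capability...
    "--cap-add", "NET_BIND_SERVICE",  # ...then add back only what the service needs
    "--security-opt", "no-new-privileges",  # block privilege escalation via setuid binaries
    "registry.example.com/payments/api:1.4.2",
], check=True)
```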
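
For per-container enforcement through the Docker engine, this sketch applies an AppArmor profile, a seccomp profile, and cgroup resource limits at launch. It assumes an AppArmor-enabled host and a seccomp profile saved as seccomp.json; on SELinux systems you would use label options instead.

```python
#!/usr/bin/env python3
"""Apply mandatory access control profiles and cgroup limits to a single container.

Sketch only: assumes an AppArmor-enabled host and a local seccomp.json profile; the image is a placeholder.
"""
import subprocess

subprocess.run([
    "docker", "run", "-d",
    "--security-opt", "apparmor=docker-default",  # mandatory access control profile
    "--security-opt", "seccomp=seccomp.json",     # restrict the syscalls the container may make
    "--memory", "512m",                           # cgroup memory ceiling
    "--cpus", "1.0",                              # cgroup CPU ceiling
    "--pids-limit", "200",                        # cap process count (fork-bomb guard)
    "registry.example.com/payments/api:1.4.2",
], check=True)
```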

Platform security for containers is a huge field, and we have only scratched the surface. If you want to learn more, the OS platform providers, Docker, and many third-party security vendors offer best practice guidance, research papers, and blogs which discuss this in greater detail.

Note that the majority of security controls in this post are preventative – efforts to block what we expect attackers to attempt. We set a secure baseline to make it difficult for attackers to compromise containers – and if they do, to limit the damage they can cause. In our next and final post in this series we will discuss monitoring, logging, and auditing events in a container system, focusing on examining what is really going on and discovering what we don’t know in terms of security.
