
Cloud Security Automation: Code vs. CloudFormation or Terraform Templates

Right now I'm working on updating many of my little command line tools into releasable versions. It's a mixed bag of things I've written for demos, training classes, clients, or Trinity (our mothballed product). A few of these are security automation tools I'm working on for clients to give them a skeleton framework to build out their own automation programs. Basically, this is what we created Trinity for, except Trinity isn't releasable.

One question that comes up a lot when I'm handing this off is why write custom Ruby/Python/whatever code instead of using CloudFormation or Terraform scripts. If you are responsible for cloud automation at all, this is a super important question to ask yourself. The correct answer is that there isn't one single answer. It depends as much on your experience and preferences as anything else. Each option can handle much of the job, at least for configuration settings and implementing a known-good state. Here are my personal thoughts from the security pro perspective.

CloudFormation and Terraform are extremely good for creating known-good states and immutable infrastructure and, in some cases, updating and restoring to those states. I use CloudFormation a lot and am starting to also leverage Terraform more (because it is cross-cloud capable). They both do a great job of handling a lot of the heavy lifting and configuring pieces in the proper order (managing dependencies), which can be tough if you script programmatically. Both have a few limits:

  • They don't always support all the cloud provider features you need, which forces you to bounce outside of them.
  • They can be difficult to write and manage at scale, which is why many organizations that make heavy use of them use other languages to actually create the scripts. This makes it easier to update specific pieces without editing the entire file and introducing typos or other errors.
  • They can push updates to stacks, but if you made any manual changes I've found these frequently break. Thus they are better for locked-down production environments that are totally immutable, and not for dev/test or manually altered setups.
  • They aren't meant for other kinds of automation, like assessing or modifying in-use resources. For example, you can't use them for incident response or to check specific security controls.

I'm not trying to be negative here – they are awesome tools, which are totally essential to cloud and DevOps. But there are times you want to attack the problem in a different way.

Let me give you a specific use case. I'm currently writing a "new account provisioning" tool for a client. Basically, when a team at the client starts up a new Amazon account, this shovels in all the required security controls: IAM, monitoring, etc. Nearly all of it could be done with CloudFormation or Terraform, but I'm instead writing it as a Ruby app. Here's why: I'm using Ruby to abstract complexity from the security team and make security easy. For example, to create new Identity and Access Management policies, users, and roles, the team can point the tool towards a library of files, and the tool iterates through and builds them in the right order. The security team only needs to focus on that library of policies, not the other code to build things out. This, for them, will be easier than adding it to a large provisioning template.
I could take that same library and actually build a CloudFormation template dynamically the same way, but… I can also use the same code base to fix existing accounts or (eventually) assess and modify an account that's been changed in the future. For example, I will be able to assess an account and, if the policies don't match, enable the user to repair it with flexibility and precision. Again, this can be done without the security pro needing to understand a lot of the underlying complexity.

Those are the two key reasons I sometimes drop from templates to code. I can make things simpler, and I can use the same 'base' for more complex scenarios that the infrastructure as code tools aren't meant to address, such as 'fixing' existing setups and allowing more granular decisions on what to configure or overwrite. Plus, I'm not limited to waiting for the templates to support new cloud provider features; I can add capabilities any time there is an API, and with modern cloud providers, if there's a feature, it has an API.

In practice you can mix and match these approaches. I have my biases, and maybe some of it is just that I like to learn the APIs and features directly. I do find that having all these code pieces gives me a lot more options for various use cases, including using them to generate the templates when they are the better choice. For example, one of the features of my framework is installing a library of approved CloudFormation templates into a new account to create pre-approved architecture stacks for common needs. It all plays together. Pick what makes sense for you, and hopefully this gives you a bit of insight into how I make the decision.
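To make the "policy library" pattern above a bit more concrete, here is a minimal sketch of the idea. It is written in Python with boto3 purely for illustration (the tool described above is a Ruby app), and the directory name, file naming convention, and alphabetical ordering are assumptions rather than details of the actual tool: the code simply walks a folder of IAM policy JSON documents and creates a managed policy for each one, so the security team only maintains the JSON files.

```python
# Hypothetical sketch of the "policy library" pattern described above.
# The author's actual tool is Ruby; this is an illustrative Python/boto3 version.
import json
from pathlib import Path

import boto3

iam = boto3.client("iam")

def provision_policy_library(library_dir="policy_library"):
    """Create one IAM managed policy per JSON file in the library directory.

    The security team only maintains the JSON policy documents; the code
    handles naming, ordering (alphabetical here, for simplicity), and API calls.
    """
    for policy_file in sorted(Path(library_dir).glob("*.json")):
        document = json.loads(policy_file.read_text())
        name = policy_file.stem  # e.g. "security-audit.json" -> "security-audit"
        try:
            iam.create_policy(
                PolicyName=name,
                PolicyDocument=json.dumps(document),
                Description="Provisioned from the baseline security policy library",
            )
            print(f"created policy {name}")
        except iam.exceptions.EntityAlreadyExistsException:
            # Account was provisioned before; a fuller tool would diff and
            # create a new policy version here instead of skipping.
            print(f"policy {name} already exists, skipping")

if __name__ == "__main__":
    provision_policy_library()
```

The same library could just as easily feed a generator that emits a CloudFormation template instead of making API calls directly, which is exactly the trade-off discussed above.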


Cloud Database Security: 2011 vs. Today

Adrian here. I had a brief conversation today about security for cloud database deployments, and their two basic questions encapsulated many conversations I have had over the last few months. It is relevant to a wider audience, so I will discuss them here.

The first question I was asked was, "Do you think that database security is fundamentally different in the cloud than on-premise?" Yes, I do. It's not the same. Not that we no longer need IAM, assessment, monitoring, or logging tools, but the way we employ them changes. And there will be more focus on things we have not worried about before – like the management plane – and far less on things like archival and physical security. But it's very hard to compare apples to apples here, because of fundamental changes in the way cloud works. You need to shift your approach when securing databases run on cloud services.

The second question was, "Then how are things different today from 2011, when you wrote about cloud database security?" Database security has changed in three basic ways:

1) Architecture: We no longer leverage the same application and database architectures. It is partially about applications adopting microservices, which both promotes micro-segmentation at the network and application layers, and breaks the traditional approach of closely tying the application to a database. Architecture has also developed in response to evolving database services. We see a need for more types of data, with far more dynamic lookup and analysis than transaction support. Together these architectural changes lead to more segmented deployments, with more granular control over access to data and database services.

2) Big Data: In 2011 I expected people to push their Oracle, MS SQL Server, and PostgreSQL installations into the cloud, to reduce costs and scale better. That did not happen. Instead firms prefer to start new projects in the cloud rather than moving existing ones. Additionally we see strong adoption of big data platforms such as Hadoop and Dynamo. These are different platforms with slightly different security issues and security tools than the relational platforms which dominated the previous two decades. And in an ecosystem like Hadoop, applications running on the same data lake may be exposed to entirely different service layers.

3) Database as a Service: At Securosis we were a bit surprised by how quickly the cloud vendors embraced big data. Now they offer big data (along with relational database platforms) as a service. "Roll your own" has become much less necessary. Basic security around internal table structures, patching, administrative access, and many other facets is now handled by vendors to reduce your headaches. We can avoid installation issues. Licensing is far, far easier. It has become so easy to stand up a new relational database or big data cluster this way that running databases on Infrastructure as a Service now seems antiquated.

I have not gone back through everything I wrote in 2011, but there are probably many more subtle differences. The question itself overlooks another important difference: security is now embedded in cloud services. None of us here at Securosis anticipated how fast cloud platform vendors would introduce new and improved security features. They have advanced their security offerings much faster than any other platform or service offering I've ever seen, and done a much better job with quality and ease of use than anyone expected. There are good reasons for this.
In most cases the vendors were starting from a clean slate, unencumbered by legacy demands. Additionally, they knew security concerns were an impediment to enterprise adoption. To remove their primary customer objections, they needed to show that security was at least as good as on-premise. In conclusion, if you are moving new or existing databases to the cloud, understand that you will be changing tools and processes, and adjusting your biggest priorities.
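As a small illustration of how assessment work shifts to the management plane with database-as-a-service, here is a hypothetical Python/boto3 sketch that checks Amazon RDS instances for two basic settings, encryption at rest and public accessibility. It is only a sketch of the approach, not a complete audit, and the choice of checks is mine rather than anything prescribed above.

```python
# Minimal sketch: with database-as-a-service, assessment shifts to the
# management plane. Check each RDS instance for two basic settings.
import boto3

rds = boto3.client("rds")

def audit_rds_instances():
    findings = []
    paginator = rds.get_paginator("describe_db_instances")
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            name = db["DBInstanceIdentifier"]
            if not db.get("StorageEncrypted", False):
                findings.append(f"{name}: storage is not encrypted at rest")
            if db.get("PubliclyAccessible", False):
                findings.append(f"{name}: instance is publicly accessible")
    return findings

if __name__ == "__main__":
    for finding in audit_rds_instances():
        print(finding)
```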


Dynamic Security Assessment: The Limitations of Security Testing [New Series]

We have been fans of testing the security of infrastructure and applications for as long as we can remember doing research. We have always known attackers are testing your environment all the time, so if you aren't also self-assessing, inevitably you will be surprised by a successful attack. And like most security folks, we are no fans of surprises.

Security testing and assessment has gone through a number of iterations. It started with simple vulnerability scanning. You could scan a device to understand its security posture, which patches were installed, and what remained vulnerable on the device. Vulnerability scanning remains a function at most organizations, driven mostly by compliance requirements. As useful as it is to understand which devices and applications are vulnerable, a simple scan provides limited information. A vulnerability scanner cannot recognize that a vulnerable device is not exploitable due to other controls. So penetration testing emerged as a discipline to go beyond simple context-less vulnerability scanning, with humans trying to steal data. Pen tests are useful because they provide a sense of what is really at risk. But a penetration test is resource-intensive and expensive, especially if you use an external testing firm. To address that, we got automated pen testing tools, which use actual exploits in a semi-automatic fashion to simulate an attacker. Regardless of whether you use carbon-based (human) or silicon-based (computer) penetration testing, the results describe your environment at a single point in time. As soon as you blink, your environment will have changed, and your findings may no longer be valid. With the easy availability of penetration testing tools (notably the open source Metasploit), defending against a pen testing tool has emerged as the low bar of security. Our friend Josh Corman coined HDMoore's Law, after the leader of the Metasploit project. Basically, if you cannot stop a primitive attacker using Metasploit (or another pen testing tool), you aren't very good at security.

The low bar isn't high enough

As we lead enterprises through developing security programs, we typically start with adversary analysis. It is important to understand what kinds of attackers will be targeting your organization and what they will be looking for. If you think your main threat is a 400-pound hacker in their parents' basement, defending against an open source pen testing tool is probably sufficient. But do any of you honestly believe an unsophisticated attacker wielding a free penetration testing tool is all you have to worry about? Of course not. The key thing to understand about adversaries is simple: they don't play by your rules. They will attack when you don't expect it. They will take advantage of new attacks and exploits to evade detection. They will use tactics that look like a different adversary to raise a false flag. The adversary will do whatever it takes to achieve their mission. They can usually be patient, and will wait for you to screw something up. So the low bar of security represented by a pen testing tool is not good enough.

Dynamic IT

The increasing sophistication of adversaries is not your only challenge in assessing your environment and understanding risk. Technology infrastructure seems to be undergoing the most significant set of changes we have ever seen, and this is dramatically complicating your ability to assess your environment. First, you have no idea where your data actually resides.
Between SaaS applications, cloud storage services, and integrated business partner networks, the boundaries of traditional technology infrastructure have been extended unrecognizably, and you cannot assume your information is on a network you control. And if you don't control the network, it becomes much harder to test.

The next major change underway is mobility. Between an increasingly disconnected workforce and an explosion of smart devices accessing critical information, you can no longer assume your employees will access applications and data from your networks. Realizing that authorized users needing legitimate access to data can be anywhere in the world, at any time, complicates assessment strategies as well. Finally, the push to public cloud-based infrastructure makes it unclear where your compute and storage are as well. Many of the enterprises we work with are building cloud-native technology stacks using dozens of services across cloud providers. You don't necessarily know where you will be attacked, either.

To recap: you no longer know where your data is, where it will be accessed from, or where your computation will happen. And you are chartered to protect information in this dynamic IT environment, which means you need to assess the security of your environment as often as practical. Do you start to see the challenge of security assessment today, and how much more complicated it will be tomorrow?

We Need Dynamic Security Assessment

As discussed above, a penetration test represents a point-in-time snapshot of your environment, and is obsolete when complete, because the environment continues to change. The only way to keep pace with our dynamic IT environment is dynamic security assessment. The rest of this series will lay out what we mean by this, and how to implement it within your environment. As a little prelude to what you'll learn, a dynamic security assessment tool includes:

  • A highly sophisticated simulation engine, which can imitate typical attack patterns from sophisticated adversaries without putting production infrastructure in danger.
  • An understanding of the network topology, to model possible lateral movement and isolate targeted information and assets (a toy illustration follows at the end of this post).
  • A security research team to leverage both proprietary and public threat intelligence, and to model the latest and greatest attacks to avoid unpleasant surprises.
  • An effective security analytics function to figure out not just what is exploitable, but also how different workarounds and fixes will impact infrastructure security.

We would like to thank SafeBreach as the initial potential licensee of this content. As you may remember, we conduct research using our Totally Transparent Research methodology, which requires foresight on the part of our licensees. It enables us to post our papers in our Research Library without paywalls, registration, or any other blockage to you.
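Here is the toy illustration of lateral-movement modeling promised above. The topology data and host names are invented, and this is not how any particular product works; it simply shows the kind of reachability question a topology-aware assessment engine answers.

```python
# Toy illustration of topology-aware assessment: given which hosts can reach
# which others, enumerate possible lateral-movement paths from an internet-
# exposed host to a sensitive asset. Purely hypothetical data and logic.
from collections import deque

# Hypothetical reachability map: host -> hosts it can connect to.
topology = {
    "web-dmz": ["app-server"],
    "app-server": ["db-server", "jump-box"],
    "jump-box": ["db-server", "admin-workstation"],
    "db-server": [],
    "admin-workstation": ["db-server"],
}

def attack_paths(start, target):
    """Breadth-first search for all simple paths from start to target."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in topology.get(node, []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return paths

for path in attack_paths("web-dmz", "db-server"):
    print(" -> ".join(path))
```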


Assembling a Container Security Program: Monitoring and Auditing

Our last post in this series covers two key areas: monitoring and auditing. We have more to say on the first because most development and security teams are not aware of these options, and on the latter because most teams hold many misconceptions and considerable fear on the topic. So we will dig into these two areas essential to container security programs.

Monitoring

Every security control we have discussed so far had to do with preventative security. Essentially these are security efforts that remove vulnerabilities or make it hard for anyone to exploit them. We address known attack vectors with well-understood responses such as patching, secure configuration, and encryption. But vulnerability scans can only take you so far. What about issues you are not expecting? What if a new attack variant gets by your security controls, or a trusted employee makes a mistake? This is where monitoring comes in: it's how you discover the unexpected stuff. Monitoring is critical to a security program – it's how you learn what is effective, track what's really happening in your environment, and detect what's broken. For container security it is no less important, but today it's not something you get from Docker or any other container provider.

Monitoring tools work by first collecting events, and then examining them in relation to security policies. The events may be requests for hardware resources, IP-based communication, API requests to other services, or sharing information with other containers. Policy types are varied. We have deterministic policies, such as which users and groups can terminate resources, which containers are disallowed from making external HTTP requests, or what services a container is allowed to run. Or we may have dynamic – also called 'behavioral' – policies, which prevent issues such as containers calling undocumented ports, using 50% more memory than typical, or uncharacteristically exceeding runtime parameter thresholds. Combining deterministic white and black list policies with dynamic behavior detection provides the best of both worlds, enabling you to detect both simple policy violations and unexpected variations from the ordinary (a toy example of combining the two appears at the end of this post). We strongly recommend that your security program include monitoring of container activity.

Today a couple of container security vendors offer monitoring products. Popular evaluation criteria for differentiating products and determining suitability include:

  • Deployment Model: How does the product collect events? What events and API calls can it collect for inspection? Typically these products use one of two deployment models: an agent embedded in the host OS, or a fully privileged container-based monitor running in the Docker environment. How difficult is it to deploy collectors? Do the host-based agents require a host reboot to deploy or update? You will need to assess what types of events can be captured.
  • Policy Management: Evaluate how easy it is to build new policies – or modify existing ones – within the tool. You will want a standard set of security policies from the vendor to help speed up deployment, but over the lifetime of the product you will stand up and manage your own policies, so ease of management is key to your long-term happiness.
  • Behavioral Analysis: What, if any, behavioral analysis capabilities are available? How flexible are they – meaning what types of data can be used in policy decisions? Behavioral analysis requires starting with system monitoring to determine 'normal' behavior. The criteria for detecting aberrations are often limited to a few sets of indicators, such as user ID or IP address. The more you have available – such as system calls, network ports, resource usage, image ID, and inbound and outbound connectivity – the more flexible your controls can be.
  • Activity Blocking: Does the vendor provide the capability to block requests or activity? It is useful to block policy violations in order to ensure containers behave as intended. Care is required, as these policies can disrupt new functionality and cause friction between Development and Security, but blocking is invaluable for maintaining Security's control over what containers can do.
  • Platform Support: Verify that your monitoring tool supports the OS platforms you use (CentOS, CoreOS, SUSE, Red Hat, etc.) and the orchestration tool of your choice (such as Swarm, Kubernetes, Mesos, or ECS).

Audit and Compliance

What happened with the last build? Did we remove sshd from that container? Did we add the new security tests to Jenkins? Is the latest build in the repository? Many of you reading this may not know the answers off the top of your head, but you should know where to get them: log files. Git, Jenkins, JFrog, Docker, and just about every development tool you use creates log files, which we use to figure out what happened – and often what went wrong. There are people outside Development – namely Security and Compliance – who have similar security-related questions about what is going on with the container environment, and whether security controls are functioning. Logs are how you get these external teams the answers they need.

Most of the earlier topics in this research, such as build environment and runtime security, have associated compliance requirements. These may be externally mandated, like PCI-DSS or GLBA, or internal requirements from internal audit or security teams. Either way the auditors will want to see that security controls are in place and working. And no, they won't just take your word for it – they will want audit reports for specific event types relevant to their audit. Similarly, if your company has a Security Operations Center, in order to investigate alerts or determine whether a breach has occurred, they will want to see all system and activity logs over a period of time in order to reconstruct events. You really don't want to get too deep into this stuff – just get them the data and let them worry about the details. The good news is that most of what you need is already in place. During our investigation for this series we did not speak with any firms which did not have
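To make the deterministic versus behavioral policy distinction concrete (the toy example promised earlier in this post), here is a small sketch of a policy evaluator. The event fields, thresholds, and baseline values are all invented for illustration; real monitoring products collect far richer data.

```python
# Toy policy evaluator showing the two policy styles described above:
# deterministic allow/deny rules plus a simple behavioral baseline check.
# Event fields and thresholds are invented for illustration.

BLOCKED_OUTBOUND_PORTS = {23, 3389}        # deterministic: never allowed
MEMORY_BASELINE_MB = {"billing-api": 512}  # learned "normal" usage per image

def evaluate(event):
    violations = []
    # Deterministic rule: flag connections to ports that are never allowed.
    if event.get("type") == "outbound_connection" and event.get("port") in BLOCKED_OUTBOUND_PORTS:
        violations.append(f"container {event['container']} opened blocked port {event['port']}")
    # Behavioral rule: flag memory use 50% above the image's learned baseline.
    if event.get("type") == "resource_usage":
        baseline = MEMORY_BASELINE_MB.get(event.get("image"))
        if baseline and event["memory_mb"] > baseline * 1.5:
            violations.append(
                f"container {event['container']} using {event['memory_mb']}MB, "
                f"baseline for {event['image']} is {baseline}MB"
            )
    return violations

print(evaluate({"type": "outbound_connection", "container": "c1", "port": 3389}))
print(evaluate({"type": "resource_usage", "container": "c2", "image": "billing-api", "memory_mb": 900}))
```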


Firestarter: How to Tell When Your Cloud Consultant Sucks

Mike and Rich had a call this week with another prospect who was given some pretty bad cloud advice. We spend a little time trying to figure out why we keep seeing so much bad advice out there (seriously, BIG B BAD not OOPSIE bad). Then we focus on the key things to look for to figure out when someone is leading you down the wrong path in your cloud migration. Oh… and for those with sensitive ears, time to engage the explicit flag. Watch or listen:


Assembling a Container Security Program: Container Validation

This post is focused on security testing your code and container, and verifying that both conform to security and operational practices. One of the major advances over the last year or so is the introduction of security features for the software supply chain, from both Docker itself and a handful of third-party vendors. All the solutions focus on slightly different threats to container construction, with Docker providing tools to certify that containers have made it through your process, while third-party tools are focused on vetting the container contents. So Docker provides things like process controls, digital signing services to verify chain of custody, and creation of a Bill of Materials based on known trusted libraries. In contrast, third-party tools harden container inputs, analyze resource usage, perform static code analysis, analyze the composition of libraries, and check against known malware signatures; they can then perform granular policy-based container delivery based on the results. You will need a combination of both, so we will go into a bit more detail:

Container Validation and Security Testing

  • Runtime User Credentials: We could go into great detail here about runtime user credentials, but will focus on the most important thing: don't run the container processes as root, as that provides attackers access to attack other containers or the Docker engine. If you get that right you're halfway home for IAM. We recommend using specific user accounts with restricted permissions for each class of container. We understand that roles and permissions change over time, which requires some work to keep permission maps up to date, but this provides a failsafe when developers change runtime functions and resource usage.
  • Security Unit Tests: Unit tests are a great way to run focused test cases against specific modules of code – typically created as your dev teams find security and other bugs – without needing to build the entire product every time. This can cover things such as XSS and SQLi testing of known attacks against test systems. Additionally, the body of tests grows over time, providing a regression testbed to ensure that vulnerabilities do not creep back in. During our research we were surprised to learn that many teams run unit security tests from Jenkins. Even though most are moving to microservices, fully supported by containers, they find it easier to run these tests earlier in the cycle. We recommend unit tests somewhere in the build process to help validate that the code in containers is secure.
  • Code Analysis: A number of third-party products perform automated binary and white box testing, failing the build if critical issues are discovered. We recommend you implement code scans to determine whether the code you build into a container is secure. Many newer tools have full RESTful API integration within the software delivery pipeline. These tests usually take a bit longer to run, but still fit within a CI/CD deployment framework.
  • Composition Analysis: A useful technique is to check library and supporting code against the CVE (Common Vulnerabilities and Exposures) database to determine whether you are using vulnerable code. Docker and a number of third parties provide tools for checking common libraries against the CVE database, and they can be integrated into your build pipeline. Developers are not typically security experts, and new vulnerabilities are discovered in common tools weekly, so an independent checker to validate components of your container stack is essential (a simple gate of this kind is sketched after this list).
  • Resource Usage Analysis: What resources does the container use? What external systems and utilities does it depend upon? To manage the scope of what containers can access, third-party tools can monitor runtime access to environment resources both inside and outside the container. Basically, usage analysis is an automated review of resource requirements. These metrics are helpful in a number of ways – especially for firms moving from a monolithic to a microservices architecture. Stated another way, this helps developers understand what references they can remove from their code, and helps Operations narrow down roles and access privileges.
  • Hardening: Over and above making sure what you use is free of known vulnerabilities, there are other tricks for securing applications before deployment. One is to check the contents of the container and remove items that are unused or unnecessary, reducing attack surface. Don't leave hard-coded passwords, keys, or other sensitive items in the container – even though this makes things easy for you, it makes them much easier for attackers. Some firms use manual scans for this, while others leverage tools to automate scanning.
  • App Signing and Chain of Custody: As mentioned earlier, automated builds include many steps and small tests, each of which validates that some action was taken to prove code or container security. You want to ensure the entire process was followed, and that somewhere along the way some well-intentioned developer did not subvert the process by sending along untested code. Docker now provides the means to sign code segments at different phases of the development process, and tools to validate the signature chain. While the code should be checked prior to being placed into a registry or container library, the work of signing images and containers happens during build. You will need to create specific keys for each phase of the build, sign code snippets on test completion but before the code is sent on to the next step in the process, and – most importantly – keep these keys secured so an attacker cannot create their own code signature. This gives you some guarantee that the vetting process proceeded as intended.
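As a concrete illustration of the "fail the build on critical findings" pattern used by the code and composition analysis tools above (the simple gate referenced in the list), here is a minimal sketch. The report format is hypothetical; any scanner that emits machine-readable findings can be gated the same way.

```python
# Sketch of a pipeline gate for composition analysis: read a scanner report
# (format is hypothetical) and fail the build if critical findings exist.
import json
import sys

SEVERITY_THRESHOLD = "critical"

def gate(report_path="scan-report.json"):
    with open(report_path) as fh:
        report = json.load(fh)
    # Assumed report shape: [{"package": ..., "cve": ..., "severity": ...}, ...]
    critical = [f for f in report if f.get("severity") == SEVERITY_THRESHOLD]
    for finding in critical:
        print(f"CRITICAL: {finding['package']} - {finding['cve']}")
    return 1 if critical else 0  # non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(gate())
```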


More on Bastion Accounts and Blast Radius

I have received some great feedback on my post last week on bastion accounts and networks, mostly that I left some gaps in my explanation which legitimately confused people. Plus, I forgot to include any pretty pictures. Let's work through things a bit more.

First, I tended to mix up bastion accounts and networks, often saying "account/networks". This was a feeble attempt to discuss something I mostly implement in Amazon Web Services that can also apply to other providers. In Amazon an account is basically an AWS subscription. You sign up for an account, and you get access to everything in AWS. If you sign up for a second account, all of that is fully segregated from every other customer in Amazon. Right now (and I think this will change in a matter of weeks) Amazon has no concept of master and sub accounts: each account is totally isolated unless you use some special cross-account features to connect parts of accounts together. For customers with multiple accounts, AWS has a mechanism called consolidated billing that rolls up all your charges into a single account, but that account has no rights to affect other accounts. It pays the bills, but can't set any rules or even see what's going on. It's like having kids in college: you're just a checkbook and an invisible texter.

If you, like Securosis, use multiple accounts, they are totally segregated and isolated. It's the same mechanism that prevents any random AWS customer from seeing anything in your account. This is very good segregation. There is no way for a security issue in one account to affect another, unless you deliberately open up connections between them. I love this as a security control: an account is like an isolated data center. If an attacker gets in, he or she can't get at your other data centers. There is no cost to create a new account, and you only pay for the resources you use. So it makes a lot of sense to have different accounts for different applications and projects. Free (virtual) data centers for everyone!!!

This is especially important because of cloud metastructure – all the management stuff, like web consoles and APIs, that enables you to do things like create and destroy entire class B networks with a couple of API calls. If you lump everything into a single account, more administrators (and other power users) need more access, and they all have more power to disrupt more projects. This is compartmentalization and segregation of duties 101, but we have never before had viable options for breaking everything into isolated data centers. And from an operational standpoint, the more you move into DevOps and PaaS, the harder it is to have everyone running in one account (or a few) without stepping on each other. These are the fundamentals of my blast radius post.

One problem comes up when customers need a direct connection from their traditional data center to the cloud provider. I may be all rah rah cloud awesome, but practically speaking there are many reasons you might need to connect back home. Managing this for multiple accounts is hard, but more importantly you can run into hard limits due to routing and networking issues. That's where a bastion account and network come in. You designate an account for your Direct Connect. Then you peer any other accounts that need data center access into that account (in AWS, using cross-account VPC peering support). I have been saying "bastion account/network" because in AWS this is a dedicated account with its own dedicated VPC (virtual network) for the connection.
Azure and Google use different structures, so it might be a dedicated virtual network within a larger account, but still isolated to a subscription, sub-account, or whatever mechanism they support to segregate projects. This means:

  • Not all your accounts need this access, so you can focus on the ones which do.
  • You can tightly lock down the network configuration and limit the number of administrators who can change it.
  • Those peering connections rely on routing tables, and you can better isolate what each peered account or network can access. One big Direct Connect essentially "flattens" the connection into your cloud network, which means anyone in the data center can route into and attack your applications in the cloud. The bastion structure provides multiple opportunities to better restrict network access to destination accounts. It is a way to protect your cloud(s) from your data center.
  • A compromise in one peered account cannot affect another account. AWS networking does not allow two accounts peered to the same account to talk to each other, so each project is better isolated and protected, even without firewall rules. For example, the administrator of a project can have full control over their account and usage of AWS services without compromising the integrity of the connection back to the data center, which they cannot affect – they only have access to the network paths they were provided. Their project is safe, even if another project in the same organization is totally compromised.

Hopefully this helps clear things up. Multiple accounts and peering are a powerful concept and security control. Bastion networks extend that capability to hybrid clouds. If my embed works, below you can see what it looks like (a VPC is a virtual network, and you can have multiple VPCs in a single account).
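For readers who want to see what the peering step looks like in practice, here is a minimal Python/boto3 sketch. The VPC IDs and account ID are placeholders, and the acceptance step (run from the project account) plus the route table entries are noted but not shown in full.

```python
# Sketch of the peering step described above, using boto3 with credentials
# for the bastion account. IDs are placeholders; the request must also be
# accepted from the project account, which is omitted here.
import boto3

ec2 = boto3.client("ec2")  # bastion account credentials

BASTION_VPC_ID = "vpc-0bastion000000000"   # placeholder
PROJECT_VPC_ID = "vpc-0project000000000"   # placeholder
PROJECT_ACCOUNT_ID = "111111111111"        # placeholder

response = ec2.create_vpc_peering_connection(
    VpcId=BASTION_VPC_ID,
    PeerVpcId=PROJECT_VPC_ID,
    PeerOwnerId=PROJECT_ACCOUNT_ID,
)
pcx_id = response["VpcPeeringConnection"]["VpcPeeringConnectionId"]
print(f"Peering requested: {pcx_id}")
# The project account then accepts the request and adds a route for the
# data center CIDR (and nothing broader) pointing at this peering connection.
```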


Assembling a Container Security Program: Runtime Security

This post will focus on the 'runtime' aspects of container security. Unlike the tools and processes discussed in previous sections, here we will focus on containers in production systems. This includes which images are moved into production repositories, security around selecting and running containers, and the security of the underlying host systems.

Runtime Security

  • The Control Plane: Our first order of business is ensuring the security of the control plane – the platforms for managing host operating systems, the scheduler, the container engine(s), the repository, and any additional deployment tools. Again, as we advised for build environment security, we recommend limiting access to specific administrative accounts: one with responsibility for operating and orchestrating containers, and another for system administration (including patching and configuration management). We recommend network segregation, and physical (for on-premise) or logical (for cloud and virtual) segregation of these systems.
  • Running the Right Container: We recommend establishing a trusted image repository and ensuring that your production environment can only pull containers from that trusted source. Ad hoc container management is a good way to facilitate bypassing of security controls, so we recommend scripting the process to avoid manual intervention and ensure that the latest certified container is always selected. Second, you will want to check application signatures prior to putting containers into the repository. Trusted repository and registry services can help by rejecting containers which are not properly signed. Fortunately many options are available, so find one you like. Keep in mind that if you build many containers each day, a manual process will quickly break down; you'll need to automate the work and enforce security policies in your scripts. Remember, it is okay to have more than one image repository – if you are running across multiple cloud environments, there are advantages to leveraging the native registry in each. Beware the discrepancies between platforms, which can create security gaps. (A simple runtime check along these lines is sketched at the end of this post.)
  • Container Validation and BOM: What's in the container? What code is running in your production environment? How long ago did we build this container image? These are common questions asked when something goes awry. In case of container compromise, a very practical question is: how many containers are currently running this software bundle? One recommendation – especially for teams which don't perform much code validation during the build process – is to leverage scanning tools to check pre-built containers for common vulnerabilities, malware, root account usage, bad libraries, and so on. If you keep containers around for weeks or months, it is entirely possible a new vulnerability has since been discovered, and the container is now suspect. Second, we recommend using the Bill of Materials capabilities available in some scanning tools to catalog container contents. This helps you identify other potentially vulnerable containers and scope remediation efforts.
  • Input Validation: At startup containers accept parameters, configuration files, credentials, JSON, and scripts. In some more aggressive scenarios, 'agile' teams shove new code segments into a container as input variables, making existing containers behave in fun new ways. Either through manual review or by leveraging a third-party security tool, you should review container inputs to ensure they meet policy. This can help you prevent someone from forcing a container to misbehave, or simply prevent developers from making dumb mistakes.
  • Container Group Segmentation: Docker does not provide container-level restrictions on which containers can communicate with other containers, systems, hosts, IPs, etc. Basic network security is insufficient to prevent one container from attacking another, calling out to a Command and Control botnet, or other malicious behavior. If you are using a cloud services provider you can leverage their security zones and virtual network capabilities to segregate containers and specify what they are allowed to communicate with, over which ports. If you are working on-premise, we recommend you investigate products which enable you to define equivalent security restrictions. In this way each application has an analogue to a security group, which enables you to specify which inbound and outbound ports are accessible to and from which IPs, and can protect containers from unwanted access.
  • Blast Radius: A good option when running containers in cloud services, particularly IaaS clouds, is to run different containers under different cloud user accounts. This limits the resources available to any given container. If a given account or container set is compromised, the same cloud service restrictions which prevent tenants from interfering with each other limit possible damage between accounts and projects. For more information see our post on limiting blast radius with user accounts.

Platform Security

In Docker's early years, when people talked about 'container' security, they were really talking about how to secure the Linux operating system underneath Docker. Security was more about the platform and traditional OS security measures. If an attacker gained control of the host OS, they could pretty much take control of anything they wanted in containers. The problem was that security of containers, their contents, and even the Docker engine were largely overlooked. This is one reason we focused our research on the things that make containers – and the tools that build them – secure. That said, no discussion of container security can be complete without some mention of OS security, so we would be remiss if we did not talk about host/OS/engine security, at least a bit. Here we will cover some of the basics, but we will not go into depth on securing the underlying OS. We could not do that justice within this research, there is already a huge amount of quality documentation available for the operating system of your choice, and there are much more knowledgeable sources to address your concerns and questions on OS security.

Kernel Hardening: Docker security depends fundamentally on the underlying operating system to limit access between 'users' (containers) on the system. This resource isolation model is built atop a virtual map called namespaces, which maps specific users or groups of users to a subset of resources (networks, files, IPC, etc.) within their namespace. Containers should run under a specified user ID. Hardening starts with a secure
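Here is the simple runtime check referenced under 'Running the Right Container' above, sketched with the Docker SDK for Python. The trusted registry hostname is a placeholder, and this only covers two of the checks discussed (root user and image origin); it is an illustration of the idea rather than a complete control.

```python
# Sketch using the Docker SDK for Python: flag running containers that run
# as root or whose image does not come from an assumed trusted registry.
# The registry hostname is a placeholder for your own trusted repository.
import docker

TRUSTED_REGISTRY = "registry.internal.example.com"  # placeholder

def audit_running_containers():
    client = docker.from_env()
    findings = []
    for container in client.containers.list():
        config = container.attrs["Config"]
        user = config.get("User") or "root"  # an empty User field means root
        if user in ("root", "0"):
            findings.append(f"{container.name}: running as root")
        image_tags = container.image.tags or [config.get("Image", "")]
        if not any(tag.startswith(TRUSTED_REGISTRY + "/") for tag in image_tags):
            findings.append(f"{container.name}: image {image_tags} not from trusted registry")
    return findings

if __name__ == "__main__":
    for finding in audit_running_containers():
        print(finding)
```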


Assembling a Container Security Program: Securing the Build

As we mentioned in our last post, most people don't seem to consider the build environment when thinking about container security, but it's important. Traditionally, the build environment is the domain of developers, and they don't share a lot of details with outsiders (in this case, Operations folks). But this is beginning to change with Continuous Integration (CI) or full Continuous Deployment (CD) and more automated deployment, where the output of the build environment is more likely to go straight into production. This means that operations, quality assurance, release management, and other groups find themselves having to cooperate on building automation scripts and working together more closely. Collaboration means a more complex, distributed working environment, with more stakeholders having access. DevOps is rapidly breaking down barriers between groups, even getting some security teams to contribute test scripts and configuration updates. Better controls are needed to restrict who can alter the build environment and update code, along with an audit process to validate who did what.

Don't forget why containers are so attractive to developers. First, a container simplifies building and packaging application code – abstracting the app from its physical environment – so developers can worry about the application rather than its supporting systems. Second, the container model promotes lightweight services, breaking large applications down into small pieces, easing modification and scaling – especially in cloud and virtual environments. Finally, a very practical benefit is that container startup is nearly instant, allowing agile scaling up and down in response to demand. It is important to keep these in mind when considering security controls, because any control that reduces one of these core advantages will not be considered, or is likely to be ignored.

Build environment security breaks down into two basic areas. The first is access and usage of the basic tools that form the build pipeline – including source code control, build tools, the build controller, container management facilities, and runtime access. At Securosis we often call this the "management plane", as these interfaces – whether API or GUI – are used to set access policies, automate behaviors, and audit activity. The second is security testing of your code and the container, validating that both conform to security and operational practices. This post will focus on the former.

Securing the Build

Here we discuss the steps to protect your code – more specifically to protect the build systems, to ensure they implement the build process you intended. This is conceptually very simple, but there are many pieces to this puzzle, so implementation can get complicated. People call this Secure Software Delivery, Software Supply Chain Management, and Build Server Security – take your pick; for our purposes today these terms are synonymous. It is management of the assemblage of tools which oversee and implement your process. Following is a list of recommendations for securing platforms in the build environment to ensure secure container construction. We include tools from Docker and others which automate and orchestrate source code, building, the Docker engine, and the repository. For each tool you will employ a combination of identity management, roles, platform segregation, secure storage of sensitive data, network encryption, and event logging. Some of the subtleties follow.

  • Source Code Control: Stash, Git, GitHub, and several variants are common. Source code control is one of the tools with a wide audience, because it is now common for Security, Operations, and Quality Assurance to all contribute code, tests, and configuration data. Distributed access means all traffic should run over SSL or VPN connections. User roles and access levels are essential for controlling who can do what, but we recommend requiring token-based or certificate-based authentication, with two-factor authentication as a minimum for all administrative access.
  • Build Tools and Controllers: The vast majority of development teams we spoke with use build controllers like Bamboo and Jenkins, with these platforms becoming an essential part of their automated build processes. They provide many pre-, post-, and intra-build options, and can link to a myriad of other facilities, complicating security. We suggest full network segregation of the build controller system(s), and locking down network connections to the source code controller and Docker services. If you can, deploy build servers as on-demand containers – this ensures standardization of the build environment and consistency of new containers. We recommend you limit access to the build controllers as tightly as possible, and leverage built-in features to restrict capabilities when developers need access. We also suggest locking down configuration and control data to prevent tampering with build controller behavior. Keep any sensitive data – such as SSH keys, API access keys, database credentials, and the like – in a secure database or data repository (such as a key manager, .dmg file, or vault) and pull credentials on demand, to ensure sensitive data never sits on disk unprotected (see the sketch at the end of this post). Finally, enable the logging facilities or add-ons available for the build controller, and stream output to a secure location for auditing.
  • Docker: You will use Docker as a tool for pre-production as well as production, building the build environment and test environments to vet new containers. As with build controllers like Jenkins, you'll want to limit Docker access in the build environment to specific container administrator accounts. Limit network access to accept content only from the build controller system(s) and whatever trusted repository or registry you use.

Our next post will discuss validation of individual containers and their contents.
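The sketch referenced under 'Build Tools and Controllers' above: a hypothetical example of pulling a build credential on demand from AWS Secrets Manager via boto3, so the secret never sits on the build server's disk. The secret name is a placeholder, and any vault with an API (HashiCorp Vault, etc.) supports the same pattern.

```python
# Sketch of the "pull credentials on demand" recommendation: the build job
# fetches the deploy key from a secrets service at runtime instead of
# storing it on the build server's disk. The secret name is a placeholder.
import json

import boto3

def get_build_secret(secret_id="build/registry-deploy-key"):
    client = boto3.client("secretsmanager")
    value = client.get_secret_value(SecretId=secret_id)["SecretString"]
    return json.loads(value)  # e.g. {"username": "...", "token": "..."}

if __name__ == "__main__":
    creds = get_build_secret()
    # Use the credential in memory (e.g. for a registry login); never write it to disk.
    print(f"fetched credential for user {creds['username']}")
```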


Bastion (Transit) Networks Are the DMZ to Protect Your Cloud from Your Datacenter

In an earlier post I mentioned bastion accounts or virtual networks. Amazon calls these "transit VPCs" and has a good description. Before I dive into details, the key difference is that I focus on using the concept as a security control, while Amazon focuses on network connectivity and resiliency. That's why I call these "bastion accounts/networks". Here is the concept and where it comes from:

As I have written before, we recommend you use multiple accounts with a partitioned network architecture, which often results in 2-4 accounts per cloud application stack (project). This limits the 'blast radius' of an account compromise, and enables tighter security control on production accounts. The problem is that a fair number of applications deployed today still need internal connectivity. You can't necessarily move everything up to the cloud right away, and many organizations have entirely legitimate reasons to keep some things internal. If you follow our multiple-account advice, this can greatly complicate networking and direct connections to your cloud provider. Additionally, if you use a direct connection with a monolithic account and network at your cloud provider, that reduces security on the cloud side. Your data center is probably the weak link – unless you are as good at security as Amazon/Google/Microsoft. But if someone compromises anything on your corporate network, they can use it to attack cloud assets.

One answer is to create a bastion account/network. This is a dedicated cloud account, with a dedicated virtual network, for the direct connection back to your data center. You then peer the bastion network as needed with any other accounts at your cloud provider. This structure enables you to still use multiple accounts per project, with a smaller number of direct connections back to the data center. It even supports multiple bastion accounts, which only link to portions of your data center, so they only gain access to the necessary internal assets, providing better segregation. Your ability to do this depends a bit on your physical network infrastructure, though.

You might ask how this is more secure. It provides more granular access to other accounts and networks, and enables you to restrict access back to the data center. When you configure routing you can ensure that virtual networks in one account cannot access another account. If you just use a direct connection into a monolithic account, it becomes much harder to manage and maintain those restrictions. It also supports more granular restrictions from your data center to your cloud accounts (some of which can be enforced at the routing level, not just with firewalls), and because you don't need everything to phone home, accounts which don't need direct access back to the data center are never exposed.

A bastion account is like a weird-ass DMZ to better control access between your data center and cloud accounts; it enables multiple-account architectures which would otherwise be impossible. You can even deploy virtual routing hardware, as per the AWS post, for more advanced configurations. It's far too late on a Friday for me to throw a diagram together, but if you really want one, or I didn't explain clearly enough, let me know via Twitter or a comment and I'll write it up next week.
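One way to keep the routing discipline described above honest is to audit the route tables in each peered project account, confirming they only send the data center CIDR through the bastion peering connection and contain no broader peering routes. Here is a hypothetical Python/boto3 sketch of that check; the CIDR is a placeholder for your own on-premise address space.

```python
# Sketch: audit a peered project account to confirm its route tables only
# send the data center CIDR through the bastion peering connection.
# The CIDR is a placeholder; run with credentials for the project account.
import boto3

DATACENTER_CIDR = "10.0.0.0/8"   # placeholder for on-premise space
ALLOWED = {DATACENTER_CIDR}

def audit_routes():
    ec2 = boto3.client("ec2")
    findings = []
    for table in ec2.describe_route_tables()["RouteTables"]:
        for route in table["Routes"]:
            pcx = route.get("VpcPeeringConnectionId")
            cidr = route.get("DestinationCidrBlock")
            if pcx and cidr and cidr not in ALLOWED:
                findings.append(
                    f"{table['RouteTableId']}: unexpected peering route to {cidr} via {pcx}"
                )
    return findings

if __name__ == "__main__":
    for finding in audit_routes():
        print(finding)
```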


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.