
Container Security 2018: Build Pipeline Security

Most people fail to consider the build environment when thinking about container security, but it is critical. The build environment is traditionally the domain of developers, who don’t share much detail with outsiders (meaning security teams). But with Continuous Integration (CI) or full Continuous Deployment (CD), we’re shooting new code into production… potentially several times a day. An easy way for an attacker to hack an application is to get into its development or build environment – usually far less secure than production – and alter code or add new code to containers. The risk is aggravated as DevOps rapidly breaks down barriers between groups, with operations and security teams given access so they can contribute to the process. Collaboration demands a more complex and distributed working environment, with more stakeholders. Better controls are needed to restrict who can alter the build environment and update code, along with an audit process to validate who did what.

It’s also prudent to keep in mind the reasons developers find containers so attractive, lest you adopt security controls which limit their usefulness. First, a container simplifies building and packaging application code – abstracting the app from its physical environment – so developers can worry about the application rather than its supporting systems. Second, the container model promotes lightweight services – breaking large applications down into small pieces, easing modification and scaling… especially in cloud and virtual environments. Finally, a very practical benefit is that container startup is nearly instant, allowing agile scaling up and down in response to demand. Keep these features in mind when considering security controls, because any control that reduces one of these core advantages is likely to be rejected or ignored.

Build pipeline security breaks down into two basic areas. The first is application security: essentially testing your code and its container to ensure they conform to security and operational practices. This includes tools such as static analysis, dynamic analysis, composition analysis, scanners built into the IDE, and tools which monitor runtime behavior. We will cover these topics in the next section. The second area of concern is the tools used to build and deploy applications – including source code control, build tools, the build controller, container registries, container management facilities, and runtime access control. At Securosis we often call this the “management plane”, as these interfaces – whether API or GUI – are used to set access policies, automate behaviors, and audit activity. Let’s dive into build tool security.

Securing the Build

The problem is conceptually simple, but there are many tools used for building software, and most have several plug-ins which alter how data flows, so environments can get complicated. You can call this Secure Software Delivery, Software Supply Chain Management, or Build Server Security – take your pick, because these terms are equivalent for our purposes. Our goal is to shed light on the tools and processes developers use to build applications, so you can better gauge the threats, as well as the security measures needed to secure these systems. Following is a list of recommendations for securing platforms in the build environment to ensure secure container construction. We include tools from Docker and others to automate and orchestrate source code, building, the Docker engine, and the repository.
For each tool you select some combination of identity management, roles, platform segregation, secure storage of sensitive data, network encryption, and event logging.

Source Code Control: Stash, Git, GitHub, and several variants are common. Source code control has a wide audience because it is now common for Security, Operations, and Quality Assurance to all contribute code, tests, and configuration data. Distributed access means all traffic should run over SSL or VPN connections. User roles and access levels are essential for controlling who can do what, but we recommend requiring token-based or certificate-based authentication, or two-factor authentication at a minimum, for all administrative access. This is good housekeeping whether you are using containers or not, but containers’ lack of transparency, coupled with automated processes pushing them into production, amplifies the need to protect the build.

Build Tools and Controllers: The vast majority of development teams we speak with use build controllers like Bamboo and Jenkins, and these platforms have become an essential part of their automated build processes. They provide many pre-, post-, and intra-build options, and can link to a myriad of other facilities. This is great for integration flexibility but can complicate security. We suggest full network segregation of the build controller system(s), locking down network connections to limit what can communicate with them. If you can, deploy build servers as on-demand containers without administrative access, to ensure standardization of the build environment and consistency of new containers. Limit access to the build controllers as tightly as possible, and leverage built-in features to restrict capabilities when developers need access. We also suggest locking down configuration and control data to prevent tampering with build controller behavior. Keep any sensitive data – ssh keys, API access keys, database credentials, and the like – in a secure database or data repository (such as a key manager, encrypted .dmg file, or vault), pulling credentials on demand so sensitive data never sits on disk unprotected. Finally, enable the build controller’s built-in logging facilities or logging add-ons, and stream output to a secure location for auditing.

Container Platform Security: Whether you use Docker or another tool to compose and run containers, your container manager is a powerful tool which controls what applications run. As with build controllers like Jenkins, you’ll want to limit access to specific container administrator accounts. Limit network access to only build controller systems. Make sure Docker client access is segregated between development, test, and production, to limit who and which services can launch containers in production.

Container Registry Security: We need to discuss container registries, because developers and IT teams often make the same two mistakes. The first is to allow anyone to add containers to the registry, regardless of whether they have been vetted. In such an environment, unvetted or even malicious containers can make their way into production.
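To make “pull credentials on demand” concrete, here is a minimal sketch using HashiCorp Vault’s Python client (hvac), one of several tools that fit this pattern. The Vault address, the injected token, and the ci/deploy secret path are all hypothetical.

```python
# Sketch: a build step pulls deployment credentials from a vault at run
# time, so nothing sensitive sits on the build server's disk or in its
# configuration. Assumes a reachable Vault server and a short-lived
# token injected by the build controller; the path "ci/deploy" and the
# key "registry_password" are hypothetical.
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g. https://vault.internal:8200
    token=os.environ["VAULT_TOKEN"],  # injected per build, never stored
)

secret = client.secrets.kv.v2.read_secret_version(path="ci/deploy")
registry_password = secret["data"]["data"]["registry_password"]

# Use the credential to push the image, then let it fall out of scope;
# it is never written to a file or echoed into the build log.
```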


Container Security 2018: Threats and Concerns

To better understand which container security areas you should focus on, and why we recommend particular controls, it helps to understand which threats need to be addressed and which areas containers affect most. Some threats and issues are well-known, some are purely lab proofs of concept, and others are threat vectors which attackers have yet to exploit – typically because there is so much low-hanging fruit elsewhere. So what are the primary threats to container environments?

Threats to the Build Environment

The first area which needs protection is the build environment. It’s not first on most people’s lists for container security, but I start here because it is typically the least secure, and the easiest place to insert malicious code. Developers tend to loathe security in development because it slows them down. That is why there is an entire industry dedicated to test data management and data masking: because developers tend to end-run around security whenever it slows their build and testing processes. What kinds of threats are we talking about, specifically? Things like malicious or moronic source code changes. Malicious or mistaken alterations to automated build controllers. Configuration scripts with errors, or which expose credentials. The addition of insecure libraries or down-rev/insecure versions of existing code. We want to know whether runtime code has been scanned for vulnerabilities. And we worry about failure to audit all the above and catch any errors.

Container Workload and Contents

What the hell is in the container? What does it do? Is that even the correct version? These are common questions from operations folks. They have no idea. Nor do they know whether developers included tools like ssh in a container so they can alter its contents on the fly. Just as troubling is the difficulty of mapping a container’s access rights to OS and host resources, which can break operational security and open up the entire stack to various attacks. Security folks are typically unaware of what – if any – container hardening may have been performed. You want to know each container’s contents have been patched, vetted, hardened, and registered prior to deployment.

Runtime Behavior

Organizations worry a container will attack or infect another container. They worry a container may quietly exfiltrate data, or just exhibit suspicious behavior. We have seen attacks extract source code, and others add new images to registries – in both cases the platforms were unprotected by identity and access management. Organizations need to confirm that access to the Docker client is sufficiently gated through access controls to limit who controls the runtime environment. They worry about containers running a long time, without rotation to newer patched versions. And whether the network has been properly configured to limit damage from compromise. And about attackers probing containers, looking for vulnerabilities.

Operating System Security

Finally, the underlying operating system’s security is a concern. The key question is whether it is configured correctly to restrict each container’s access to the subset of resources it needs, and to effectively block everything else. Customers worry that a container will attack the underlying host OS or the container engine. They worry that the container engine may not sufficiently shield the underlying OS.
If an attack on the host platform succeeds, it’s pretty much game over for that cluster of containers, and may give malicious code sufficient access to pivot and attack other systems.

Orchestration Manager Security

A key reason to update and reissue this report is a change in the container landscape: focus has shifted to the orchestration managers which control containers. It sounds odd, but as containers have become a commodity unit of application delivery, organizations have begun to feel they understand containers, and attention has shifted to container management. Attention and innovation have shifted to cluster orchestration, with Kubernetes the poster child for optimizing the value and use of containers. But most of these tools are incredibly complex. And like many software products, the focus of orchestration tools is scalability and ease of management – not security. As you probably suspected, orchestration tools bring a whole new set of security issues and vulnerabilities. Insecure default configurations, as well as permission escalation and code injection vulnerabilities, are common. What’s more, most organizations issue certificates, identity tokens, and keys from the orchestration manager as containers are launched. We will drill down into these issues and what to do about them in the remainder of this series.
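To see why access to the orchestration manager deserves the same scrutiny as the containers themselves, consider how easily secrets can be read back out of it. Below is a minimal sketch using the official Kubernetes Python client; the secret name and namespace are hypothetical.

```python
# Sketch: reading a secret from the Kubernetes API. Any identity with
# "get" permission on secrets in this namespace can do the same, which
# is why RBAC on the orchestration manager matters as much as image
# security. The secret "registry-creds" in namespace "build" is made up.
import base64
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() in a pod

v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("registry-creds", "build")

# Kubernetes stores secret values base64-encoded, not encrypted, unless
# encryption at rest has been explicitly configured.
password = base64.b64decode(secret.data["password"]).decode()
```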


Building a Container Security Program 2018: Introduction

The explosive growth of containers is not surprising – these technologies, such as Docker, alleviate several problems for developers deploying applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for microservices, generalized management, and horizontal scalability – all of which containers help provide. When a single technology enables us to address several technical problems at once, it’s very compelling. But this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design), and gives security pros nightmares. We run more code faster, but must accept a loss of visibility inside the container. This raises the question: how can we introduce security without losing the benefits of containers?

Containers scare the hell out of security pros because they are so opaque. The burden of securing containers falls across the Development, Operations, and Security teams – but none of these groups always knows how to tackle the issues. Security and development teams may not even be fully aware of the security problems they face: security is typically ignorant of the tools and technologies developers use, and developers don’t always know what risks to look for. The problem extends beyond containers to the entire build, deployment, and runtime environments.

The container security space has changed substantially since our initial research 18-20 months back. Security of the orchestration manager is a primary concern, as organizations rely more heavily on tools to deploy and scale out applications. We have seen a sharp increase in adoption of container services (PaaS) from various cloud vendors, which changes how organizations need to approach security. We reached forward a bit in our first container security paper, covering build pipeline security issues, because we felt that was a hugely underserved area; over the last 18 months DevOps practitioners have taken note, and this has become the top question we get. Just behind that is the need for secrets management to issue container credentials and secure identity. The rapid pace of change in this market means it’s time for a refresh.

We get a ton of calls from people moving towards – or actively engaged in – DevOps, so we will target this research at both security practitioners and developers & IT operations. We will cover some reasons containers and container orchestration managers create new security concerns, as well as how to go about creating security controls across the entire spectrum. We will not go into great detail on how to secure apps in general here – instead we will focus on the build, container management, deployment, platform, and runtime security issues which arise with containers. As always, we hope you will ask questions and participate in the process. Community involvement makes our research better, so we welcome your inquiries, comments, and suggestions.


New Paper: Understanding Secrets Management

Traditional application security concerns are shifting, responding to disruptive technologies and development frameworks. Cloud services, containerization, orchestration platforms, and automated build pipelines – to name just a few – all change the way we build and deploy applications. Each affects security in a different way. One of the new application security challenges is to provision machines, applications, and services with the credentials they need at runtime. When you remove humans from the process things move much faster – but knowing how and when to automatically provide passwords, authentication tokens, and certificates is not an easy problem. This secrets management problem is not new, but our need grows exponentially when we begin orchestrating the entire application build and deployment process. We need to automate distribution and management of secrets to ensure secure application delivery.

This research paper covers the basic use cases for secrets management, then dives into the different technologies that address this need. Many of these technologies assume a specific application deployment model, so we also discuss the pros and cons of the different approaches. We close with recommendations on product selection and decision criteria.

We would like to thank the folks at CyberArk for getting behind this research effort and licensing this content. Support like this enables us to deliver research under our Totally Transparent Research process and bring this content to you free of charge. Not even a registration wall. Free, and we respect your privacy. Not a bad deal. As always, if you have comments or questions on our research please shoot us an email. If you want to comment or make suggestions for future iterations of this research, please leave a comment here. You can go directly to the full paper: Securosis_Secrets_Management_JAN2018_FINAL.pdf Or visit the research library page.


Secrets Management: Deployment Considerations

We will close out this series with a look at several operational considerations for selecting a secrets management platform. There are quite a few secrets management tools on the market, both commercial and otherwise, and each does things a bit differently. Rather than a giant survey of every product and how it works, we will focus on the facets of these products which enable them to handle the use cases discussed earlier. Central questions include how these platforms deploy, how they provide scalability and resiliency, and how they integrate with the services they supply secrets to. To better distinguish between products you need to understand why they were created, because core functions and deployment models are heavily influenced by a platform’s intended use.

Classes of Products

Secrets management platforms fall into two basic categories: general-purpose and single-purpose. General-purpose solutions provide secrets for multiple use cases, with many types of secrets. General-purpose systems can automatically provision secrets to just about any type of application – from sending a user name and password to a web page, to issuing API keys, to dynamic cloud workloads. Single-purpose options – commonly called ‘embedded’ because they install into another platform – are typically focused on one use case. For example, embedded solutions focus on provisioning secrets to Docker containers, and nest into your orchestration manager (e.g., Swarm, Kubernetes, DC/OS). This is a critical distinction, because a product embedded into a container manager may not work for non-container use cases. The good news is that many services are deployed this way, so these tools are still useful in many environments; and because they leverage existing infrastructure they often integrate well and scale easily. These platforms typically leverage specific constructs of the orchestration manager or container environment to provide secrets. They also tend to make assumptions about how secrets are to be used; for example they may leverage a Kubernetes ‘namespace’ to enforce policy or a UNIX ‘namespace’ to distribute secrets. Because containers are ephemeral, ephemeral or ‘dynamic’ secrets are often preferred by these secrets managers. The bad news is that some embedded tools assume your cluster is a secure environment, so they can safely pass and store secrets in clear text. Many embedded tools fully encrypt secrets, but they may not support diverse types of secrets or integrate with non-containerized applications. These specializations are neither good nor bad – it depends on what you need from secrets management – but embedded systems may be limited. General-purpose products are typically more flexible, and may take more time to set up, but provide a breadth of functions not generally found in embedded tools.

Deployment Models

Solitary Servers

Common among early tools focused on personal productivity, solitary servers are exactly what the name implies. They typically consist of a central secret storage database and a single server instance that manages it. Basically all functions – including user interfaces, storage management, key management, authentication, and policy management – are handled by a single service. These tools are commonly used via command-line interfaces or an API, and work best for a small number of systems.

Client-Server Architecture

The label for this model varies from vendor to vendor.
Primary/Secondary, Manager/Worker, Master/Slave, and Service/Agent are just some of the terms used to describe the hierarchical relationship between the principal service which manages the repository of secrets, and the client which works with the calling application. This is by far the most common architecture. There is a repository where encrypted secrets are stored, usually a database which is shared or replicated across one or more manager nodes. Each manager can work with one or more agents to support the needs of their service or application. This architecture helps provide scalability and reliability by spawning new clients and servers as needed. These products often deploy each component as a container, leveraging the same infrastructure as the applications they serve. Many embedded products use this model to scale.

Integration

We already talked about how secrets are shared between a secrets management tool and a recipient, whether human or machine. And we covered integration with container management and orchestration systems, which many tools were designed for. It’s time to mention the other common integration points and how each works.

Build Servers: Tools like Jenkins and Bamboo are used by software development teams to automate the building and verification of new code. These tools commonly access one or more repositories to get updated code, grab automation scripts and libraries to set up new environments, connect to virtual or cloud services to run tests, and sign code before moving tested code into another repository or container registry. Each action requires specific credentials before it can take place. Secrets management integration is performed either as a plug-in component to the build server or as an external service it communicates with.

IT Automation: Automated builds and the power of build managers have vastly improved development productivity, but orchestration tools are what move code at warp speed from developer desktops into production. Chef, Puppet, and Ansible are the trio of popular orchestration tools automating IT and development tasks, the backbone of Continuous Integration and Continuous Deployment. Virtually any programmable IT operation can be performed with these tools, including most VMware functions and all cloud services functions offered through an API. As with build servers, secrets management typically installs as a component or add-on module of the orchestration tool, or runs as a service.

Public Cloud Support: Public cloud is a special case. Conceptually, every use case outlined in this series is applicable to cloud services. And because every service in a public cloud is API enabled, it is the ideal playground for secrets management tools. What’s special about cloud services is how integration is managed; most secrets management tools which support the cloud directly integrate with cloud-native identity systems, cloud-native key management, or both. This offers advantages because secrets can then be provisioned in any region, to any supported service within that region, leveraging existing identities. The cloud service can fully define which users can access which secrets.
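As a sketch of that cloud-native integration model, assuming AWS Secrets Manager accessed via boto3 with a hypothetical secret name, note that the caller never presents a separate vault credential; the cloud’s own identity system authorizes the request.

```python
# Sketch: cloud-native secrets retrieval. Authorization comes from the
# IAM role already attached to the instance or container, so there is
# no bootstrap credential to protect. The name "prod/orders/db" is
# hypothetical.
import json
import boto3

sm = boto3.client("secretsmanager", region_name="us-east-1")
resp = sm.get_secret_value(SecretId="prod/orders/db")
creds = json.loads(resp["SecretString"])  # e.g. {"user": ..., "password": ...}
```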


Secrets Management: Features and Functions (updated)

In this section we will discuss the core features of a secrets management platform. There are basic functions every secrets management platform needs in order to address the core use cases. These include secure storage and disbursement of secrets, identity management, and API access, for starters. There are plenty of tools out there, many open source, and several bundled into other platforms. But when considering what you need from one of these platforms, the key thing to keep in mind is that most of them were originally developed to perform a single very specific task – such as injecting secrets into containers at runtime, integrating tightly with a Jenkins build server, or supplementing a cloud identity service. These do one thing well, but typically do not address multiple use cases. Now let’s take a closer look at the key features.

Core Features

Storing Secrets

Secrets management platforms are software applications designed to support other applications in a very important task: securely storing and passing secrets to the correct parties. The most important characteristic of a secrets management platform is that it must never leave secret information sitting around in clear text. Secure storage is job #1. Almost every tool we reviewed provides one or more encrypted repositories – which several products call a ‘vault’ – to store secret information in. As you insert or update secrets in the repository, they are automatically encrypted prior to being written to storage. Shocking though it may be, at least one product you may come across does not actually encrypt secrets – instead storing them in locations its developers consider difficult to access. The good news is that most vaults use vetted implementations of well-known encryption algorithms to encrypt secrets. But it is worth vetting any implementation, with your regulatory and contractual requirements in mind, prior to selecting one for production use. With the exception of select platforms which provide ‘ephemeral secrets’ (more on these later), all secret data is stored within these repositories for future use. Nothing is stored in clear text.

How each platform associates secrets with a given user identifier, credential, or role varies widely. Each platform has its own way of managing secrets internally, but most use a unique identifier or key-value pair to identify each secret. Some store multiple versions of a secret, so changes over time can be recalled if necessary for recovery or auditing, but the details are part of their secret sauce. The repository structure varies widely between offerings. Some store data in simple text or JSON files. Some use key-value pairs in a NoSQL-style database. Others use a relational or Big Data database of your choice. A couple employ multiple repository types to increase isolation between secrets and/or use cases. Repository architecture is seldom determined by strong security; more common drivers are low cost and ease of use for the product developers. And while a repository of any type can be secured, the choice of repository impacts scalability, how replication is performed, and how quickly you can find and provision secrets.

Another consideration is which data types a repository can handle. Most platforms we reviewed can handle any type of data you want to store: string values, text fields, N-tuple pairings, and binary data. Indexing is often performed automatically as you insert items, to speed lookup and retrieval later.
Some of these platforms really only handle strings, which simplifies the programmatic API but limits their usability. Again, products tailored to a particular use case may be unsuitable for other uses or across teams.

Identity and Access Management

Most secrets management platforms concede IAM to external Active Directory or LDAP services, which makes sense because most firms already have IAM infrastructure in place. Users authenticate to the directory store to gain access, and the server leverages existing roles to determine which functions and secrets each user is authorized to access. Most platforms are also able to use a third-party Cloud Identity Service or Privileged Access Management service, or to directly integrate with cloud-native directory services. Note that a couple of the platforms we reviewed manage identity and secrets internally, rather than using an external identity store. This is not a bad thing: they tend to include secrets management to supplement password or key management, and internal management of identity is part of their security architecture.

Access and Usage

Most platforms provide one or more programming interfaces. The most common, to serve secrets in automated environments, is an access API. A small and simple set of API calls is provided to authenticate a session, insert a record, locate a secret, and share a secret with a specific user or service. More advanced solutions also offer API access to advanced or administrative functions. Command-line access is also common, leveraging the same basic functions in a command-driven UNIX/Linux environment. A handful of others also offer a graphical user interface, either directly or indirectly, sometimes through another open source project.

Sharing Secrets

The most interesting aspect of a secrets management system is how it shares secrets with users, services, or applications. How do you securely provide a secret to its intended recipient? As with the repository discussed above, secrets in transit must be protected, which usually means encryption. And there are many different ways to pass secrets around. Let’s take a look at the common methods of secret passing.

Encrypted Network Communications: Authenticated services or users are passed secrets, often in clear text, within an encrypted session. Some use Secure Sockets Layer (SSL) for encrypted transport, which is not ideal, but thankfully most use current versions of Transport Layer Security (TLS), which also authenticates the recipient to the secrets management server.

PKI: Several secrets management platforms combine external identity management with a Public Key Infrastructure to validate recipients of secrets and transmit PKI-encrypted payloads. The platform determines who will receive a secret, and encrypts the content with the recipient’s public key. This ensures that only the intended recipient can decrypt the secret, using their private key.
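To make the PKI model concrete, here is a minimal sketch using the Python cryptography library. Key generation is inlined purely for illustration; in practice the recipient’s public key would come from their certificate, and the payload would be a real secret.

```python
# Sketch of PKI-based secret distribution: encrypt a secret under the
# recipient's RSA public key (OAEP padding), so only the holder of the
# matching private key can recover it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Stand-in for the recipient's key pair; normally only the public half
# is known to the secrets management server.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

ciphertext = recipient_key.public_key().encrypt(b"db-password-example", oaep)

# Only the recipient, holding the private key, can decrypt.
assert recipient_key.decrypt(ciphertext, oaep) == b"db-password-example"
```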


Secrets Management: Use Cases

This post will discuss why secrets management is needed at all, along with the diverse use cases which teams need it to address. In every case there is some secret data which needs to be sent – hopefully not in plain text – to an application or service. And in every case we want the ability to provide secrets both when an operator is present and automatically. The biggest single issue is that security around these secrets today is largely absent: they are kept in cleartext within documents of various types. Let’s dive in.

Use Cases

API Gateways and Access Keys: Application Programming Interfaces are how software programs interact with other software and services. These APIs form the basic interface for joint operation. To use an API you must first authenticate yourself – or your code – to the gateway. This is typically done by providing an access key, token, or response to a cryptographic challenge. For ease of automation many developers hard-code access keys, leaving themselves vulnerable to simple file or code inspection (see the sketch at the end of this post). And all too often, even when kept in a private file on the developer’s desktop, keys are accidentally shared or posted, such as to public code repositories. The goal here is to keep access keys secret while still provisioning them to valid applications as needed.

Automated Services: Applications are seldom stand-alone entities. They are typically composed of many different components, databases, and supporting services. And with current application architectures we launch many instances of an application to ensure scalability and resiliency. As we launch applications, whether in containers or as servers, we must provision them with configuration data, identity certificates, and tokens. How does a newly created virtual machine, container, or application discover its identity and access the resources it needs? How can we uniquely identify a container instance among a sea of clones? In the race to fully automate the environment, organizations have automated so fast that they got out over their skis, with little security and a decided imbalance towards build speed. Developers typically place credentials in configuration files which are conveniently available to applications and servers on startup. We find production credentials shared with quality assurance and development systems, which are commonly far less secure and not always monitored. They are also frequently shared with other applications and services which should not have access. The goal is to segregate credentials without causing breakage or unacceptable barriers.

Build Automation: Most build environments are insecure. Developers tend to feel security during development slows them down, so they often bypass security controls in development processes. Build environments are normally under developer control, on development-owned servers, so few outsiders know where they are or how they operate. Nightly build servers have been around for over a decade, with steadily increasing automation to improve agility. As things speed up we remove human oversight. Continuous Integration and Continuous Deployment use automation to speed software delivery. Build servers like Jenkins and Bamboo automatically rebuild applications as code, templates, and scripts are checked into repositories. When builds are complete we automatically launch new environments to perform functional, regression, and even security tests. When these tests pass, some organizations launch the code in production!
Build server security has become an important topic. We no longer have the luxury of leaving passwords, encryption keys, and access tokens sitting in unprotected files or scripts. But just as continuous integration and DevOps improve agility, we need to automate the provisioning of secrets into the process as well, and create an audit trail to prove we are delivering code and services securely.

Encrypted Data: Providing encryption keys to unlock encrypted volumes and file stores is a common task, both on-premise and for cloud services. In fact, automated infrastructure makes the problem more difficult, as the environment is less static, with thousands of services, containers, and applications popping in and out of service. Traditionally we have used key management servers designed to handle secure distribution and management of keys, but a number of commercial key management tools (hardware and software) have not been augmented for Infrastructure and Platform as a Service. Additionally, developers demand better API integration for seamless use with applications. This capability is frequently lacking, so some teams use cloud-native key management, while others opt for secrets management as a replacement.

Sharing: Collaboration software has helped development, quality assurance, and product management teams cooperate on projects, even though people in these groups are less and less likely to share office space. Users are more likely to work from home, at least part time. In some contexts the issue is how to securely share information across a team of remote developers, but that use case overlaps with IT needing to share secret data across multiple data centers without exposing it in clear text or leaving it exposed in random files. The databases that hold data for chat and collaboration services tend not to be very secure, and texting certificates to a co-worker is a non-starter. The solution is a central, robust repository where a select group of users can store and retrieve secrets.

Of course there are plenty more use cases. In interviews we discuss everything from simple passwords to bitcoin wallets. But for this research we need to focus on the issues developers and IT security folks asked about. Our next post will discuss the core features and functions of a secrets management system, as well as some advanced functions which differentiate commercial options from open source. We want to provide a sense of what is possible, and help guide readers to the subset of functions they need for their use cases.
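As promised above, here is a minimal illustration of the hard-coded access key problem and its simplest remediation. The environment variable name is hypothetical; in practice it would be populated by a CI system, container manager, or secrets management agent rather than a person.

```python
# Sketch: the anti-pattern and the minimal fix. A hard-coded key is
# recoverable by anyone who can read the file, or the repository
# history; resolving it at run time keeps it out of source control.
import os

# Anti-pattern: the secret lives in the code, and in every clone of
# the repository, forever.
# API_KEY = "AKIA-EXAMPLE-DO-NOT-DO-THIS"

# Minimal fix: the key is injected into the environment at start-up.
# The variable name ORDERS_API_KEY is hypothetical.
API_KEY = os.environ["ORDERS_API_KEY"]
```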


Secrets Management: New Series

This week we are starting a new research series on Secrets Management. What is secrets management, and why do you care? A good number of you in security will be asking these questions. Secrets management platforms do exactly what the name implies: they store, manage, and provide secrets. This technology addresses several problems most security folks don’t yet know they have. As development teams leverage automation and orchestration techniques, they are creating new security issues to be tackled. Let’s jump into some of the back story, and then outline what we will accomplish in this research effort.

It sounds cliché, sure, but IT and application environments are genuinely undergoing radical change. New ways of deploying applications, as microservices or into containers, are improving our ability to cost-effectively scale services and large systems. Software-defined IT stacks and granular control over services through APIs provide tremendous advantages in agility. Modern operational models such as Continuous Integration and DevOps amplify these advantages, bringing applications and infrastructure to market faster and more reliably. Perhaps the largest change currently affecting software development and IT is cloud computing: on-demand and elastic services offer huge advantages, predicated on automated infrastructure defined as software. While cloud is not a necessary component of these other advancements, it makes them all the more powerful. Leveraging all these advancements together, a few lines of code can launch – or shut down – an entire (virtual) data center in minutes, with minimal human involvement or effort.

Alongside their benefits, automation and orchestration raise new security concerns. The major issue today is secure sharing of secret information. Development teams need to share data, configurations, and access keys across teams to cooperate on application development and testing. Automated build servers need access to source code control, API gateways, and user roles to accomplish their tasks. Servers need access to encrypted disks, applications need to access databases, and containers must be provisioned with privileges as they start up. Automated services cannot wait around for users to type in passwords or provide credentials! So we need new agile and automated techniques to provision data, identity, and access rights.

Obtaining these secrets is essential for automation scripts to function, but many organizations cling to the classic (simple) mode of operation: place secrets in files, or embed them into scripts, so tasks can complete without human intervention. Developers understand this is problematic, but it’s a technical problem most sweep under the rug. And they certainly do not go out of their way to tell security how they provision secrets, so most CISOs and security architects are unaware of this emerging security issue.

This problem is not new. No administrator wants to be called into work in the middle of the night to enter a password so an application can restart. So IT administrators routinely store encryption keys in files so an OS or application can access them when needed. Database administrators place encryption keys and passwords in files to facilitate automated reboots. Or they did, until corporate networks came under even more attack and attention shifted to protecting keys, certificates, and passwords.
Since then we have relied on everything from manual intervention to key management servers, and even hardware dongles, to provide a root of trust to establish identity and provision systems. But those models not only break the automation we rely upon to reduce costs and speed up deployments – they also lack the programmatic interfaces needed to integrate with cloud services. To address the changes described above, new utilities and platforms have been created to rapidly provide information across groups and applications. The term for this new class of product is “Secrets Management”; it is changing how we deliver identity, secrets, and tokens, as well as how we validate systems for automated establishment of trust. In this research we will explore why this is an issue for many organizations, what sorts of problems these new platforms tackle, and how they work in these newer environments. Specifically, we will cover:

Use Cases: We will start by considering the specific problems which make secret sharing so difficult: moving passwords in clear text, providing keys to encryption engines, secure disk storage, knowing which processes are trustworthy, and mutual (bidirectional) authentication. Then we will discuss the specific use cases driving secrets management. We will cover issues such as provisioning containers and servers, software build environments, database access, and encrypted disk & file stores; we will also examine sharing secrets across groups and controlling who can launch which resources in private and public cloud environments.

Components and Features: This section will discuss the core features of a secrets management platform. We will discuss the vault/repository concept, the use of ephemeral non-vault systems, identity management for vault access, role-based authorization, network security, and replication for both resiliency and remote access. We will cover common interfaces such as CLI, API, and HTTP. We’ll contrast open source personal productivity tools with newer commercial products; we will also consider advanced features such as administration, logging, identity store integration, the ability to provide secure connections, and policy enforcement.

Deployment Considerations: Next we will discuss what is stored in a repository, and how secrets are shared or integrated with dependent services or applications. We will discuss both deployment models, as well as the secrets to be shared: passwords, encryption keys, access tokens, API keys, identity certificates, IoT key pairs, secure configuration data, and even text files. We will also offer advice on product selection criteria and what to look for.

As we leverage cloud services and rely more heavily on automation to provision applications and IT resources, we find more and more need to get secrets to applications and scripts securely. So our next post will start with the use cases driving this market.


Multi-cloud Key Management Research Paper

Cloud computing is the single biggest change to computing we have seen, fundamentally changing how we use computing resources. We have reached a point where multi-cloud support is a reality for most firms; SaaS and private clouds are complemented by public PaaS and IaaS. With these changes we have received an increasing number of questions on how to protect data in the cloud, so in this research paper we discuss several approaches to both keeping data secure and maintaining control over access. From the paper:

Controlling encryption keys – and thus also your data – while adopting cloud services is one of the more difficult puzzles in moving to the cloud. For example, you need to decide who creates keys (you or your provider), where they are managed (on-premises or in-cloud), how they are stored (hardware or software), how keys will be maintained, how to scale up in a dynamic environment, and how to integrate with each different cloud model you use (SaaS, PaaS, IaaS, and hybrid). And you still need to either select your own encryption library or invoke your cloud service to encrypt on your behalf. Combine this with regulatory and contractual requirements for data security which – if anything – are becoming more stringent than ever before, and piecing together a solution that addresses these concerns is a challenge.

We are grateful that security companies like Thales eSecurity and many others appreciate the need to educate customers and prospects with objective material built in a Totally Transparent manner. This allows us to perform impactful research and protect our integrity. You can get a copy of the paper, or go to our research library to download it there.


Multi-Cloud Key Management: Selection and Migration

Cloud services are typically described as sharing responsibility for security, but the reality is that you aren’t working shoulder to shoulder with the vendor. Instead you implement security with the building blocks they provide, possibly filling in gaps where they don’t provide solutions. One of the central goals of this research project was to show that it is possible to take control of data security, supplanting embedded encryption and key management services, even when you don’t control the environment. And with key management you can gain as much security as your on-premise solution provides – in some cases even continuing to leverage familiar tools – with minimal disruption to existing management processes. That said, if you decided to Bring Your Own Keys (and select a cloud HSM), or bring your own software key management stack, you are signing on for additional setup work. And it’s not always simple – the cloud variants of HSM and software key management services are different from their on-premise counterparts. This section highlights some differences to consider when managing keys in the cloud.

Governance

Let’s cut to the heart of the issue: if you need an HSM, you likely have regulatory requirements or contractual obligations driving your decisions. Many of these requirements spell out specific physical and electronic security levels, typically something like FIPS 140-2 Level 2 or 140-2 Level 3. And the regulations often specify usage models, such as requiring periodic key rotation, split administrative authority, and other best practices. Cloud vendors usually publish certifications for their HSMs, if not HSM specifics. You’ll likely need to dig through their documentation to understand how to manage the HSM to meet your operational requirements, and which interfaces its functions are available through – typically some or all of a web application, command-line tool, and API. It’s one thing to have a key rotation capability, for example, but another to prove you are using it consistently and properly. So key management service administrative actions are a favorite audit item. As your HSM is now in the cloud, you need to determine how you will access the HSM logs and move them into your SIEM or compliance reporting tools.

Integration

A key question is whether it is okay for your cloud provider to perform encryption and decryption on your behalf, so long as your master keys are always kept within an HSM. Essentially, if your requirement is that all encryption and signing operations must happen in hardware, you need to ensure your cloud vendor provides that option. Some SaaS solutions do not: you provide them keys derived from your master key, and the service performs the actual encryption without necessarily using an HSM. Some IaaS platforms let you choose to keep bulk encryption in their HSM platform, or leverage their software service. Find out whether your potential cloud provider offers what you need. For IaaS migrations of applications and databases which encrypt data elements or columns, you may need to change the API calls to leverage the HSM or software key management service. And depending upon how your application authenticates itself to the key management server, you may need to change this code as well. The process for equipping volume encryption services with keys varies between cloud vendors, so your operations team should investigate how startup provisioning works.
Finally, as we mentioned under Governance, you will need to get log files from the HSM or software key manager. Logs are typically provided on demand via API calls to the cloud service, or dumped into a storage repository where you can access raw events as needed. But HSMs are a special service with additional security controls, so you will need to check with your vendor on how to access log files and what formats the data is offered in.

Management

Whether using hardware or software, you can count on the basic services of key creation, secure storage, rotation, and encryption. But a number of concerns pop up when moving to the cloud, because things work a bit differently. One is dual-administrator functions, sometimes called ‘split-key’ authority, where two or more administrators must authorize certain sensitive administrative functions. For cloud-based key management you’ll need to designate your HSM operators. These operators are typically issued identity certificates and hardware tokens to authenticate to the HSM or key manager. We recommend that these certificates be stored in password managers on-premise, and the hardware tokens secured on-premise as well. We suggest you do not tie the role of HSM operator to an individual, but instead use a service account, so you’re not locked out of the HSM when an admin leaves the company. You’ll want to modify your existing processes to accommodate the changes the cloud brings. And prior to production deployment you should practice key import and rotation to ensure there are no hiccups.

Operations

In NIST’s definition of cloud computing, one of the essential characteristics – which separates it from hosting providers and on-premise virtualization technologies – is availability on-demand and through self-service. HSM is new enough that it is not yet always fully self-service. You may need to work through a partially manual process to get set up and vetted before you can use the service. This is normally a one-time annoyance, which should not affect ongoing agility or access. It is worth reiterating that HSM services cost more than software-only native key management services. SaaS services tend to charge a set-up fee and a flat monthly rate, so costs are predictable. IaaS charges are generally based on the number of keys used, so if you expect to generate lots of keys – such as one per document – costs can skyrocket. Check how keys are generated, how often, and how often they are rotated, to get a handle on operating costs. For disaster recovery you need to fully understand your cloud provider’s failover and recovery models, and whether you need to replicate keys back to your on-premise HSM. To provide infrastructure failover you may extend services across multiple availability zones or regions.
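For a feel of what integration with a cloud key manager looks like in code, here is a sketch of envelope encryption, assuming AWS KMS via boto3 with a hypothetical key alias; other providers expose similar primitives.

```python
# Sketch: envelope encryption against a cloud key manager. The master
# key never leaves KMS; we request a data key, encrypt locally with the
# plaintext copy, then discard it and persist only the wrapped copy.
# The alias "alias/app-master" is hypothetical.
import base64

import boto3
from cryptography.fernet import Fernet  # local bulk encryption, for illustration

kms = boto3.client("kms", region_name="us-east-1")
data_key = kms.generate_data_key(KeyId="alias/app-master", KeySpec="AES_256")

fernet_key = base64.urlsafe_b64encode(data_key["Plaintext"])  # 32 raw bytes
ciphertext = Fernet(fernet_key).encrypt(b"sensitive record")

# Store the ciphertext plus the wrapped key; decrypting later requires
# a kms.decrypt() call, which the provider logs for audit.
stored = {"blob": ciphertext, "wrapped_key": data_key["CiphertextBlob"]}
```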


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.