Multi-Cloud Key Management: Service and Deployment Options

This post discusses how to deploy encryption keys into a third-party cloud service. We illustrate the deployment options, along with the components of a solution. We will then walk through the process of getting a key from your on-premise Hardware Security Module (HSM) into a cloud HSM. We will discuss variations on using a cloud-based HSM for all encryption operations, as well as cases where you instead delegate encryption operations to the cloud-native encryption service. We will close with a discussion of software-based (non-HSM) key management systems running on IaaS cloud services.

There are two basic design approaches to cloud key management. The most common model is generally referred to as 'BYOK' (Bring Your Own Key). As the name implies, you place your own keys in a cloud HSM, and use them with the cloud HSM service to encrypt and decrypt content. This model requires an HSM, but it supports all cloud service models (SaaS, PaaS, and IaaS) so long as the cloud vendor offers an HSM service. The second model is software-based key management. In this case you run the same key management software you currently use on-premise, but in a multi-tenant IaaS cloud. Your vendor supplies either a server or a container image containing the software, and you configure and deploy it in your cloud environment. Let's jump into the specifics of each model, with some different ways each approach is used.

BYOK

Commercial cloud platforms offer encryption as an option for data storage and communications. With most cloud environments – especially SaaS – encryption is built in and occurs by default for all tenants as part of the service. To keep things simple the encryption and key management interfaces are not exposed – instead encryption is a transparent function handled on the customer's behalf. For select cloud services where stronger security is required, or regulations demand their use, Hardware Security Modules are offered as an option. These modules are physically and digitally hardened against attack, to ensure keys are secure from tampering and difficult to misuse. To incorporate HSMs into a cloud service, cloud vendors typically offer an extension to their key management service. In some cases it's a simple set of additional APIs, but in most cases a dashboard is provided along with APIs for provisioning and key management. In some cases, particularly when you use the same type of HSM on-premise as your cloud vendor, the full suite of HSM functions may be available. So the amount of work needed to set up BYOK varies. Let's take a closer look at getting your keys into the cloud.

Exporting Keys

Those of you accustomed to using HSMs on-premise know that keys typically remain fully protected within the HSM, never extracted from its protection. When vendors configure an HSM it is seeded with information about the vendor and the customer. This process can be reversed, providing the ability to extract keys – but generally not to use them outside an HSM; traditionally extraction only seeds another appliance. Key extraction is a manual process for most – if not all – HSMs. It typically involves two or more security administrators providing credentials and a smart card or USB stick with a secure enclave to authenticate to the HSM, then requesting a key for extraction. For most HSMs extraction is similar: once validation occurs, the HSM takes the customer's master key and bundles it with information specific to the HSM vendor and the customer – and in some cases information specific to usage rights for the key – then encrypts the bundle. These added data elements provide additional protection for the key, dictating where it can be decrypted and how it may be used.
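To make the 'extract but never expose' property concrete, here is a minimal sketch of key wrapping – exporting a key only in encrypted form – using Python's cryptography library and the standard AES Key Wrap algorithm (RFC 3394). This is an illustration, not any HSM vendor's actual bundle format, which also binds vendor, customer, and usage metadata into the package.

```python
# Minimal key-wrapping sketch (AES Key Wrap, RFC 3394). Real HSM export
# bundles also include vendor, customer, and key-usage metadata; this
# only shows the core idea: the master key never leaves a device in
# cleartext.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

# Inside the source HSM: the customer master key, and a key-encryption
# key (KEK) established with the destination HSM during provisioning.
master_key = os.urandom(32)   # 256-bit customer master key
kek = os.urandom(32)          # wrapping key known only to the two HSMs

# Export: the master key leaves the HSM only in wrapped (encrypted) form.
wrapped_bundle = aes_key_wrap(kek, master_key)

# Import on the destination (cloud) HSM: unwrap inside the device.
unwrapped = aes_key_unwrap(kek, wrapped_bundle)
assert unwrapped == master_key
```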
Key export does not occur through any special proxy, and is not performed synchronously with import on a destination HSM. Instead the encrypted bundle is sent to the cloud service provider. A cloud HSM service likely leverages at least a 2-node HSM cluster, and each vendor implements their own integration layer, so key import specifics vary widely, as does the level of effort required. In general, once the customer has been provisioned for the cloud HSM service, they can import their master key via a dashboard, API, or command line. The customer's master key bundle is used to create intermediate keys as needed by their cloud key hierarchy, and those intermediate keys in turn are used to generate data encryption keys on demand. These encryption keys are copied into the cloud HSM as needed.

Each cloud provider scales up and maintains redundancy in its own way, and they typically do not publish details of how. Instead they provide service guarantees for uptime and performance. The good news is that you no longer need to worry much about these specifics, because they are taken care of for you. Additionally, cloud service providers do not as a rule use Active/Standby HSM pairs, preferring a more scalable 'cloud' of many hardware modules which import customer keys as needed, so resiliency is likely better than whatever you have on-premise today. Keep in mind that hardware-based key management support is still considered a special case by cloud service vendors. Not all customers demand it. And it is often not fully available as a self-service feature – there may be a manual sign-up process, and availability only in specific regions or zones. Unlike built-in native encryption, HSM capabilities cost extra.

Once you have your key installed in the cloud HSM service you can use it to encrypt data. But how this works varies between cloud service models, so we will look at a couple common cases.

SaaS with HSM Encryption

With many SaaS services, if you contract for a cloud-based HSM service, all encryption operations on your behalf are performed inside the HSM. The native cloud encryption service may satisfy requests on your behalf, so encryption and decryption are transparent, but key access and encryption operations are performed fully within the HSM.
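To picture the key hierarchy described above – master key to intermediate keys to data encryption keys – here is a short sketch which derives subordinate keys with HKDF. The labels and hierarchy shape are illustrative assumptions; each cloud provider implements its own derivation scheme.

```python
# Sketch of a three-level cloud key hierarchy: the imported master key
# derives per-purpose intermediate keys, which derive data encryption
# keys (DEKs). Labels and structure are illustrative, not any vendor's.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(parent: bytes, label: bytes) -> bytes:
    """Derive a 256-bit child key from a parent key and a context label."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=label,          # binds the child key to its purpose
    ).derive(parent)

master_key = os.urandom(32)  # in practice: the imported BYOK master key

intermediate = derive_key(master_key, b"tenant-42/storage-service")
dek = derive_key(intermediate, b"bucket-7/object-123")  # per-object key
```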

Multi-Cloud Key Management: Use Cases

This post covers some issues and concerns customers cite when considering a move to cloud services – or more carefully reassessing a move they have already made. To provide some context: one of the major mental adjustments security folks need to make when moving to cloud services is where their responsibilities begin and end. You are no longer responsible for physical security of cloud systems, and you do not control the security of resource pools (e.g. compute, storage, network), so your areas of concern move "up the stack". With IaaS you control applications, data, user access, and network accessibility. With SaaS you are limited to data and user access. With either, you are more limited in the tools at your disposal – those provided natively by your vendor, or third-party tools which work with your specific cloud service. The good news is that the cloud shrinks your overall set of responsibilities. Whether the remaining tools are appropriate to your use case is a different question.

Fielding customer calls on data security for the better part of the last decade, we learned that inquiries regarding on-premise systems typically start with the data repository. For example: "I need to protect my database", "My SAN vendor provides encryption, but what threats does that protect us from?" or "I need to protect sensitive data on my file servers." In these conversations, once we understand the repository and the threats to address, we can construct a data security plan. These plans usually center on some implementation of encryption, with supporting key management, access management, and possibly masking/tokenization technologies. In the cloud encryption is still the primary tool for data security, but the starting points of these conversations have been different. The issues are framed more by needs than by threats. The following are the main issues customers cite:

PII: Personally Identifiable Information – essentially sensitive data specific to a user or customer – is the top concern. PII includes things like social security numbers, credit card numbers, account numbers, passwords, and other sensitive data types, as defined by various regulations. And it is very common for what companies move into – or derive inside – the cloud to contain sensitive customer information. Other types of sensitive data are present as well, but PII compliance requirements drive most of our conversations. The regulation might be GLBA, Mass Privacy Regulation 201 CMR 17, NIST 800-53, FedRAMP, PCI-DSS, HIPAA, or another from the evolving list. The mapping of these requirements to on-premise security controls has always been fuzzy, and the differences have confused many IT staff and external auditors who are accustomed to on-premise systems. Leveraging existing encryption keys and tools helps ensure consistency with existing processes.

Trust: More precisely, the problem is lack of trust: some customers simply do not trust their vendors. Many security pros, having seen security products and platforms fail repeatedly during their careers, view security with a jaundiced eye. They are especially hesitant about security systems they cannot fully audit. Or they lack confidence that cloud vendors' IT staff cannot access their data. In some cases they do not trust software-based encryption services. Perhaps the customer cannot risk the cloud service provider being forced to turn over encryption keys by court order, or being compromised by a nation-state.
If the vendor is never provided the keys, they cannot be compelled to turn them over.

Vendor Lock-in and Migration: A common reservation is vendor lock-in – not being able to move to another cloud service provider if a service fails or the contractual relationship becomes untenable. Some native cloud encryption systems do not allow customer keys to move outside the system, and cloud encryption systems offer proprietary APIs. The goal is to maintain protection regardless of where data resides, moving between cloud vendors as needed.

Jurisdiction: Cloud service providers, and especially IaaS vendors, offer services in multiple countries, often in more than one region, and with multiple (redundant) data centers. This redundancy is great for resilience, but a concern arises when data moves from one region to another which may have different laws and jurisdictions. For example the General Data Protection Regulation (GDPR) is an EU regulation governing the personal data of EU citizens, and it applies to any foreign company regardless of where data is moved. While similar in intent and covered data types to the US regulations mentioned above under 'PII', it further specifies that some citizen data must not be available in foreign countries, or in some data centers. Many SaaS and IaaS security models do not account for such data-centric concerns. Segregation of duties and access controls are augmented in this case by key management.

Consistency: It is common for firms to adopt a "best of breed" cloud approach. They leverage multiple IaaS providers, placing each application on the service which best fits that application's particular requirements. Most firms are quite familiar with their on-premise encryption and key management systems, so they often prefer to leverage the same tools and skills across multiple clouds. This minimizes process changes around key management, and often application changes to support different APIs.

Obviously the nuances of each cloud implementation guide these conversations as well. Not all services are created equal, so what works in one may not be appropriate in another. But the major vendors offer very strong encryption implementations. Concerns such as data exfiltration protection, storage security, volume security, database security, and protecting data in transit can all be addressed with the provided tools. That said, some firms cannot fully embrace a cloud-native implementation, typically for regulatory or contractual reasons. These firms have options to maintain control over encryption keys while leveraging cloud-native or third-party encryption. Our next post will go into detail on several deployment options, and then illustrate how they work.

Multi-Cloud Key Management (New Series)

Running IT systems on public cloud services is a reality for most companies. Just about every company uses Software as a Service to some degree, with many having already migrated back-office systems like email, collaboration, file storage, and customer relationship management software. But we are now also witnessing the core of the data center – financial systems, databases, supply chain, and enterprise resource planning software – moving to public Platform and Infrastructure "as a Service" (PaaS & IaaS) providers. It is common for medium and large enterprises to run SaaS, PaaS, and IaaS at different providers, all in parallel with on-premise systems. Some small firms we speak with no longer have data centers at all, with all their applications hosted by third parties.

Cloud services offer an alluring cocktail of benefits: they are cost-effective, reliable, agile, and secure. While several of these advantages were never in question, security was the last major hurdle for customers. So cloud service providers focused on customer security concerns, and now offer extensive capabilities for data, network, and infrastructure security. In fact most customers can achieve as good or better security in the cloud than is possible in-house. With the removal of this last impediment we are seeing a growing number of firms embracing IaaS for critical applications.

Infrastructure as a Service means handing over ownership and operational control of your IT infrastructure to a third party. But responsibility for data security does not go along with it. The provider ensures compute, storage, and networking components are secure from external attackers and other tenants in the cloud, but you must protect your data and application access to it. Some of you trust your cloud providers, while others do not. Or you might trust one cloud service but not others. Regardless, to maintain control of your data you must engineer cloud security controls to ensure compliance with internal security requirements, as well as regulatory and contractual obligations. In some cases you will leverage security capabilities provided by a cloud vendor, and in others you will bring your own and run them atop the cloud.

Encryption is the 'go-to' security technology in modern computing, so it should be no surprise that encryption technologies are everywhere in cloud computing. The vast majority of cloud service providers enable network encryption by default, to protect data in transit and prevent hijacking. And the majority of cloud providers offer encryption for data at rest, to protect files and archives from unwanted inspection by the people who manage the infrastructure, and in case data leaks from the cloud service. In many ways encryption is another commodity – part of the cloud service you pay for. But it is only effective when the encryption keys are properly protected. Just as with on-premise systems, when you move data to cloud services it is critical to properly manage and secure encryption keys. Controlling encryption keys – and by proxy your data – while adopting cloud services is one of the more difficult tasks when moving to the cloud. In this research series we will discuss the challenges specific to multi-cloud key management, and help you select the right strategy from many possible combinations.
For example, you need to decide who creates keys (you or your provider), where keys are managed (on-premise or in-cloud), how they are stored (hardware or software), policies for how keys will be maintained, how to scale up in a dynamic environment, and how to integrate with each cloud service model you use (SaaS, PaaS, IaaS, or hybrid). And you still need to either select your own encryption library or invoke your cloud service to encrypt on your behalf. Altogether you have a wonderful set of choices to meet any use case, but piecing it all together is a challenge. So we will discuss each of these options, how customer requirements map to different deployment options, and what to look for in a key management system. Our next post will discuss common customer use cases.
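Before moving on, here is a minimal sketch of one option mentioned above – bringing your own encryption library – in the form of client-side envelope encryption, where the cloud only ever sees ciphertext and a wrapped data key. It uses Python's cryptography library; the function name and record layout are illustrative assumptions, not any provider's format.

```python
# Minimal client-side envelope encryption sketch: encrypt data under a
# fresh data encryption key (DEK), then wrap the DEK under a key you
# manage. The cloud provider stores only ciphertext and the wrapped DEK.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_wrap

kek = os.urandom(32)  # key-encryption key, held in YOUR key manager

def encrypt_for_cloud(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)   # one-time data key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)
    return {
        "nonce": nonce,
        "ciphertext": ciphertext,
        "wrapped_dek": aes_key_wrap(kek, dek),  # DEK never stored in clear
    }

record = encrypt_for_cloud(b"customer PII goes here")
# 'record' is safe to hand to cloud storage; decryption requires the
# KEK, which never leaves your own key management system.
```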

Securing SAP Clouds [New Paper]

Use of cloud services is common in IT. Gmail, Twitter, and Dropbox are ubiquitous, as are business applications like Salesforce, ServiceNow, and QuickBooks. But along with the basic service, customers are outsourcing much of application security. As more firms move critical back-office components such as SAP Hana to public platform and infrastructure services, those vendors take on much more security responsibility. It is far from clear how to assemble a security strategy for a complex application such as SAP Hana, or how to adapt existing security controls to an unfamiliar environment where you have only partial control. We have received a growing number of questions on SAP cloud security, so we researched and wrote this paper to tackle the main ones.

When we originally scoped this project we intended to focus on the top five questions we hear, but we quickly realized that would grossly underserve our audience, and that we should instead help design a more comprehensive security plan. So we took a big-picture approach – examining a broad range of concerns, including how cloud services differ, and then mapping existing security controls to cloud deployments. In some cases our recommendations are as simple as changing a security tool or negotiating directly with your cloud provider, while in others we recommend an entirely new security model. This paper clarifies the division of responsibility between you and your cloud vendor, which tools and approaches are viable for the cloud, and how to adapt your security model, with advice for putting together a complete security program for SAP cloud services. We focus on SAP's Hana Cloud Platform (HCP), which is PaaS, but we encountered an equal number of firms deploying on IaaS, so we cover that scenario as well. The approaches vary quite a bit because the tools and built-in security capabilities differ, so we compare and contrast as appropriate.

Finally, we would like to thank Onapsis for licensing this content. Community support like theirs enables us to bring independent analysis and research to you free of charge. We don't even require registration! You can grab the research paper directly, or visit its landing page in our Research Library. Please visit Onapsis if you would like to learn how they provide security for both cloud and on-premise SAP solutions.

Securing SAP Clouds: Application Security

This post discusses the foundational elements of an application security program for SAP HCP deployments. Without direct responsibility for management of hardware and physical networks, you lose the traditional security data capture points for traffic analysis and firewall technologies. The net result is that, whether on PaaS or IaaS, your application security program becomes more important than ever, because it covers what you still control. Yes, SAP provides some network monitoring and DDoS services, but your options are limited, they don't share much data, and what they monitor is not tailored to your applications or requirements.

Any application security program requires a breadth of security services: to protect data in motion and at rest, to ensure users are authenticated and can only view data they have rights to, to ensure the application platform is properly patched and configured, and to make sure an audit trail is generated. The relevant areas to apply these controls are the Hana in-memory platform, SAP add-on modules, your custom application code, data storage, and supplementary services such as identity management and the management dashboard. All these areas are at or above the "water line" we defined earlier. This presents a fairly large matrix of issues to address. SAP provides many of the core security features you need, but their model is largely based on the identity management and access control capabilities built into the service. The following are the core security features of SAP HCP:

Identity Management: The SAP HANA Cloud Platform provides robust identity management features. It supports fully managed HCP identities, but also supports on-premise identity services (e.g. Active Directory) as well as third-party cloud identity management services. These services store and manage user identities, along with role-based authorization maps which define authorized users' resource access.

Federation and Token-based Authentication: SAP supports traditional user authentication schemes (such as username and password), but also offers single sign-on. In conjunction with the identity management services above, HCP supports several token-based authenticators, including Open Authorization Framework (OAuth), Security Assertion Markup Language (SAML), and traditional X.509 certificates. A single login grants users access to all authorized applications from any location on any device.
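Token-based schemes like the OAuth flows above generally put a signed bearer token on each request, which your application must validate before trusting any claims in it. The sketch below is a hypothetical illustration using the PyJWT library with an assumed RS256-signed token; HCP's actual token formats and validation hooks are platform-specific, so the audience, issuer, and key source here are illustrative assumptions.

```python
# Hypothetical sketch: validating an RS256-signed bearer token (JWT) as
# used in many OAuth single sign-on flows. The audience, issuer, and
# key source are illustrative placeholders, not HCP specifics.
import jwt  # PyJWT
from jwt import InvalidTokenError

def validate_bearer_token(token: str, idp_public_key_pem: str) -> dict:
    """Return verified claims, or raise if signature/expiry/audience fail."""
    try:
        return jwt.decode(
            token,
            idp_public_key_pem,        # identity provider's signing key
            algorithms=["RS256"],      # pin the algorithm; never guess
            audience="my-sap-application",
            issuer="https://idp.example.com",
        )
    except InvalidTokenError as err:
        raise PermissionError(f"rejected token: {err}")
```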
Data at Rest Encryption: Despite being an in-memory database, HCP leverages persistent (disk-based) storage. To protect this data HCP offers transparent Data Volume Encryption (DVE) as a native capability for data within your database, as well as its transaction logs. You will need to configure these options, because they are not enabled by default. If you run SAP Hana in an IaaS environment you also have access to several third-party transparent data encryption options, as well as encryption services offered directly by the IaaS provider. Each option has cost, security, and ease-of-use considerations.

Key Store: If you are encrypting data, encryption keys are in use somewhere. Anyone – or any service – with access to keys can encrypt and decrypt data, so your selection of a keystore to manage encryption keys is critical for both security and regulatory compliance. HCP's keystore is fully integrated into its disk and log file storage capabilities, which makes it very easy to set up and manage. Organizations which do not trust their cloud service provider, as well as those subject to data privacy regulations which require them to maintain direct control of encryption keys, need to integrate on-premise key management with HCP. If you are running SAP Hana in an IaaS environment, you also have several third-party key management options – both in the cloud and on-premise – as well as whatever your IaaS provider offers.

Management Plane: A wonderful aspect of Hana's cloud service is full administrative capability through 'Cockpit', API calls, a web interface, or a mobile application. You can specify configuration, set deployment characteristics, configure logging, and so on. This is a great convenience for administrators, and a potential nightmare for security, because an account takeover means your entire cloud infrastructure can be taken over and/or exposed. It is critical to disallow password access, and to leverage token-based access and two-factor authentication to secure these administrative accounts. If you are leveraging an IaaS provider you can disable the root administrator account, and assign individual administrators to specific SAP subcomponents or functions.

These are foundational elements of an application security program, and we recommend leveraging the capabilities SAP provides. They work, and they reduce both the cost and complexity of managing cloud infrastructure. That said, SAP's overarching security model leaves several large gaps which you will need to address with third-party capabilities. SAP publishes many of the security controls they implement for HCP, but those capabilities are not shared with tenants, nor is the raw data. So for many security controls you must still provide your own. Areas you need to address include:

Assessment: This is one of the most effective means of finding security vulnerabilities in on-premise applications. SAP's scope and complexity make it easy to misconfigure something insecurely. When you move to the cloud SAP takes care of many of these issues on your behalf. But even with SAP managing the underlying platform, there are still add-on modules, configurations, and your own custom code to scan. Running on IaaS, assessment scans and configuration management remain a central piece of an application security program. You will likely need to adjust your deployment model from what you use on-premise: many of the more effective third-party scanners run as a standalone machine (in AWS, an AMI), while others run on a central server supported by remote 'agents' which perform the actual scans – and in the cloud you should not be able to address all servers from any single point within your infrastructure. (A toy example of one such check appears at the end of this post.)

Monitoring: SAP regularly monitors their own security logs for suspicious events, but they don't share findings or tune their analysis to support your application security efforts, so you need to implement your own monitoring. Monitoring system usage is one security control you will rely on much more in the cloud, as your proxy for determining what is going on.
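As promised under Assessment above, here is a toy sketch which verifies that a server answers only on approved TCP ports. The hostname and port lists are placeholders, and a real assessment product checks far more – configurations, patch levels, and credentials among them.

```python
# Toy assessment check: verify that a server exposes only approved TCP
# ports. Hostname and port lists are placeholders; real assessment tools
# also check configurations, patch levels, and credentials.
import socket

APPROVED_PORTS = {443}                        # e.g. only HTTPS should answer
PORTS_TO_PROBE = [22, 80, 443, 3306, 30015]   # 30015: a common Hana SQL port

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def unapproved_open_ports(host: str) -> list[int]:
    return [p for p in PORTS_TO_PROBE
            if p not in APPROVED_PORTS and port_is_open(host, p)]

print(unapproved_open_ports("app-server.internal.example"))
```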

Securing SAP Clouds: Architecture and Operations

This post discusses several key differences in application architecture and operations – with a direct impact on security – which you need to reconsider when migrating to cloud services. These are the areas which make operations easier and security better.

As companies move large business-critical applications to the cloud, they typically do it backwards. Most people we speak with, to start getting familiar with the cloud, opt for cheap storage. Once a toe is in the water they place some development, testing, and failover servers in the cloud to backstop on-premise systems. These are less critical than production servers, where firms do not tolerate missteps. By default firms design their first cloud systems and applications to mirror what they already have in existing data centers. That means they carry over the same architecture, network topology, operational model, and security model. Developers and operations teams work with a familiar model, can leverage existing skills, and can focus on learning the nuances of their new cloud service. More often than not, once these teams are up to speed, they expect to migrate production systems fully to the cloud. Logical, right? It works – until you move production to the cloud, when it becomes very wrong. Long-term this approach creates problems. It is the "lift and shift" model of cloud deployment: you create an exact copy of what you have today, just running on a service provider's platform. The issues are many and varied. This approach fails to take advantage of the inherent resiliency of cloud services. It doesn't embrace automatic scaling up and down for efficient resource usage. From our perspective the important failures are around security capabilities. This approach fails to embrace ephemeral servers, highly segmented networks, automated patching, and agile incident response – all of which enable companies to respond to security issues faster, more efficiently, and more accurately than is possible with existing systems.

Architecture Considerations

Network and Application Segmentation

Most firms have a security 'DMZ' – an untrusted zone between the outside world and their internal network – and behind it a flat internal network. There are good reasons this less-than-ideal setup is common. Segregating networks in a data center is hard: users and applications leverage many different resources. Segregating networks often requires special hardware and software, and becomes expensive to implement and difficult to maintain. Attackers commonly move on from wherever they breached a company network – either "East/West" between servers, or "North/South" to gain control of applications as well. 'Pivoting' this way, to compromise as much as possible, is exactly why we segregate networks and applications. And this is exactly the sort of capability provided by default with cloud services. Whether you are leveraging SAP's Hana Cloud Platform, or running SAP Hana on an IaaS provider like AWS, network segregation is built in. Inbound ports and protocols are disabled by default, eliminating many of the avenues attackers use to penetrate servers. You open only the ports and protocols you need. Second, SAP and AWS are inherently multi-tenant services, so individual accounts – and their assigned resources – are fully segregated and protected from other users. This enables you to limit the "blast radius" of a compromise to the resources in a single account. Application-by-application segregation is not new, but ease of use makes it newly feasible in the cloud.
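To show how lightweight default-deny segmentation is in practice, here is a minimal sketch using AWS via boto3 (one of the IaaS options mentioned above): a fresh security group denies all inbound traffic until you explicitly open a port to an approved range. The VPC ID, port, and CIDR below are placeholder assumptions.

```python
# Minimal AWS segmentation sketch using boto3: security groups deny all
# inbound traffic by default, so you explicitly open only what you need.
# The VPC ID, port, and CIDR range below are placeholders.
import boto3

ec2 = boto3.client("ec2")

group = ec2.create_security_group(
    GroupName="sap-app-tier",
    Description="SAP application tier - HTTPS from corporate range only",
    VpcId="vpc-0123456789abcdef0",
)

# Open exactly one port, to one approved address range. Everything else
# stays closed - there is nothing left to 'harden' afterwards.
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "corporate egress range"}],
    }],
)
```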
In some cases you can even leverage both PaaS and IaaS simultaneously – letting one cloud serve as an "air gap" for another. Your cloud service provider offers the added advantages of running under different account credentials, roles, and firewalls. You can specify exactly which users can access specific ports, require TLS, and limit inbound connections to approved IP addresses.

Immutable Servers

"Immutable servers" have radically changed how we approach security. Immutable servers do not change once they go into production, and you completely remove login access to them. PaaS providers leverage this approach to ensure their administrators cannot access your underlying resources. For IaaS it means there is no administrative access to servers at all. In Hana, for example, your team only logs into the application layer, and the underlying servers do not offer administrator logins even for the service provider – that capability is disabled. Your operating systems and applications cannot be changed, and administrative ports and accounts are disabled entirely. If you need to update an OS or application you alter the server configuration or select a new version of the application code in a cloud console, then start new application servers and shut down the old versions. HCP does not yet leverage immutable servers, but it is on the roadmap. Regular automated replacement is a huge shock, which takes most IT operations folks a long time to wrap their heads around, but it is something you should embrace early for the security and productivity gains. Preventing hostile administrative access to servers is one key advantage. And auditors love the fact that third parties do not have access.

Blast Radius

This concept limits which resources an attacker can access after an initial compromise. We reduce blast radius by preventing attackers from pivoting elsewhere – by reducing the number of accessible services. There are a couple approaches. One is use of VPCs and the cloud's native hyper-segregation: most vulnerable ports, protocols, and permissions are simply unavailable. Another approach is to deploy different SAP features and add-ons in different user accounts, leveraging the isolation capabilities built into multi-tenant clouds. If a specific user or administrative account is breached, your exposure is limited to the resources in that account. This sounds radical but it is not particularly difficult to implement. Some firms we have spoken with manage hundreds – or even thousands – of accounts to segregate development, QA, and production systems.

Network Visibility

Most firms we speak with have a firewall to protect their internal network from outsiders, and identity and access management to gate user access to SAP features. Beyond that most security is not at the application layer – it is at the network layer. Intrusion detection, data loss prevention, extrusion

Assembling A Container Security Program [New Paper]

We are pleased to launch our latest research paper, on Docker security: Assembling a Container Security Program. Containers are now such integral elements of software delivery that enterprises are demanding security in and around containers. And it is no coincidence that Docker has recently added a variety of security capabilities to its offerings – but they are only a small subset of what customers need. During our research we learned many things, including:

Containers are no longer a hypothetical topic for discussion among security practitioners. Today Development and Operations teams need a handle on what is being done, and how to verify that security controls are in place.

Security attention in this area is still focused on OS hardening. This is complex and can be difficult to manage, but it is a fairly well-understood set of problems. There are many more important moving pieces in play, which are still largely being ignored.

Very little attention is being paid to the build environment – making sure the container contains what it should, and nothing else. The companies we talked to do not, as a rule, verify that internal code and third-party libraries are secure.

Human error is more likely to cause issues than security bugs. Running services in the container with root credentials, poor handling of keys and certificates, opening ports inappropriately, and indiscriminate communications are all common issues... which can be tested for.

The handoff from Development to Operations, and how Operations teams vet containers prior to putting them into production, are somewhat free-form. As more containers are delivered faster, especially with continuous integration and DevOps engineering, container management in general – and specifically knowing which containers should be running at any given time – is becoming harder.

Overall, there are many issues beyond OS hardening and patching your Docker runtime. Crucial runtime aspects of container security include monitoring, container segregation, and blocking unwanted communications; these are not getting sufficient attention. The ways containers are built, managed, and deployed are all important aspects of application security, and so should be core to any container security program. So we took an unusually broad view of container security, covering each of these aspects in this paper.

Finally, we would like to thank Aqua Security for licensing this content. Community support like this enables us to bring independent analysis and research to you free of charge. We don't even require registration. You can grab a copy of the research paper directly, or visit the paper's landing page in our research library, and please visit Aqua Security if you would like to understand how they help provide container security.

Cloud Database Security: 2011 vs. Today

Adrian here. I had a brief conversation today about security for cloud database deployments, and the two basic questions I was asked encapsulate many conversations I have had over the last few months. They are relevant to a wider audience, so I will discuss them here.

The first question was, "Do you think database security is fundamentally different in the cloud than on-premise?" Yes, I do. It is not the same. Not that we no longer need IAM, assessment, monitoring, or logging tools – but the way we employ them changes. And there is more focus on things we have not worried about before – like the management plane – and far less on things like archival and physical security. But it is very hard to compare apples to apples here, because of fundamental changes in the way the cloud works. You need to shift your approach when securing databases running on cloud services.

The second question was, "Then how are things different today from 2011, when you wrote about cloud database security?" Database security has changed in three basic ways:

1) Architecture: We no longer leverage the same application and database architectures. This is partially about applications adopting microservices, which both promotes micro-segmentation at the network and application layers, and breaks the traditional approach of closely tying the application to a database. Architecture has also developed in response to evolving database services. We see a need for more types of data, with far more dynamic lookup and analysis than transaction support. Together these architectural changes lead to more segmented deployments, with more granular control over access to data and database services.

2) Big Data: In 2011 I expected people to push their Oracle, MS SQL Server, and PostgreSQL installations into the cloud, to reduce costs and scale better. That did not happen. Instead firms prefer to start new projects in the cloud rather than moving existing ones. Additionally, we see strong adoption of big data platforms such as Hadoop and Dynamo. These are different platforms, with slightly different security issues and security tools than the relational platforms which dominated the previous two decades. And in an ecosystem like Hadoop, applications running on the same data lake may be exposed to entirely different service layers.

3) Database as a Service: At Securosis we were a bit surprised by how quickly the cloud vendors embraced big data. Now they offer big data (along with other relational database platforms) as a service. "Roll your own" has become much less necessary. Basic security around internal table structures, patching, administrative access, and many other facets is now handled by vendors, reducing your headaches. We can avoid installation issues. Licensing is far, far easier. It has become so easy to stand up a new relational database or big data cluster this way that running databases on raw Infrastructure as a Service now seems antiquated.

I have not gone back through everything I wrote in 2011, but there are probably many more subtle differences. The questions themselves overlook another important difference: security is now embedded in cloud services. None of us here at Securosis anticipated how fast cloud platform vendors would introduce new and improved security features. They have advanced their security offerings much faster than any other platform or service offering I have ever seen, and done a much better job with quality and ease of use than anyone expected. There are good reasons for this.
In most cases the vendors were starting from a clean slate, unencumbered by legacy demands. Additionally, they knew security concerns were an impediment to enterprise adoption. To remove their primary customer objections, they needed to show that security was at least as good as on-premise. In conclusion, if you are moving new or existing databases to the cloud, understand that you will be changing tools and processes, and adjusting your biggest priorities.

Assembling a Container Security Program: Monitoring and Auditing

Our last post in this series covers two key areas: monitoring and auditing. We have more to say on the first because most development and security teams are not aware of their options, and on the second because most teams hold many misconceptions and considerable fear on the topic. So we will dig into these two areas, both essential to container security programs.

Monitoring

Every security control we have discussed so far deals with preventative security: efforts which remove vulnerabilities, or make them hard for anyone to exploit. We address known attack vectors with well-understood responses such as patching, secure configuration, and encryption. But vulnerability scans can only take you so far. What about issues you are not expecting? What if a new attack variant gets past your security controls, or a trusted employee makes a mistake? This is where monitoring comes in: it is how you discover the unexpected. Monitoring is critical to a security program – it is how you learn what is effective, track what's really happening in your environment, and detect what's broken. For container security it is no less important, but today it is not something you get from Docker or any other container provider.

Monitoring tools work by first collecting events, and then examining them against security policies. The events may be requests for hardware resources, IP-based communication, API requests to other services, or sharing information with other containers. Policy types are varied. There are deterministic policies, such as which users and groups can terminate resources, which containers are disallowed from making external HTTP requests, or which services a container is allowed to run. And there are dynamic – also called 'behavioral' – policies, which catch issues such as containers calling undocumented ports, using 50% more memory than typical, or uncharacteristically exceeding runtime parameter thresholds. Combining deterministic white and black list policies with dynamic behavior detection provides the best of both worlds, enabling you to detect both simple policy violations and unexpected variations from the ordinary. We strongly recommend that your security program include monitoring of container activity.
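To make the deterministic/behavioral distinction concrete, here is a toy sketch of both policy types. The event fields, thresholds, and baseline method are illustrative assumptions – commercial monitoring products use far richer models.

```python
# Toy monitoring sketch contrasting a deterministic policy (allowed
# ports) with a behavioral one (memory use vs. an observed baseline).
# Event fields and thresholds are illustrative assumptions.
from statistics import mean

ALLOWED_PORTS = {443, 8080}          # deterministic whitelist

def deterministic_violation(event: dict) -> bool:
    """Flag any outbound connection to a port not on the whitelist."""
    return event["type"] == "connect" and event["port"] not in ALLOWED_PORTS

def behavioral_violation(memory_samples: list[float], current: float) -> bool:
    """Flag memory usage more than 50% above this container's baseline."""
    baseline = mean(memory_samples)
    return current > 1.5 * baseline

# Example: a container that normally uses ~200MB suddenly uses 350MB,
# and opens an undocumented port.
print(behavioral_violation([198.0, 205.0, 201.0], 350.0))          # True
print(deterministic_violation({"type": "connect", "port": 6379}))  # True
```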
Today a couple container security vendors offer monitoring products. Popular evaluation criteria for differentiating products and determining suitability include:

Deployment Model: How does the product collect events? What events and API calls can it collect for inspection? These products typically use one of two deployment models: an agent embedded in the host OS, or a fully privileged container-based monitor running in the Docker environment. How difficult is it to deploy collectors? Do host-based agents require a host reboot to deploy or update? You will need to assess what types of events can be captured.

Policy Management: Evaluate how easy it is to build new policies – or modify existing ones – within the tool. You will want a standard set of security policies from the vendor to help speed up deployment, but over the lifetime of the product you will stand up and manage your own policies, so ease of management is key to your long-term happiness.

Behavioral Analysis: What, if any, behavioral analysis capabilities are available? How flexible are they – meaning what types of data can be used in policy decisions? Behavioral analysis starts with system monitoring to determine 'normal' behavior. The criteria for detecting aberrations are often limited to a few sets of indicators, such as user ID or IP address. The more you have available – such as system calls, network ports, resource usage, image ID, and inbound and outbound connectivity – the more flexible your controls can be.

Activity Blocking: Does the vendor provide the capability to block requests or activity? It is useful to block policy violations in order to ensure containers behave as intended. Care is required, because these policies can disrupt new functionality, causing friction between Development and Security – but blocking is invaluable for maintaining Security's control over what containers can do.

Platform Support: Verify that the monitoring tool supports the OS platforms you use (CentOS, CoreOS, SUSE, Red Hat, etc.) and your orchestration tool of choice (such as Swarm, Kubernetes, Mesos, or ECS).

Audit and Compliance

What happened with the last build? Did we remove sshd from that container? Did we add the new security tests to Jenkins? Is the latest build in the repository? Many of you reading this may not know the answers off the top of your head, but you should know where to get them: log files. Git, Jenkins, JFrog, Docker, and just about every other development tool you use creates log files, which we use to figure out what happened – and often what went wrong. There are people outside Development – namely Security and Compliance – with similar security-related questions about what is going on in the container environment, and whether security controls are functioning. Logs are how you get these external teams the answers they need.

Most of the earlier topics in this research, such as build environment and runtime security, have associated compliance requirements. These may be externally mandated, like PCI-DSS or GLBA, or internal requirements from internal audit or security teams. Either way auditors will want to see that security controls are in place and working. And no, they won't just take your word for it – they will want audit reports for the specific event types relevant to their audit. Similarly, if your company has a Security Operations Center, they will want to see all system and activity logs over a period of time in order to reconstruct events when investigating alerts or determining whether a breach has occurred. You really don't want to get too deep into this stuff – just get them the data and let them worry about the details. The good news is that most of what you need is already in place. During our investigation for this series we did not speak with any firms which did not have

Assembling a Container Security Program: Container Validation

This post focuses on security testing your code and container, and verifying that both conform to security and operational practices. One of the major advances over the last year or so is the introduction of security features for the software supply chain, from both Docker itself and a handful of third-party vendors. The solutions focus on slightly different threats to container construction: Docker provides tools to certify that containers have made it through your process, while third-party tools focus on vetting container contents. So Docker provides things like process controls, digital signing services to verify chain of custody, and creation of a Bill of Materials based on known trusted libraries. In contrast, third-party tools harden container inputs, analyze resource usage, perform static code analysis, analyze the composition of libraries, and check against known malware signatures; they can then perform granular policy-based container delivery based on the results. You will need a combination of both, so we will go into a bit more detail:

Container Validation and Security Testing

Runtime User Credentials: We could go into great detail here about runtime user credentials, but will focus on the most important thing: don't run container processes as root, because that gives attackers the leverage to attack other containers or the Docker engine. If you get that right you're halfway home for IAM. We recommend using specific user accounts with restricted permissions for each class of container. We understand that roles and permissions change over time, which requires some work to keep permission maps up to date, but this provides a failsafe when developers change runtime functions and resource usage.

Security Unit Tests: Unit tests are a great way to run focused test cases against specific modules of code – typically created as your dev teams find security and other bugs – without needing to build the entire product every time. They can cover things such as XSS and SQLi testing of known attacks against test systems. Additionally, the body of tests grows over time, providing a regression testbed to ensure that vulnerabilities do not creep back in. During our research we were surprised to learn that many teams run security unit tests from Jenkins. Even though most are moving to microservices, fully supported by containers, they find it easier to run these tests earlier in the cycle. We recommend unit tests somewhere in the build process, to help validate that the code in containers is secure.

Code Analysis: A number of third-party products perform automated binary and white box testing, failing the build if critical issues are discovered. We recommend you implement code scans to determine whether the code you build into a container is secure. Many newer tools offer full RESTful API integration within the software delivery pipeline. These tests usually take a bit longer to run, but still fit within a CI/CD deployment framework.

Composition Analysis: A useful technique is to check libraries and supporting code against the CVE (Common Vulnerabilities and Exposures) database to determine whether you are using vulnerable code. Docker and a number of third parties provide tools for checking common libraries against the CVE database, which can be integrated into your build pipeline. Developers are not typically security experts, and new vulnerabilities are discovered in common tools weekly, so an independent checker to validate components of your container stack is essential.
Resource Usage Analysis: What resources does the container use? What external systems and utilities does it depend upon? To manage the scope of what containers can access, third-party tools can monitor runtime access to environment resources both inside and outside the container. Essentially, usage analysis is an automated review of resource requirements. These metrics are helpful in a number of ways – especially for firms moving from a monolithic to a microservices architecture. Put another way, this helps developers understand which references they can remove from their code, and helps Operations narrow down roles and access privileges.

Hardening: Over and above making sure what you use is free of known vulnerabilities, there are other tricks for securing applications before deployment. One is to check the contents of the container and remove anything unused or unnecessary, reducing attack surface. Don't leave hard-coded passwords, keys, or other sensitive items in the container – even though this makes things easy for you, it makes them much easier for attackers. Some firms use manual scans for this, while others leverage tools to automate scanning.

App Signing and Chain of Custody: As mentioned earlier, automated builds include many steps and small tests, each of which validates that some action was taken to prove code or container security. You want to ensure the entire process was followed, and that somewhere along the way some well-intentioned developer did not subvert the process by sending along untested code. Docker now provides the means to sign code segments at different phases of the development process, and tools to validate the signature chain. While the code should be checked prior to being placed into a registry or container library, the work of signing images and containers happens during build. You will need to create specific keys for each phase of the build, sign code snippets on test completion but before the code is sent on to the next step in the process, and – most importantly – keep these keys secured so an attacker cannot create their own code signatures. This gives you some guarantee that the vetting process proceeded as intended.
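To make the chain-of-custody idea concrete, here is a minimal sketch of phase-by-phase signing using Ed25519 from Python's cryptography library. The phase names and one-key-per-phase scheme are illustrative assumptions – Docker Content Trust uses its own Notary/TUF key hierarchy and formats, which this does not reproduce.

```python
# Minimal chain-of-custody sketch: each build phase signs the artifact
# digest plus all prior signatures, so tampering at any step breaks
# verification. Phase names and key handling are illustrative only.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

PHASES = ["unit-tests", "code-analysis", "composition-check"]
keys = {phase: Ed25519PrivateKey.generate() for phase in PHASES}  # keep secured!

def sign_chain(image_bytes: bytes) -> list[bytes]:
    """Each phase signs the image digest concatenated with prior signatures."""
    digest = hashlib.sha256(image_bytes).digest()
    signatures = []
    for phase in PHASES:
        signatures.append(keys[phase].sign(digest + b"".join(signatures)))
    return signatures

def verify_chain(image_bytes: bytes, signatures: list[bytes]) -> bool:
    """Verify every phase signed the same digest, in order, untampered."""
    digest = hashlib.sha256(image_bytes).digest()
    seen = []
    for phase, sig in zip(PHASES, signatures):
        try:
            keys[phase].public_key().verify(sig, digest + b"".join(seen))
        except InvalidSignature:
            return False
        seen.append(sig)
    return True

image = b"...container image contents..."
sigs = sign_chain(image)
assert verify_chain(image, sigs)             # intact chain verifies
assert not verify_chain(image + b"x", sigs)  # any tampering fails
```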
