
Assembling a Container Security Program: Runtime Security

This post will focus on the ‘runtime’ aspects of container security. Unlike the tools and processes discussed in previous sections, here we will focus on containers in production systems. This includes which images are moved into production repositories, security around selecting and running containers, and the security of the underlying host systems.

Runtime Security

The Control Plane: Our first order of business is ensuring the security of the control plane – the platforms for managing host operating systems, the scheduler, the container engine(s), the repository, and any additional deployment tools. Again, as we advised for build environment security, we recommend limiting access to specific administrative accounts: one with responsibility for operating and orchestrating containers, and another for system administration (including patching and configuration management). We also recommend network segregation, with physical separation for on-premise systems and logical separation for cloud and virtual systems.

Running the Right Container: We recommend establishing a trusted image repository and ensuring that your production environment can only pull containers from that trusted source. Ad hoc container management is a good way to facilitate bypassing of security controls, so we recommend scripting the process to avoid manual intervention and ensure that the latest certified container is always selected. Second, you will want to check application signatures prior to putting containers into the repository. Trusted repository and registry services can help by rejecting containers which are not properly signed. Fortunately many options are available, so find one you like. Keep in mind that if you build many containers each day, a manual process will quickly break down – you will need to automate the work and enforce security policies in your scripts. Remember, it is okay to have more than one image repository; if you are running across multiple cloud environments, there are advantages to leveraging the native registry in each. But beware the discrepancies between platforms, which can create security gaps.

Container Validation and BOM: What’s in the container? What code is running in your production environment? How long ago did we build this container image? These are common questions when something goes awry. In case of container compromise, a very practical question is: how many containers are currently running this software bundle? One recommendation – especially for teams which don’t perform much code validation during the build process – is to leverage scanning tools to check pre-built containers for common vulnerabilities, malware, root account usage, bad libraries, and so on. If you keep containers around for weeks or months, it is entirely possible a new vulnerability has since been discovered, and the container is now suspect. Second, we recommend using the Bill of Materials capabilities available in some scanning tools to catalog container contents. This helps you identify other potentially vulnerable containers and scope remediation efforts.

Input Validation: At startup containers accept parameters, configuration files, credentials, JSON, and scripts. In some more aggressive scenarios, ‘agile’ teams shove new code segments into a container as input variables, making existing containers behave in fun new ways. Either through manual review or a third-party security tool, you should review container inputs to ensure they meet policy.
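As a rough illustration of that kind of input check, here is a minimal sketch that validates a candidate set of startup variables against a simple allowlist policy before a container is launched. The policy file name, patterns, and the shell-metacharacter check are all assumptions for illustration; a real deployment would hook an equivalent check into the orchestrator or a third-party tool.

```python
#!/usr/bin/env python3
"""Minimal sketch: validate container startup inputs against a policy.

Assumes a hypothetical policy file 'container-input-policy.json' of the form
{"ALLOWED_VARS": {"DB_HOST": "^[\\w.-]+$", "LOG_LEVEL": "^(INFO|WARN|ERROR)$"}}.
"""

import json
import re
import sys

POLICY_FILE = "container-input-policy.json"   # hypothetical, maintained by security
SUSPICIOUS = re.compile(r"[;&|`$<>]")          # crude check for shell metacharacters


def validate_inputs(candidate: dict) -> list:
    with open(POLICY_FILE) as fh:
        allowed = json.load(fh)["ALLOWED_VARS"]

    problems = []
    for name, value in candidate.items():
        if name not in allowed:
            problems.append(f"{name}: not on the approved input list")
        elif not re.fullmatch(allowed[name], value):
            problems.append(f"{name}: value does not match the policy pattern")
        if SUSPICIOUS.search(value):
            problems.append(f"{name}: contains shell metacharacters")
    return problems


if __name__ == "__main__":
    # Example: validate KEY=VALUE pairs passed on the command line before 'docker run'.
    candidate = dict(arg.split("=", 1) for arg in sys.argv[1:])
    issues = validate_inputs(candidate)
    if issues:
        sys.exit("\n".join(issues))
    print("inputs conform to policy")
```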
Reviewing inputs this way can help you prevent someone from forcing a container to misbehave, or simply prevent developers from making dumb mistakes.

Container Group Segmentation: Docker does not provide container-level restrictions on which containers can communicate with other containers, systems, hosts, or IPs. Basic network security is insufficient to prevent one container from attacking another, calling out to a Command and Control botnet, or other malicious behavior. If you are using a cloud service provider you can leverage their security zones and virtual network capabilities to segregate containers and specify what they are allowed to communicate with, over which ports. If you are working on-premise, we recommend you investigate products which enable you to define equivalent security restrictions. In this way each application has an analogue to a security group, which enables you to specify which inbound and outbound ports are accessible to and from which IPs, and can protect containers from unwanted access (a minimal sketch appears at the end of this post).

Blast Radius: A good option when running containers in cloud services, particularly IaaS clouds, is to run different containers under different cloud user accounts. This limits the resources available to any given container. If a given account or container set is compromised, the same cloud service restrictions which prevent tenants from interfering with each other limit possible damage between accounts and projects. For more information see our post on limiting blast radius with user accounts.

Platform Security

In Docker’s early years, when people talked about ‘container’ security, they were really talking about how to secure the Linux operating system underneath Docker. Security was more about the platform and traditional OS security measures. If an attacker gained control of the host OS, they could pretty much take control of anything they wanted in the containers. The problem was that the security of containers, their contents, and even the Docker engine was largely overlooked. This is one reason we focused our research on the things that make containers – and the tools that build them – secure. That said, no discussion of container security can be complete without some mention of OS security, and we would be remiss if we did not cover host, OS, and engine security at least a bit. Here we will cover some of the basics, but we will not go into depth on securing the underlying OS: we could not do that justice within this research, there is already a huge amount of quality documentation available for the operating system of your choice, and there are much more knowledgeable sources to address your concerns and questions on OS security.

Kernel Hardening: Docker security depends fundamentally on the underlying operating system to limit access between ‘users’ (containers) on the system. This resource isolation model is built atop a virtual map called Namespaces, which maps specific users or groups of users to a subset of resources (networks, files, IPC, etc.) within their Namespace. Containers should run under a specified user ID. Hardening starts with a secure
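To make the container group segmentation idea concrete, here is the minimal sketch referenced above: a per-application security group on AWS built with boto3. The VPC ID, group IDs, and port are hypothetical placeholders, and on-premise tooling would need an equivalent policy.

```python
#!/usr/bin/env python3
"""Minimal sketch: a per-application security group that only accepts
traffic on its service port from a named peer group, not arbitrary IPs.
The VPC ID, group name, peer group ID, and port below are hypothetical."""

import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"   # hypothetical VPC


def create_app_group(name: str, ingress_from_group: str, port: int) -> str:
    sg = ec2.create_security_group(
        GroupName=name,
        Description=f"Containers for {name}: inbound restricted to port {port}",
        VpcId=VPC_ID,
    )
    group_id = sg["GroupId"]

    # Only the named peer group may reach this application's service port;
    # pair this with egress rules to control outbound traffic as well.
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": ingress_from_group}],
        }],
    )
    return group_id


if __name__ == "__main__":
    web_tier_sg = "sg-0aaaaaaaaaaaaaaaa"   # hypothetical front-end group
    print("created", create_app_group("app-backend-containers", web_tier_sg, 8080))
```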


Assembling a Container Security Program: Securing the Build

As we mentioned in our last post, most people don’t seem to consider the build environment when thinking about container security, but it’s important. Traditionally the build environment is the domain of developers, and they don’t share a lot of details with outsiders (in this case, Operations folks). But this is beginning to change with Continuous Integration (CI), full Continuous Deployment (CD), and more automated deployment, where the output of the build environment is more likely to go straight into production. This means that operations, quality assurance, release management, and other groups find themselves having to cooperate on build automation scripts and work together more closely. Collaboration means a more complex, distributed working environment, with more stakeholders having access. DevOps is rapidly breaking down barriers between groups, even getting some security teams to contribute test scripts and configuration updates. Better controls are needed to restrict who can alter the build environment and update code, along with an audit process to validate who did what.

Don’t forget why containers are so attractive to developers. First, a container simplifies building and packaging application code – abstracting the app from its physical environment – so developers can worry about the application rather than its supporting systems. Second, the container model promotes lightweight services, breaking large applications down into small pieces, easing modification and scaling – especially in cloud and virtual environments. Finally, a very practical benefit is that container startup is nearly instant, allowing agile scaling up and down in response to demand. Keep these in mind when considering security controls, because any control that reduces one of these core advantages will not be considered, or is likely to be ignored.

Build environment security breaks down into two basic areas. The first is access to, and usage of, the basic tools that form the build pipeline – including source code control, build tools, the build controller, container management facilities, and runtime access. At Securosis we often call this the “management plane”, as these interfaces – whether API or GUI – are used to set access policies, automate behaviors, and audit activity. The second is security testing of your code and the container, validating that they conform to security and operational practices. This post will focus on the former.

Securing the Build

Here we discuss the steps to protect your code – more specifically to protect the build systems, to ensure they implement the build process you intended. This is conceptually very simple, but there are many pieces to this puzzle, so implementation can get complicated. People call this Secure Software Delivery, Software Supply Chain Management, and Build Server Security – take your pick; for our purposes today these terms are synonymous. It is management of the assemblage of tools which oversee and implement your process.

Following is a list of recommendations for securing the platforms in the build environment, to ensure secure container construction. We include tools from Docker and others which automate and orchestrate source code, builds, the Docker engine, and the repository. For each tool you will employ a combination of identity management, roles, platform segregation, secure storage of sensitive data, network encryption, and event logging. Some of the subtleties follow.

Source Code Control: Stash, Git, GitHub, and several variants are common.
Source code control is one of the tools with a wide audience, because it is now common for Security, Operations, and Quality Assurance to all contribute code, tests, and configuration data. Distributed access means all traffic should run over SSL or VPN connections. User roles and access levels are essential for controlling who can do what, but we recommend requiring token-based or certificate-based authentication, with two-factor authentication as a minimum for all administrative access.

Build Tools and Controllers: The vast majority of development teams we spoke with use build controllers like Bamboo and Jenkins, and these platforms have become an essential part of their automated build processes. They provide many pre-, post-, and intra-build options, and can link to a myriad of other facilities, which complicates security. We suggest full network segregation of the build controller system(s), and locking network connections down to the source code controller and Docker services. If you can, deploy build servers as on-demand containers – this ensures standardization of the build environment and consistency of new containers. We recommend you limit access to the build controllers as tightly as possible, and leverage built-in features to restrict capabilities when developers need access. We also suggest locking down configuration and control data to prevent tampering with build controller behavior. Keep any sensitive data – such as ssh keys, API access keys, database credentials, and the like – in a secure data repository (such as a key manager, .dmg file, or vault) and pull credentials on demand, so sensitive data never sits on disk unprotected (a minimal sketch of on-demand credential retrieval appears at the end of this post). Finally, enable the logging facilities or add-ons available for the build controller, and stream the output to a secure location for auditing.

Docker: You will use Docker as a tool for pre-production as well as production, building the build environment and test environments to vet new containers. As with build controllers like Jenkins, you’ll want to limit Docker access in the build environment to specific container administrator accounts. Limit network access to accept content only from the build controller system(s) and whatever trusted repository or registry you use. Our next post will discuss validation of individual containers and their contents.
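Here is the minimal sketch of on-demand credential retrieval mentioned above. It assumes a HashiCorp Vault KV v2 secrets engine mounted at secret/ and a short-lived token injected by the build controller; the post only calls for "a key manager or vault" generically, so treat the specific API, paths, and key names as illustrative assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch: fetch a build credential on demand from a vault.

Assumes HashiCorp Vault's KV v2 HTTP API (GET /v1/secret/data/<path>) and
environment variables VAULT_ADDR / VAULT_TOKEN supplied by the build
controller. The secret path and key below are hypothetical."""

import os
import requests

VAULT_ADDR = os.environ["VAULT_ADDR"]    # e.g. https://vault.build.internal:8200
VAULT_TOKEN = os.environ["VAULT_TOKEN"]  # short-lived token, never committed to source


def fetch_secret(path: str, key: str) -> str:
    resp = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/{path}",
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    # KV v2 nests the payload under data.data
    return resp.json()["data"]["data"][key]


if __name__ == "__main__":
    # Hand the credential to the build step via the environment, not a file,
    # so it never sits on disk unprotected.
    os.environ["REGISTRY_PASSWORD"] = fetch_secret("build/registry", "password")
    print("registry credential loaded for this build")
```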


Assembling a Container Security Program: Threats

After a somewhat lengthy hiatus – sorry about that – I will close out this series over the next couple days. In this post I want to discuss container threat models – specifically for Docker containers. Some of these are known threats and issues, some are purely lab exercises for proof-of-concept, and others are threat vectors which attackers have yet to exploit – likely because there is so much low-hanging fruit for them elsewhere. So what are the primary threats to container environments?

Build Environment

One area that needs protection is the build environment. It’s not first on most people’s lists for container security, but it’s first on mine because it’s the easiest place to insert malicious code. Developers tend to loathe security in development as it slows them down. This is why there is an entire industry dedicated to test data management and masked data: developers tend to do an end-run around security if it slows down their build and testing process. What kinds of threats are we talking about specifically? Things like malicious or moronic source code changes. Malicious or moronic alterations to automated build controllers. Configuration scripts with errors, or with credentials sitting around. The addition of insecure libraries or back-rev/insecure versions of existing code. We want to know if the runtime code has been scanned for vulnerabilities. And we worry about a failure to audit all the above and catch any errors.

Container Security

What the hell is in the container? What does it do? Is that even the correct version of the container? These are common questions I hear a lot from operations folks. They have no idea. Nor do they know what permissions the container has or requires – all too often lazy developers run everything as root, breaking operational security models and opening up the container engine and underlying OS to various attacks. And security folks are unaware of what – if any – container hardening may have been performed. You want to know the container’s contents have been patched, vetted, hardened, and registered prior to deployment.

Runtime Security

So what are the threats to worry about? We worry a container will attack or infect another container. We worry a container may quietly exfiltrate data, or just exhibit any other odd behavior. We worry containers have been running a long time, and not rotated to newer patched versions. We worry about whether the network has been properly configured to limit damage from a compromise. And we worry about attackers probing containers, looking for vulnerabilities.

Platform Security

Finally, the underlying platform security is a concern. We worry that a container will attack the underlying host OS or the container engine. If it succeeds it’s pretty much game over for that cluster of containers, and you may have given malicious code resources to pivot and attack other systems.

If you are in the security industry long enough, you see several patterns repeat over and over. One is how each hot new tech becomes all the rage, finds its way into your data center, and before you have a chance to fully understand how it works, someone labels it “business critical”. That’s about when security and operations teams get mandated to secure that hot new technology. It’s a natural progression – every software platform needs to focus on attaining minimum usability, scalability, and performance levels before competitors come and eat their lunch.
After a certain threshold of customer adoption is reached – when enterprises really start using it – customers start asking, “Hey, how do we secure this thing?” The good news is that Docker has reached that point in its evolutionary cycle. Security is important to Docker customers, so it has become important to Docker as well. They have now implemented a full set of IAM capabilities – identity management, authentication, authorization, and (usually) single sign-on or federation – along with encrypted communications to secure data in transit. For the rest of the features enterprises expect – configuration analysis, software assessment, monitoring, logging, encryption for data at rest, key management, development environment security, and so on – you’re looking at a mixture of Docker and third-party solution providers to fill in the gaps. We also see cloud providers like Azure and AWS mapping their core security services over the container environment, providing different security models from what you might employ on-premise. This is an interesting time for container security in general… and a bit confusing, as you have a couple of different ways to address any given threat. Next we will delve into how to address these threats at each stage of the pipeline, starting with build environment security.


The Difference between SecDevOps and Rugged DevOps

Adrian here. I wanted to do a quick post on a question I’ve been getting a lot: “Is there a difference between SecDevOps, Rugged DevOps, DevSecOps, and the rest of those various terms? Aren’t they all the same?” No, they are not. I realized that Rich and I have been making this distinction for some time, and while we have made references in presentations, I don’t think we have ever discussed it on the blog. So here they are, our definitions of Rugged DevOps and SecDevOps:

Rugged DevOps is about bashing your code prior to production, to ensure it holds up to external threats once it gets into production, and using runtime code to help applications protect themselves. Be as mean to your code as attackers will be, and make it resilient against attacks.

SecDevOps, or DevSecOps, is about using the wonders of automation to tackle security-related problems, including composition analysis, configuration management, selecting approved images/containers, use of immutable servers, and other techniques to address security challenges facing operations teams. It also helps eliminate certain classes of attacks: for instance, immutable servers in a security zone which blocks port 22 prevent both hackers and administrators from logging in.

In simplest terms, Rugged DevOps is more developer-focused, while SecDevOps is more operations-focused. Before you ask: yes, DevOps does away with the silos between development, QA, operations, and security. They are all part of the same team. They work together. Security’s role changes a bit – they help advise, help with tool selection, and more technically astute members even help write code or tests to validate code. But we are still having developer-centric conversations and operations conversations, so this merger is clearly a work in progress. Please feel free to disagree.


SAP Cloud Security: Contracts

This post will discuss the division of responsibility between a cloud provider and you as a tenant, and how to define aspects of that relationship in your service contract. Renting a platform from a service provider does not mean you can afford to cede all security responsibility. Cloud services free you from many traditional IT jobs, but you must still address security. The cloud provider assumes some security responsibilities, but many still fall into your lap, while others are shared. The administration and security guides don’t spell out all the details of how security works behind the scenes, or what the provider really provides. Grey areas should be defined and clarified in your contract up front – the middle of an incident response is a terrible time to discover what SAP actually offers.

SAP’s brochures on cloud security imply that you can tackle security in a simple and transparent way. That’s not quite accurate. SAP has done a good job providing basic security controls, and they have obtained certifications for common regulatory and compliance requirements on their infrastructure. But you are renting a platform, which leaves a lot up to you. SAP does not provide a good roadmap of what you need to tackle, or a list of topics to understand before you deploy into an SAP HCP cloud. Our first goal for this section is to help you identify which areas of cloud security you are responsible for. Just as important is identifying and clarifying shared responsibilities. To highlight important security considerations which generally are not discussed in service contracts, we will guide you through assessing exactly what a cloud provider’s responsibilities are, and what they do not provide. Only then does it become clear where you need to deploy resources.

Divisions of Responsibility

What is PaaS? Readers who have worked with SAP Hana already know what it is and how it works. Those new to cloud may understand the Platform as a Service (PaaS) concept, but not yet be fully aware of what it means structurally. To highlight what a PaaS service provides, let’s borrow Christopher Hoff’s cloud taxonomy for PaaS, which illustrates what SAP provides. This diagram includes the components of IaaS and PaaS systems. Obviously the facilities (such as power, HVAC, and physical space) and hardware (storage, network, and computing power) portions of the infrastructure are provided, as are the virtualization and cluster management technologies to make it all work together. More interesting, though: SAP Hana, its associated business objects, personalization, integration, and data management capabilities are all provided – as well as APIs for custom application development. This enables you to focus on delivering custom application features, tailored UIs, workflows, and data analytics, while SAP takes care of managing everything else.

The Good, the Bad, and the Uncertain

The good news is that this frees you from lengthy hardware provisioning cycles, network setup, standing up DNS servers, cluster management, database installations, and the myriad things it takes to stand up a data center. And all the SAP software, middleware components, and integration are built in – available on demand. You can stand up an entire SAP cluster through their management console in hours instead of weeks. Scaling up – and down – is far easier, and you are only charged for what you use.
The bad news is that you have no control over the underlying network security, and you do not have access to network events to seed your on-premise DLP, threat analysis, SIEM, and IDS systems. Many traditional security tools therefore no longer function, and event collection capabilities are reduced. The net result is that you become more reliant than ever on the application platform’s built-in security, but you do not fully control it. SAP provides fairly powerful management capabilities from a single console, so administrative account takeovers or malicious employees can cause considerable damage. There are many security details the vendor may share with you, but wherever they don’t publish specifics, you need to ask – specifically about things like segregation of administrative duties, data encryption and key management, their employee vetting process, and how they monitor their own systems for security events. You’ll need to dig in a bit and ask SAP about the details of the security capabilities they have built into the platform.

Contract Considerations

At Securosis we call the division between your security responsibilities and your vendor’s “the waterline”. Anything above the waterline is your responsibility, and everything below is SAP’s. In some areas, such as identity management, both parties have roles to play. But you generally don’t see below the waterline – how they perform their work is confidential. You have very little visibility into their work, and very limited ability to audit it – for SAP and other cloud services alike. This is where your contract comes into play. If a service is not in the contract, there is a good chance it does not exist. It is critical to avoid assumptions about what a cloud provider offers or will do, if or when something like a data breach occurs. Get everything in writing. The following are several areas we advise you to ask about; if you need something for security, include it in your contract.

Event Logs: Security analytics require event data from many sources. Network flows, syslog, database activity, application logs, IDS, IAM, and many others are all useful. But SAP’s cloud does not offer all these sources. Further, the cloud is multi-tenant, so logs may include activity from other tenants, and therefore not be available to you. For platforms and applications you manage in the cloud, event logs are available. Assess what you rely on today that will be unavailable. In most cases you can switch to more application-centric event sources to collect the required information. You also need to determine how data will be collected – agents are available for many things, while other logs must be gathered via API requests.

Testing and Assessment: SAP states that they conduct internal penetration tests to verify common defects are not present, and attempts to validate that their


Assembling a Container Security Program [New Series]

The explosive growth of containers is not surprising – technologies such as Docker address several problems facing developers when they deploy applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for micro-services, and horizontal scalability – all of which containers provide, making them very compelling. Yet this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design) and gives security pros nightmares. We run more code and run it faster, raising the question: how can you introduce security without losing the benefits of containers?

IT and Security teams lack visibility into containers, and have trouble validating them – both before placing them into production, and once they are running in production. Their peers on the development team are often uninterested in security, and cannot be bothered with providing reports and metrics. This is essentially the same problem we have with application security in general: the people responsible for the code are not incentivized to make security their problem, and the people who want to know what’s going on lack visibility.

In this research we will delve into container technology, its unique value proposition, and how it fits into the application development and management processes. We will offer advice on how to build security into the container build process, how to validate and manage container inventories, and how to protect the container runtime environment. We will discuss applicability for both pre-deployment testing and runtime security.

Our hypothesis is that containers are scaring the hell out of security pros because of their lack of transparency. The burden of securing containers falls across development, operations, and security teams, but none of these audiences are sure how to tackle the problem. This research is intended to aid security practitioners and IT operations teams in selecting tools and approaches for container security. We are not diving into how to secure apps in containers here – instead we are limiting ourselves to build, container management, deployment, and runtime security for the container environment. We will focus on Docker security as the dominant container model today, but will comment on other options as appropriate – particularly Google and Amazon services. We will not go into detail on the Docker platform’s native security offerings, but will mention them as part of an overall strategy. Our working title is “Assembling a Container Security Program”, but that is open for review.

Our outline for this series is:

Threats and Concerns: We will outline why container security is difficult, with a dive into the concerns around malicious containers, trust between containers and the runtime environment, container mismanagement, and hacking the build environment. We will discuss the areas of responsibility for Security, Development, and Operations.

Securing the Build: This post will cover the security of the build environment, where code is assembled and containers are constructed. We will consider vetting the contents of the container, as well as how to validate supporting code libraries. We will also discuss credential management for build servers to help protect against container tampering, code insertion, and misuse, through assessment tools, build tool configuration, and identity management.
We will offer suggestions for Continuous Integration and DevOps environments.

Validating the Container: Here we will discuss methods of container management and selection, as well as ways to ensure selection of the correct containers for placement into the environment. We will discuss approaches to container validation and management, as well as good practices for response when vulnerabilities are found.

Protect the Runtime Environment: This post will cover protecting the runtime environment from malicious containers. We will discuss the basics of host OS security and container engine security. This topic could encompass an entire research paper itself, so we will only explore the basics, with pointers to container engine and OS platform security controls. And we will discuss the use of identity management in cloud environments to restrict container permissions at runtime.

Monitoring and Auditing: Here we will discuss the need to verify that containers are behaving as intended, breaking out the use of logging, real-time monitoring, and activity auditing for container environments. We will also discuss verification of code behavior – through both sandboxing and API monitoring.

Containers are not really new, but container security is still immature, so we are in full research mode with this project. As always, we use an open research model – the community helps make these research papers better, by both questioning our findings and sharing experiences. We want to hear your questions, concerns, and experiences. Please reach out to us via email or leave comments. Our next post will address the concerns we hear from security and IT folks.


Securing SAP Clouds [New Series]

Every enterprise uses cloud computing services to some degree – tools such as Gmail, Twitter, and Dropbox are ubiquitous, as are business applications like Salesforce, ServiceNow, and QuickBooks. Cost savings, operational stability, and reduced management effort are all proven advantages. But when we consider moving back-office infrastructure – systems at the heart of the business – there is significant angst and uncertainty among IT and security professionals. For big and complex applications like SAP, they wonder whether cloud services are a viable option. The problem is that security is not optional – it is critical. And for folks operating in a traditional on-premise environment, it is often unclear how to adapt the security model to an unfamiliar environment where they only have partial control.

We have been receiving an increasing number of questions on SAP cloud security, so today we are kicking off a new research effort to address the major questions around SAP cloud deployment. We will examine how cloud services are different, and how to adapt to produce secure deployments. Our main focus areas will be the division of responsibility between you and your cloud vendor, which tools and approaches are viable, changes to the operational model, and advice for putting together a cloud security program for SAP.

Cloud computing infrastructure faces many of the same challenges as traditional on-premise IT. We are past legitimately worrying that the cloud is “less secure”. Properly implemented, cloud services are as secure – in many cases more secure – than on-premise applications. But proper implementation is tricky: if you simply “lift and shift” your old model into the cloud, we know from experience that it will be less secure and cost more to operate. To realize the advantages of the cloud you need to leverage its new features and capabilities – which demands a degree of re-engineering of your architecture, security program, and processes.

SAP cloud security is tricky. The main issue is that there is no single model for what an “SAP Cloud” looks like. For many, it’s Hana Enterprise Cloud (HEC), a private cloud within the existing on-premise domain. Customers who don’t modify or extend SAP’s products can leverage SAP’s Software as a Service (SaaS) offering. But a growing number of firms we speak with are moving to SAP’s Hana Cloud Platform (HCP), a Platform as a Service (PaaS) bundle of the core SAP Hana application with data management features. Alternatively, various other cloud services can be bundled or linked to build a cloud platform for SAP – often including mobile client access ‘enablement’ services and supplementary data management (think big data analytics and data mining). But we find customers do not limit themselves only to SAP software – they blend SAP cloud services with other major IaaS providers, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, to create ‘best-of-breed’ solutions. In response, SAP has published widely on its vision for cloud computing architectures, so we won’t cover that in detail here, but they promote hybrid deployments centered around Hana Cloud Platform (HCP) in conjunction with on-premise and/or public IaaS clouds. There is a lot to be said for the flexibility of this model – it enables customers to deploy applications into the cloud environments they are comfortable with, or to choose one optimal for their applications.
But this flexibility comes at the price of added complexity, which makes it more difficult to craft a cohesive security model. So we will focus on use of the HCP service, discussing security issues around hybrid architectures as appropriate. We will cover the following areas:

Division of Responsibility: This post will discuss the division of responsibility between the cloud provider and you, the tenant. We will talk about where the boundary lands in different cloud service models (specifically SaaS, PaaS, and IaaS). We will discuss new obligations (particularly the cloud provider’s responsibilities), the need to investigate which security tools and information they provide, and where you need to fill in the gaps. Patching, configuration, breach analysis, the ability to assess installations, availability of event data, and many other considerations come into play. We will discuss the importance of contracts and service definitions, as well as what to look for when addressing compliance concerns. We will briefly address issues of jurisdiction and data privacy when considering where to deploy SAP servers and failover systems.

Cloud Architectures and Security Models: SAP’s cloud service offers many features which are similar to their on-premise offerings. But cloud deployments disrupt traditional security controls, and reliance on old-school network scanning and monitoring no longer works in multi-tenant environments on virtual networks. So this post will discuss how to evolve your approach to security, particularly in application architecture and the security selection process. We will cover the major areas you need to address when mapping your security controls to cloud-enabled security technologies. We will explore some issues with current preventive security controls, cluster configuration, and logging.

Application Security: Cloud deployments free us of many burdens of patching, server maintenance, and physical network segregation. But we are still responsible for many application-layer security controls – including SAP applications, your application code, and supporting databases. And many cloud vendor services impact application configuration. This post will discuss preventive security controls in areas such as configuration, assessment, and identity management, as well as how to approach patch management. We will also discuss real-time security in monitoring, data security, logging, and analytics. And we will discuss security controls missing from SAP cloud services.

Security Operations in Cloud Environments: The cloud fundamentally changes IT operations, for the better. Traditional concepts of how to provide reliability and security are turned on their ear in cloud environments. Most IT and security personnel don’t fully grasp the challenges – or the opportunities. This post will present the advantages of ephemeral servers, automation, virtual networks, API enablement, and fine-grained authorization. We will discuss automation and orchestration of security tasks through APIs and scripts, how to make patching less painful, and how to deploy security as part of your application


New Paper: Understanding and Selecting RASP

We are pleased to announce the availability of our Understanding RASP (Runtime Application Self-Protection) research paper. We would like to heartily thank Immunio for licensing this content. Without this type of support we could not bring this level of research to you, both free of charge and without requiring registration. We think this research paper will help developers and security professionals who are tackling application security from within.

Our initial motivation for this paper was questions we got from development teams during our Agile Development and DevOps research efforts. During each interview we received questions about how to embed security into the application and the development lifecycle. The people asking us wanted security, but they needed it to work within their development and QA frameworks. Tools that don’t offer RESTful APIs, or cannot deploy within the application stack, need not apply. During these discussions we were asked about RASP, which prompted us to dive in.

As usual, during this research project we learned several new things. One surprise was how much RASP vendors have advanced the application security model. Initial discussions with vendors showed several used a plug-in for Tomcat or a similar web server, which allows developers to embed security as part of their application stack. Unfortunately that falls a bit short on protection. The state of the art in RASP is to take control of the runtime environment – perhaps using a full custom JVM, or the Java JVM’s instrumentation API – to enable granular and internal inspection of how applications work. This model can provide assessments of supporting code, monitoring of activity, and blocking of malicious events (a toy illustration of the in-application approach appears at the end of this post). As some of our blog commenters noted, the plug-in model offers a good view of the “front door”, but full access to the JVM’s internal workings additionally enables you to deploy very targeted protection policies where attacks are likely to occur, and to see attacks which are simply not visible at the network or gateway layer.

This in turn caused us to re-evaluate how we describe RASP technology. We started this research in response to developers looking for something suitable for their automated build environments, so we spent quite a bit of time contrasting RASP with WAF to spotlight the constraints WAF imposes on development processes. But for threat detection, these comparisons are less than helpful. Discussions of heuristics, black and white lists, and other detection approaches fail to capture some of RASP’s contextual advantages when running as part of an application. Compared to a sandbox or firewall, RASP’s position inside an application alleviates some of WAF’s threat detection constraints. In this research paper we removed those comparisons; we offer some contrasts with WAF, but do not constrain RASP’s value to WAF replacement. We believe this technological approach will yield better results and provide the hooks developers need to better control application security.

You can download the research paper, or get a copy from our Research Library.
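The paper describes RASP in JVM terms; as a rough, language-agnostic analogue, here is the toy Python sketch mentioned above – instrumenting a function at runtime so the application inspects its own inputs and blocks suspicious calls from the inside. The decorator, pattern, and function names are all invented for illustration and are nothing like a production RASP implementation.

```python
#!/usr/bin/env python3
"""Toy analogue of runtime application self-protection: wrap a function so
the application inspects its own inputs in context and blocks obvious SQL
injection attempts. Real RASP products hook the runtime (e.g. the JVM)
far more deeply; names and patterns here are purely illustrative."""

import functools
import re

SQLI_PATTERN = re.compile(r"('|--|;|\bunion\b|\bor\s+1=1\b)", re.IGNORECASE)


class BlockedRequest(Exception):
    """Raised when instrumented code sees input that violates policy."""


def rasp_guard(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Inspect every string argument in the context of the actual call.
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SQLI_PATTERN.search(value):
                raise BlockedRequest(f"blocked suspicious input to {func.__name__}: {value!r}")
        return func(*args, **kwargs)
    return wrapper


@rasp_guard
def lookup_user(username: str) -> str:
    # Hypothetical data-access function, now protected from inside the app.
    return f"SELECT * FROM users WHERE name = '{username}'"


if __name__ == "__main__":
    print(lookup_user("alice"))
    try:
        lookup_user("alice' OR 1=1 --")
    except BlockedRequest as err:
        print(err)
```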


Understanding and Selecting RASP: Buyers Guide

Before we jump into today’s post, we want to thank Immunio for expressing interest in licensing this content. This type of support enables us to bring quality research to you, free of charge. If you are interested in licensing this Securosis research as well, please let us know. And we want to thank all of you who have been commenting throughout this series – we have received many good comments and questions. We have in fact edited most of the posts to integrate your feedback, and added new sections to address your questions. This research is certainly better for it!


Summary: June 10, 2016

Adrian here. A phone call about Activity Monitoring of administrative actions on mainframes, followed by a call on security architectures for new applications in AWS. A call on SAP vulnerability scans, followed by a call on Runtime Application Self-Protection. A call on protecting relational databases against SQL injection, followed by a discussion of relevant values to key security event data for a big data analytics project. Consulting with a firm which releases code every 12 months, and discussing release management with a firm that is moving to two releases a day in a continuous deployment model. This is what my call log looks like. If you want to see how disruptive technology is changing security, you can just look at my calendar.

On any given day I am working at both extremes of security. On one hand we have the old and well-worn security problems: familiar, comfortable, and boring. On the other hand we have new security problems, largely driven by cloud and mobile technologies and the corresponding side effects – such as hybrid architectures, distributed identity management, mobile device management, data security for uncontrolled environments, and DevOps. Answers are not rote, problems do not always have well-formed solutions, and crafting responses takes a lot of work. Worse, the answer I gave yesterday may be wrong tomorrow, if the pace of innovation invalidates it. This is our new reality. Some days it makes me dizzy, but I’ve embraced the new, if for no other reason than to avoid being run over by it. It’s challenging as hell, but it’s not boring.

On to this week’s summary. If you want to subscribe directly to the Friday Summary only list, just click here.

Top Posts for the Week

  • Azure Infrastructure Security Book Coming
  • Fujitsu to Integrate Box into enterprise software
  • Gene Kim on The Three Ways
  • Big data increasingly a driver of cloud services
  • Microsoft partners with Jenkins
  • Oracle will sue the former employee who allegedly would not embrace cloud computing accounting methods

Tool of the Week

I decided to take some time to learn about tools more common to clouds other than AWS. I was told Kubernetes was the GCP open source version of Docker, so I thought that would be a good place to start. After I spent some time playing with it, I realized what I was initially told was totally wrong! Kubernetes is called a “container manager”, but it’s really focused on setting up services. Docker focuses on addressing app dependencies and packaging; Kubernetes focuses on app orchestration. And it runs anywhere you want – not just GCP and GCE, but in other clouds or on-premise. If you want to compare Kubernetes to something in the Docker universe, it’s closest to Docker Swarm, which tackles some of the management and scalability issues.

Kubernetes has three basic parts: controllers that handle things like replication and pod behaviors; a simple naming system – essentially using key-value pairs – to identify pods; and a services directory for discovery, routing, and load balancing. A pod can be one or more Docker containers, or a standalone application. These three primitives make it pretty easy to stand up code, direct application requests, manage clusters of services, and provide basic load balancing. It’s open source and works across different clouds, so your application should work the same on GCP, Azure, or AWS. It’s not super easy to set up, but it’s not a nightmare either.
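For a sense of how those primitives look from the API side, here is a minimal sketch using the official kubernetes Python client. It assumes a working kubeconfig, and the "default" namespace and "app=web" label are purely illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch of the pod/label/service primitives described above,
using the official 'kubernetes' Python client. Assumes a working
kubeconfig; the namespace and label selector are illustrative."""

from kubernetes import client, config


def show_pods_and_services(namespace: str = "default", selector: str = "app=web") -> None:
    config.load_kube_config()          # reads ~/.kube/config
    core = client.CoreV1Api()

    # Pods are identified and grouped with simple key-value labels.
    pods = core.list_namespaced_pod(namespace, label_selector=selector)
    for pod in pods.items:
        print("pod:", pod.metadata.name, pod.metadata.labels)

    # Services select pods by those same labels and handle discovery,
    # routing, and basic load balancing.
    services = core.list_namespaced_service(namespace)
    for svc in services.items:
        print("service:", svc.metadata.name, "selects", svc.spec.selector)


if __name__ == "__main__":
    show_pods_and_services()
```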
And it’s incredibly flexible – once set up, you can easily create pods for different services, with entirely different characteristics. A word of caution: if you’re heavily invested in Docker, you might instead prefer Swarm. Early versions of Kubernetes seemed to have Docker containers in mind, but the current version does not integrate with native Docker tools and APIs, so you have to duct tape some stuff together to get Docker-compliant containers. Swarm is compliant with Docker’s APIs and works seamlessly. But don’t be swayed by studies that compare container startup times as a main measure of performance; that is one of the least interesting metrics for comparing container management and orchestration tools. Operating performance, ease of use, and flexibility are all far more important. If you’re not already a Docker shop, check out Kubernetes – its design is well thought out and purpose-built to tackle micro-service deployment. And I have not yet had a chance to use Google’s Container Engine, but it is supposed to make setup easier, with a number of supporting services.

Securosis Blog Posts this Week

  • Evolving Encryption Key Management Best Practices: Use Cases
  • Incite 6/7/2016: Nature
  • Mr. Market Loves Ransomware
  • Building a Vendor (IT) Risk Management Program (New Paper)
  • Evolving Encryption Key Management Best Practices: Part 2

Other Securosis News and Quotes

  • Mike did a webcast with Chris over at IANS

Training and Events

We are running two classes at Black Hat USA:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.