Understanding and Selecting RASP 2019: Use Cases

Updated 9-13 to include business requirements.

The primary function of RASP is to protect web applications against known and emerging threats. In some cases it is deployed to block attacks at the application layer before vulnerabilities can be exploited, but in many cases RASP tools process a request normally until they detect an attack, and only then block the action. Astute readers will notice that these are basically the classic use cases for Intrusion Detection Systems (IDS) and Web Application Firewalls (WAFs). So why look for something new, if other tools on the market already provide the same application security benefits? The answer is not in what RASP does, but in how it works, which makes it more effective in a wide range of scenarios. Let's delve into what clients are asking for, so we can bring this into focus.

Primary Market Drivers

RASP is a relatively new technology, so current market drivers are tightly focused on addressing the security needs of two distinct "buying centers" which have been largely unaddressed by existing security products. We discovered this important change since our last report in 2017 through hundreds of conversations with buyers, who expressed remarkably consistent requirements. The two buying centers are security and application development teams. Security teams are looking for a reliable WAF replacement without burdensome management requirements, while development teams ask for a security technology to protect applications within the framework of their existing development processes.

The security team requirement is controversial, so let's start with some background on WAF functions and usability. It is essential to understand the problems driving firms toward RASP. Web Application Firewalls typically employ two methods of threat detection: blacklisting and whitelisting. Blacklisting is detection – and often blocking – of known attack patterns spotted within incoming application requests. Blacklisting is useful for screening out many basic attacks, but new attack variations keep showing up, so blacklists cannot stay current, and attackers keep finding ways to bypass them. SQL injection and its many variants is the best illustration.

Whitelisting is where WAFs provide their real value. A whitelist is created by watching and learning acceptable application behaviors, recording legitimate behaviors over time, and blocking any requests which do not match the approved behavior list. This approach offers substantial advantages over blacklisting: the list is specific to the application monitored, which makes it feasible to enumerate good functions – instead of trying to catalog every possible malicious request – and therefore easier (and faster) to spot undesirable behavior (a minimal sketch of the two approaches follows below). Unfortunately, developers complain that in the normal course of application deployment, a WAF can never complete whitelist creation – 'learning' – before the next version of the application is ready for deployment. The argument is that WAFs are inherently too slow to keep up with modern software development, so they devolve to blacklist enforcement. Developers and IT teams alike complain that WAFs are not fully API-enabled, and that setup requires major manual effort. Security teams complain they need full-time personnel to manage and tweak rules. And both groups complain that, when they try to deploy into Infrastructure as a Service (IaaS) public clouds, the lack of API support is a deal-breaker.
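To make the blacklist/whitelist distinction concrete, here is a minimal sketch in Python. The patterns and the learned behavior list are hypothetical stand-ins for illustration; real WAF rulesets and learning engines are far more extensive.

```python
import re

# Blacklist: known-bad patterns matched against incoming request content.
# These two patterns are illustrative placeholders, not a real ruleset.
BLACKLIST = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection shape
    re.compile(r"(?i)<script[^>]*>"),          # reflected XSS shape
]

# Whitelist: (method, path) pairs observed as legitimate during a
# hypothetical 'learning' period for this specific application.
WHITELIST = {("GET", "/catalog"), ("POST", "/checkout")}

def blacklist_blocks(value: str) -> bool:
    """Block if the request content matches any known attack pattern."""
    return any(p.search(value) for p in BLACKLIST)

def whitelist_blocks(method: str, path: str) -> bool:
    """Block anything outside the learned set of good behaviors."""
    return (method, path) not in WHITELIST

request = {"method": "POST", "path": "/checkout",
           "body": "id=1 UNION SELECT password FROM users"}
blocked = (blacklist_blocks(request["body"])
           or whitelist_blocks(request["method"], request["path"]))
print("blocked" if blocked else "allowed")  # -> blocked
```

Note the asymmetry: the blacklist must chase every new attack variant, while the whitelist only needs to enumerate the application's legitimate behaviors – which is exactly why incomplete 'learning' leaves a WAF with only the weaker half of its value.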
Customers also complain of deficient vendor support beyond basic "virtual appliance" scenarios – including a lack of support for cloud-native constructs like application auto-scaling, ephemeral application stacks, templating, and scripting/deployment support for the cloud. As application teams become more agile, and as firms expand their cloud footprint, traditional WAF becomes less useful. To be clear, WAF can provide real value – especially commercial WAF "Security as a Service" offerings, which focus on blacklisting and additional protections like DDoS mitigation. These are commonly run in the cloud as a proxy service, often filtering requests "in the cloud" before they pass into your application and/or RASP solution. But they are limited to a 'half-a-WAF' role – without the sophistication or integration to leverage whitelisting. Traditional WAF platforms continue to work for on-premise applications with slower deployment cadences, where the WAF has time to build and leverage a whitelist. So existing WAF is largely not being "ripped and replaced", but it goes largely unused in the cloud and by more agile development teams.

Security teams, then, are looking for an effective application security tool to replace WAF which is easier to manage. They need to cover application defects and technical debt – not every defect can be fixed in code in a timely fashion. Developer requirements are more nuanced: they cite the same end goal, but tend to ask which solutions can be fully embedded into existing application build and certification processes. To work with development pipelines, security tools need to go the extra mile: protecting against attacks while accommodating the disruption underway in the developer community. A solution must be as agile as application development, which often starts with compatible automation capabilities. It needs to scale with the application, typically by being bundled with the application stack at build time. It should 'understand' the application and tailor its protection to the application runtime. A security tool should not require that developers be security experts. And development teams working to "shift left" – to get security metrics and instrumentation earlier in their process – want tools which work in pre-production as well as production. RASP offers a distinct blend of capabilities and usability options which make it a good fit for these use cases. This is why, over the last three years, we have been fielding several calls each week to discuss it.

Functional Requirements

The market drivers mentioned above change traditional functional requirements – the features buyers are looking for. Effectiveness: This seems like an odd buyer requirement. Why buy a product which does not actually work? The short answer is 'false positives' which waste time and effort. The longer answer is that many security tools don't work well, produce too many false positives to be usable, or require so much maintenance that building your own bespoke tool seems like a better option.


Understanding and Selecting RASP: 2019

During our 2015 DevOps research conversations, developers consistently turned the tables on us, asking dozens of questions about embedding security into their development process. We were surprised to discover how much developers and IT teams are taking larger roles in selecting security solutions, working to embed security products into tooling and build processes. Just as they use automation to build and test product functionality, they automate security too. But the biggest surprise was that every team asked about RASP: Runtime Application Self-Protection. Each team was either considering RASP or already engaged in a proof-of-concept with a RASP vendor. This was typically in response to difficulties with existing Web Application Firewalls (WAF) – most teams still carry significant "technical debt", which requires runtime application protection. Since 2017 we have engaged in over 200 additional conversations on what gradually evolved into 'DevSecOps' – with both security and development groups asking about RASP, how it deploys, and the benefits it can realistically provide. These conversations solidified the requirement for more developer-centric security tools which offer the agility developers demand, provide metrics prior to deployment, and either monitor or block malicious requests in production.

Research Update

Our previous RASP research was published in the summer of 2016. Since then Continuous Integration for application build processes has become the norm, and DevOps is no longer considered a wild idea. Developers and IT folks have embraced it as a viable and popular approach for producing more reliable application deployments. But it has raised the bar for security solutions, which now need to be as agile and embeddable as developers' other tools to be taken seriously. The rise of DevOps has also raised expectations for integration of security monitoring and metrics. We have witnessed the disruptive innovation of cloud services, with companies pivoting from "We are not going to the cloud." to "We are building out our multi-cloud strategy." in three short years. These disruptive changes have spotlit the deficiencies of WAF platforms: both their lack of agility and their inability to go "cloud native". Similarly, we have observed advancements in RASP technologies and deployment models. With all these changes it has become increasingly difficult to differentiate one RASP platform from another, so we are kicking off a refresh of our RASP research. We will dive into the new approaches, deployment models, and revised selection criteria for buyers.

Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically offer the following capabilities (a minimal instrumentation sketch follows below):

  • Unpack and inspect requests in the application context, rather than at the network or HTTP layer
  • Monitor and block application requests; products can sometimes alter requests to strip out malicious content
  • Full functionality through RESTful APIs
  • Protect against all classes of application attacks, and detect whether an attack would succeed
  • Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
  • Instrument application functions and report on usage

As with all our research, we welcome public participation in comments to augment or discuss our content.
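To illustrate the 'embeds into the application runtime' point, here is a minimal sketch of a RASP-style instrumentation hook in Python. The guarded function, the pattern, and the block-versus-monitor behavior are all simplified assumptions; commercial products instrument the runtime far more deeply.

```python
import functools
import re

# Illustrative placeholder pattern; real engines analyze queries in full context.
SQLI_PATTERN = re.compile(r"(?i)(\bunion\b.+\bselect\b|;\s*drop\b|--)")

def rasp_guard(func):
    """Wrap a data-access function so its arguments are inspected in
    application context before execution - monitor or block in real time."""
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        if SQLI_PATTERN.search(query):
            # Block mode: stop the call. A monitor mode would log and continue.
            raise PermissionError(f"RASP blocked suspicious query: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@rasp_guard
def run_query(query):
    print(f"executing: {query}")  # stand-in for a real database call

run_query("SELECT * FROM orders WHERE id = 42")  # allowed
try:
    run_query("SELECT name FROM users WHERE id = 1; DROP TABLE users")
except PermissionError as err:
    print(err)  # blocked before reaching the database
```

Because the hook sees the fully assembled query inside the application, it can judge whether an attack would actually succeed – something a network-layer device inspecting raw HTTP cannot do.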
Securosis is known for research positions which often disagree with vendors, analyst firms, and other researchers, so we encourage civil debate and contribution. The more you add to the discussion, the better the research! Next we will discuss RASP use cases and how they have changed over the last few years.


Building a Multi-cloud Logging Strategy: Issues and Pitfalls

As we begin our series on multi-cloud logging, we start with the reasons some traditional logging approaches don't work. I don't like to start on a negative note, but we need to point out some challenges and pitfalls which often beset firms on first migration to the cloud. That, and it helps frame our other recommendations later in this series. Let's take a look at some common issues by category.

Tooling

Scale & Performance: Most log management and SIEM platforms were designed and first sold before anyone had heard of clouds, Kafka, or containers. They were architected for 'hub-and-spoke' deployments on flat networks, when 'scalability' meant running on a bigger server. This is important because the infrastructure we now monitor is agile – designed to auto-scale up when we need processing power, and back down to reduce costs. The ability to scale up, down, and out is essential to the cloud, but often missing from older logging products which require manual setup, and which lack full API enablement and auto-scale capability.

Data Sources: We mentioned in our introduction that some common network log sources are unavailable in the cloud. Conversely, as automation and orchestration of cloud resources happen via API calls, API logs become an important source. Data formats for these new log sources may change, as do the indicators used to group events or users within logs. For example, servers in auto-scale groups may share a common IP address, and functions and other 'serverless' infrastructure are ephemeral, making it impossible to differentiate one instance from the next this way. So your tools need to ingest new types of logs, faster, and change their threat detection methods by source.

Identity: Understanding who did what requires understanding identity. An identity may be a person, service, or device. Regardless, the need to map it, and perhaps correlate it across sources, becomes even more important in hybrid and multi-cloud environments.

Volume: When SIEM first began making the rounds, there were only so many security tools, and they pumped out only so many logs. Between new security niches and new regulations, the array of log sources sending unprecedented amounts of logs to collect and analyze grows every year. Moving from traditional AV to EPP, for example, brings a huge increase in log volume; add in EDR logs and you're into some serious volumes. On the server side, moving from network and server logs to application-layer and container logs brings another non-trivial increase. There are only so many tools designed to handle modern event rates (X billion events per day) and volumes (Y terabytes per day) without buckling under the load, and more importantly, there are only so many people who know how to deploy and operate them in production. While storage is plentiful and cheap in the cloud, you still need to get those logs from various on-premise and cloud sources – perhaps across IaaS, PaaS, and SaaS – into the desired storage. If you think that's easy, call your SaaS vendor and ask how to export all your logs from their cloud into your preferred log store (S3/ADLS/GCS/etc.). That old Silicon Valley saw, "But does it scale?", is funny but genuinely applies in some cases.

Bandwidth: While we're on the topic of ridiculous volumes, let's discuss bandwidth. Network bandwidth and transport layer security between on-premise and cloud, and between clouds, are non-trivial problems. There are financial costs, as well as engineering and operational considerations.
If you don’t believe me ask your AWS or Azure sales person how to move, say, 10 terabytes a day between those two. In some cases architecture only allows a certain amount of bandwidth for log movement and transport, so consider this when planning migrations and add-ons. Structure Multi-account Multi-cloud Architectures: Cloud security facilitates things like micro-segmentation, multi-account strategies, closing down all unnecessary network access, and even running different workloads in different cloud environments. This sort of segmentation makes it much more difficult for attackers to pivot if they gain a foothold. It also means you will need to consider which cloud native logs are available, what you need to supplement with other tooling, and how you will stitch all these sources together. Expecting to dump all your events into a syslog style service and let it percolate back on-premise is unrealistic. You need new architectures for log capture, filtering, and analysis. Storage is the easy part. Monitoring “up the Stack”: As cloud providers manage infrastructure, and possibly applications as well, your threat detection focus must shift from networks to applications. This is both because you lack visibility into network operations, but also because cloud network deployments are generally more secure, prompting attackers to shift focus. Even if you’re used to monitoring the app layer from a security perspective, for example with a big WAF in front of your on-premise servers, do you know whether you vendor has a viable cloud offering? If you’re lucky enough to have one that works in both places, and you can deploy in cloud as well, answer this (before you initiate the project): Where will those logs go, and how will you get them there? Storage vs. Ingestion: Data storage in cloud services, especially object storage, is so cheap it is practically free. And long-term data archival cloud services offer huge cost advantages over older on-premise solutions. In essence we are encouraged to store more. But while storage is cheap, it’s not always cheap to ingest more data into the cloud because some logging and analytics services charge based upon volume (gigabytes) and event rates (number of events) ingested into the tool/service/platform. Example are Splunk, Azure Eventhubs, AWS Kinesis, and Google Stackdriver. Many log sources for the cloud are verbose – both number of events and amount of data generated from each. So you will need to architect your solution to be economically efficient, as well as negotiate with your vendors over ingestion of noisy sources such as DNS and proxies, for example. A brief side note on ‘closed’ logging pipelines: Some vendors want to own your logging pipeline on top of your analytics toolset. This may


DAM Not Moving to the Cloud

I have concluded that nobody is using Database Activity Monitoring (DAM) in public Infrastructure or Platform as a Service. I never see it in any of the cloud migrations we assist with. Clients don't ask how to deploy it, or whether they need to close this gap. I hear no stories, good or bad, about its usage. It's not that DAM cannot be used in the cloud – it simply is not. There are certainly some reasons firms invest security time and resources elsewhere. What comes to mind are the following:

PaaS and use of relational databases: A couple of trends come into play here. First, while user-installed and managed relational databases do happen, there is a definite trend toward adopting RDBMS as a Service. When customers do install their own relational platform, it's MySQL or MariaDB, for which (so far as I know) there are few monitoring options. Second, for most new software projects a relational database is a much less likely choice to back applications – more often it's a NoSQL platform like Mongo (self-managed) or something like Dynamo. This has reduced the total relational footprint.

CI/CD: Automated build and security test pipelines – we see a lot more application and database security testing in development and quality assurance phases, prior to production deployment. Many potential code vulnerabilities and common SQL injection attacks are spotted and addressed before applications are deployed. And there may not be much reconfiguration in production if your installation is defined in software.

Network Security: Between segmentation, firewalls/security groups, and port management, you can really lock down the (virtual) network so only the application can talk to the database. That is difficult for anyone to end-run around if properly set up.

Database Ownership: Some people cling to the misconception that the database is owned and operated by the cloud provider, so the provider will take care of database security. Yes, the vendor handles a lot of configuration security and patching for you, so much of the value of a DAM platform – namely security assessment and detection of old database versions – is handled elsewhere. Permission misuse is harder, but most IaaS clouds offer dynamic, policy-driven IAM. You can set very fine-grained access controls on database access, which blocks many types of ad hoc and potentially malicious queries.

Maybe none of these reasons? Maybe all of the above? I don't really know. Regardless, DAM has not moved to the cloud. The lack of interest does not provide any real insight as to why, but it is very clear. I do still want some of DAM's monitoring functions for cloud migrations: specifically looking for SQL injection attacks – which are still your issue to deal with – as well as looking for credential misuse, such as detecting excessive data transfer or scraping. Cloud providers log API access to the database installation, and there are cloud-native ways to perform assessment. But on the monitoring side there are few options for watching SQL queries. (A minimal detection sketch follows below.)
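Since few tools watch SQL queries in the cloud, here is a minimal sketch of the kind of scraping detection I have in mind, in Python. The log format and threshold are hypothetical; the point is simply to baseline per-credential result sizes and alert on outliers.

```python
from collections import defaultdict

# Hypothetical parsed query-log records: (database user, rows returned).
query_log = [
    ("app_user", 25), ("app_user", 40),
    ("analyst", 180_000), ("analyst", 220_000),  # unusually large result sets
]

# Assumed per-user daily baseline; in practice derive this from history.
ROWS_PER_USER_THRESHOLD = 100_000

totals = defaultdict(int)
for user, rows in query_log:
    totals[user] += rows

for user, rows in sorted(totals.items()):
    if rows > ROWS_PER_USER_THRESHOLD:
        print(f"ALERT: {user} pulled {rows:,} rows today - "
              "possible scraping or credential misuse")
```

The same accumulation works for bytes transferred instead of rows, which maps more directly onto the 'too much data transfer' misuse case.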


Cloudera and Hortonworks Merge

I had been planning to post on the recently announced merger of Hortonworks and Cloudera, as there are a number of trends I've been witnessing in the adoption of Hadoop clusters, and this merger reflects them in a nutshell. But catching up on my reading, I ran across Mathew Lodge's recent VentureBeat article, Cloudera and Hortonworks merger means Hadoop's influence is declining. It's a really good post. I can confirm we see the same lack of interest in deploying Hadoop to the cloud, the same use of S3 as a storage medium when Hadoop is used atop Infrastructure as a Service (IaaS), and the same developer-driven selection of whatever platform is easiest to use and deploy on. All in all it's an article I wish I'd written, as he did a great job capturing most of the areas I wanted to cover. And there are some humorous bits, like "Ironically, there has been no Cloud Era for Cloudera." Check it out – it's worth your time. But there are a couple other areas I still want to cover.

It is rare to see someone install Hadoop into a public IaaS account. Customers now choose a cloud-native variant and let the vendor handle all the patching and hide much of the infrastructure from them. They also gain the option of spinning down the cluster when not in use, making it much more cost-efficient. Couple that with all the work required to set up Hadoop yourself, and it's an easy decision. I was somewhat surprised to learn that offerings like AWS's Elastic MapReduce (EMR) are not always chosen as the repository; Dynamo is surprisingly popular – which makes sense, given its powerful query features, indexing, and ability to offer the best of relational and big data capabilities. Most public IaaS vendors offer so many database variants that it is easy to mix and match several to support applications, further reducing demand for classic Hadoop installations.

One area continuing to drive Hadoop adoption is on-premise data collection and data lakes for logs. The most cited driver is the need to keep Splunk costs under control. It takes effort to divert some content to Hadoop instead of sending everything to the Splunk collectors, but data can be collected and held at drastically lower cost – and you need not sacrifice analytics. For organizations collecting every log entry, this is a win. We also see Hadoop adopted by Security Operations Centers, running side by side with other platforms. Part of the need is to fill gaps in what their SIEM keeps, part is to keep costs down, and part is to easily support deployment of custom security intelligence applications by non-developers.

Another aspect not covered in any of the articles I have found so far is that Cloudera and Hortonworks both have deep catalogs of security capabilities, and together they are dominant. As firms use large "data lakes" to hold all sorts of sensitive data inside Hadoop, this is a win for firms running Hadoop in-house: identity management, encryption, monitoring, and a whole bunch of other great stuff. Big data is not the security issue it was five years ago, and Hortonworks and Cloudera have a lot to do with that; their combined capabilities and enterprise deployment experience make them a powerful choice to help firms manage and maintain existing infrastructure. That is all my way of saying that some of their negative press is unwarranted, given the profitable avenues ahead. The idea that growth in the Hadoop segment has been slowing is not new.
AWS has been the largest seller of Hadoop-based data platforms, by revenue and by customer count, for several years. The cloud is genuinely an existential threat to all the commercial Hadoop vendors – and comparable big data databases – if they continue to sell in the same way. The recent acceleration of cloud adoption simply makes it more apparent that Cloudera and Hortonworks are competing for a shrinking share of IT budgets. But it makes sense for them to band together and make the most of their expertise in enterprise Hadoop deployments, which should help with tooling and management software for cloud migrations. If Kubernetes is any indication, there are huge areas for improvement in tooling and services beyond what cloud vendors provide.


Building a Multi-cloud Logging Strategy: Introduction

Logging and monitoring for cloud infrastructure has become the top topic we are asked about lately. Even general conversations about moving applications to the cloud seem to end with clients asking how to 'do' logging and monitoring of cloud infrastructure. Logs are key to security and compliance, and moving into cloud services – where you do not actually control the infrastructure – makes logs even more important for operations, risk, and security teams. These questions make perfect sense: logging in and across cloud infrastructure is complicated, presenting technical challenges and huge potential cost overruns if implemented poorly. The road to cloud is littered with the charred remains of many who have attempted to create multi-cloud logging for their respective employers.

Cloud services are very different – structurally and operationally – from on-premise systems. The data is different: you do not necessarily have the same event sources, and the data is often different or incomplete, so existing reports and analytics may not work the same. Cloud services are ephemeral, so you can't count on a server "being there" when you go looking for it, and IP addresses are unreliable identifiers. Networks may appear to behave the same, but they are software defined, so you cannot tap into them the same way as on-premise, nor make sense of the packets even if you could. How you detect and respond to attacks differs too, leveraging automation to be as agile as your infrastructure. Some logs capture every API call; their granularity of information is great, but the volume is substantial. And finally, many companies lack people who understand cloud, so they 'lift and shift' what they do today into their cloud service, and are later forced to refactor the deployment.

One aspect that surprised all of us here at Securosis is the adoption of multi-cloud. We do not simply mean some Software as a Service (SaaS) alongside a single Infrastructure as a Service (IaaS) provider – firms are choosing multiple IaaS vendors and deploying different applications to each. Sometimes this is a "best of breed" approach, but far more often the selection of multiple vendors is driven by fear of getting locked in with a single vendor. This makes logging and monitoring even more difficult, as collection across IaaS providers and on-premise varies in capabilities, events, and integration points.

Further complicating matters, existing Security Information and Event Management (SIEM) vendors, as well as some security analytics vendors, are behind the cloud adoption curve. Some because their cloud deployment models are no different from what they offer on-premise, making integration with cloud services awkward. Some because their solutions rely on traditional network approaches which don't work with software-defined networks. Still others employ pricing models which, when hooked into highly verbose cloud log sources, cost customers small fortunes. We will demonstrate some of these pricing models later in this paper.

Here are some common questions:

  • What data or logs do I need? Server/network/container/app/API/storage/etc.?
  • How do I get them turned on? How do I move them off the sources?
  • How do I get data back to my SIEM? Can my existing SIEM handle these logs, in terms of both different schemas and volume & rate?
  • Should I use log aggregators and send everything back to my analytics platform?
  • At what point during my transition to cloud does this change?
  • How do I capture packets, and where do I put them?

These questions, and many others, are telling because they come from trying to fit cloud events into existing on-premise tools and processes. It's not that they are wrong, but they highlight an effort to map new data into old and familiar systems. Instead you need to rethink your logging and monitoring approach. The questions firms should be asking include:

  • What should my logging architecture look like now, and how should it change?
  • How do I handle multiple accounts across multiple providers?
  • What cloud-native sources should I leverage?
  • How do I keep my costs manageable? Storage can be incredibly cheap and plentiful in the cloud, but what is the pricing model for the services which ingest and analyze the data I'm sending them?
  • What should I send to my existing data analytics tools? My SIEM?
  • How do I adjust what I monitor for cloud security?
  • Batch or real-time streams? Or both? How do I adjust analytics for cloud?

You need to take a fresh look at logging and monitoring, and adapt both IT and security workflows to fit cloud services – especially if you're transitioning from an on-premise environment and will be running a hybrid environment during the transition… which may last several years from initial project kick-off.

Today we launch a new series on Building a Multi-cloud Logging Strategy. Over the next few weeks, Gal Shpantzer and I (Adrian Lane) will dig into the following topics to discuss what we see when helping firms migrate to cloud. And there is a lot to cover. Our tentative outline is as follows:

  • Barriers to Success: This post will discuss some reasons traditional approaches do not work, and areas where you might lack visibility.
  • Cloud Logging Architectures: We discuss anti-patterns and more productive approaches to logging. We will offer recommendations on reference architectures to help with multi-cloud, as well as centralized management.
  • Native Logging Features: We'll discuss what sorts of logs you can expect to receive from the various types of cloud services, what you may not receive in a shared responsibility service, the different data sources firms have come to expect, and how to get them. We will also provide practical notes on logging in GCP, Azure, and AWS, helping you navigate their native offerings as well as the capabilities of PaaS/SaaS vendors.
  • BYO Logging: Where and how to fill gaps with third-party tools, or by building logging into the applications and services you deploy in the cloud.
  • Cloud or On-premise Management? We will discuss tradeoffs between moving log management into the cloud, keeping these activities on-premise, and using a


Complete Guide to Enterprise Container Security *New Paper*

The explosive growth of containers is not surprising: the technology (most obviously Docker) alleviates several problems in deploying applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for microservices, generalized management, and horizontal scalability – all of which containers help provide. When a single technology enables us to address several technical problems at once, it is very compelling. But this generic model of packaged services, where the environment is designed to treat each container as a "unit of service", sharply reduces transparency and auditability (by design), and gives security pros nightmares. We run more code faster, but must in turn accept a loss of visibility inside the containers. This raises the question: how can we introduce security without losing the benefits of containers?

This research effort was designed to confront all aspects of container security, from developer desktops to production deployments, and to illustrate the numerous places where security controls and monitoring can be introduced into the ecosystem. Tools and technologies are available to run containers with high security, and with strong confidence that they are no less secure than any other applications. We also have access to capabilities which validate security claims through scans and reports on the security controls.

We would like to thank Aqua Security and Tripwire for licensing this research and participating in some of our initial discussions. As always, we welcome comments and suggestions. If you have questions please feel free to email us: info at securosis.com. You can download all or part of this research from the website of either licensee, grab a copy from our Research Library, or just download the paper directly: Complete Guide to Enterprise Container Security (PDF).


Container Security 2018: Logging and Monitoring

We close out this research paper with two key areas: monitoring and auditing. We want to draw attention to them because they are essential to security programs, but have received only sporadic coverage in security blogs and the press. Once we go beyond network segregation and network policies for what we allow, the ability to detect misuse is extremely valuable, which is where monitoring and logging come in. Additionally, most development and security teams are not aware of the variety of monitoring options available, and we have seen a variety of misconceptions and outright fear of the volume of audit logs to capture, so we need to address these issues.

Monitoring

Every security control discussed so far can be classed as preventative security. These efforts remove vulnerabilities or make them hard to exploit. We address known attack vectors with well-understood responses such as patching, secure configuration, and encryption. But vulnerability scans can only take you so far. What about issues you are not expecting? What if a new attack variant gets past your security controls, or a trusted employee makes a mistake? This is where monitoring comes in: it is how you discover unexpected problems. Monitoring is critical to any security program – it's how you learn what works, track what's really happening in your environment, and detect what's broken. Monitoring is just as important for container security, but container platform providers don't offer it today.

Monitoring tools work by first collecting events, then comparing them to security policies. Events include requests for hardware resources, IP-based communication, API requests to other services, and sharing information with other containers. Policy types vary widely. Deterministic policies address areas such as which users and groups can terminate resources, which containers are disallowed from making external HTTP requests, and which services a container is allowed to run. Dynamic (also called 'behavioral') policies address issues such as containers connecting to undocumented ports, using more memory than normal, or exceeding runtime thresholds. Combining deterministic white and black lists with dynamic behavior detection offers the best of both worlds, enabling you to detect both simple policy violations and unexpected variations from the ordinary. (A minimal sketch of both policy types appears below.) We strongly recommend including container activity monitoring in your security program.

A couple of container security vendors offer monitoring tools. Popular evaluation criteria include:

  • Deployment Model: How does the product collect events? What events and API calls can it collect for inspection? These products typically use one of two deployment models: an agent embedded in the host OS, or a fully privileged container-based monitor running in the Docker environment. How difficult are collectors to deploy? Do host-based agents require a host reboot to deploy or update? You need to assess which types of events can be captured.
  • Policy Management: You need to evaluate how easy it is to build new policies or modify existing ones. You will want a standard set of security policies from the vendor to speed deployment, but you will also stand up and manage your own policies, so ease of management is key to long-term happiness.
  • Behavioral Analysis: What, if any, behavioral analysis capabilities are available? How flexible are they – what types of data are available for use in policy decisions? Behavioral analysis starts with system monitoring to determine 'normal' behavior.
    The pre-built criteria for detecting aberrations are often limited to a few sets of indicators, such as user ID or IP address, but more advanced tools offer a dozen or more choices. The more you have available – such as system calls, network ports, resource usage, image ID, and inbound and outbound connectivity – the more flexible your controls can be.
  • Activity Blocking: Does the vendor offer blocking of requests or activity? Blocking policy violations helps ensure containers behave as intended. Care is required, because such policies can disrupt new functionality and cause friction between Development and Security, but blocking is invaluable for maintaining control over what containers can do.
  • Platform Support: You need to verify your monitoring tool supports your OS platforms (CentOS, CoreOS, SUSE, Red Hat, Windows, etc.) and your orchestration tool (Swarm, Kubernetes, Mesos, or ECS).

Audit and Compliance

What happened with the last build? Did we remove sshd from that container? Did we add the new security tests to Jenkins? Is the latest build in the repository? You may not know the answers off the top of your head, but you know where to get them: log files. Git, Jenkins, JFrog, Docker, and just about every other development tool creates log files, which we use to figure out what happened – and all too often, what went wrong. There are people outside Development – namely Security and Compliance – with similar security-related questions about what is going on in the container environment, and whether security controls are functioning. Logs are how you get these teams answers.

Most of the earlier sections in this paper, covering areas such as build environments and runtime security, carry compliance requirements. These may be externally mandated, like PCI-DSS or GLBA, or internal requirements from internal audit or security teams. Either way, auditors will want to see that security controls are in place and working. And no, they won't just take your word for it – they will want audit reports for the specific event types relevant to their audit. Similarly, if your company has a Security Operations Center, they will want all system and activity logs for some time period, to reconstruct events, investigate alerts, and/or determine whether a breach occurred. You really don't need to get too deep into that stuff – just get them the data and let them worry about the details.

CIS offers benchmarks and security checklists for container security, orchestration manager security, and most compliance initiatives. These are a good starting point for basic security and compliance assessments of your container environment. In addition, 'vendors' – both open source teams and cloud service providers – offer security deployment and architecture recommendations to help produce dependable environments. Finally, we see configuration checkers arriving in the
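To ground the deterministic-versus-behavioral distinction discussed above, here is a minimal policy-evaluation sketch in Python. The allowed ports, denied processes, and memory baseline are invented examples; real products evaluate far richer event streams.

```python
# Deterministic rules: explicit allow/deny lists evaluated per event.
ALLOWED_EGRESS_PORTS = {443, 5432}          # e.g. HTTPS out, Postgres
DENIED_PROCESSES = {"sshd", "nc"}           # services a container may not run

# Dynamic ('behavioral') rule: deviation from an observed baseline.
BASELINE_MEMORY_MB = 256
DEVIATION_FACTOR = 2.0                      # alert at 2x normal usage

def evaluate(event: dict) -> list:
    """Return a list of findings for one container event."""
    findings = []
    port = event.get("egress_port")
    if port is not None and port not in ALLOWED_EGRESS_PORTS:
        findings.append(f"connection to undocumented port {port}")
    if event.get("process") in DENIED_PROCESSES:
        findings.append(f"disallowed process {event['process']}")
    if event.get("memory_mb", 0) > BASELINE_MEMORY_MB * DEVIATION_FACTOR:
        findings.append(f"memory {event['memory_mb']}MB exceeds learned baseline")
    return findings

print(evaluate({"egress_port": 6667, "process": "nc", "memory_mb": 900}))
```

The first two checks are deterministic – an event either matches the list or it doesn't – while the memory check only means something relative to a learned baseline, which is what makes behavioral policies both powerful and prone to tuning effort.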


Container Security 2018: Runtime Security Controls

After the focus on tools and processes in previous sections, we can now turn to containers in production systems. This includes which images are moved into production repositories, selecting and running containers, and the security of the underlying host systems.

Runtime Security

The Control Plane: Our first order of business is ensuring the security of the control plane: the tools for managing host operating systems, the scheduler, the container client, engine(s), the repository, and any additional deployment tools. As we advised for container build environment security, we recommend limiting access to specific administrative accounts: one with responsibility for operating and orchestrating containers, and another for system administration (including patching and configuration management). On-premise we recommend network and physical segregation; for cloud and virtual systems we prefer logical segregation. The good news is that several third-party tools offer full identity and access management, LDAP/AD integration, and token-based SSO (e.g. SAML) across systems.

Resource Usage Analysis: Many readers are familiar with this for performance, but it can also offer insight into basic code security. Does the container allow port 22 (administration) access? Does the container try to update itself? What external systems and utilities does it depend upon? Any external resource usage is a potential attack point, so it's good hygiene to limit ingress and egress points. To manage the scope of what containers can access, third-party tools can monitor runtime access to environment resources – both inside and outside the container. Usage analysis is basically automated review of resource requirements. This is useful in a number of ways – especially for firms moving from a monolithic architecture to microservices. Analysis can help developers understand which references they can remove from their code, and help operations narrow down roles and access privileges.

Selecting the Right Image: We recommend establishing a trusted image repository and ensuring that your production environment can only pull containers from that trusted source. Ad hoc container management makes it entirely too easy for engineers to bypass security controls, so we recommend establishing trusted central repositories for production images. We also recommend scripting deployment to avoid manual intervention and to ensure the latest certified container is always selected. This means checking application signatures in your scripts before putting containers into production, avoiding manual verification overhead and delay (a minimal deployment-gating sketch appears below). Trusted repository and registry services can help, by rejecting containers which are not properly signed. Fortunately many options are available, so pick one you like. Keep in mind that if you build many containers each day, a manual process will quickly break down. It is fine to have more than one image repository – if you run across multiple cloud environments, there are advantages to leveraging the native registry in each one.

Immutable Images: Developers often leave shell access enabled in container images so they can log into containers running in production. Their motivation is usually debugging and on-the-fly code changes, both bad for consistency and security. Immutable containers – which do not allow ssh connections – prevent interactive real-time manipulation. They force developers to fix code in the development pipeline, and they remove a principal attack path.
Attackers routinely scan for ssh access to take over containers, and leverage them to attack underlying hosts and other containers. We strongly suggest using immutable containers without 'port 22' access, and making sure all container changes take place (with logging) in the build process, rather than in production.

Input Validation: At startup, containers accept parameters, configuration files, credentials, JSON, and scripts. In more aggressive scenarios, 'agile' teams shove new code segments into containers as input variables, making existing containers behave in fun new ways. Validate that all input data is suitable and complies with policy, either manually or using a third-party security tool. You must also ensure that each container receives the correct user and group IDs, to map to the assigned view at the host layer. This can prevent someone from forcing a container to misbehave, or simply prevent dumb developer mistakes.

Blast Radius: The cloud enables you to run different containers under different cloud user accounts, limiting the resources available to any given container. If an account or container set is compromised, the same cloud service restrictions which prevent tenants from interfering with each other will limit damage between your accounts and projects. For more information see our reference material on limiting blast radius with user accounts.

Container Group Segmentation: One of the principal benefits of container management systems is helping scale tasks across pools of shared servers. Each management platform offers a modular architecture, with scaling performed on node/minion/slave sub-groups, each of which includes a set of containers. Each node forms its own logical subnet, limiting network access between sets of containers. This segregation limits 'blast radius' by restricting which resources any container can access. It is up to application architects and security teams to leverage this construct to improve security. You can enforce it with network policies on the container manager service, or with network security controls provided by your cloud vendor. Over and above this orchestration manager feature, third-party container security tools – whether running as an agent inside containers, or as part of the underlying operating system – can provide a form of logical network segmentation which further limits network connections between groups of containers. Together this offers fine-grained isolation of containers and container groups from each other.

Platform Security

Until recently, when someone talked about container security, they were really talking about how to secure the hypervisor and underlying operating system. So most articles and presentations on container security focus on this single – admittedly important – facet. But we believe runtime security needs to encompass more than that, so we break the challenge into three areas: host OS hardening, isolation of namespaces, and segregation of workloads by trust level.

Host OS/Kernel Hardening: Hardening is how we protect a host operating system from attacks and misuse. It typically starts with selection of a hardened variant of the operating system you will use. But while these versions
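As a small illustration of the scripted, signature-checked deployment recommended under 'Selecting the Right Image' above, here is a sketch in Python. The registry name, the tag-to-digest map, and the truncated digest are all hypothetical; only pulling by digest (`image@sha256:…`) is standard Docker behavior.

```python
import subprocess

# Hypothetical trusted map, written by the build pipeline at certification time.
TRUSTED_IMAGES = {
    "registry.example.com/payments:1.4.2":
        "sha256:9f2c1a...",  # placeholder digest recorded at build time
}

def deploy(image: str) -> None:
    """Pull only certified images, pinned by digest rather than mutable tag."""
    digest = TRUSTED_IMAGES.get(image)
    if digest is None:
        raise RuntimeError(f"refusing to deploy untrusted image: {image}")
    repo = image.rsplit(":", 1)[0]
    # Pulling by digest guarantees the exact certified bytes, not whatever
    # the tag currently points at.
    subprocess.run(["docker", "pull", f"{repo}@{digest}"], check=True)

deploy("registry.example.com/payments:1.4.2")
```

Because the script, not an engineer, decides what gets pulled, there is no manual step to skip and no ad hoc path around the trusted repository.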


Container Security 2018: Securing Container Contents

Testing the code and supplementary components which will execute within containers, and verifying that everything conforms to security and operational practices, is core to any container security effort. One of the major advances over the last year or so is the introduction of software supply chain security features from container engine providers, including Docker, Rocket, OpenShift, and so on. We also see a number of third-party vendors helping to validate container content, both before and after deployment. Each solution focuses on slightly different threats to container construction – Docker, for example, offers tools to certify that a container has gone through your process without alteration, using digital signatures and container repositories. Third-party tools focus on security benefits beyond what engine providers offer, such as examining libraries for known flaws. So while things like process controls, digital signing services to verify chain of custody, and creation of a bill of materials based on known trusted libraries are all important, you'll need more than what is packaged with your base container management platform. You should consider third-party tools to help harden container inputs, analyze resource usage, perform static code analysis, analyze library composition, and check for known malware signatures. In a nutshell, you need to look for risks which won't be caught by your base platform.

Container Validation and Security Testing

Runtime User Credentials: We could go into great detail here about user IDs, namespace views, and resource allocation, but instead we'll focus on the most important thing: don't run container processes as root. Doing so provides attackers too-easy access to the underlying kernel, and a direct path to attacking other containers and the Docker engine itself. We recommend using specific user ID mappings with restricted permissions for each class of container. We understand that roles and permissions change over time, which requires ongoing work to keep kernel views up to date, but user segregation offers a failsafe which limits access to OS resources and the virtualization features underlying the container engine.

Security Unit Tests: Unit tests are a great way to run focused test cases against specific modules of code – typically created as your development teams find security and other bugs – without needing to build the entire product every time. They cover things such as XSS and SQLi testing of known attacks against test systems. As the body of tests grows over time, it provides an expanding regression testbed to ensure vulnerabilities do not creep back in. During our research we were surprised to learn that many teams run unit security tests from Jenkins: even though most are moving to microservices, fully supported by containers, they find it easier to run these tests earlier in the cycle. We recommend including security unit tests somewhere in the build process to help validate that the code in containers is secure.

Code Analysis: A number of third-party products perform automated binary and white box testing, rejecting builds when critical issues are discovered. We also see several new tools available as plug-ins to common Integrated Development Environments (IDEs), which check code for security issues prior to check-in. We recommend implementing some form of code scanning to verify that the code you build into containers is secure. Many newer tools offer full RESTful API integration with the software delivery pipeline.
These tests usually take a bit longer to run, but still fit within a CI/CD deployment framework.

Composition Analysis: Another useful security technique is to check libraries and supporting code against the CVE (Common Vulnerabilities and Exposures) database to determine whether you are using vulnerable code. Docker and a number of third parties – including some open source distributions – provide tools for checking common libraries against the CVE database, which can be integrated into your build pipeline. Developers are not typically security experts, and new vulnerabilities are discovered in common tools weekly, so an independent checker to validate components of your container stack is both simple and essential.

Hardening: Over and above making sure what you use is free of known vulnerabilities, there are other tricks for securing containers before deployment. This type of hardening is similar to the OS hardening we will discuss in the next section: removal of libraries and unneeded packages reduces attack surface. There are several ways to check for unused items in a container, and you can then work with the development team to verify and remove anything unneeded. Another hardening technique is to check for hard-coded passwords, keys, and other sensitive items in the container – these breadcrumbs make things easy for developers, but help attackers even more. Some firms scan for these manually, while others leverage tools to automate the process.

Container Signing and Chain of Custody: How do you know where a container came from? Did it complete your build process? These techniques address "image to container drift": the addition of unwanted or unauthorized items. You want to ensure your entire process was followed, and that nowhere along the way did a well-intentioned developer subvert it with untested code. You can accomplish this by creating a cryptographic digest of all image contents, then tracking it through your container lifecycle to ensure no unapproved images run in your environment. Digests and digital fingerprints help you detect code changes and identify where each container came from (a minimal sketch follows below). Some container management platforms offer tools to digitally fingerprint code at each phase of the development process, alongside tools to validate the signature chain. But these capabilities are seldom used, and platforms such as Docker may only optionally produce signatures. While all code should be checked prior to being placed into a registry or container library, signing images and code modules happens during building. You will need to create specific keys for each phase of the build, sign code snippets on test completion but before code is sent on to the next step in the process, and (most importantly) keep these keys secured so attackers cannot create their own trusted code signatures. This offers some assurance that your
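A minimal sketch of the digest-and-sign flow described above, in Python. It uses an HMAC as a stand-in for the asymmetric signatures real registry tooling employs; the key, artifact, and phase model are illustrative assumptions.

```python
import hashlib
import hmac

SIGNING_KEY = b"keep-me-in-a-secrets-manager"  # one key per build phase, secured

def fingerprint(image_bytes: bytes) -> str:
    """Cryptographic digest of the full image contents."""
    return hashlib.sha256(image_bytes).hexdigest()

def sign(digest: str) -> str:
    """Sign the digest on test completion, before the next pipeline step."""
    return hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, digest: str, signature: str) -> bool:
    """Re-derive the fingerprint and check the signature chain."""
    return (fingerprint(image_bytes) == digest
            and hmac.compare_digest(sign(digest), signature))

artifact = b"container image contents"
d = fingerprint(artifact)
s = sign(d)
print(verify(artifact, d, s))                # True: unmodified, properly signed
print(verify(artifact + b"extra", d, s))     # False: image-to-container drift
```

Any byte added after signing changes the fingerprint, so unapproved drift is detected before the image ever reaches production.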


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.