Securosis

Research

Container Security 2018: Securing Container Contents

Testing the code and supplementary components which will execute within containers, and verifying that everything conforms to security and operational practices, is core to any container security effort. One of the major advances over the last year or so is the introduction of security features for the software supply chain, from container engine providers including Docker, Rocket, OpenShift and so on. We also see a number of third-party vendors helping to validate container content, both before and after deployment. Each solution focuses on slightly different threats to container construction – Docker, for example, offers tools to certify that a container has gone through your process without alteration, using digital signatures and container repositories. Third-party tools focus on security benefits outside what engine providers offer, such as examining libraries for known flaws. So while things like process controls, digital signing services to verify chain of custody, and creation of a bill of materials based on known trusted libraries are all important, you’ll need more than what is packaged with your base container management platform. You will want to consider third-party tools to help harden your container inputs, analyze resource usage, analyze static code, analyze library composition, and check for known malware signatures. In a nutshell, you need to look for risks which won’t be caught by your base platform.

Container Validation and Security Testing

Runtime User Credentials: We could go into great detail here about user IDs, namespace views, and resource allocation; but instead we’ll focus on the most important thing: don’t run container processes as root, because that would provide attackers too-easy access to the underlying kernel and a direct path to attack other containers and the Docker engine itself. We recommend using specific user ID mappings with restricted permissions for each class of container.
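The no-root recommendation is easy to enforce mechanically before an image is ever built. The sketch below is ours, not part of any particular tool: it scans a Dockerfile for a final USER directive, relying on the fact that images default to root when none is set.

```python
import re

def runs_as_root(dockerfile_text):
    """Return True when the image would run its process as root.

    Images default to root when no USER directive is present,
    and the last USER directive in the file wins.
    """
    user = "root"
    for line in dockerfile_text.splitlines():
        match = re.match(r"\s*USER\s+(\S+)", line, re.IGNORECASE)
        if match:
            user = match.group(1)
    return user in ("root", "0")

# Flag a build that never drops privileges; pass one that does.
risky = "FROM alpine:3.18\nRUN apk add --no-cache curl\n"
safer = risky + "USER appuser\n"
print(runs_as_root(risky))  # True
print(runs_as_root(safer))  # False
```

A check like this fits naturally as a pre-build gate in the pipeline, rejecting Dockerfiles that never drop privileges.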
We understand roles and permissions change over time, which requires ongoing work to keep kernel views up to date, but user segregation offers a failsafe to limit access to OS resources and virtualization features underlying the container engine.

Security Unit Tests: Unit tests are a great way to run focused test cases against specific modules of code – typically created as your development teams find security and other bugs – without needing to build the entire product every time. They cover things such as XSS and SQLi testing of known attacks against test systems. As the body of tests grows over time it provides an expanding regression testbed to ensure that vulnerabilities do not creep back in. During our research we were surprised to learn that many teams run unit security tests from Jenkins. Even though most are moving to microservices, fully supported by containers, they find it easier to run these tests earlier in the cycle. We recommend unit tests somewhere in the build process to help validate that the code in containers is secure.

Code Analysis: A number of third-party products perform automated binary and white box testing, rejecting builds when critical issues are discovered. We also see several new tools available as plug-ins to common Integrated Development Environments (IDEs), where code is checked for security issues prior to check-in. We recommend you implement some form of code scanning to verify that the code you build into containers is secure. Many newer tools offer full RESTful API integration within the software delivery pipeline. These tests usually take a bit longer to run but still fit within a CI/CD deployment framework.

Composition Analysis: Another useful security technique is to check libraries and supporting code against the CVE (Common Vulnerabilities and Exposures) database to determine whether you are using vulnerable code.
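To make the security unit test idea concrete, here is a minimal regression-test sketch. The `build_query` helper and its naive token filter are hypothetical stand-ins – parameterized queries are the real fix for SQLi – but the pattern of encoding known attack strings as permanent test cases is the point:

```python
import unittest

def build_query(username):
    # Hypothetical helper: a past SQLi bug led to this input filter.
    # Parameterized queries are the proper fix; this only illustrates
    # how a fix gets locked in by a regression test.
    if any(tok in username.lower() for tok in ("'", ";", "--", " or ")):
        raise ValueError("suspicious input rejected")
    return f"SELECT * FROM users WHERE name = '{username}'"

class TestSQLiRegression(unittest.TestCase):
    """Known attack strings collected as bugs were found and fixed."""

    def test_known_attack_strings_rejected(self):
        for attack in ("' OR '1'='1", "admin'--", "x; DROP TABLE users"):
            with self.assertRaises(ValueError):
                build_query(attack)

    def test_benign_input_allowed(self):
        self.assertIn("alice", build_query("alice"))
```

Run with `python -m unittest` as a build step; each new security bug adds a test case, so the suite only grows more protective over time.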
Docker and a number of third parties – including some open source distributions – provide tools for checking common libraries against the CVE database, which can be integrated into your build pipeline. Developers are not typically security experts, and new vulnerabilities are discovered in common tools weekly, so an independent checker to validate components of your container stack is both simple and essential.

Hardening: Over and above making sure what you use is free of known vulnerabilities, there are other tricks for securing containers before deployment. This type of hardening is similar to OS hardening, which we will discuss in the next section; removal of libraries and unneeded packages reduces attack surface. There are several ways to check for unused items in a container, and you can then work with the development team to verify and remove unneeded items. Another hardening technique is to check for hard-coded passwords, keys, and other sensitive items in the container – these breadcrumbs make things easy for developers, but help attackers even more. Some firms use manual scanning for this, while others leverage tools to automate it.

Container Signing and Chain of Custody: How do you know where a container came from? Did it complete your build process? These techniques address “image to container drift”: the addition of unwanted or unauthorized items. You want to ensure your entire process was followed, and that nowhere along the way did a well-intentioned developer subvert your process with untested code. You can accomplish this by creating a cryptographic digest of all image contents, and then tracking it through your container lifecycle to ensure that no unapproved images run in your environment. Digests and digital fingerprints help you detect code changes and identify where each container came from.
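Returning to the hardening point above: checking for hard-coded secrets is straightforward to automate. The patterns below are a small illustrative set of our own – real scanners ship far larger rule sets – but they show the shape of the check:

```python
import re

# Illustrative patterns only; production scanners use hundreds of rules.
SECRET_PATTERNS = {
    "password": re.compile(r"password\s*[=:]\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the names of secret patterns found in a source or config file."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

sample = 'db_password = "hunter2"\napi = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan_for_secrets(sample))          # ['aws_key', 'password']
print(scan_for_secrets("print('hi')"))   # []
```

Running a scan like this over every file destined for a container image, and failing the build on any hit, catches the breadcrumbs before attackers can.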
Some container management platforms offer tools to digitally fingerprint code at each phase of the development process, alongside tools to validate the signature chain. But these capabilities are seldom used, and platforms such as Docker may only optionally produce signatures. While all code should be checked prior to being placed into a registry or container library, signing images and code modules happens during building. You will need to create specific keys for each phase of the build, sign code snippets on test completion but before code is sent on to the next step in the process, and (most important) keep these keys secured so attackers cannot create their own trusted code signatures. This offers some assurance that your
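The digest-and-signature chain described above can be sketched with nothing more than a hash and an HMAC. The layer model and per-phase key here are simplified assumptions of ours; real implementations (for example Docker Content Trust) use registry manifests and asymmetric signatures.

```python
import hashlib
import hmac
import os

# Hypothetical per-phase signing key; in practice keys live in a key
# manager or HSM, never alongside the build itself.
BUILD_PHASE_KEY = os.urandom(32)

def image_digest(layers):
    """Hash every layer, then hash the hashes: any drift changes the digest."""
    overall = hashlib.sha256()
    for layer in layers:
        overall.update(hashlib.sha256(layer).digest())
    return overall.digest()

def sign_image(layers, key):
    """Sign the digest at the end of a build phase."""
    return hmac.new(key, image_digest(layers), hashlib.sha256).digest()

def verify_image(layers, signature, key):
    """Verify before running: detects image-to-container drift."""
    return hmac.compare_digest(sign_image(layers, key), signature)

layers = [b"base os layer", b"application layer"]
signature = sign_image(layers, BUILD_PHASE_KEY)
print(verify_image(layers, signature, BUILD_PHASE_KEY))                       # True
print(verify_image(layers + [b"injected code"], signature, BUILD_PHASE_KEY))  # False
```

The second check fails precisely because an unapproved layer was added after signing, which is the drift scenario the chain of custody is meant to catch.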

Share:
Read Post

The Future of Security Operations: Embracing the Machines

To state the obvious, traditional security operations is broken. Every organization faces more sophisticated attacks, the possibility of targeted adversaries, and far more complicated infrastructure; compounding the problem, we have fewer skilled resources to execute on security programs. Obviously it’s time to evolve security operations by leveraging technology to both accelerate human work and take care of rote, tedious tasks which don’t add value. So security orchestration and automation are terms you will hear pretty consistently from here on out. Some security practitioners resist the idea of automation, mostly because if done incorrectly the ramifications are severe and likely career-limiting. So we’ve advocated a slow and measured approach, starting with use cases that won’t crater the infrastructure if something goes awry. We discussed two of those in depth: enriching alerts and accelerating incident response, in our Regaining Balance post. The value of being able to respond to more alerts, better, is obvious. So we expect technologies focused on this (small) aspect of security operations to become pervasive over the next 2-3 years. But the real leverage lies not just in making post-attack functions work better. The question is: How can you improve your security posture and make your environment more resilient by orchestrating and automating security controls? That’s what this post will dig into. But first we need to set some rules of engagement for what automation of this sort looks like. And more importantly, how you can establish trust in what you are automating. Ultimately the Future of Security Operations hinges on this concept. Without trust, you are destined to remain in the same hamster wheel of security pain (h/t to Andy Jaquith). Attack, alert, respond, remediate, repeat. Obviously that hasn’t worked too well, or we wouldn’t continue having the same conversations year after year. 
The Need for Trustable Automation

It’s always interesting to broach the topic of security automation with folks who have had negative experiences with early (typically network-centric) automation. They instantaneously break out in hives when discussing automatically reconfiguring anything. We get it. When there is downtime or another adverse situation, ops people get fired and can’t pay their mortgages. Predictably, survival instincts kick in, limiting use of automation. Thus our focus on Trustable Automation – which means you tread carefully, building trust in both your automated processes and the underlying decisions that trigger them. Iterate your way to broader use of automation with a simple phased approach.

Human approval: The first step is to insert a decision point into the process where a human takes a look and ensures the proper functions will happen as a result of automation. This is basically putting a big red button in the middle of the process and giving an ops person the ability to perform a few checks and then hit it. It’s faster but not really fast, because it still involves waiting on a human. Accept that some processes are so critical they never get past human approval, because the organization just cannot risk a mistake.

Automation with significant logging: The next step is to let functions happen automatically, while making sure to log pretty much everything and have humans keep close tabs on it. Think of this as taking the training wheels off but staying within a few feet of the bike, just in case it tips over. Or running an application in Debug mode so you can see exactly what is happening. If something does happen which you don’t expect, you’ll be right there to figure out what didn’t work as expected and correct it. As you build trust in the process, we recommend you continue to scrutinize logs, even when things go perfectly. This helps you understand the frequency of changes, and which changes are made.
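The first two phases can be captured in one small wrapper: hold an action until a human signs off, and log everything when it does run. All names here are illustrative, not drawn from any particular orchestration product.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("automation")

def run_with_logging(action, *args, approved_by=None):
    """Phase one: hold for human approval. Phase two: run, but log everything."""
    if approved_by is None:
        log.warning("HELD: %s%r awaiting human approval", action.__name__, args)
        return None
    log.info("RUN: %s%r approved by %s", action.__name__, args, approved_by)
    result = action(*args)
    log.info("DONE: %s returned %r", action.__name__, result)
    return result

# Hypothetical automated action: quarantining a host by ID.
def quarantine_host(host_id):
    return f"{host_id} moved to quarantine VLAN"

print(run_with_logging(quarantine_host, "host-42"))                     # None: held
print(run_with_logging(quarantine_host, "host-42", approved_by="amy"))  # runs
```

The log stream this produces is exactly what you mine later for a baseline of normal automated activity.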
Basically you are developing a baseline of your automated process, which you can use in the next phase.

Automation with guardrails: Finally you reach the point where you don’t need to step through every process. The machines are doing their job. That said, you still don’t want things to go haywire, so you leverage the baseline you developed using automation with logging. With these thresholds you can build guardrails to make sure nothing happens outside your tolerances. For example, if you are automatically adding entries to an egress IP blacklist to stop internal traffic going to known bad locations, and a faulty threat intel update suddenly slates your SaaS CRM system for blacklisting, you can hold that entry and alert administrators to investigate the update. Obviously this requires a fundamental understanding of the processes being automated, and an ability to distinguish low-risk changes which should be made automatically from those which require human review. But that level of knowledge is what engenders trust, right? Once you have built some trust in your automated process, you still want a conceptual net to make sure you don’t go splat if something doesn’t work as intended. The second requirement for trustable automation is rollback. You need to be able to quickly and easily get back to a known good configuration. So when rolling out any kind of automation (whether via scripting or a platform), you’ll want to make sure you store state information, and have the capability to reverse any changes quickly and completely. And yes, this is something you’ll want to test extensively, both as you select an automation platform and once you start using it. The point is that as you design orchestration and automation functions, you have a lot of flexibility to get there at your own pace.
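As a concrete sketch of the blacklist guardrail described above (the addresses and names are invented for illustration), the rule is simple: anything a feed proposes that touches a business-critical destination gets held for a human instead of blocked automatically.

```python
# Guardrail sketch: never auto-blacklist destinations the business
# depends on, no matter what a threat intel feed claims.
CRITICAL_DESTINATIONS = {"203.0.113.10"}  # e.g. your SaaS CRM (example IP)

blacklist = set()
held_for_review = set()

def apply_feed_update(candidate_ips):
    """Apply a threat intel update, routing risky entries to humans."""
    for ip in candidate_ips:
        if ip in CRITICAL_DESTINATIONS:
            held_for_review.add(ip)  # alert an admin instead of blocking
        else:
            blacklist.add(ip)

apply_feed_update({"198.51.100.7", "203.0.113.10"})
print(sorted(blacklist))        # ['198.51.100.7']
print(sorted(held_for_review))  # ['203.0.113.10']
```

The critical-destination set is exactly the kind of tolerance you derive from the logging-phase baseline: destinations your traffic normally depends on should never be blocked without review.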
Some folks have a high threshold for pain and jump in with both feet, understanding at some point they will likely need to clean up a mess. Others choose to tiptoe toward an automated future, adding use cases as they build comfort in the ability of their controls to work without human involvement. There is no right answer

Share:
Read Post

Container Security 2018: Build Pipeline Security

Most people fail to consider the build environment when thinking about container security, but it is critical. The build environment is traditionally the domain of developers, who don’t share much detail with outsiders (meaning security teams). But with Continuous Integration (CI) or full Continuous Deployment (CD), we’re shooting new code into production… potentially several times a day. An easy way for an attacker to hack an application is to get into its development or build environment – usually far less secure than production – and alter code or add new code to containers. The risk is aggravated by DevOps rapidly breaking down barriers between groups, with operations and security teams given access so they can contribute to the process. Collaboration demands a more complex and distributed working environment, with more stakeholders. Better controls are needed to restrict who can alter the build environment and update code, and an audit process to validate who did what. It’s also prudent to keep in mind the reasons developers find containers so attractive, lest you try to adopt security controls which limit their usefulness. First, a container simplifies building and packaging application code – abstracting the app from its physical environment – so developers can worry about the application rather than its supporting systems. Second, the container model promotes lightweight services – breaking large applications down into small pieces, easing modification and scaling… especially in cloud and virtual environments. Finally, a very practical benefit is that container startup is nearly instant, allowing agile scaling up and down in response to demand. It is important to keep these features in mind when considering security controls, because any control that reduces one of these core advantages is likely to be rejected or ignored. Build pipeline security breaks down into two basic areas.
The first is application security: essentially testing your code and its container to ensure it conforms to security and operational practices. This includes tools such as static analysis, dynamic analysis, composition analysis, scanners built into the IDE, and tools which monitor runtime behavior. We will cover these topics in the next section. The second area of concern is the tools used to build and deploy applications – including source code control, build tools, the build controller, container registries, container management facilities, and runtime access control. At Securosis we often call this the “management plane”, as these interfaces – whether API or GUI – are used to set access policies, automate behaviors, and audit activity. Let’s dive into build tool security.

Securing the Build

The problem is conceptually simple, but there are many tools used for building software, and most have several plug-ins which alter how data flows, so environments can get complicated. You can call this Secure Software Delivery, Software Supply Chain Management, or Build Server Security – take your pick, because these terms are equivalent for our purpose. Our goal is to shed light on the tools and processes developers use to build applications, so you can better gauge the threats, as well as security measures to secure these systems. Following is a list of recommendations for securing platforms in the build environment to ensure secure container construction. We include tools from Docker and others to automate and orchestrate source code, building, the Docker engine, and the repository. For each tool you select some combination of identity management, roles, platform segregation, secure storage of sensitive data, network encryption, and event logging.

Source Code Control: Stash, Git, GitHub, and several variants are common.
Source code control has a wide audience because it is now common for Security, Operations, and Quality Assurance to all contribute code, tests, and configuration data. Distributed access means all traffic should run over SSL or VPN connections. User roles and access levels are essential for controlling who can do what, but we recommend requiring token-based or certificate-based authentication, or two-factor authentication at a minimum, for all administrative access. This is good housekeeping whether you are using containers or not, but containers’ lack of transparency, coupled with automated processes pushing them into production, amplifies the need to protect the build.

Build Tools and Controllers: The vast majority of development teams we speak with use build controllers like Bamboo and Jenkins, with these platforms becoming an essential part of their automated build processes. They provide many pre-, post-, and intra-build options, and can link to a myriad of other facilities. This is great for integration flexibility but can complicate security. We suggest full network segregation of the build controller system(s), and locking network connections to limit what can communicate with them. If you can, deploy build servers as on-demand containers without administrative access, to ensure standardization of the build environment and consistency of new containers. Limit access to the build controllers as tightly as possible, and leverage built-in features to restrict capabilities when developers need access. We also suggest locking down configuration and control data to prevent tampering with build controller behavior. Keep any sensitive data – ssh keys, API access keys, database credentials, and the like – in a secure database or data repository (such as a key manager, encrypted .dmg file, or vault), and pull credentials on demand to ensure sensitive data never sits on-disk unprotected.
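The pull-credentials-on-demand pattern might look like the following sketch. The vault class is an in-memory stand-in of ours; a real deployment would call a key manager (such as HashiCorp Vault or a cloud KMS) over TLS, and the registry push is likewise a placeholder.

```python
class InMemoryVault:
    """Stand-in for a real key manager; fetch-on-demand is the point."""

    def __init__(self, secrets):
        self._secrets = dict(secrets)

    def fetch(self, name):
        return self._secrets[name]

def push_to_registry(api_key):
    # Placeholder for the real registry push a build controller performs.
    return api_key.startswith("key-")

def run_build_step(vault):
    # The credential is fetched just-in-time, held only in memory for
    # this step, and never written to the build server's disk.
    api_key = vault.fetch("registry_api_key")
    return push_to_registry(api_key)

vault = InMemoryVault({"registry_api_key": "key-8f3a"})
print(run_build_step(vault))  # True
```

The design choice here is scope: each build step asks for exactly the secret it needs, when it needs it, so a compromised workspace yields no long-lived credentials.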
Finally, enable the build controller’s built-in logging facilities or logging add-ons, and stream output to a secure location for auditing.

Container Platform Security: Whether you use Docker or another tool to compose and run containers, your container manager is a powerful tool which controls what applications run. As with build controllers like Jenkins, you’ll want to limit access to specific container administrator accounts. Limit network access to only build controller systems. Make sure Docker client access is segregated between development, test, and production, to limit who and which services can launch containers in production.

Container Registry Security: We need to discuss container registries, because developers and IT teams often make the same two mistakes. The first is to allow anyone to add containers to the registry, regardless of whether they have been vetted. In such an

Share:
Read Post

Container Security 2018: Threats and Concerns

To better understand which container security areas you should focus on, and why we recommend particular controls, it helps to understand which threats need to be addressed and which areas containers affect most. Some threats and issues are well-known, some are purely lab proofs of concept, and others are threat vectors which attackers have yet to exploit – typically because there is so much low-hanging fruit elsewhere. So what are the primary threats to container environments?

Threats to the Build Environment

The first area which needs protection is the build environment. It’s not first on most people’s lists for container security, but I start here because it is typically the least secure, and the easiest place to insert malicious code. Developers tend to loathe security in development because it slows them down. That is why there is an entire industry dedicated to test data management and data masking: because developers tend to end-run around security whenever it slows their build and testing processes. What kinds of threats are we talking about, specifically? Things like malicious or moronic source code changes. Malicious or mistaken alterations to automated build controllers. Configuration scripts with errors, or which expose credentials. The addition of insecure libraries or down-rev/insecure versions of existing code. We want to know whether runtime code has been scanned for vulnerabilities. And we worry about failures to audit all the above and catch any errors.

Container Workload and Contents

What the hell is in the container? What does it do? Is that even the correct version? These are common questions from operations folks. They have no idea. Nor do they know whether developers included tools like ssh in a container so they can alter its contents on the fly. Just as troubling is the difficulty of mapping access rights to OS and host resources by a container, which can break operational security and open up the entire stack to various attacks.
Security folks are typically unaware of what – if any – container hardening may have been performed. You want to know each container’s contents have been patched, vetted, hardened, and registered prior to deployment.

Runtime Behavior

Organizations worry a container will attack or infect another container. They worry a container may quietly exfiltrate data, or just exhibit suspicious behavior. We have seen attacks extract source code, and others add new images to registries – in both cases the platforms were unprotected by identity and access management. Organizations need to confirm that access to the Docker client is sufficiently gated through access controls to limit who controls the runtime environment. They worry about containers running a long time, without rotation to newer patched versions. And whether the network has been properly configured to limit damage from compromise. And also about attackers probing containers, looking for vulnerabilities.

Operating System Security

Finally, the underlying operating system’s security is a concern. The key question is whether it is configured correctly to restrict each container’s access to the subset of resources it needs, and to effectively block everything else. Customers worry that a container will attack the underlying host OS or the container engine. They worry that the container engine may not sufficiently shield the underlying OS. If an attack on the host platform succeeds it’s pretty much game over for that cluster of containers, and may give malicious code sufficient access to pivot and attack other systems.

Orchestration Manager Security

A key reason to update and reissue this report is this change in the container landscape, where focus has shifted to orchestration managers which control containers. It sounds odd, but as containers have become a commodity unit of application delivery, organizations have begun to feel they understand containers, and attention has shifted to container management.
Attention and innovation have shifted to focus on cluster orchestration, with Kubernetes the poster child for optimizing value and use of containers. But most of the tools are incredibly complex. And like many software products, the focus of orchestration tools is scalability and ease of management – not security. As you probably suspected, orchestration tools bring a whole new set of security issues and vulnerabilities. Insecure default configurations, as well as permission escalation and code injection vulnerabilities, are common. What’s more, most organizations issue certificates, identity tokens and keys from the orchestration manager as containers are launched. We will drill down into these issues and what to do about them in the remainder of this series.

Share:
Read Post

Building a Container Security Program 2018: Introduction

The explosive growth of containers is not surprising – these technologies, such as Docker, alleviate several problems for developers deploying applications. Developers need simple packaging, rapid deployment, reduced environmental dependencies, support for microservices, generalized management, and horizontal scalability – all of which containers help provide. When a single technology enables us to address several technical problems at once, it’s very compelling. But this generic model of packaged services, where the environment is designed to treat each container as a “unit of service”, sharply reduces transparency and auditability (by design), and gives security pros nightmares. We run more code and faster, but must accept a loss of visibility inside the container. This raises the question: “How can we introduce security without losing the benefits of containers?” Containers scare the hell out of security pros because they are so opaque. The burden of securing containers falls across Development, Operations, and Security teams – but none of these groups always knows how to tackle their issues. Security and development teams may not even be fully aware of the security problems they face, as security is typically ignorant of the tools and technologies developers use, and developers don’t always know what risks to look for. The problem extends beyond containers to the entire build, deployment, and runtime environments. The container security space has changed substantially since our initial research 18-20 months back. Security of the orchestration manager is a primary concern, as organizations rely more heavily on tools to deploy and scale out applications. We have seen a sharp increase in adoption of container services (PaaS) from various cloud vendors, which changes how organizations need to approach security.
We reached forward a bit in our first container security paper, covering build pipeline security issues because we felt that was a hugely underserved area, but over the last 18 months DevOps practitioners have taken note, and this has become the top question we get. Just behind that is the need for secrets management to issue container credentials and secure identity. The rapid pace of change in this market means it’s time for a refresh. We get a ton of calls from people moving towards – or actively engaged in – DevOps, so we will target this research at both security practitioners and developers & IT operations. We will cover some reasons containers and container orchestration managers create new security concerns, as well as how to go about creating security controls across the entire spectrum. We will not go into great detail on how to secure apps in general here – instead we will focus on the build, container management, deployment, platform, and runtime security issues which arise with containers. As always we hope you will ask questions and participate in the process. Community involvement makes our research better, so we welcome your inquiries, comments, and suggestions.

Share:
Read Post

How Cloud Security Managers Should Respond to Meltdown and Spectre

I hope everyone enjoyed the holidays… just in time to return to work, catch up on email, and watch the entire Internet burn down thanks to a cluster of hardware vulnerabilities built into pretty much every computing platform available. I won’t go into details or background on Meltdown and Spectre (note: if I ever discover a vulnerability, I want it named “CutYourF-ingHeartOutWithSpoon”). Instead I want to talk about them in the context of the cloud, short-term and long-term implications, and some response strategies. These are incredibly serious vulnerabilities – not only due to their immediate implications, but also because they will draw increased scrutiny to a set of hardware weaknesses, which in turn are likely to require a generational fix (a computer generation – not your kids).

Meltdown

Briefly, Meltdown increases the risk of a multi-tenancy break. This has impacts on three levels: It potentially enables any instance or guest on a system to read all the memory on that system. This is the piece which cloud providers have almost completely patched. On a single system, it could also allow code in a container to read the memory of the entire server. This is likely also patched by cloud providers (AWS/Google/Microsoft). Because Function as a Service (‘serverless’) offerings are really implemented as code in containers, the same issues apply to these products. Meltdown is a privilege escalation vulnerability and requires a malicious process to be run on the system – you cannot use it to gain an initial foothold, but once you have a presence you can use it to do things like steal secrets from memory. Meltdown in its current form on major cloud providers is likely not an immediate security risk. But just to be safe I recommend immediately applying Meltdown patches at the operating system level to any instances you have running.
This would have been far worse if there hadn’t been a coordinated disclosure between researchers, hardware and operating system vendors, and cloud providers. You may see some performance degradation, but anything that uses autoscaling shouldn’t really notice.

Spectre

Spectre is a different group of vulnerabilities which relies on a different set of hardware-related issues. Right now Spectre only allows access to memory the application already has access to. This is still a privilege escalation issue because it’s useful for things like allowing hostile JavaScript code in a browser access to data outside its sandbox. This also seems like it could be an issue for anything which runs multiple processes in a sandbox (such as containers), and might allow reading data from other guests or containers on the same host. Exploitation is difficult, the cloud providers are on it, and there is nothing to be done right now – other than to pay attention. So for both attacks, your short-term action is to patch instances and keep an eye on upcoming patches. Oh – and if you run a private cloud, you really need to patch everything yesterday and be prepared to replace all your hardware within the next few years. All your hardware. Oops.

Long-term implications and recommendations

These are complex vulnerabilities related to deeply embedded hardware functionality. Spectre itself is more an entire vulnerability/exploit class than a single patchable vulnerability. Right now we seem to have the protections we need available, and the performance implications appear manageable (although the performance impact will be costly for some customers). The bigger concern is that we don’t know what other variants of both vulnerability classes may appear (or be discovered by malicious actors who don’t make them public). The consensus among my researcher friends is that this is a new area of study; while it’s not completely novel, it’s definitely drawing highly intelligent and experienced eyeballs.
I will be very surprised if we don’t see more variants and implications over the next few years. Hardware manufacturers need to update chip designs, which is a slow process, and even then they are likely to leave holes which researchers will eventually discover. Let’s not mince words – this is a very big deal for cloud computing. The immediate risk is very manageable but we need to be prepared for the long-term implications. As this evolves, here is what I recommend: Obviously, immediately patch all your operating systems on all your instances to the best of your ability. Hopefully cloud provider mitigations at the hypervisor level are already protecting you, but it’s still better to be safe. Start with a focus on instances where memory leaks are the worst threat. For highly sensitive workloads (e.g., encryption) immediately consider moving to dedicated tenancy and don’t run any less-privileged workloads on the same hardware. Dedicated tenancy means you rent a whole box from your cloud provider, and only your workloads run on it. This eliminates much of the concern of guest to host breaks. Migrate to dedicated PaaS where possible, especially for things like encryption operations. For example if you move to an AWS Elastic Load Balancer and perform discrete application data encryption in KMS, your crypto operations and keys are never exposed in the memory of any general-purpose system. This is the critical piece: the hardware underpinning these services isn’t used for anything other than the assigned service. So another tenant cannot run a malicious process to read the box’s physical memory. If you can’t run malicious code as a tenant, then even if you break multi-tenancy you still need to compromise the entire system – which cloud providers are damn good at preventing. Removing customers’ ability to run arbitrary processes is a massive roadblock to exploitation of these kinds of vulnerabilities. 
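One way to track the patch-everything recommendation across a large fleet is a simple inventory check. The tagging convention below is our own assumption for illustration, not a cloud provider feature; real environments would pull the inventory from the provider's API.

```python
# Hypothetical inventory audit: find instances that still need the
# Meltdown OS patch, assuming teams tag instances once patched.
instances = [
    {"id": "i-01", "tags": {"meltdown-patched": "true"}},
    {"id": "i-02", "tags": {}},
    {"id": "i-03", "tags": {"meltdown-patched": "false"}},
]

def unpatched(inventory):
    """Anything not positively marked patched is treated as unpatched."""
    return [inst["id"] for inst in inventory
            if inst["tags"].get("meltdown-patched") != "true"]

print(unpatched(instances))  # ['i-02', 'i-03']
```

Treating missing tags as unpatched (rather than assuming the best) is the safe default when auditing a response to vulnerabilities of this severity.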
Continue to migrate workloads to Function as a Service (also called 'serverless' and 'Lambda'), but recognize there are still risks. Moving to serverless pushes more responsibility for mitigating future vulnerabilities in these (and any other) classes onto your cloud provider, but since tenants can run nearly arbitrary code there is always a chance of future issues. Right now my feeling is that the risk is low, and far lower than running things
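As a quick way to check where a given Linux instance stands on the patching advice above: kernels 4.15 and later report speculative-execution mitigation status through sysfs. A minimal sketch (output format varies by kernel version and CPU; this only shows kernel-level mitigations, not hypervisor or microcode state):

```shell
# List the kernel's reported status for each known speculative-execution
# vulnerability (Meltdown, Spectre variants, and later additions).
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
  for f in /sys/devices/system/cpu/vulnerabilities/*; do
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
  done
else
  echo "kernel does not expose vulnerability status (pre-4.15 kernel?)"
fi
```

Lines reading "Mitigation: ..." mean the kernel believes it is protected; "Vulnerable" means the instance still needs a kernel update.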


New Paper: Understanding Secrets Management

Traditional application security concerns are shifting, responding to disruptive technologies and development frameworks. Cloud services, containerization, orchestration platforms, and automated build pipelines – to name just a few – all change the way we build and deploy applications. Each affects security in a different way. One of the new application security challenges is provisioning machines, applications, and services with the credentials they need at runtime. When you remove humans from the process things move much faster – but knowing how and when to automatically provide passwords, authentication tokens, and certificates is not an easy problem. This secrets management problem is not new, but our need grows exponentially when we begin orchestrating the entire application build and deployment process. We need to automate distribution and management of secrets to ensure secure application delivery. This research paper covers the basic use cases for secrets management, then dives into the different technologies that address this need. Many of these technologies assume a specific application deployment model, so we also discuss the pros and cons of the different approaches. We close with recommendations on product selection and decision criteria. We would like to thank the folks at CyberArk for getting behind this research effort and licensing this content. Support like this enables us to both deliver research under our Totally Transparent Research process and bring this content to you free of charge. Not even a registration wall. Free, and we respect your privacy. Not a bad deal. As always, if you have comments or questions on our research please shoot us an email. If you want to comment or make suggestions for future iterations of this research, please leave a comment here. You can go directly to the full paper: Securosis_Secrets_Management_JAN2018_FINAL.pdf Or visit the research library page.
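One common pattern for the runtime-provisioning problem described above is to resolve secrets from wherever the orchestrator injects them, rather than baking them into images or code. The sketch below assumes the widely used convention of file-mounted secrets (as Docker Swarm and Kubernetes secret volumes provide), with an environment-variable fallback; the function name and directory default are illustrative, not taken from the paper:

```python
import os
from pathlib import Path


def get_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Resolve a secret at runtime instead of hard-coding it.

    Checks a file-mounted secret first (the convention used by Docker
    Swarm and Kubernetes secret volumes), then falls back to an
    environment variable. Raises if the secret was never provisioned,
    so a misconfigured deployment fails fast instead of limping along.
    """
    secret_file = Path(secrets_dir) / name
    if secret_file.is_file():
        return secret_file.read_text().strip()
    value = os.environ.get(name.upper())
    if value is not None:
        return value
    raise LookupError(f"secret {name!r} was not provisioned")
```

At startup the application would call something like `get_secret("db_password")`, keeping credentials out of source control and container images entirely.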


Firestarter: An Explicit End of Year Roundup

The gang almost makes it through half the episode before dropping some inappropriate language as they summarize 2017. Rather than focusing on the big news, we spend time reflecting on the big trends and how little has changed, other than the pace of change – and how the biggest breaches of the year stemmed from everything from the oldest of old issues to the newest of new. And lastly, we want to thank all of you for your amazing support over the years. Securosis has been running as a company for a decade now, which likely scares all of you even more than us. We couldn't have done it without you… seriously.


Firestarter: Breacheriffic EquiFail

This week Mike and Rich address the recent spate of operational fails leading to massive security breaches. This isn't yet another blame-the-victim rant, but a frank discussion of why these issues are so persistent and so difficult to actually manage. We also discuss the rising role of automation and its potential to reduce these all-too-human errors. Watch or listen:


The Future of Security Operations: Regaining Balance

The first post in this series, Behind the 8 Ball, raised a number of key challenges of practicing security in our current environment. These include continual advancement and innovation by attackers seeking new ways to compromise devices and exfiltrate data, increasing complexity of technology infrastructure, frequent changes to said infrastructure, and finally the systemic skills shortage which limits the resources available to handle all the challenges created by the other issues. Basically, practitioners are behind the 8-ball in getting their jobs done and protecting corporate data. As we discussed in that earlier post, thinking differently about security entails changing things up to take a (dare we say it?) more enlightened approach: focusing the right resources on the right functions. We know it seems obvious that having expensive staff focused on rote and tedious functions is a suboptimal way to deploy resources. But most organizations do it anyway. We prefer to have our valuable, constrained, and usually highly skilled humans doing what humans are good at, such as:

  • identifying triggers which might indicate malicious activity
  • drilling into suspicious activity to understand the depth of attacks and assess potential damage
  • figuring out workarounds to address attacks

Humans in these roles generally know what to look for, but aren't very good at combing through huge amounts of data to find those patterns. Many don't like doing the same things over and over again – they get bored and less effective. They don't like graveyard shifts, and they want work that teaches them new things and stretches their capabilities. Basically they want to work in an environment where they do cool stuff and can grow their skills. And (especially in security) they can choose where they work. If they don't get the right opportunity in your organization, they will find another which better suits their capabilities and work style.
On the other hand, machines have no problem working 24/7 and don't complain about boring tasks – at least not yet. They don't threaten to find another place to work, nor do they agitate for broader job responsibilities or better refreshments in the break room. We're being a bit facetious here, and certainly don't advocate replacing your security team with robots. But in today's asymmetric environment, where you can't keep up with the task list, robots may be your only chance to regain balance and keep pace. So we will expand a bit on a couple concepts from our Intro to Threat Operations paper, because over time we expect our vision of threat operations to become a subset of SecOps. Enriching Alerts: The idea is to take an alert and add a bunch of common information you know an analyst will want, before sending it to the analyst. This way the analyst doesn't need to spend time gathering information from various systems and information sources, and can get right to work validating the alert and determining potential impact. Incident Response: Once an alert has been validated, a standard set of activities is generally part of the response. Some of these activities can be automated via integration with affected systems (networks, endpoint management, SaaS, etc.), and the time saved enables responders to focus on higher-level tasks such as determining proliferation and assessing data loss.

Enriching Alerts

Let's dig into enriching alerts from your security monitoring systems, and how this can work without human intervention. We start with a couple different alerts, and some educated guesses as to what would be useful to an analyst. Alert: Connection to a known bad IP: Let's say an alert fires for connectivity to a known bad IP address (thanks, threat intel!). With source and destination addresses, an analyst would typically start gathering basic information. Identity: Who uses the device?
With a source IP it's usually straightforward to see who the address is allocated to, and which devices that person tends to use. Target: Using the destination IP, the external site comes into focus. An analyst would probably perform geo-location to figure out where the IP is, and a whois query to figure out who owns it. They could also identify the hosting provider and search their threat intel service to see whether the IP belongs to a known botnet, and dig up any associated tactics. Network traffic: The analyst may also check network traffic from the device, looking for strange patterns (possibly C&C or reconnaissance) or uncharacteristically large volumes to or from that device over the past few days. Device hygiene: The analyst also needs specifics about the device, such as when it was last patched and whether it has a non-standard configuration. Recent changes: The analyst would probably be interested in the software running on the device, and whether any programs have been installed or configurations changed recently. Alert: Strange registry activity: In this scenario an alert is triggered because a device's registry has changed, but the change cannot be traced back to an authorized patch or software install. The analyst could use similar information to the first example, but device hygiene and recent device changes would be of particular interest. The general flow of network traffic would also be of interest, given that the device may have been receiving instructions or configuration changes from external devices. In isolation registry changes may not be a concern, but in close proximity to a large inbound data transfer the odds of trouble increase. Additionally, checking the web traffic logs from the device could provide clues to what the user was doing that might have resulted in compromise. Alert: Large USB file transfer: We can also see the impact of enrichment in an insider threat scenario.
Maybe an insider used their USB port for the first time recently, and transferred 1GB of data in a 3-hour window. That could generate a DLP alert. At that point it would be good to know which internal data sources the device has been communicating with, and any anomalous data volumes over the past few days, which
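The enrichment flow walked through above is straightforward to automate: take the raw alert, run each configured lookup, and attach the results before any human sees it. A minimal sketch, where every lookup is a hypothetical stub standing in for real identity, geo-location, whois, and threat-intel integrations:

```python
def enrich_alert(alert: dict, lookups: dict) -> dict:
    """Attach the context an analyst would otherwise gather by hand.

    `lookups` maps an enrichment field name to a callable that takes the
    raw alert and returns context. A failed lookup is recorded rather
    than blocking the alert from reaching the analyst.
    """
    enriched = dict(alert)  # never mutate the original alert
    for field, lookup in lookups.items():
        try:
            enriched[field] = lookup(alert)
        except Exception as exc:
            enriched[field] = f"lookup failed: {exc}"
    return enriched


# Hypothetical stubs; a real deployment would call a directory service,
# geo-IP database, and threat intel feed here.
stub_lookups = {
    "identity": lambda a: {"user": "jdoe", "device": a["src_ip"]},
    "geo": lambda a: {"country": "RO", "ip": a["dst_ip"]},
}

alert = {"type": "known_bad_ip", "src_ip": "10.0.0.12", "dst_ip": "203.0.113.7"}
enriched = enrich_alert(alert, stub_lookups)
```

The point of the design is that the analyst receives `enriched`, with identity, target, and traffic context already attached, instead of the bare alert.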


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.