Friday, May 20, 2016

Summary: May 19, 2016

By Rich

Rich here.

Not a lot of news from us this week, because we’ve mostly been traveling, and for Mike and me the kids’ school year is coming to a close.

Last week I was at the Rocky Mountain Information Security Conference in Denver. The Denver ISSA puts on a great show, but due to some family scheduling I didn’t get to see as many sessions as I hoped. I presented my usual pragmatic cloud pitch, a modification of my RSA session from this year. It seems one of the big issues organizations still face is a mix of figuring out where to get started on cloud/DevOps and switching over to understanding and implementing the fundamentals.

For example, one person in my session mentioned his team thought they were doing DevOps, but actually mashed some tools together without understanding the philosophy or building a continuous integration pipeline. Needless to say, it didn’t go well.

In other news, our advanced Black Hat class sold out, but there are still openings in our main class. I highlighted the course differences in a post.

You can subscribe to only the Friday Summary.

Top Posts for the Week

  • Another great post from the Signal Sciences team. This one highlights a session from DevOps Days Austin by Dan Glass of American Airlines. AA has some issues unique to their industry, but Dan’s concepts map well to any existing enterprise struggling to transition to DevOps while maintaining existing operations. Not everyone has the luxury of building everything from scratch. Avoiding the Dystopian Road in Software.
  • One of the most popular informal talks I give clients and teach is how AWS networking works. It is completely based on this session, which I first saw a couple years ago at the re:Invent conference – I just cram it into 10-15 minutes and skip a lot of the details. While AWS-specific, this is mandatory for anyone using any kind of cloud. The particulars of your situation or provider will differ, but not the issues. Here is the latest, with additional details on service endpoints: AWS Summit Series 2016 | Chicago – Another Day, Another Billion Packets.
  • In a fascinating move, Jenkins is linking up with Azure, and Microsoft is tossing in a lot of support. I am actually a fan of running CI servers in the cloud for security, so you can tie them into cloud controls that are hard to implement locally, such as IAM. Announcing collaboration with the Jenkins project.
  • Speaking of CI in the cloud, this is a practical example from Flux7 of adding security to Git and Jenkins using Amazon’s CodeDeploy. TL;DR: you can leverage IAM and Roles for more secure access than you could achieve normally: Improved Security with AWS CodeCommit.
  • Netflix releases a serverless Open Source SSH Certificate Authority. It runs on AWS Lambda, and is definitely one to keep an eye on: Netflix/bless.
  • Airbnb talks about how they integrated syslog into AWS Kinesis using osquery (a Facebook tool I think I will highlight as tool of the week): Introducing Syslog to AWS Kinesis via Osquery – Airbnb Engineering & Data Science.

Tool of the Week

osquery by Facebook is a nifty Open Source tool to expose low-level operating system information as a real-time relational database. What does that mean? Here’s an example that finds every process running on a system where the binary is no longer on disk (a direct example from the documentation, and common malware behavior):

SELECT name, path, pid FROM processes WHERE on_disk = 0;

This is useful for operations, but it’s positioned as a security tool. You can use it for File Integrity Monitoring, real-time alerting, and a whole lot more. The site even includes ‘packs’ for common needs including OS X attacks, compliance, and vulnerability management.
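
If you want to script queries like that instead of typing them into the shell, here is a minimal sketch that shells out to osqueryi from Python. It assumes osquery is installed and osqueryi is on your PATH; check the flags against your installed version.

import json
import subprocess

def run_osquery(sql):
    # --json asks osqueryi to emit results as a JSON array of row objects
    result = subprocess.run(
        ["osqueryi", "--json", sql],
        capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    rows = run_osquery("SELECT name, path, pid FROM processes WHERE on_disk = 0;")
    for row in rows:
        print("Running binary missing from disk: pid={} path={}".format(row["pid"], row["path"]))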

Securosis Blog Posts this Week

Other Securosis News and Quotes

Another quiet week…

Training and Events

—Rich

Thursday, May 19, 2016

Incident Response in the Cloud Age: Shifting Foundations

By Mike Rothman

Since we published our React Faster and Better research and Incident Response Fundamentals, quite a bit has changed relative to responding to incidents. First and foremost, incident response is a thing now. Not that it wasn’t a discipline mature security organizations focused on before 2012, but since then a lot more resources and funding have shifted away from ineffective prevention towards detection and response. Which we think is awesome.

Of course, now that I/R is a thing and some organizations may actually have decent response processes, the foundation under us is shifting. But that shouldn’t be a surprise – if you want a static existence, technology probably isn’t the best industry for you, and security is arguably the most dynamic part of technology. We see the cloud revolution taking root, promising to upend and disrupt almost every aspect of building, deploying and operating applications. We continue to see network speeds increase, putting scaling pressure on every aspect of your security program, including response.

The advent of threat intelligence, as a means to get smarter and leverage the experiences of other organizations, is also having a dramatic impact on the security business, particularly incident response. Finally, the security industry faces an immense skills gap, which is far more acute in specialized areas such as incident response. So whatever response process you roll out needs to leverage technological assistance – otherwise you have little chance of scaling it to keep pace with accelerating attacks.

This new series, which we are calling “Incident Response in the Cloud Age”, will discuss these changes and how your I/R process needs to evolve to keep up. As always, we will conduct this research using our Totally Transparent Research methodology, which means we’ll post everything to the blog first, and solicit feedback to ensure our positions are on point.

We’d also like to thank SS8 for being a potential licensee of the content. One of the unique aspects of how we do research is that we call them a potential licensee because they have no commitment to license, nor do they have any more influence over our research than you. This approach enables us to write the kind of impactful research you need to make better and faster decisions in your day to day security activities.

Entering the Cloud Age

Evidently there is this thing called the ‘cloud’, which you may have heard of. As we have described for our own business, we are seeing cloud computing change everything. That means existing I/R processes need to now factor in the cloud, which is changing both architecture and visibility.

There are two key impacts on your I/R process from the cloud. The first is governance, as your data now resides in a variety of locations and with different service providers. Various parties are required to participate as you try to investigate an attack. The process integration of a multi-organization response is… um… challenging.

The other big difference in cloud investigation is visibility, or the lack of it. You don’t have access to the network packets in an Infrastructure as a Service (IaaS) environment, nor can you look inside a Platform as a Service (PaaS) offering to see what happened. That means you need to be a lot more creative about gathering telemetry on an ongoing basis, and figuring out how to access what you need during an investigation.
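
As one illustration of what that creative telemetry gathering can look like in IaaS, here is a rough sketch using AWS and boto3 (an assumption for the example; it requires the cloudtrail:LookupEvents permission) that pulls recent console login events during an investigation:

from datetime import datetime, timedelta, timezone
import boto3

def recent_console_logins(hours=24):
    # CloudTrail management events are one of the few telemetry sources you
    # get in IaaS without deploying anything yourself.
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    paginator = cloudtrail.get_paginator("lookup_events")
    events = []
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=start,
    ):
        events.extend(page["Events"])
    return events

if __name__ == "__main__":
    for event in recent_console_logins():
        print(event["EventTime"], event.get("Username"), event["EventName"])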

Speed Kills

We have also seen a substantial increase in the speed of networks over the past 5 years, especially in data centers. So if network forensics is part of your I/R toolkit (as it should be), how you architect your collection environment, and whether you actually capture and store full packets, are key decisions. Meanwhile data center virtualization is making it harder to know which servers are where, which makes investigation a bit more challenging.

Getting Smarter via Threat Intelligence

Sharing attack data between organizations still feels a bit strange for long-time security professionals like us. The security industry resisted admitting that successful attacks happen (yes, that ego thing got in the way), and held the entirely reasonable concern that sharing company-specific data could provide adversaries with information to facilitate future attacks.

The good news is that security folks got over their ego challenges, and also finally understand they cannot stand alone and expect to understand the extent of the attacks that come at them every day. So sharing external threat data is now common, and both open source and commercial offerings are available to provide insight, which is improving incident response. We documented how the I/R process needs to change to leverage threat intelligence, and you can refer to that paper for detail on how that works.

Facing down the Skills Gap

If incident response wasn’t already complicated enough because of the changes described above, there just aren’t enough skilled computer forensics specialists (who we call forensicators) to meet industry demand. You cannot just throw people at the problem, because they don’t exist. So your team needs to work smarter and more efficiently. That means using technology more for gathering and analyzing data, structuring investigations, and automating what you can. We will dig into emerging technologies in detail later in this series.

Evolving Incident Response

Like everything else in security, incident response is changing. The rest of this series will discuss exactly how. First we’ll dig into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we’ll dig into how to streamline a response process to address the lack of people available to do the heavy lifting of incident response. Finally we’ll bring everything together with a scenario that illuminates the concepts in a far more tangible fashion. So buckle up – it’s time to evolve incident response for the next era in technology: the Cloud Age.

—Mike Rothman

Wednesday, May 18, 2016

Understanding and Selecting RASP: Technology Overview

By Adrian Lane

This post will discuss technical facets of RASP products, including how the technology works, how it integrates into an application environment, and the advantages or disadvantages of each. We will also spend some time on which application platforms are supported today, as this is one area where each provider is limited and working to expand, so it will impact your selection process. Finally, we will consider a couple aspects of RASP technology which we expect to evolve over the next couple of years.

Integration

RASP works at the application layer, so each product needs to integrate with applications somehow. To monitor application requests and make sense of them, a RASP solution must have access to incoming calls. There are several methods for monitoring either application usage (calls) or execution (runtime), each deployed slightly differently, gathering a slightly different picture of how the application functions. Solutions are installed into the code production path, or monitor execution at runtime. To block and protect applications from malicious requests, a RASP solution must be inline.

  • Servlet Filters & Plugins: Some RASP platforms are implemented as web server plug-ins or Java Servlets, typically installed into either Apache Tomcat or Microsoft .NET to process inbound HTTP requests. Plugins filter requests before they reach application code, applying detection rules to each inbound request received. Requests that match known attack signatures are blocked. This is a relatively simple approach for retrofitting protection into the application environment, and can be effective at blocking malicious requests, but it doesn’t offer the in-depth application mapping possible with other types of integration. (A conceptual sketch of this filtering approach follows the list.)
  • Library/JVM Replacement: Some RASP products are installed by replacing the standard application libraries, JAR files, or even the Java Virtual Machine. This method basically hijacks calls to the underlying platform, whether library calls or the operating system. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. Under this model the RASP tool has a comprehensive view of application code paths and system calls, and can even learn state machine or sequence behaviors. The deeper analysis provides context, allowing for more granular detection rules.
  • Virtualization or Replication: This integration effectively creates a replica of an application, usually as either a virtualized container or a cloud instance, and instruments application behavior at runtime. By monitoring – and essentially learning – application code pathways, all dynamic or non-static code is mimicked in the cloud. Learning and detection take place in this copy. As with replacement, application paths, request structure, parameters, and I/O behaviors can be ‘learned’. Once learning is complete rules are applied to application requests, and malicious or malformed requests are blocked.
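
To make the filter-style integration concrete, here is a purely illustrative sketch – not any vendor’s actual code – of the concept in Python/WSGI terms: a middleware that applies a couple of toy negative-security rules to each inbound request before the application sees it.

import re

# Toy signatures for illustration only; real products ship far richer rule sets.
ATTACK_PATTERNS = [
    re.compile(r"(?i)union\s+select"),  # crude SQL injection indicator
    re.compile(r"(?i)<script\b"),       # crude reflected XSS indicator
]

class RaspLikeFilter:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Inspect the request before any application code runs
        candidate = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
        if any(pattern.search(candidate) for pattern in ATTACK_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)

# Usage: wrap any WSGI application, e.g. application = RaspLikeFilter(application)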

Language Support

The biggest divide between RASP providers today is their platform support. For each vendor we spoke with during our research, language support was a large part of their product roadmap. Most provide full support for Java; beyond that support is hit and miss. .NET support is increasingly common. Some vendors support Python, PHP, Node.js, and Ruby as well. If your application doesn’t run on Java you will need to discuss platform support with vendors. Within the next year or two we expect this issue to largely go away, but for now it is a key decision factor.

Deployment Models

Most RASP products are deployed as software, within an application software stack. These products work equally well on-premise and in cloud environments. Some solutions operate fully in a cloud replica of the application, as in the virtualization and replicated models mentioned above. Still others leverage a cloud component, essentially sending data from an application instance to a cloud service for request filtering. What generally doesn’t happen is dropping an appliance into a rack, or spinning up a virtual machine and re-routing network traffic.

Detection Rules

During our interviews with vendors it became clear that most are still focused on negative security: they detect known malicious behavior patterns. These vendors research and develop attack signatures for customers. Each signature explicitly describes one attack, such as SQL injection or a buffer overflow. For example, most products include policies focused on the OWASP Top Ten critical web application vulnerabilities, commonly with multiple policies to detect variations of the top ten threat vectors. This makes their rules harder for attackers to evade. And many platforms include specific rules for various Common Vulnerabilities and Exposures, providing the RASP platform with signatures to block known exploits.

Active vs. Passive Learning

Most RASP platforms learn about the application they are protecting. In some cases this helps to refine detection rules, adapting generic rules to match specific application requests. In other cases this adds fraud detection capabilities, as the RASP learns to ‘understand’ application state or recognize an appropriate set of steps within the application. Understanding state is a prerequisite for detecting business logic attacks and multi-part transactions. Other RASP vendors are just starting to leverage a positive (whitelisting) security model. These RASP solutions learn how API calls are exercised or what certain lines of code should look like, and block unknown patterns.

To do more than filter known attacks, a RASP tool needs to build a baseline of application behaviors, reflecting the way an application is supposed to work. There are two approaches: passive and active learning. A passive approach builds a behavioral profile as users use the application. By monitoring application requests over time and cataloging each request, linking the progression of requests to understand valid sequences of events, and logging request parameters, a RASP system can recognize normal usage. The other baselining approach is similar to what Dynamic Application Security Testing (DAST) platforms use: by crawling through all available code paths, the scope of application features can be mapped. By generating traffic to exercise new code as it is deployed, application code paths can be synthetically enumerated to produce a complete mapping predictably and more quickly.
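
A toy sketch of the passive approach, assuming a deliberately simplified model where ‘normal’ is just the set of method/path pairs seen during a learning window:

class RequestBaseline:
    def __init__(self):
        self.known = set()
        self.learning = True

    def observe(self, method, path):
        if self.learning:
            self.known.add((method, path))
            return True                       # everything is allowed while learning
        return (method, path) in self.known   # afterwards, unknown patterns are flagged

baseline = RequestBaseline()
baseline.observe("GET", "/login")
baseline.observe("POST", "/login")
baseline.learning = False
assert baseline.observe("POST", "/login")
assert not baseline.observe("GET", "/admin/../etc/passwd")

Real products track far more than this (parameters, sequences, application state), but the learn-then-enforce pattern is the same.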

Note that RASP’s positive security capabilities are nascent. We see threat intelligence and machine learning capabilities as a natural fit for RASP, but these capabilities have not yet fully arrived. Compared to competing platforms, they lack maturity and functionality. But RASP is still relatively new, and we expect the gaps to close over time. On the bright side, RASP addresses application security use cases which competitive technologies cannot.

We have done our best to provide a detailed look at RASP technology, both to help you understand how it works and to differentiate it from other security products which sound similar. If you have questions, or some aspect of this technology is confusing, please comment below, and we will work to address your questions. A wide variety of platforms – including cloud WAF, signal intelligence, attribute-based fraud detection, malware detection, and network oriented intelligence services – all market value propositions which overlap with RASP. But unless the product can work in the application layer, it’s not RASP.

Next we will discuss emerging use cases, and why firms are looking for alternatives to what they have today.

—Adrian Lane

Monday, May 16, 2016

Shining a Light on Shadow Devices: Seeing into the Shadows

By Mike Rothman

As we have posted this Shadow Devices series, we have discussed the millions (likely billions) of new devices which will be connecting to networks over the coming decade. Clearly many of them won’t be traditional computer devices, which can be scanned and assessed for security issues. We called these other devices shadow devices because this is about more than the “Internet of Things” – any networked device which can be used to steal information – whether directly or by providing a stepping stone to targeted information – needs to be considered.

Our last post explained how peripherals, medical devices, and control systems can be attacked. We showed that although traditional malware attacks on traditional computing and mobile get most of the attention in IT security circles, these other devices shouldn’t be ignored. As with most things, it’s not a matter of if but when these lower-profile devices will be used to perpetrate a major attack.

So now what? How can you figure out your real attack surface, and then move to protect the systems and devices providing access to your critical data? It’s back to Security 101, which pretty much always starts with visibility, and then moves to control once you figure out what you have and how it is exposed.

Risk Profiling

Your first step is to shine a light into the ‘shadows’ on your network to gain a picture of all devices. You have a couple options to gain this visibility:

  1. Active Scanning: You can run a scan across your entire IP address space to find out what’s there. This can be a serious task for a large address space, consuming resources while you run your scanner(s). This process can only happen periodically, because it wouldn’t be wise to run a scanner continuously on internal networks. Keep in mind that some devices, especially ancient control systems, were not built with resilience in mind, so even a simple vulnerability scan can knock them over. (A minimal scanning sketch follows this list.)
  2. Passive Monitoring: The other alternative is basically to listen for new devices by monitoring network traffic. This assumes that you have access to all traffic on all networks in your environment, and that new devices will communicate to something. Pitfalls of this approach include needing access to the entire network, and that new devices can spoof other network devices to evade detection. On the plus side, you won’t knock anything over by listening.
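
As a minimal example of the active option, here is a sketch using only the Python standard library: a TCP-connect sweep of a few common ports across a small address range. Throttle and scope anything like this carefully; as noted above, brittle devices can fall over under even light scanning.

import ipaddress
import socket

COMMON_PORTS = [22, 80, 443, 9100]  # 9100 is the usual raw printing port

def probe(host, port, timeout=0.5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(cidr):
    found = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        open_ports = [port for port in COMMON_PORTS if probe(str(ip), port)]
        if open_ports:
            found[str(ip)] = open_ports
    return found

# Example: sweep("192.168.10.0/28")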

But we don’t see this as an either/or question for gaining full visibility into all devices on the network. There is a time and place for active scanning, but care must be taken to not take brittle systems down or consume undue network resources. We have also seen many scenarios where passive monitoring is needed to find new devices quickly once they show up on the network.

Once you have full visibility, the next step is to identify devices. You can certainly look for indicators of what type of device you found during an active scan. This is harder when passively scanning, but devices can be identified by traffic patterns and other indicators within packets. A critical selection criterion for passive monitoring technology is the vendor’s ability to identify the bulk of devices likely to show up on your network. Obviously in a very dynamic environment a fraction of devices cannot be identified through scanning or monitoring network traffic. But you want these devices to be a small minority, because anything you can’t identify through scanning requires manual intervention.

Once you know what kind of device you are dealing with, you need to start evaluating risk, a combination of the device’s vulnerability and exploitability. Vulnerability is a question of what could possibly happen. An attacker can do certain things with a printer which are impossible with an infusion pump, and vice-versa. So device type is key context. You also need to assess security vulnerabilities within the device. A newly identified device may warrant an active scan to gather more granular information. As we warned above, be careful with active scanning to avoid risking device availability. You can glean some information about vulnerabilities through passive scanning, but it requires quite a bit more interpretation, and is subject to higher false positive rates.

Exploitability depends on the security controls and/or configurations already in place on the device. A warehouse picker robot may run embedded Windows XP, but if the robot also runs a whitelist, malicious code cannot execute, so it might show up as vulnerable but not exploitable. The other main aspect of exploitability is attack path. If an external entity cannot access the warehouse system because it has no Internet-facing networks, even the vulnerable picker robot poses relatively little risk unless the physical location is attacked.

The final aspect of determining risk to a device is looking at what it has access to. If a device has no access to anything sensitive, then again it poses little risk. Of course that assumes your networks are sufficiently isolated. Determining risk is all about prioritization. You only have so many resources, so you need to choose what to fix wisely, and evaluating risk is the best way to allocate those scarce resources.
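
To show what that prioritization can look like in practice, here is a deliberately simple scoring sketch – an assumed model, not a standard – that combines 0-3 ratings for vulnerability, exploitability, and data access:

def risk_score(vulnerability, exploitability, data_access):
    # A device that is not exploitable, or that can reach nothing sensitive,
    # should rank low no matter how many flaws it carries.
    return vulnerability * exploitability * data_access

devices = [
    {"name": "warehouse picker robot", "score": risk_score(3, 1, 1)},
    {"name": "lobby printer", "score": risk_score(2, 2, 2)},
    {"name": "infusion pump", "score": risk_score(2, 2, 3)},
]
for device in sorted(devices, key=lambda d: d["score"], reverse=True):
    print(device["name"], device["score"])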

Controls

Once you know what’s out there in the shadows, your next step is to figure out whether, and perhaps how, to protect those devices. This again comes back to the risk profiles discussed above. It doesn’t make much sense to spend a lot of time and money protecting devices which don’t present much risk to the organization. But when a device does present sufficient risk, how will you go about protecting it?

First things first: you should be making sure the device is configured in the most secure fashion. Yeah, yeah, that sounds trite and simple, but we mention it anyway because it’s shocking how many devices can be exploited due to open services that can easily be turned off. Once you have basic device hygiene taken care of, here are some other ways to protect these devices:

  1. Active Controls: The first and most direct way to protect a shadow device is by implementing an active control on it. The available controls vary depending on the kind of device and its embedded operating system. The most reliable means of protection is to stop execution of non-authorized programs. Yes, we are referring to whitelisting. There are commercial products available for embedded Windows devices, and a few options (some commercial, some open source) for other operating environments. There are also other defenses, including malware detection and even some forensic offerings, to really determine what’s happening on devices. Just keep in mind that active controls consume resources and can impair device stability. So you’ll want to exhaustively test any controls to be implemented on these devices to ensure you understand their impact.
  2. Network Isolation: These devices are on the network, which means traditional network security defenses can (and should) be used in conjunction with active controls. This involves using firewalls and/or routers & switches to provide a measure of isolation between these device networks and your data stores.
  3. Egress Filtering: We also recommend egress filtering on networks with these shadow devices. You can detect command and control, as well as remote access, by looking at outbound network traffic. Remember, these devices aren’t inherently malicious, so if something bad is happening it’s because an adversary is controlling the device, which means it’s communicating with a bot network or other controller outside your network, and that can be tracked. (A simple egress-flagging sketch follows this list.)
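
Here is a simple egress-flagging sketch to illustrate the idea. It assumes you already export outbound flows to a CSV with src_ip and dst_ip columns; the file format and approved destination list are assumptions for the example.

import csv
import ipaddress

APPROVED_DESTINATIONS = [
    ipaddress.ip_network("10.0.0.0/8"),       # internal services
    ipaddress.ip_network("203.0.113.10/32"),  # hypothetical vendor update server
]

def is_approved(dst_ip):
    address = ipaddress.ip_address(dst_ip)
    return any(address in network for network in APPROVED_DESTINATIONS)

def flag_unapproved(flow_log_path):
    with open(flow_log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if not is_approved(row["dst_ip"]):
                print("Unapproved egress: {} -> {}".format(row["src_ip"], row["dst_ip"]))

# Example: flag_unapproved("device_subnet_flows.csv")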

An increasing limitation for network-oriented controls is encrypted traffic. We have published research on dealing with encrypted networks – there are ways to peek into encrypted traffic streams. When deciding between network and active controls for a device, you need to consider encryption – network-based controls are easiest because they don’t require device interaction, but they may introduce blind spots, particularly thanks to encryption.

Automation

Finally we should mention the challenges of implementing fixes and workarounds – not only when dealing with shadow devices, but for all devices on networks now. There just isn’t enough staff to address all requirements, which means organizations need to think creatively about how to make limited staff more productive. One emerging way of magnifying staff efforts is automation. You have existing controls, which can be reconfigured based on specific rules. Of course automation of existing technologies requires integration with the existing security controls, but lately we have seen considerable innovation in this area, born out of necessity.

Our last step in figuring out your strategy to provide visibility and control over shadow devices is to determine which fixes can be automated, and to establish a strategy for that automation. We caution again that automation is a double-edged sword, which must be used carefully to ensure the ‘cure’ isn’t much worse than the symptoms. So ensure sufficient testing, and have reasonable expectations for what you can automate – in both the short and longer terms.

With that we wrap up our series on “Shining a Light on Shadow Devices”. As we described, there will be an explosion of devices connecting to networks you are tasked to protect, and without sufficient visibility into what is connecting to them you are blind to potential attacks. There are a variety of ways to find these devices, and to implement controls protecting them. But you can’t protect what you can’t see, so make sure to shine a light on all the dark places in your networks so you are fully aware of your attack surface.

—Mike Rothman

Tuesday, May 10, 2016

SIEM Kung Fu [New Paper]

By Mike Rothman

In the SIEM Kung Fu paper, we tell you what you need to know to get the most out of your SIEM, and solve the problems you face today by increasing your capabilities (the promised Kung Fu).

SIEM Kung Fu

We would like to thank Intel Security for licensing the content in this paper. Our unique Totally Transparent Research model allows us to do objective and useful research and still pay our bills, so we’re thankful to all of the companies that license our research.

Check out the page in the research library or download the paper directly (PDF).

—Mike Rothman

Understanding and Selecting RASP *edited* [New Series]

By Adrian Lane

In 2015 we researched Putting Security Into DevOps, with a close look at how automated continuous deployment and DevOps impacted IT and application security. We found DevOps provided a very real path to improve application security using continuous automated testing, run each time new code was checked in. We were surprised to discover developers and IT teams taking a larger role in selecting security solutions, and bringing a new set of buying criteria to the table. Security products must do more than address application security issues; they need to mesh with continuous integration and deployment approaches, with automated capabilities and better integration with developer tools.

But the biggest surprise was that every team which had moved past Continuous Integration and onto Continuous Deployment (CD) or DevOps asked us about RASP, Runtime Application Self-Protection. Each team was either considering RASP, or already engaged in a proof-of-concept with a RASP vendor. We understand we had a small sample size, and the number of firms who have embraced either CD or DevOps application delivery is a very small subset of the larger market. But we found that once they started continuous deployment, each firm hit the same issues: the ability to automate security, the ability to test in pre-production, configuration skew between pre-production and production, and the ability for security products to identify where issues were detected in the code. In fact it was our DevOps research which placed RASP at the top of our research calendar, thanks to perceived synergies.

There is no lack of data showing that applications are vulnerable to attack. Many applications are old and simply contain too many flaws to fix. You know, that back office application that should never have been allowed on the Internet to begin with. In most cases it would be cheaper to re-write the application from scratch than patch all the issues, but economics seldom justify (or even permit) the effort. Other application platforms, even those considered ‘secure’, are frequently found to contain vulnerabilities after decades of use. Heartbleed, anyone? New classes of attacks, and even new use cases, have a disturbing ability to unearth previously unknown application flaws. We see two types of applications: those with known vulnerabilities today, and those which will have known vulnerabilities in the future. So tools to protect against these attacks, which mesh well with the disruptive changes occurring in the development community, deserve a closer look.

Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically contain the following capabilities:

  1. Monitor and block application requests; in some cases they can alter requests to strip malicious content
  2. Full functionality through RESTful APIs
  3. Integration with the application or application instance (virtualization)
  4. Unpack and inspect requests in the application, rather than at the network layer
  5. Detect whether an attack would succeed
  6. Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
  7. Instrument application usage

These capabilities overlap with white box & black box scanners, web application firewalls (WAF), next-generation firewalls (NGFW), and even application intelligence platforms. And RASP can be used in coordination with any or all of those other security tools. So the question you may be asking yourself is “Why would we need another technology that does some of the same stuff?” It has to do with the way it is used and how it is integrated.

Differing Market Drivers

As RASP is a (relatively) new technology, current market drivers are tightly focused on addressing the security needs of one or two distinct buying centers. But RASP offers a distinct blend of capabilities and usability options which makes it unique in the market.

  • Demand for security approaches focused on development, enabling pre-production and production application instances to provide real-time telemetry back to development tools
  • Need for fully automated application security, deployed in tandem with new application code
  • Technical debt, where essential applications contain known vulnerabilities which must be protected, either while defects are addressed or permanently if they cannot be fixed for any of various reasons
  • Application security supporting development and operations teams who are not all security experts

The remainder of this series will go into more detail on RASP technology, use cases, and deployment options:

  • Technical Overview: This post will discuss technical aspects of RASP products – including what the technology does; how it works; and how it integrates into libraries, runtime code, or web application services. We will discuss the various deployment models including on-premise, cloud, and hybrid. We will discuss some of the application platforms supported today, and how we expect the technology to evolve over the next couple years.
  • Use Cases: Why and how firms use this technology, with medium and large firm use case profiles. We will consider advantages of this technology, as well as the problems firms are trying to solve using it – especially around deficiencies in code security and development processes. We will provide some detail on monitoring vs. blocking threats, and discuss applicability to security and compliance requirements.
  • Deploying RASP: This post will focus on how to integrate RASP into development and release management processes. We will also jump into a more detailed discussion of how RASP differs from adjacent technologies like WAF, NGFW, and IDS, as well as static and dynamic application testing. We will walk through the advantages of each technology, with a special focus on operational considerations and keeping detection/blocking up to date. We will discuss advantages and tradeoffs compared to other relevant security technologies. This post will close with an example of a development pipeline, and how RASP technology fits in.
  • Buyers Guide: This is a new market segment, so we will offer a basic analysis process for buyers to evaluate products. We will consider integration with existing development processes and rule management.

—Adrian Lane

Monday, May 09, 2016

Updates to Our Black Hat Cloud Security Training Classes

By Rich

We have been getting questions on our training classes this year, so I thought I should update everyone on major updates to our ‘old’ class, and what to expect from our ‘advanced’ class. The short version is that we are adding new material to our basic class, to align with upcoming Cloud Security Alliance changes and cover DevOps. It will still include some advanced material, but we are assuming the top 10% (in terms of technical skills) of students will move to our new advanced class instead, enabling us to focus the basic class on the meaty part of the bell curve.

Over the past few years our Black Hat Cloud Security Hands On class became so popular that we kept adding instructors and seats to keep up with demand. Last year we sold out both classes and increased the size to 60 students, then still sold out the weekday class. That’s a lot of students, but the course is tightly structured with well-supported labs to ensure we can still provide a high-quality experience. We even added a bunch of self-paced advanced labs for people with stronger skills who wanted to move faster.

The problem with that structure is that it really limits how well we can support more advanced students, especially because we get a much wider range of technical skills than we expected at a Black Hat branded training. Every year we get sizable contingents from both extremes: people who no longer use their technical skills (managers/auditors/etc.), and students actively working in technology with hands-on cloud experience. When we started this training 6 years ago, nearly none of our students had ever launched a cloud instance.

Self-paced labs work reasonably well, but don’t really let you dig in the same way as focused training. There are also many major cloud advances we simply cannot cover in a class which has to appeal to such a wide range of students. So this year we launched a new class (which has already sold out, and expanded), and are updating the main class. Here are some details, with guidance on which is likely to fit best:

Cloud Security Hands-On (CCSK-Plus) is our introductory 2-day class for those with a background in security, but who haven’t worked much in the cloud yet. It is fully aligned with the Cloud Security Alliance CCSK curriculum: this is where we test out new material and course designs to roll out throughout the rest of the CSA. This year we will use a mixed lecture/lab structure, instead of one day of lecture with labs the second day.

We have started introducing material to align with the impending CSA Guidance 4.0 release, which we are writing. We still need to align with the current exam, because the class includes a token to take the test for the certificate, but we also wrote the test, so we should be able to balance that. This class still includes extra advanced material (labs) not normally in the CSA training and the self-paced advanced labs. Time permitting, we will also add an intro to DevOps.

But if you are more advanced you should really take Advanced Cloud Security and Applied SecDevOps instead. This 2-day class assumes you already know all the technical content in the Hands-On class and are comfortable with basic administration skills, launching instances in AWS, and scripting or programming. I am working on the labs now, and they cover everything from setting up accounts and VPCs usable for production application deployments, to building a continuous deployment pipeline and integrating security controls, to integrating PaaS services like S3, SQS, and SNS, to security automation through code (both serverless with Lambda functions and server-based).
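
To give a flavor of the serverless side of that automation, here is an illustrative sketch (not actual course material; it assumes boto3 and an execution role with ec2:DescribeSecurityGroups and ec2:RevokeSecurityGroupIngress) of a Lambda handler that revokes SSH rules open to the world:

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    revoked = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            if perm.get("FromPort") == 22 and perm.get("ToPort") == 22:
                # Only revoke the world-open ranges, leaving legitimate rules alone
                open_ranges = [{"CidrIp": r["CidrIp"]}
                               for r in perm.get("IpRanges", [])
                               if r.get("CidrIp") == "0.0.0.0/0"]
                if open_ranges:
                    ec2.revoke_security_group_ingress(
                        GroupId=sg["GroupId"],
                        IpPermissions=[{
                            "IpProtocol": perm["IpProtocol"],
                            "FromPort": 22,
                            "ToPort": 22,
                            "IpRanges": open_ranges,
                        }],
                    )
                    revoked.append(sg["GroupId"])
    return {"revoked": revoked}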

If you don’t understand any of that, take the Hands-On class instead.

The advanced class is nearly all labs, and even most of the lecture will be whiteboards instead of slides. The labs aren’t as tightly scripted, and there is a lot more room to experiment (and thus more margin for error). They do, however, all interlock to build a semblance of a real production deployment with integrated security controls and automation. I was pretty excited when I figured out how to build them up and tie them together, instead of having everything come out of a bucket of unrelated tasks.

Hopefully that clears things up, and we look forward to seeing some of you in August.

Oh, and if you work for BIGCORP and can’t make it, we also provide private trainings these days.

Here are the signup links:

—Rich

Shining a Light on Shadow Devices: Attacks

By Mike Rothman

What is the real risk of the Shadow Devices we described back in our first post? It is clear that most organizations don’t really take these risks seriously. They certainly don’t have workarounds in place, or proactively segment their environments to ensure that compromising these devices doesn’t provide an opportunity for attackers to gain a foothold in their environments. Let’s dig into three broad device categories to understand what attacks look like.

Peripherals

Do you remember how cool it was when the office printer got a WiFi connection? Suddenly you could put it wherever you wanted, preserving the Feng Shui of your office, instead of having it tethered to the network drop. And when the printer makers started calling their products image servers, not just printers? Yeah, that was when they started becoming more intelligent, and also tempting targets.

But what is the risk of taking over a printer? It turns out that even in our paperless offices of the future, organizations still print out some pretty sensitive stuff, and stuff they don’t want to keep may be scanned for storage/archival. Whether going in or out, sensitive content is hitting imaging servers. Many of them store the documents they print and scan until their memory (or embedded hard drive) is written over. So sensitive documents persist on devices, accessible to anyone with access to the device, either physical or remote.

Even better, many printers are vulnerable to common wireless attacks like the evil twin, where a fake device with a stronger wireless signal impersonates the real printer. So devices connect (and print) documents to the evil twin and not the real printer – the same attack works with routers too, but the risk is much broader. Nice. But that’s not all! The devices typically use some kind of stripped-down UNIX variant at the core, and many organizations don’t change the default passwords on their image servers, enabling attackers to trigger remote firmware updates and install compromised versions of the printer OS. Another attack vector is that these imaging devices now connect to cloud-based services to email documents, so they have all the plumbing to act as a spam relay.

Most printers use similar open source technologies to provide connectivity, so generic attacks generally work against a variety of manufacturers’ devices. These peripherals can be used to steal content, attack other devices, and provide a foothold inside your network perimeter. That makes these both direct and indirect targets.

These attacks aren’t just theoretical. We have seen printers hijacked to spread inflammatory propaganda on college campuses, and Chris Vickery showed proof of concept code to access a printer’s hard drive remotely.

Our last question is what kind of security controls run on imaging servers. The answer is: not much. To be fair, vendors have started looking at this more seriously, and were reasonably responsive in patching the attacks mentioned above. But that said, these products do not get the same scrutiny as other PC devices, or even some other connected devices we will discuss below. Imaging servers see relatively minimal security assessment before coming to market.

We aren’t just picking on printers here. Pretty much every intelligent peripheral is similarly vulnerable, because they all have operating systems and network stacks which can be attacked. It’s just that offices tend to have dozens of printers, which are frequently overlooked during risk assessment.

Medical Devices

If printers and other peripherals seem like low-value targets, let’s discuss something a bit higher-value: medical devices. In our era of increasingly connected medical devices – including monitors, pumps, pacemakers, and pretty much everything else – there hasn’t been much focus on product security, except in the few cases where external pressure is applied by regulators. These devices either have IP network stacks or can be configured via Bluetooth – neither of which is particularly well protected.

The most disturbing attacks threaten patient health. There are all too many examples of security researchers compromising infusion and insulin pumps, jackpotting drug dispensaries, and even the legendary Barnaby Jack messing with a pacemaker. We know one large medical facility that took it upon itself to hack all its devices in use, and deliver a list of issues to the manufacturers. But there has been no public disclosure of results, or whether device manufacturers have made changes to make their devices safe.

Despite the very real risk of medical devices being targeted to attack patient health, we believe most of the current risk involves information. User data is much easier for attackers to monetize; medical profiles have a much longer shelf-life and much higher value than typical financial information. So ensuring that Protected Health Information is adequately protected remains a key concern in healthcare.

That means making sure there aren’t any leakages in these devices, which is not easy without a full penetration test. On the positive front, many of these devices have purpose-built operating systems, so they cannot really be used as pivot points for lateral movement within the network. Yet few have any embedded security controls to ensure data does not leak. Further complicating matters, some devices still use deprecated operating systems such as Windows XP and even Windows 2000 (yes, seriously), and outdated compliance mandates often mean they cannot be patched without recertification. So administrators often don’t update the devices, and hope for the best. We can all agree that hope isn’t a sufficient strategy.

With lives at stake, medical device makers are starting to talk about more proactive security testing. Similarly to the way a major SaaS breach could prove an existential threat to the SaaS market, medical device makers should understand what is at risk, especially in terms of liability, but that doesn’t mean they understand how to solve the problem. So the burden lands on customers to manage their medical device inventories, and ensure they are not misused to steal data or harm patients.

Industrial Control Systems

The last category of shadow devices we will consider is control systems. These devices range from SCADA systems running power grids, to warehousing systems ensuring the right merchandise is picked and shipped, to manufacturing systems running robotics, and heavy building machinery. All these devices are networked (whether directly or indirectly) in today’s advanced factories, so there is attack surface to monitor and protect.

We know these systems can be attacked. Stuxnet was a very advanced attack on nuclear centrifuges. Once inside the nuclear facility’s network, the adversaries compromised a number of different types of control systems to access centrifuges and break them. In a recent attack on a German blast furnace, the control systems were compromised and general failsafes were inoperable; the facility went offline while they cleaned the systems up.

In both cases, and likely many others that aren’t publicized, the adversaries are very advanced. They need to be – to attack a centrifuge like Stuxnet you need your own centrifuges to test on, and they aren’t exactly easy to find on eBay. You cannot just load a blast furnace into the pick-up Saturday morning for a pen-test.

That may comfort some people, but it shouldn’t. The implication is that control system defenders aren’t dealing with the Metasploit crowd, but instead trying to repel well-funded and capable adversaries. They need a very clear idea of what their attack surface looks like, and some way of monitoring their devices; they cannot rely on compliance mandates to require advanced security on their control systems.

Another consideration with control systems is the brittle nature of many of them. They are hard to test because you could bring down the system while trying to figure out whether it’s vulnerable. Most organizations don’t like that trade-off, so they don’t test directly. This means you need indirect techniques – definitely to figure out how vulnerable they are, and probably to discover and monitor them as well.

Which makes a good segue to our next post, where we will dig into two aspects of protecting shadow devices: Visibility and Control. First and foremost you need to figure out where these devices really are. Then you can worry about how to ensure they are not being attacked or misused.

—Mike Rothman

Friday, May 06, 2016

Summary: May 5, 2016

By Rich

Rich here.

It’s been a busy couple weeks, and the pace is only ramping up. This week I gave a presentation and a workshop at Interop. It seemed to go well, and the networking-focused audience was very receptive. Next week I’m out at the Rocky Mountain Infosec Conference, which is really just an excuse to spend a few more days back near my old home in Colorado. I get home just in time for my wife to take a trip, then even before she’s back I’m off to Atlanta to keynote an IBM Cybersecurity Seminar (free, if you are in the area). I’m kind of psyched for that one because it’s at the aquarium, and I’ve been begging Mike to take me for years.

Not that I’ve been to Atlanta in years.

Then some client gigs, and (hopefully) things will slow down a little until Black Hat. I’m updating our existing (now ‘basic’) cloud security class, and building the content for our Advanced Cloud Security and Applied SecDevOps class. It looks like it will be nearly all labs and whiteboarding, without too many lecture slides, which is how I prefer to learn.

This week’s stories are wide-ranging, and we are nearly at the end of our series highlighting continuous integration security testing tools. Please drop me a line if you think we should include commercial tools. We work with some of those companies, so I generally try to avoid specific product mentions. Just email.

You can subscribe to only the Friday Summary.

Top Posts for the Week

Tool of the Week

It’s time to finish off our series on integrating security testing tools into deployment pipelines with Mittn, which is maintained by F-Secure. Mittn is like Gauntlt and BDD-Security in that it wraps other security testing tools, allowing you to script automated tests into your CI server. Each of these tools defaults to a slightly different set of integrated security tools, and there’s no reason you can’t combine multiple tools in a build process.

Basically, when you define a series of tests in your build process, you tie one of these into your CI server as a plugin or use scripted execution. You pass in security tests using the template for your particular tool, and it runs your automated tests. You can even spin up a full virtual network environment to test just like production.
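
The scripted-execution route can be as simple as a small wrapper the CI job calls, which runs your chosen security test command and breaks the build on failure. A rough sketch follows; the gauntlt invocation at the bottom is just a placeholder, so substitute whatever tool and arguments your pipeline actually uses.

import subprocess
import sys

def run_security_stage(command):
    # Any non-zero exit code from the security tool fails this stage,
    # which in turn fails the CI build.
    result = subprocess.run(command)
    if result.returncode != 0:
        print("Security tests failed; breaking the build.", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_security_stage(["gauntlt", "attacks/"]))  # placeholder invocation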

I am currently building this out myself, both for our training classes and our new securosis.com platform. For the most part it’s pretty straightforward… I have Jenkins pulling updates from Git, and am working on integrating Packer and Ansible to build new server images. Then I’ll mix in the security tests (probably using Gauntlt to start). It isn’t rocket science or magic, but it does take a little research and practice.

Securosis Blog Posts this Week

Other Securosis News and Quotes

Another quiet week.

Training and Events

—Rich

Wednesday, May 04, 2016

Updating and Pruning our Mailing Lists

By Rich

As part of updating All Things Securosis, the time has come to migrate our mailing lists to a new provider (MailChimp, for the curious). The CAPTCHA at our old provider wasn’t working properly, so people couldn’t sign up. I’m not sure if that’s technically irony for a security company, but it was certainly unfortunate. So…

If you weren’t expecting this, for some reason our old provider had you listed as active!! If so, we are really sorry; please click the unsubscribe link at the bottom of the email (yes, some of you are just reading this on the blog). We did our best to prune the list and only migrated active subscriptions (our lists were always self-subscribe to start), but the numbers look a little funny and let’s just say there is a reason we switched providers. Really, we don’t want to spam you, we hate spam, and if this shows up in your inbox and is unwanted, the unsubscribe link will work, and feel free to email us or reply directly. I’m hoping it’s only a few people who unsubscribed during the transition.

If you want to be added, we have two different lists – one for the Friday Summary (which is all cloud, security automation, and DevOps focused), and the Daily Digest of all emails sent the previous day. We only use these lists to send out email feeds from the blog, which is why I’m posting this on the site and not sending directly. We take our promises seriously and those lists are never shared/sold/whatever, and we don’t even send anything to them outside blog posts. Here are the signup forms:

Now if you received this in email, and sign up again, that’s very meta of you and some hipster is probably smugly proud.

Thanks for sticking with us, and hopefully we will have a shiny new website to go with our shiny new email system soon. But the problem with hiring designers that live in another state is flogging becomes logistically complex, and even the cookie bribes don’t work that well (especially since their office is, literally, right above a Ben and Jerry’s).

And again, apologies if you didn’t expect or want this in your inbox; we spent hours trying to pull only active subscribers and then clean everything up but I have to assume mistakes still happened.

—Rich

Tuesday, May 03, 2016

Firestarter: What the hell is a cloud anyway?

By Rich

In our wanderings we’ve noticed that when we pull our heads out of the bubble, not everyone necessarily understands what cloud is or where it’s going. Heck, many smart IT people are still framing it within the context of what they currently do. It’s only natural, especially when they get crappy advice from clueless consultants, but it certainly can lead you down some ugly paths. This week Mike, Adrian and Rich are also joined by Dave Lewis (who accidentally sat down next to Rich at a conference) to talk about how people see cloud, the gaps, and how to navigate the waters.

Watch or listen:


—Rich

Friday, April 29, 2016

Summary: April 28, 2016

By Rich

Rich here.

Okay, have I mentioned how impatient I’m getting about updating our site? Alas, you can only push a good design and implementation so fast. The foundation is all set and we hope to start transferring everything into our new AWS architecture within the next month.

In the meantime I just pushed some new domain outlines for the Cloud Security Alliance Guidance into the GitHub repository for public feedback. I’m also starting to tie together the labs for our Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps training. I have this weird thing where I like labs to build up into a full stack that resembles something you might actually deploy. It works well, but takes a lot more time to piece together.

If you want to subscribe directly to the Friday Summary only list, just click here.

Top Posts for the Week

  • This continues the string of legal problems created by pressure from U.S. law enforcement. It’s aligned with the Microsoft case in Ireland and the Apple vs. FBI issues here. Basically, it’s going to be very hard for U.S. tech companies to compete internationally if they can’t assure customers they meet local privacy and security laws: Microsoft sues US government over ‘unconstitutional’ cloud data searches
  • This topic comes up a lot. One interesting thing I hadn’t seen before is the ability to inject activity into your AWS account so you can run a response test (slide 13). Let me know if this is possible on other cloud providers: Security Incident Response and Forensics on AWS
  • Google Cloud Platform racks up some more certifications. Normally I don’t cover each of these, but from time to time it’s worth highlighting that the major providers are very aggressive on their audits and certifications: Now playing: New ISO security and privacy certifications for Google Cloud Platform
  • There are two papers linked on this Azure post on security and incident response. The IR one should be of particular interest to security pros: Microsoft Incident Response and shared responsibility for cloud computing
  • An interview and transcript from some top-notch DevOps security pros: Rugged DevOps: Making Invisible Things Visible
  • Zero trust is a concept that’s really starting to gain some ground. I know one client who literally doesn’t trust their own network and users need to VPN in even from the office, and all environments are compartmentalized. This is actually easier to do in the cloud vs. a traditional datacenter, especially if you use account segregation: Zero Trust Is a Key to DevOps Security.
  • While it doesn’t look like anyone exploited this vulnerability, it’s still not good, and Office 365 is one of the most heavily tested platforms out there: Office 365 Vulnerability Exposed Any Federated Account
  • I keep bouncing around testing the different platforms. So far I like Ansible better for deployment pipelines, but Chef or Puppet for managing live assets. However, I don’t run much that isn’t immutable, so I don’t have a lot of experience running them at scale in production. If you have any opinions, please email me: Ansible vs Chef. Nothing interesting…

Tool of the Week

Two weeks ago I picked the Gauntlt security testing tool as the Tool of the Week. This week I’ll add to the collection with BDD-Security by ContinuumSecurity (it’s also available on GitHub).

BDD stands for “Behavior Driven Development”. It’s a programming concept from outside security, also used for application testing in general. Conceptually, you define a test as “given A, when X, then Y”. In security terms this could be: “given a user logs in, and it fails four times, then block the user”. BDD-Security supports these kinds of tests, and includes both internal assessment features and the ability to integrate external tools, including Nessus, similar to Gauntlt.

Here’s what it looks like, taken directly from an Adobe blog post on the topic:

Scenario: Lock the user account out after 4 incorrect authentication attempts
Meta: @id auth_lockout
Given the default username from: users.table
And an incorrect password
And the user logs in from a fresh login page 4 times
When the default password is used from: users.table
And the user logs in from a fresh login page
Then the user is not logged in

These tools are designed to automate security testing into the development pipeline, but have the added advantage of speaking to developers on their own terms. We aren’t hitting applications with some black box scanner from the outside that only security understands; we are integrating our tools in a familiar, accepted way, using a common language.

Securosis Blog Posts this Week

Other Securosis News and Quotes

Quiet week

Training and Events

—Rich

Wednesday, April 27, 2016

Incite 4/27/2016: Tap the B.R.A.K.E.S.

By Mike Rothman

I mentioned back in January that XX1 had gotten her driver’s permit and was in command of a two-ton weapon on a regular basis. Driving with her has been, uh, interesting. I try to give her an opportunity to drive where possible, like when I have to get her to school in the morning. She can navigate the couple of miles through traffic on the way to her school. And she drives to/from her tutor as well, but that’s still largely local travel.

Though I do have to say, I don’t feel like I need to run as frequently because the 15-20 minutes in the car with her gets my heart racing for the entire trip. Obviously having been driving for over 30 years, I see things as they develop in front of me. She doesn’t. So I have to squelch the urge to say, “Watch that dude over there, he’s about to change lanes.” Or “That’s a red light and that means stop, right?” Or “Hit the f***ing brakes before you hit that car, building, child, etc.”

She only leveled a garbage bin once. Which caused more damage to her ego and confidence than it did to the car or the bin. So overall, it’s going well. But I’m not taking chances, and I want her to really understand how to drive. So I signed her up for the B.R.A.K.E.S. teen defensive driver training. Due to some scheduling complexity, taking the class in New Jersey worked better. So we flew up last weekend and stayed with my Dad on the Jersey Shore.

First, a little digression. When you have 3 kids with crazy schedules, you don’t get a lot of individual time with any of them. So it was great to spend the weekend with her, and I definitely got a much greater appreciation for the person she is in this moment. As we were sitting on the plane, I glanced over and she seemed so big. So grown up. I got a little choked up as I had to acknowledge how quickly time is passing. I remember bringing her home from the hospital like it was yesterday. Then we were at a family event on Saturday night with some cousins by marriage whom she doesn’t know very well. Seeing her interact with these folks, hold a conversation, and be funny and engaging and cute, I was overwhelmed with pride watching her bring light to the situation.

But then it was back to business. First thing Sunday morning we went over to the race track. They did the obligatory video to scare the crap out of the kids. The story of B.R.A.K.E.S. is a heartbreaking one. Doug Herbert, a professional drag racer, started the program after losing his two sons in a teen driving accident. So he travels around the country with a band of other professional drivers, teaching teens how to handle a vehicle.

BRAKES car

The statistics are shocking. Upwards of 80% of teens will get into an accident in their first 3 years of driving. There are 5,000 teen driving fatalities each year. And these kids get very little training before they are put behind the wheel to figure it out.

The drills for the kids are very cool. They practice accident avoidance and steering while panic braking. They do a skid exercise to understand how to keep the car under control during a spin. They do slalom work to make sure they understand how far they can push the car and still maintain control. The parents even got to do some of the drills (which was very cool). They also do a distracted driving drill, where the instructor messes with the kids to show them how dangerous it is to text and play with the radio while driving. They also have these very cool drunk goggles, which simulate your vision when under the influence. It’s hard to see how any of the kids would get behind the wheel drunk after trying to drive with those goggles on.

I can’t speak highly enough about the program. I let XX1 drive back from the airport, and she navigated downtown Atlanta, a high-traffic situation on a 7-lane highway, and was able to avoid an accident when a knucklehead slowed down to 30 on the highway trying to switch lanes to make an exit. Her comfort behind the wheel was totally different, and her skills were clearly advanced after just four hours. If you have an opportunity to attend with your teen, don’t think about it. Just do it. Here is the schedule of upcoming trainings, and you should sign up for their mailing list.

The training works. They have run 18,000 teens through the program and not one of them has had a fatal accident. That’s incredible. And important. Especially given my teen will be driving without me (or her Mom) in the car in 6 months. I want to tip the odds in my favor as much as I can.

–Mike


Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Maximizing WAF Value

Resilient Cloud Network Architectures

Shadow Devices

Building a Vendor IT Risk Management Program

SIEM Kung Fu

Recently Published Papers


Incite 4 U

  1. Blockchain demystified: So I’m having dinner with a very old friend of mine, who is one of the big wheels at a very big research firm. We started together decades ago as networking folks, but I went toward security and he went toward managing huge teams of people. One of his coverage areas now is industry research, specifically financials. So these new currencies, including Bitcoin, are near and dear to his heart. But he didn’t call it Bitcoin; he said blockchain. I had no idea what he was talking about, but our pals at NetworkWorld put up a primer on blockchain. Regardless of whether it was a loudmouth Australian who came up with the technology, it’s going to become a factor in how transactions are validated over time, and large tech companies and banks are playing around with it. Being security folks, tasked at some point with protecting things, we probably need to at least understand how it works. – MR

  2. Luck Meets Perseverance: I’m not the first to say that being too early to a market and being dead wrong are virtually indistinguishable, certainly when the final tally is counted. When disruptive technologies emerge during tough economic times, visionaries teeter on oblivion. The recent Farnam Street post on The Creation of IBM’s Competitive Advantage has nothing to do with security. However, it is an excellent illustration of what it takes to succeed: a strong vision of the future, a drive to innovate, the fortitude to do what others think is crazy, and enough cash to weather the storm for a while. If you’re not familiar with the story, it’s definitely worth your time. But this storyline remains relevant as dozens of product vendors struggle to sell the future of security to firms that are just trying to get through the day. We see many security startups with the first three attributes. That much is common. We don’t see the fourth attribute, cash, much at all. Most innovation in technology is funded by VCs with the patience of a 5-year-old who’s just eaten a bowl of Cap’n Crunch. IBM was lucky enough to sell off ineffectual businesses to build a war chest, then focused its time, effort, and cash on the long-term goal. That recipe never goes out of style, but with the common early-stage technology funding model, we witness lots of very talented people crash and burn. – AL

  3. Internet Kings Do Research: With ever-increasing profits expected from companies (yes, it’s that free market thing), it seems there isn’t a lot of commercially funded basic research. You know, like the kind Microsoft did back in the day? Stuff that didn’t necessarily help sell more Windows, but helped push forward the use of technology. MSFT had the profits to support that. And now it seems Google and Facebook do too. In fact, rock star researchers are moving from one to the other to drive these basic research initiatives. Most recently, Regina Dugan, who spent time at DARPA working on military research before heading off to Google to lead their advanced tech lab, is now with Facebook to build a similar team. I think it’s awesome, and I’m glad that every time I like something on Facebook it contributes to funding research that may change the world at some point. – MR

  4. EU Data Protection Reform: The EU has approved a new data protection standard set to reform the standing 1995 rules. There are several documents published, so it will take time for a thorough analysis, but there are a couple of things to like here. The right to be forgotten, the right to know when your data has been hacked, and the need for consent to process your data are excellent ideas conceptually. With storage virtually limitless, companies and governments have no incentive to forget you or take precautions on your behalf unless pushed. So this is a step in the right direction. But as usual, I’m skeptical of much of the proposal: There is no provision to extend protection to third parties that access citizens’ data. There is no way to opt out of having your data shared among third parties, and no transparency when law enforcement goes rummaging around in your junk, which should be treated no differently than “being hacked”. In fact the EU seemed to go overboard to accommodate “law enforcement”, aka government access, without much oversight. Different day, same story. And without a way for someone to verify you’ve actually been ‘forgotten’ – and not just backed up to different servers – the clause is pretty much worthless. It’s a good next step, and hopefully we won’t wait another 20+ years for an update. – AL

  5. Shyster pen testers: It’s inevitable. There aren’t enough security folks, and services shops are expected to grow. So what do you do? The bait and switch, of course. Have a high-level, well-regarded tester, and then have a bunch of knucklehead kids do the actual test. Then write some total crap report and cash the check. As the Chief Monkey details, there are a lot of shysters in the business now. This is problematic for a lot of reasons. First, a customer may actually listen to the hogwash written in the findings report. You’ve got to check out the post to see some of the beauties in there. It also makes it hard for reputable firms, who won’t start dropping the price when a customer challenges their quals. But we haven’t repealed the laws of economics, so as long as there is a gap in the number of folks available to do the job, snake oil salespeople will be a reality in the business. – MR

—Mike Rothman

Monday, April 25, 2016

Building a Vendor IT Risk Management Program: Ongoing Monitoring and Communication

By Mike Rothman

As we mentioned in the last post, after you figure out what risk means to your organization, and determine the best way to quantify and rank your vendors in terms of that concept of risk, you’ll need to revisit your risk assessment, because security in general, and each vendor’s environment specifically, is dynamic and constantly changing. We also need to address how to deal with vendor issues (breaches and otherwise) – both within your organization, and potentially to customers as well.

Ongoing Monitoring

When keeping tabs on your vendors you need to decide how often to update your assessments of their security posture. In a perfect world you’d like a continuous view of each vendor’s environment, to help you understand your risk at all times. Of course continuous monitoring costs. So part of defining a V(IT)RM program is figuring out the frequency of assessment.

We believe vendors should not all be treated alike. The vendors in your critical risk tier (described in our last post) should be assessed as often as possible. Hopefully you’ll have a way (most likely through third-party services) of continually monitoring their Internet footprint, and alerting you when something changes adversely. We need to caveat that with a warning about real-time alerts. If you are not staffed to deal with real-time alerts, then getting them faster doesn’t help. In other words, if it takes you 3 days to work through your alert queue, getting an alert within an hour cannot reduce your risk much.

Vendors in less risky tiers can be assessed less frequently. An annual self-assessment and a quarterly scan might be enough for them. Again, this depends on your ability to deal with issues and verify answers. If you aren’t going to look at the results, forcing a vendor to update their self-assessment quarterly is just mean, so be honest with yourself when determining the frequency for assessments.
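To make that concrete, here is a minimal sketch (in Python) of how you might encode assessment cadence per risk tier. The tier names and intervals are illustrative assumptions – the real ones come out of your own program definition and staffing reality.

from datetime import date, timedelta

# Hypothetical cadence per tier: whether to buy continuous external monitoring,
# and how many days between full assessments.
TIER_CADENCE = {
    "critical":  {"continuous_monitoring": True,  "assessment_days": 90},
    "important": {"continuous_monitoring": False, "assessment_days": 180},
    "basic":     {"continuous_monitoring": False, "assessment_days": 365},
}

def next_assessment(tier, last_assessed):
    """Return the date the next assessment is due for a vendor in this tier."""
    return last_assessed + timedelta(days=TIER_CADENCE[tier]["assessment_days"])

# Example: a critical vendor last assessed in mid-January is due again in roughly 90 days.
print(next_assessment("critical", date(2016, 1, 15)))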

With assessment frequency determined by risk tier, what next? You’ll find adverse changes to the security posture of some vendors. The next step in the V(IT)RM program is to figure out how to deal with these issues.

Taking Action

You got an alert that there is an issue with a vendor, and you need to take action. But what actions can you take, considering the risk posed by the issue and the contractual agreement already in place? We cannot overstate the importance of defining acceptable actions contractually as part of your vendor onboarding process. A critical aspect of setting up and starting your program is ensuring your contracts with vendors support your desired actions when an issue arises.

So what can you do? This list is pretty consistent with most other security processes:

  • Alert: At minimum you’ll want a line of communication open with the vendor to tell them you found an issue. This is no different than an escalation during incident response. You’ll need to assemble the information you found and package it up for the vendor, giving them as much information as practical. But you need to balance how much time you’re willing to spend helping the vendor against everything else on your to-do list.
  • Quarantine: As an interim measure, until you can figure out what happened and your best course of action, you could quarantine the vendor. That could mean a lot of things. You might segment their traffic from the rest of your network. Or scrutinize each transaction coming from them. Or analyze all egress traffic to ensure no intellectual property is leaking. The point is that you’ll need time to figure out the best course of action, and putting the vendor in a proverbial penalty box can buy you that time. This is also contingent on being able to put a boundary around a specific vendor or service provider, which may not be possible, depending on what services they provide.
  • Cut off: There is also the kill switch, which removes vendor access to your systems and likely ends the business relationship. This is a draconian action, but sometimes a vendor presents so much risk, and/or refuses to make the changes you require, that you may not have a choice. As mentioned above, you’ll need to make sure your contract supports this action, unless you enjoy protracted litigation.

The latter two options impact the flow of business between your organization and the vendor, so you’ll need a process in place internally to determine if and when you quarantine and/or cut off a vendor. This escalation and action plan needs to be defined ahead of time. The rules of engagement, and the criteria to suspend or end a business relationship due to IT risk, need to be established ahead of time. Defined escalations ensure the internal stakeholders are in the loop as you consider flipping the kill switch.

A good rule of thumb is that you don’t want to surprise anyone when a vendor goes into quarantine or is cut off from your systems. If the business decision is made to keep the vendor active in your systems (a decision made well above your pay grade), at least you’ll have documentation that the risk was accepted by the business owner.
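If it helps to see that escalation logic written down, here is a rough Python sketch. The severity scale, tier names, and thresholds are hypothetical placeholders for whatever criteria your stakeholders actually agree on.

def vendor_action(tier, severity, contract_allows_cutoff):
    """Map a vendor issue to a response. severity runs 1 (low) to 5 (critical)."""
    if severity >= 4 and tier == "critical":
        # The kill switch only works if the contract supports it; otherwise
        # quarantine and escalate to the business owner.
        return "cut_off" if contract_allows_cutoff else "quarantine_and_escalate"
    if severity >= 3:
        return "quarantine"
    return "alert_vendor"

# Example: a severe issue at a critical vendor, but no kill-switch clause in the contract.
print(vendor_action("critical", 5, contract_allows_cutoff=False))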

Communicating Issues

Once the action plan is defined, documented, and agreed upon, you’ll want to build a communication plan. That includes defining when you’ll notify the vendor and when you’ll communicate the issue internally. As part of the vendor onboarding process you need to define points of contact with the vendor. Do they have a security team you should interface with? Is it their business operations group? You need to know before you run into an issue.

You’ll also want to make sure to have an internal discussion about how much you will support the vendor as they work through any issues you find. If the vendor has an immature security team and/or program, you can easily end up doing a lot of work for them. And it’s not like you have a bunch of time to do someone else’s work, right?

Of course business owners may be unsympathetic to your plight when their key vendor is cut off. That’s why organizational buy-in on the criteria for quarantining or cutting off a vendor is critical. The last thing you as a security professional want is to be in a firefight with a business leader over a key vendor. Establish your criteria, and manage to them. If you are overruled and the decision is made to grant an exception, you can’t do much about that. But at least you will be on record that the decision goes against the policies established within your vendor risk management program.

Breach Exposure

If you have enough vendors you will run into a situation where your vendor suffers a public breach. You will need to factor that specifically into your program, because you may have a responsibility to disclose the third-party breach to your customers as well. First things first: the vendor breach shouldn’t be a surprise to you. A vendor should proactively call their customers when they are breached and customers are at risk. But this is the real world, so we cannot afford to count on them acting correctly. What then?

This comes back to your organization’s Incident Response playbook. You have that documented, right? As described in our Incident Response Fundamentals research, you need to size up the issue, build a team, assess the damage, and then move to contain it. Of course this is a bit different for vendors, because there is a lot you don’t know about their systems. And depending on the vendor’s sophistication, they might not know either.

So (as always) internal communication and keeping senior management apprised of the situation are critical. You need to stay in close contact with the vendor, constantly assess your level of exposure, and decide if and when you need to disclose – to your board, audit committee, and possibly customers.

Also, as described in our IR Fundamentals research, work through a post-mortem with the vendor to make sure they learned from the experience. If you aren’t satisfied that it won’t happen again, perhaps you need to escalate to the business managers on your side, to reevaluate the relationship in light of the additional risk of doing business with them. Additionally, use this as an opportunity to refine your own process for the next time a vendor gets popped.

—Mike Rothman

Friday, April 22, 2016

Building a Vendor IT Risk Management Program: Evaluating Vendor Risk

By Mike Rothman

As we discussed in the first post in this series, whether it’s thanks to increasingly tight business processes/operations with vendors and trading partners, or to regulation (especially in finance), you can no longer ignore vendor risk management. So we delved into the structure of a VRM program and mapped out a few of its key aspects. Of course we are focused on the IT aspects of vendor management, which should be a significant component of a broader risk management approach for your environment.

But that raises the question of how you can actually evaluate the risks of a vendor. What should you be worried about, and how can you gather enough information to make an objective judgement of the risk posed by each vendor? That’s what we’ll explore in this post.

Risk in the Eye of the Beholder

The first aspect of evaluating vendor risk is actually defining what that risk means to your organization. Yes, that seems self-evident, but you’d be surprised how many organizations don’t document or get agreement on what presents vendor risk, and then wonder why their risk management programs never get anywhere. Sigh.

All the same, as mentioned above, vendor (IT) risk is a component of a larger enterprise risk management program. So first establish the risks of working with vendors. Those risks can be broken up into a variety of buckets, including:

  • Financial: This is about the viability of your vendors. Obviously this isn’t something you can control from an IT perspective, but if a key vendor goes belly up, that’s a bad day for your organization. So this needs to be factored in at the enterprise level, as well as considered from an IT perspective – especially as cloud services and SaaS proliferate. If your Database as a Service vendor (or any key service provider) goes away, for whatever reason, that presents risk to your organization.

  • Operational: You contract with vendors to do something for your organization. What is the risk if they cannot meet those commitments? Or if they violate service level agreements? Again, this is enterprise-level risk, but it also reaches down into the IT world. Do you pack up shop and go somewhere else if your vendor’s service is down for a day? Are your applications and/or infrastructure portable enough to even do that?

  • Security: As security professionals this is our happy place. Or unhappy place, depending on how you feel about the challenges of securing much of anything nowadays. This gets to the risk of a vendor being hacked and losing your key data, impacting the availability of your services, and/or allowing an adversary to use their access to jump into your networks and systems.

Within those buckets, there are probably a hundred different aspects that present risk to your organization. After defining those buckets of risk, you need to dig into the next level and figure out not just what presents risk, but also how to evaluate and quantify that risk. What data do you need to evaluate the financial viability of a vendor? How can you assess the operational competency of vendors? And finally, what can you do to stay on top of the security risk presented by vendors? We aren’t going to tackle financial or operational risk categories, but we’ll dig into the IT security aspects below.

Ask them

The first hoop most vendors have to jump through is self-assessment. As a vendor to a number of larger organizations, we are very familiar with the huge Excel spreadsheet or web app built to assess our security controls. Most of the questions revolve around your organization’s policies, controls, response, and remediation capabilities.

The path of least resistance for this self-assessment is usually a list of standard controls. Many organizations start with ISO 27002, COBIT, or PCI-DSS. Understand that relevance is key here. For example, if a vendor is only providing your organization with nuts and bolts, their email doesn’t present a very significant risk. So you likely want a separate self-assessment tool for each risk category, as we’ll discuss below.

It’s pretty easy to lie on a spreadsheet or web application. And vendors do exactly that. But you don’t have the resources to check everything, so there is a measure of “trust, but verify” you need to apply here. Just remember that it’s resource-intensive to evaluate every answer, so focus on what’s important, based on the risk definitions above.

External information

Just a few years ago, if you wanted to assess the security risk of a vendor, you needed to either have an on-site visit or pay for a penetration test to really see what an attacker could do to partners. That required a lot of negotiation and coordination with the vendor, which meant it could only be used for your most critical vendors. And half the time they’d tell you to go pound sand, pointing to the extensive self-assessment you forced them to fill out.

But now, with the introduction of external threat intelligence services, and techniques you can implement yourself, you can get a sense of what kind of security mess your vendors actually are. Here are a few types of relevant data sources:

  • Botnets: Botnets are public by definition, because they use compromised devices to communicate with each other. So if a botnet is penetrated you can see who is connecting to it at the time, and get a pretty good idea of which organizations have compromised devices. That’s exactly how a number of services tell you that certain networks are compromised without ever looking at the networks in question.

  • Spam: If you have a network that is blasting out a bunch of spam, that indicates an issue. It’s straightforward to set up a number of dummy email accounts to gather spam, and see which networks are used to blast millions of messages a day. If a vendor owns one of those networks, that’s a disheartening indication of their security prowess.

  • Stolen credentials: There are a bunch of forums where stolen credentials are traded, and if a specific vendor shows up with tons of their accounts and passwords for sale, that means their security probably leaves a bit to be desired.

  • Malware distribution/infected hosts: Another indication of security failure is Internet-facing devices which are compromised and then used to either host phishing sites or distribute malware, or both. If a vendor’s Internet site is infected and distributing malware, they likely have no idea what they are doing.

  • Public Breaches: We’ll discuss this later, but if your vendor is a public company or deals with consumers, they have to disclose breaches to their customers. Although you’ve likely gotten kind of numb to yet another breach notification, if it mentions a key vendor, that’s a concern. We’ll discuss what to do when a vendor is breached later in this series.

  • Security Best Practices: There are also other tells that a vendor knows a bit about security. Do they encrypt all traffic to/from their public sites? Do they authenticate their email with technologies like SPF or DKIM? Do they use secure DNS requests? To be clear, these aren’t conclusive indicators, but they can certainly give you a clue to how serious a vendor is about security.
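If you want to kick the tires yourself before paying for a service, a rough sketch of checking a couple of those tells might look like the following Python (it assumes the third-party dnspython package, and example.com is a placeholder for a vendor domain). Treat the results as weak signals, not a verdict on anyone’s security program.

import socket
import ssl
import dns.resolver  # pip install dnspython

def has_txt_record(name, marker):
    """True if any TXT record at `name` contains `marker` (e.g. v=spf1)."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except Exception:
        return False
    return any(marker in rdata.to_text() for rdata in answers)

def answers_https(domain):
    """True if the domain completes a TLS handshake on port 443."""
    try:
        ctx = ssl.create_default_context()
        with socket.create_connection((domain, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain):
                return True
    except Exception:
        return False

domain = "example.com"  # hypothetical vendor domain
print("SPF:  ", has_txt_record(domain, "v=spf1"))
print("DMARC:", has_txt_record("_dmarc." + domain, "v=DMARC1"))
print("HTTPS:", answers_https(domain))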

So how do you gather all of this information? You can certainly do it yourself. Set up a bunch of honeypots and dedicate some internal resources to mining through the data. If you have tens of thousands of vendors and are heavily regulated, you might do exactly this. Otherwise you’ll likely rely on an external information provider to perform this analysis for you.

We covered some aspects of these services in our Ecosystem Risk Management paper, but we’ll quickly summarize here. You need to figure out whether you want the provider to give you a score and ranking of your vendors, or the raw data on which vendors have issues (botnets, malware distribution, etc.) so you can perform your own analysis and draw your own conclusions.

Risk Tiers

To make this kind of program feasible without requiring another 25 bodies, let’s discuss risk tiering. Larger organizations may have thousands of vendors. It’s hard to consistently perform deep risk analysis on every vendor you do business with. But you also cannot afford only a cursory look at the few key vendors which present significant risk. So you group vendors into separate risk tiers.

We’re simple folks, so we find more than 3 or 4 tiers unwieldy. One of your first actions, after defining your IT security risk, is to nail down and build consensus on how to tier vendors by risk. Then your analyses and assessments will be based on the risk tier, not some arbitrary decision on how deeply to look at each vendor. You could use tiers such as critical, important, and basic. You could call the last group “unimportant” vendors, but that might damage their self-esteem. The names don’t matter – you just need a set of tiers to group vendors into.

Critical vendors will get the full boat: a self-assessment, a means to externally evaluate their security posture, and possibly a site visit. You’ll scrutinize their self-assessments and have alerts triggered when something changes with them. We’ll go into the options for dealing with vendor risk in the next post, but for now suffice it to say you’ll be all over your critical vendors, to make sure they are secure and have a plan to address any deficiencies.

Important vendors may warrant a cursory look at the self-assessment and the external evaluation. The security bar might need to be lower for these folks, because they present less risk to your organization. Basic vendors send in their self-assessment, and maybe you perform a simple vulnerability scan on their external web properties, just to check some box on an auditor’s checklist. That’s about all you’ll have time for with these folks.

Could you have more risk tiers? Absolutely. But the amount of work increases exponentially with each additional tier. That’s why we favor only using a handful, knowing that from a risk management standpoint the best bang for your buck will be from focusing on your critical vendors.
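For illustration only, the tiering decision can be as simple as a handful of yes/no questions. The criteria and cutoffs in this Python sketch are assumptions you would replace with your own risk definition.

def assign_tier(handles_sensitive_data, has_network_access, business_critical):
    """Place a vendor into a tier based on three illustrative yes/no criteria."""
    score = sum([handles_sensitive_data, has_network_access, business_critical])
    if score >= 2:
        return "critical"   # full assessment, external monitoring, maybe a site visit
    if score == 1:
        return "important"  # lighter review of self-assessment plus external data
    return "basic"          # self-assessment and a simple external scan

# Example: a SaaS provider holding customer data and critical to the business,
# but with no direct access to your network.
print(assign_tier(handles_sensitive_data=True, has_network_access=False,
                  business_critical=True))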

Tracking over Time

Obviously security is highly dynamic, so what is secure today might not be tomorrow. Likewise, what was a mess a month ago may not be so bad right now. Yet most vendor risk assessments provide a single point-in-time view, representing what things looked like at that moment. Such an assessment has limited value, because every organization can have the proverbial bad day, and inevitably some data sources provide false positives.

You want to evaluate each partner over time, and track their progress. Have they shown fewer infected hosts over time? How long does it take them to remediate devices participating in botnets? Has a vendor that traditionally did security well suddenly started blasting spam and joining a bunch of botnets? Part of defining your vendor (IT) risk management program is figuring out which quantitative risk metrics most closely represent real risk to your organization, and need to be tracked and managed over time.
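As a rough illustration, trending a single metric and flagging deterioration might look something like the Python below. The metric (infected hosts observed per month) and the threshold are made up; in practice an external service scores this for you, and you pick which metrics matter.

def deteriorating(monthly_counts, window=3, factor=2.0):
    """Flag a vendor whose recent average is `factor` times worse than its baseline."""
    if len(monthly_counts) < 2 * window:
        return False  # not enough history to judge a trend
    baseline = sum(monthly_counts[:-window]) / len(monthly_counts[:-window])
    recent = sum(monthly_counts[-window:]) / window
    return baseline > 0 and recent > factor * baseline

# Example: a vendor that was quiet most of the year, then spiked over the last quarter.
history = [1, 0, 2, 1, 1, 0, 1, 4, 6, 9]
print(deteriorating(history))  # True: recent months look much worse than the baseline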

Alerting is also a key aspect of executing on the program. If a critical vendor shows up on a botnet, do you drop everything and address it with the partner? That question provides a nice segue to our next post, which will discuss ongoing monitoring and communication for your vendor (IT) risk management program.

—Mike Rothman