Monday, June 13, 2016

Understanding and Selecting RASP: Buyers Guide

By Adrian Lane

Before we jump into today’s post, we want to thank Immunio for expressing interest in licensing this content. This type of support enables us to bring quality research to you, free of charge. If you are interested in licensing this Securosis research as well, please let us know. And we want to thank all of you who have been commenting throughout this series – we have received many good comments and questions. We have in fact edited most of the posts to integrate your feedback, and added new sections to address your questions. This research is certainly better for it! And it’s genuinely helpful that the community at large can engage in an open discussion, so thanks again to all of you who have participated.

We will close out this series by directing your attention to several key areas for buyers to evaluate, in order to assess suitability for your needs. With new technologies it is not always clear where the ‘gotchas’ are. We find many security technologies meet basic security goals, but after they have been on-premise for some time, you discover management or scalability nightmares. To help you avoid some of these pitfalls, we offer the following outline of evaluation criteria. The product you choose should provide application protection, but it should also be flexible enough to work in your environment. And not just during Proof of Concept (PoC) – every day.

  • Language Coverage: Your evaluation should ensure that the RASP platforms you are considering all cover the programming languages and platforms you use. Most enterprises we speak with develop applications on multiple platforms, so ensure that there is appropriate coverage for all your applications – not just the ones you focus on during the evaluation process.
  • Blocking: Blocking is a key feature. Sure, some of you will use RASP for monitoring and instrumentation – at least in the short term – but blocking is a huge part of RASP’s value. Without blocking there is no protection – even more to the point, get blocking wrong and you break applications. Evaluating how well a RASP product blocks is essential. The goal here is twofold: make sure the RASP platform is detecting the attacks, and then determine if its blocking action negatively affects the application. We recommend penetration testing during the PoC, both to verify that common attack vectors are handled, and to gauge RASP behavior when attacks are discovered. Some RASPs simply block the request and return an error message to the user. In some cases RASP can alter a request to make it benign, then proceed as normal. Some products alter user sessions and redirect users to login again, or jump through additional hoops before proceeding. Most RASP products provide customers a set of options for how they should respond to different types of attacks. Most vendors consider attack detection techniques part of their “secret sauce”, so we are unable to offer insight into the differences. But just as important is how well application continuity is preserved when responding to threats, which you can monitor directly during evaluation.
  • Policy Coverage: It’s not uncommon for one or more members of a development team to be proficient with application security. That said, it’s unreasonable to expect developers to understand the nuances of new attacks and the details behind every CVE. Vulnerability research, methods of detection, and appropriate methods to block attacks are large parts of the value each RASP vendor provides. Your vendor spends days – if not weeks – developing each policy embedded into their tool. During evaluation, it’s important to ensure that critical vulnerabilities are addressed. But it is arguably more important to determine how – and how often – vendors update policies, and verify they include ongoing coverage. A RASP product cannot be better than its policies, so ongoing support is critical as new threats are discovered.
  • Policy Management: Two facets of policy management come up most often during our discussions. The first is identification of which protections map to specific threats. Security, risk, and compliance teams all ask, “Are we protected against XYZ threat?” You will need to show that you are. Evaluate policy lookup and reporting. The other is tuning how to respond to threats. As we mentioned above under ‘Blocking’, most vendors allow you to tune responses either by groups of issues, or on a threat-by-threat basis. Evaluate how easy this is to use, and whether you have sufficient options to tailor responses.
  • Performance: Being embedded into applications enables RASP to detect threats at different locations within your app, with context around the operation being performed. This context is passed, along with the user request, to a central enforcement point for analysis. The details behind detection vary widely between vendors, so performance varies as well. Each user request may generate dozens of checks, possibly including multiple external references. This latency can easily impact user experience, so sample how long analysis takes (see the sketch after this list). Each code path will apply a different set of rules, so you will need to test several different paths, measuring both with and without RASP. You should do this under load to ensure that detection facilities do not bottleneck application performance. And you’ll want to understand what happens when some portion of RASP fails, and how it responds – does it “fail open”?
  • Scalability: Most web applications scale by leveraging multiple application instances, distributing user requests via a load balancer. As RASP is typically built into the application, it scales right along with it, without need for additional changes. But if RASP leverages external threat intelligence, you will want to verify this does not hamper scalability. For RASP platforms where the point of analysis – as opposed to the point of interception – is outside your application, you need to verify how the analysis component scales. For RASP products that work as a cloud service using non-deterministic code inspection, evaluate how their services scale.
  • API Compatibility: Most interest in RASP is prompted by a desire to integrate into application development processes, automating security deployment alongside application code, so APIs are a central feature. Ensure the RASP products you consider are compatible with Jenkins, Ansible, Chef, Puppet, or whatever automated build tools you employ. On the back end make sure RASP feeds information back into your systems for defect tracking, logging, and Security Information and Event Management (SIEM). This data is typically available in JSON, syslog, and other formats, but ensure each product provides what you need.
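
As referenced in the Performance bullet above, the simplest way to gauge RASP overhead is to time identical requests against two otherwise identical deployments, one with the agent enabled and one without. The sketch below (Python, using the common requests library) illustrates the idea; the hostnames and paths are hypothetical, and a real evaluation should also repeat the measurement under load.

```python
import statistics
import time

import requests  # third-party HTTP client: pip install requests

# Hypothetical application endpoints exercising different code paths.
PATHS = ["/login", "/search?q=widgets", "/api/orders/123"]

def sample_latency(base_url, samples=50):
    """Return the median latency in milliseconds for each path."""
    results = {}
    for path in PATHS:
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            requests.get(base_url + path, timeout=10)
            timings.append((time.perf_counter() - start) * 1000)
        results[path] = statistics.median(timings)
    return results

# Otherwise identical deployments, one with the RASP agent and one without
# (hypothetical hostnames).
baseline = sample_latency("https://app-no-rasp.example.com")
with_rasp = sample_latency("https://app-with-rasp.example.com")

for path in PATHS:
    print(f"{path}: +{with_rasp[path] - baseline[path]:.1f} ms median overhead")
```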

That concludes our series on RASP. As always, we encourage comments, questions and critique, so please let us know what’s on your mind.

—Adrian Lane

Getting the SWIFT Boot

By Mike Rothman

As long as I have been in security and following the markets, I have observed that no one says security is unimportant. Not out loud, anyway. But their actions usually show a different view. Maybe there is a little more funding. Maybe somewhat better visibility at the board level. But mostly security gets a lot of lip service.

In other words, security doesn’t matter. Until it does.


The international interbank payment system called SWIFT has successfully been hit multiple times by hackers, and a few other attempts have been foiled. Now they are going to start turning the screws on member banks, because SWIFT has finally realized they can be very secure themselves but still get pwned through their weakest member banks. It doesn’t help when the New York Federal Reserve gets caught up in a ruse due to lax security at a bank in Bangladesh.

So now the lip service is becoming threats. That member banks will have their access to SWIFT revoked if they don’t maintain a sufficient security posture. Ah, more words. Will this be like the words uttered every time someone asks if security is important? Or will there be actual action behind them?

That action needs to include specific guidance on what security actually looks like. This is especially important for banks in emerging countries, which may not have a good idea of where to start. And yes, those organizations are out there. The action also needs to involve some level of third-party assessment. Self-assessment doesn’t cut it.

I think SWIFT can take a page from the Payment Card Industry. The initial PCI-DSS, and the resulting work to get laggards over a (low) security bar, did help. It’s not a sustainable long-term answer, because at some point the assessments became a joke, and the controls required by the standard have predictably failed to keep pace with attacks.

But security at a lot of these emerging banks is a dumpster fire. And the folks who work with them realize where the weakest links are. But actions speak much louder than words, so watch for actions.

Photo credit: “Boots” originally uploaded by Rob Pongsajapan

—Mike Rothman

Friday, June 10, 2016

Summary: June 10, 2016

By Adrian Lane

Adrian here.

A phone call about Activity Monitoring of administrative actions on mainframes, followed by a call on security architectures for new applications in AWS. A call on SAP vulnerability scans, followed by a call on Runtime Application Self-Protection. A call on protecting relational databases against SQL injection, followed by a discussion of which values to key on in security event data for a big data analytics project. Consulting with a firm which releases code every 12 months, and discussing release management with a firm that is moving to two releases a day in a continuous deployment model. This is what my call logs look like.

If you want to see how disruptive technology is changing security, you can just look at my calendar. On any given day I am working at both extremes in security. On one hand we have the old and well-worn security problems; familiar, comfortable and boring. On the other hand we have new security problems, driven in large part by cloud and mobile technologies, and the corresponding side-effects – such as hybrid architectures, distributed identity management, mobile device management, data security for uncontrolled environments, and DevOps. Answers are not rote, problems do not always have well-formed solutions, and crafting responses takes a lot of work. Worse, the answer I gave yesterday may be wrong tomorrow, if the pace of innovation invalidates it. This is our new reality.

Some days it makes me dizzy, but I’ve embraced the new, if for no other reason than to avoid being run over by it. It’s challenging as hell, but it’s not boring.

On to this week’s summary:

If you want to subscribe directly to the Friday Summary only list, just click here.

Top Posts for the Week

Tool of the Week

I decided to take some time to learn about tools more common to clouds other than AWS. I was told Kubernetes was the GCP open source version of Docker, so I thought that would be a good place to start. After I spent some time playing with it, I realized what I was initially told was totally wrong! Kubernetes is called a “container manager”, but it’s really focused on setting up services. Docker focuses on addressing app dependencies and packaging; Kubernetes on app orchestration. And it runs anywhere you want – not just GCP and GCE, but in other clouds or on-premise. If you want to compare Kubernetes to something in the Docker universe, it’s closest to Docker Swarm, which tackles some of the management and scalability issues.

Kubernetes has three basic parts: controllers that handle things like replication and pod behaviors; a simple naming system – essentially using key-value pairs – to identify pods; and a services directory for discovery, routing, and load balancing. A pod can be one or more Docker containers, or a standalone application. These three primitives make it pretty easy to stand up code, direct application requests, manage clusters of services, and provide basic load balancing. It’s open source and works across different clouds, so your application should work the same on GCP, Azure, or AWS. It’s not super easy to set up, but it’s not a nightmare either. And it’s incredibly flexible – once set up, you can easily create pods for different services, with entirely different characteristics.
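
To make those three primitives a bit more concrete, here is a minimal sketch using the Kubernetes Python client: it creates a pod identified by a key-value label, then a service that selects pods by that label for discovery and load balancing. The names, image, and namespace are assumptions for illustration, not anything specific to GCP.

```python
from kubernetes import client, config  # official client: pip install kubernetes

# Load credentials from the local kubeconfig (e.g. set up by kubectl or gcloud).
config.load_kube_config()
v1 = client.CoreV1Api()

# A pod: one or more containers, identified by key-value labels.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-1", labels={"app": "web"}),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx")]
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# A service: discovery, routing, and basic load balancing across every pod
# whose labels match the selector.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)
```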

A word of caution: if you’re heavily invested in Docker, you might instead prefer Swarm. Early versions of Kubernetes seemed to have Docker containers in mind, but the current version does not integrate with native Docker tools and APIs, so you have to duct tape some stuff together to get Docker compliant containers. Swarm is compliant with Docker’s APIs and works seamlessly. But don’t be swayed by studies that compare container startup times as a main measure of performance; that is one of the least interesting metrics for comparing container management and orchestration tools. Operating performance, ease of use, and flexibility are all far more important. If you’re not already a Docker shop, check out Kubernetes – its design is well-thought-out and purpose-built to tackle micro-service deployment. And I have not yet had a chance to use Google’s Container Engine, but it is supposed to make setup easier, with a number of supporting services.

Securosis Blog Posts this Week

Other Securosis News and Quotes

Training and Events

—Adrian Lane

Thursday, June 09, 2016

Building Resilient Cloud Network Architectures [New Paper]

By Mike Rothman

Building Resilient Cloud Network Architectures builds on our Pragmatic Security for Cloud and Hybrid Networks research, focusing on cloud-native network architectures. The key is that cloud computing provides architectural options which are either impossible or economically infeasible in traditional data centers, enabling greater protection and better availability.


We would like to thank Resilient Systems, an IBM Company, for licensing the content in this paper. We built the paper using our Totally Transparent Research model, leveraging what we’ve learned building cloud applications over the past 4 years.

You can get the paper from the landing page in our research library.

—Mike Rothman

Wednesday, June 08, 2016

Evolving Encryption Key Management Best Practices: Use Cases

By Rich

This is the third in a three-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Use Cases

Now that we’ve discussed best practices, it’s time to cover common use cases. Well, mostly common – one of our goals for this research is to highlight emerging practices, so a couple of our use cases cover newer data-at-rest key management scenarios, while the rest are more traditional options.

Traditional Data Center Storage

It feels a bit weird to use the word ‘traditional’ to describe a data center, but people give us strange looks when we call the most widely deployed storage technologies ‘legacy’. We’d say “old school”, but that sounds a bit too retro. Perhaps we should just say “big storage stuff that doesn’t involve the cloud or other weirdness”.

We typically see three major types of data storage encrypted at rest in traditional data centers: SAN/NAS, backup tapes, and databases. We also occasionally see file servers encrypted, but they are in the minority. Each of these is handled slightly differently, but normally one of three ‘meta-architectures’ is used:

  • Silos: Some storage tools include their own encryption capabilities, managed within the silo of the application/storage stack. For example a backup tape system with built-in encryption. The keys are managed by the tool within its own stack. In this case an external key manager isn’t used, which can lead to a risk of application dependency and key loss, unless it’s a very well-designed product.
  • Centralized key management: Rather than managing keys locally, a dedicated central key management tool is used. Many organizations start with silos, and later integrate them with central key management for advantages such as improved separation of duties, security, auditability, and portability. Increasing support for KMIP and the PKCS 11 standards enables major products to leverage remote key management capabilities, and exchange keys.
  • Distributed key management: This is very common when multiple data centers are either actively sharing information or available for disaster recovery (hot standby). You could route everything through a single key manager, but this single point of failure would be a recipe for disaster. Enterprise-class key management products can synchronize keys between multiple key managers. Remote storage tools should connect to the local key manager to avoid WAN dependency and latency. The biggest issue with this design is typically ensuring the different locations synchronize quickly enough, which tends to be more of an issue for distributed applications balanced across locations than for hot standby sites, where data changes don’t occur on both sides simultaneously. Another major concern is ensuring you can centrally manage the entire distributed deployment, rather than needing to log into each site separately.

Each of those meta-architectures can manage keys for all of the storage options we see in use, assuming the tools are compatible, even using different products. The encryption engine need not come from the same source as the key manager, so long as they are able to communicate.

That’s the essential requirement: the key manager and encryption engines need to speak the same language, over a network connection with acceptable performance. This often dictates the physical and logical location of the key manager, and may even require additional key manager deployments within a single data center. But there is never a single key manager. You need more than one for availability, whether in a cluster or using a hot standby.

As we mentioned under best practices, some tools support distributing only needed keys to each ‘local’ key manager, which can strike a good balance between performance and security.


Applications

There are as many different ways to encrypt an application as there are developers in the world (just ask them). But again we see most organizations coalescing around a few popular options:

  • Custom: Developers program their own encryption (often using common encryption libraries), and design and implement their own key management. These are rarely standards-based, and can become problematic if you later need to add key rotation, auditing, or other security or compliance features.
  • Custom with external key management: The encryption itself is, again, programmed in-house, but instead of handling key management itself, the application communicates with a central key manager, usually using an API. Architecturally the key manager needs to be relatively close to the application server to reduce latency, depending on the particulars of how the application is programmed. In this scenario security depends strongly on how well the application is written (a minimal sketch of this pattern follows after this list).
  • Key manager software agent or SDK: This is the same architecture, but the application uses a software agent or pre-configured SDK provided with the key manager. This is a great option because it generally avoids common errors in building encryption systems, and should speed up integration, with more features and easier management. Assuming everything works as advertised.
  • Key manager based encryption: That’s an awkward way of saying that instead of providing encryption keys to applications, each application provides unencrypted data to the key manager and gets encrypted data in return, and vice-versa.
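
As referenced above, here is a minimal sketch of the “custom with external key management” pattern: the application performs encryption locally with the widely used cryptography library, but requests its data key from a central key manager over a REST API. The key manager URL, endpoint path, and response format are assumptions for illustration; in practice you would use your key manager’s own API or SDK.

```python
import requests                          # HTTP client: pip install requests
from cryptography.fernet import Fernet   # pip install cryptography

KEY_MANAGER = "https://keymanager.internal.example.com"  # hypothetical service

def fetch_data_key(key_id: str, api_token: str) -> bytes:
    """Request a data-encryption key from the central key manager.
    The endpoint and response format are assumptions for illustration."""
    resp = requests.get(
        f"{KEY_MANAGER}/v1/keys/{key_id}",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["key"].encode()  # a urlsafe base64 Fernet key, per our assumed API

def encrypt_record(plaintext: bytes, key_id: str, api_token: str) -> bytes:
    # The key is never written to disk here; it lives only in process memory.
    key = fetch_data_key(key_id, api_token)
    return Fernet(key).encrypt(plaintext)

def decrypt_record(ciphertext: bytes, key_id: str, api_token: str) -> bytes:
    key = fetch_data_key(key_id, api_token)
    return Fernet(key).decrypt(ciphertext)
```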

We deliberately skipped file and database encryption, because they are variants of our “traditional data center storage” category, but we do see both integrated into different application architectures.

Based on our client work (in other words, a lot of anecdotes), application encryption seems to be the fastest growing option. It’s also agnostic to your data center architecture, assuming the application has adequate access to the key manager. It doesn’t really care whether the key manager is in the cloud, on-premise, or a hybrid.

Hybrid Cloud

Speaking of hybrid cloud, after application encryption (usually in cloud deployments) this is where we see the most questions. There are two main use cases:

  • Extending existing key management to the cloud: Many organizations already have a key manager they are happy with. As they move into the cloud they may either want to maintain consistency by using the same product, or need to support a migrating application without having to gut their key management to build something new. One approach is to always call back over the network to the on-premise key manager. This reduces architectural changes (and perhaps additional licensing), but often runs into latency and performance issues, even with a direct network connection. Alternatively you can deploy a virtual appliance version of your key manager as a ‘bastion’ host, and synchronize keys so assets in the cloud connect to the distributed virtual server for better performance.
  • Building a root of trust for cloud deployments: Even if you are fully comfortable deploying your key manager in the cloud, you may still want an on-premise key manager to retain backups of keys or support interoperability across cloud providers.

Generally you will want to run a virtual version of your key manager within the cloud to satisfy performance requirements, even though you could route all requests back to your data center. It’s still essential to synchronize keys, backups, and even logs back on-premise or to multiple, distributed cloud-based key managers, because no single instance or virtual machine can provide sufficient reliability.


Bring Your Own Key

This is a very new option with some cloud providers who allow you to use an encryption service or product within their cloud, while you retain ownership of your keys. For example you might provide your own file encryption key to your cloud provider, who then uses it to encrypt your data, instead of using a key they manage.

The name of the game here is ‘proprietary’. Each cloud provider offers different ways of supporting customer-managed keys. You nearly always need to meet stringent network and location requirements to host your key manager yourself, or you need to use your cloud provider’s key management service, configured so you can manage your keys yourself.

—Rich

Incite 6/7/2016: Nature

By Mike Rothman

Like many of you, I spend a lot of time sitting on my butt banging away at my keyboard. I’m lucky that the nature of my work allows me to switch locations frequently, and I can choose to have a decent view of the world at any given time. Whether it’s looking at a wide assortment of people in the various Starbucks I frequent, my home office overlooking the courtyard, or pretty much any place I can open my computer on my frequent business travels. Others get to spend all day in their comfy (or not so comfy) cubicles, and maybe stroll to the cafeteria once a day.

I have long thought that spending the day behind a desk isn’t the most effective way to do things. Especially for security folks, who need to be building relationships with other groups in the organization and proselytizing the security mindset. But if you are reading this, your job likely involves a large dose of office work. Even if you are running from meeting to meeting, experiencing the best conference rooms, we spend our days inside breathing recycled air under the glare of fluorescent lights.


Every time I have the opportunity to explore nature a bit, I remember how cool it is. Over the long Memorial Day weekend, we took a short trip up to North Georgia for some short hikes, and checked out some cool waterfalls. The rustic hotel where we stayed didn’t have cell service (thanks AT&T), but that turned out to be great. Except when Mom got concerned because she got a message that my number was out of service. But through the magic of messaging over WiFi, I was able to assure her everything was OK. I had to exercise my rusty map skills, because evidently the navigation app doesn’t work when you have no cell service. Who knew?

It was really cool to feel the stress of my day-to-day activities and responsibilities just fade away once we got into the mountains. We wondered where the water comes from to make the streams and waterfalls. We took some time to speculate about how long it took the water to cut through the rocks, and we were astounded by the beauty of it all. We explored cute towns where things just run at a different pace. It really put a lot of stuff into context for me. I (like most of you) want it done yesterday, whatever we are talking about.

Being back in nature for a while reminded me there is no rush. The waterfalls and rivers were there long before I got here. And they’ll be there long after I’m gone. In the meantime I can certainly make a much greater effort to take some time during the day and get outside. Even though I live in a suburban area, I can find some green space. I can consciously remember that I’m just a small cog in a very large ecosystem. And I need to remember that the waterfall doesn’t care whether I get through everything on my To Do list. It just flows, as should I.

–Mike

Photo credit: “Panther Falls - Chattahoochee National Forest” - Mike Rothman May 28, 2016


Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Evolving Encryption Key Management Best Practices

Incident Response in the Cloud Age

Understanding and Selecting RASP

Maximizing WAF Value

Shadow Devices

Recently Published Papers


Incite 4 U

  1. Healthcare endpoints are sick: Not that we didn’t already know, given all the recent breach notifications from healthcare organizations, but they are having a tough time securing their endpoints. The folks at Duo provide some perspective on why. It seems those endpoints log into twice as many apps, and a large proportion are based on leaky technology like Flash and Java. Even better, over 20% use unsupported (meaning unpatched) versions of Internet Explorer. LOL. What could possibly go wrong? I know it’s hard, and I don’t mean to beat up on our fine healthcare readers. We know there are funding issues, the endpoints are used by multiple people, and they are in open environments where almost anyone can go up and mess around with them. And don’t get me started on the lack of product security in too many medical systems and products. But all the same, it’s not like they have access to important information or anything. Wait… Oh, they do. Sigh. – MR

  2. Insecure by default: Scott Schober does a nice job outlining Google’s current thinking on data encryption and the security of users’ personal data. Essentially for the new class of Google’s products, the default is to disable end-to-end encryption. You do have the option of turning it on, but Google still manages the encryption keys (unlike Apple). But their current advertising business model, and the application of machine learning to aid users beyond what’s provided today, pretty much dictate Google’s need to collect and track personally identifiable information. Whether that is good or bad is in the eye of the beholder, but realize that when you plunk a Google Home device into your home, it’s always listening and will capture and analyze everything. We now understand that at the very least the NSA siphons off all content sent to the Google cloud, so we recommend enabling end-to-end encryption, which forces intelligence and law enforcement to crack the encryption or get a warrant to view personal information. Even though this removes useful capabilities. – AL

  3. Moby CEO: It looks like attackers are far better at catching whales than old Ahab. In what could be this year’s CEO cautionary tale (after the Target incident a few years back), an Austrian CEO got the ax because he got whaled to the tune of $56MM. Yes, million (US dollars, apparently). Of course if a finance staffer is requested to transfer millions in [$CURRENCY], there should be some means of verifying the request. It is not clear where the internal controls failed in this case. All the same, you have to figure that CEO will have “confirm internal financial controls” at the top of his list at his next gig. If there is one. – MR

  4. Tagged and tracked: It’s fascinating to watch the number of ways users’ online activity can be tracked, with just about every conceivable browser plug-in and feature minable for user identity and activity. A recent study from Princeton University called The Long Tail of Online Tracking outlines the who, what, and how of tracking software. It’s no surprise that Google, Facebook, and Twitter are tracking users on most sites. What is surprising is that many sites won’t load under the HTTPS protocol, and degenerate to HTTP to ensure content sharing with third parties. As is the extent to which tracking firms go to identify your devices – using AudioContext, browser configuration, browser extensions, and just about everything else they can access to build a number of digital fingerprints to identify people. If you’re interested in the science behind this, that post links to a variety of research, as well as the Technical Analysis of client identification mechanisms from the Google Chromium Security team. And they should know how to identify users (doh!). – AL

  5. Why build it once when you can build it 6 times? I still love that quote from the movie Contact. “Why build it once, when you can build it twice for twice the price?” Good thing they did when the first machine was bombed. It seems DARPA takes the same approach – they are evidently underwriting 6 different research shops to design a next generation DDoS defense. It’s not clear (from that article, anyway) whether the groups were tasked with different aspects of a larger solution. DDoS is a problem. But given the other serious problems facing IT organizations, is it the most serious? It doesn’t seem like it to me. But all the same, if these research shops make some progress, that’s a good thing and it’s your tax dollars at work (if you pay taxes in the US, anyway). – MR

—Mike Rothman

Tuesday, June 07, 2016

Mr. Market Loves Ransomware

By Mike Rothman

The old business rule is: when something works, do more of it. By that measure ransomware is clearly working. One indication is the number of new domains popping up which are associated with ransomware attacks. According to an Infoblox research report (and they provide DNS services, so they should know), there was a 35x increase in ransomware domains in Q1.

You have also seen the reports of businesses getting popped when an unsuspecting employee falls prey to a ransomware attack; the ransomware is smart enough to find a file share and encrypt all those files too. And even when an organization pays, the fraudster is unlikely to just give them the key and go away.

This is resulting in real losses to organizations – the FBI says organizations lost over $200 million in Q1 2016. Even if that number is inflated, it’s a real business, so you will see a lot more of it. The attackers follow Mr. Market’s lead, and clearly the ‘market’ loves ransomware right now.

So what can you do? Besides continue to train employees not to click stuff? An article at NetworkWorld claims to have the answer for how to deal with ransomware. They mention strategies for trying to recover faster via “regular and consistent backups along with tested and verified restores.” This is pretty important – just be aware that you may be backing up encrypted files, so make sure you retain backups from far enough back that you can recover files from before the attack. This is obvious in retrospect, but backup/recovery is a good practice regardless of whether you are trying to deal with malware, ransomware, or hardware failure that puts data at risk.

Their other suggested defense is to prevent the infection. The article’s prescribed approach is application whitelisting (AWL). We are fans of AWL in specific use cases – here the ransomware wouldn’t be allowed to run on devices, because it’s not authorized. Of course the deployment issues with AWL, given how it can impact user experience, are well known. Though we do find whitelisting appropriate for devices that don’t change frequently or which hold particularly valuable information, so long as you can deal with the user resistance.

They don’t mention other endpoint protection solutions, such as isolation on endpoint devices. We have discussed the various advanced endpoint defense strategies, and will be updating that research over the next couple of months. Adding to the confusion, every endpoint defense vendor seems to be shipping a ‘ransomware’ solution… which is really just their old stuff, rebranded.

So what’s the bottom line? If you have an employee who falls prey to ransomware, you are going to lose data. The question is: How much? With advanced prevention technologies deployed, you may stop some of the attacks. With a solid backup strategy, you may minimize the amount of data you lose. But you won’t escape unscathed.

—Mike Rothman

Monday, June 06, 2016

Building a Vendor (IT) Risk Management Program [New Paper]

By Mike Rothman

In Building a Vendor (IT) Risk Management Program, we explain why you can no longer ignore the risk presented by third-party vendors and other business partners, including managing an expanded attack surface and new regulations demanding effective management of vendor risk. We then offer ideas for how to build a structured and systematic program to assess vendor (IT) risk, and take action when necessary.


We would like to thank BitSight Technologies for licensing the content in this paper. Our unique Totally Transparent Research model allows us to perform objective and useful research without requiring paywalls or other such nonsense, which make it hard for the people who need our research to get it. A day doesn’t go by where we aren’t thankful to all the companies who license our research.

You can get the paper from the landing page in our research library.

—Mike Rothman

Friday, June 03, 2016

Evolving Encryption Key Management Best Practices: Part 2

By Rich

This is the second in a four-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

Best Practices

If there is one thread tying together all the current trends influencing data centers and how we build applications, it’s distribution. We have greater demand for encryption in more locations in our application stacks – which now span physical environments, virtual environments, and increasingly cross barriers even within our traditional environments.

Some of the best practices we will highlight have long been familiar to anyone responsible for enterprise encryption. Separation of duties, key rotation, and meeting compliance requirements have been on the checklist for a long time. Others are familiar, but have new importance thanks to changes occurring in data centers. Providing key management as a service, and dispersing and integrating into required architectures, aren’t technically new, but they are in much greater demand than before. Then there are practices which would not have made the list until recently, such as supporting APIs and distributed architectures (potentially spanning physical and virtual appliances).

As you will see, the name of the game is consolidation for consistency and control; simultaneous with distribution to support diverse encryption needs, architectures, and project requirements.

But before we jump into recommendations, keep our focus in mind. This research is for enterprise data centers, including virtualization and cloud computing. There are plenty of other encryption use cases out there which don’t necessarily require everything we discuss, although you can likely still pick up a few good ideas.

Build a key management service

Supporting multiple projects with different needs can easily result in a bunch of key management silos using different tools and technologies, which become difficult to support. One for application data, another for databases, another for backup tapes, another for SANs, and possibly even multiple deployments for the same functions, as individual teams pick and choose their own preferred technologies. This is especially true in the project-based agile world of the cloud, microservices, and containers. There’s nothing inherently wrong with these silos, assuming they are all properly managed, but that is unfortunately rare. And overlapping technologies often increase costs.

Overall we tend to recommend building centralized security services to support the organization, and this definitely applies to encryption. Let a smaller team of security and product pros manage what they are best at and support everyone else, rather than merely issuing policy requirements that slow down projects or drive them underground.

For this to work the central service needs to be agile and responsive, ideally with internal Service Level Agreements to keep everyone accountable. Projects request encryption support; the team managing the central service determines the best way to integrate, and to meet security and compliance requirements; then they provide access and technical support to make it happen.

This enables you to consolidate and better manage key management tools, while maintaining security and compliance requirements such as audit and separation of duties. Whatever tool(s) you select clearly need to support your various distributed requirements. The last thing you want to do is centralize but establish processes, tools, and requirements that interfere with projects meeting their own goals.

And don’t focus so exclusively on new projects and technologies that you forget about what’s already in place. Our advice isn’t merely for projects based on microservices, containers, and the cloud – it applies equally to backup tapes and SAN encryption.

Centralize but disperse, and support distributed needs

Once you establish a centralized service you need to support distributed access. There are two primary approaches, but we only recommend one for most organizations:

  • Allow access from anywhere. In this model you position the key manager in a location accessible from wherever it might be needed. Typically organizations select this option when they want to only maintain a single key manager (or cluster). It was common in traditional data centers, but isn’t well-suited for the kinds of situations we increasingly see today.
  • Distributed architecture. In this model you maintain a core “root of trust” key manager (which can, again, be a cluster), but then you position distributed key managers which tie back to the central service. These can be a mix of physical and virtual appliances or servers. Typically they only hold the keys for the local application, device, etc. that needs them (especially when using virtual appliances or software on a shared service). Rather than connecting back to complete every key operation, the local key manager handles those while synchronizing keys and configuration back to the central root of trust.

Why distribute key managers which still need a connection back home? Because they enable you to support greater local administrative control and meet local performance requirements. This architecture also keeps applications and services up and running in case of a network outage or other problem accessing the central service. This model provides an excellent balance between security and performance.
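
As a rough sketch of this distributed model, the hypothetical local key manager below periodically synchronizes only the keys assigned to its site from the central root of trust, and keeps serving the last known keys if the WAN link to the core goes down. Every endpoint and field name here is an assumption for illustration, not any vendor’s actual API.

```python
import threading
import time

import requests  # pip install requests

class LocalKeyManager:
    """Sketch of a distributed 'local' key manager: it synchronizes only the
    keys this site needs from the central root of trust, then serves key
    requests locally so applications keep working during a WAN outage."""

    def __init__(self, central_url, site_id, api_token, sync_interval=300):
        self.central_url = central_url      # hypothetical central root of trust
        self.site_id = site_id
        self.api_token = api_token
        self.sync_interval = sync_interval
        self._keys = {}                     # key_id -> key material, memory only
        self._lock = threading.Lock()

    def sync(self):
        """Pull the keys assigned to this site from the central key manager."""
        try:
            resp = requests.get(
                f"{self.central_url}/v1/sites/{self.site_id}/keys",
                headers={"Authorization": f"Bearer {self.api_token}"},
                timeout=10,
            )
            resp.raise_for_status()
            with self._lock:
                self._keys = {k["id"]: k["material"] for k in resp.json()["keys"]}
        except requests.RequestException:
            # Central service unreachable: keep serving the last known keys.
            pass

    def run(self):
        while True:
            self.sync()
            time.sleep(self.sync_interval)

    def get_key(self, key_id):
        with self._lock:
            return self._keys[key_id]   # raises KeyError if never synced
```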

For example you could support a virtual appliance in a cloud project, physical appliances in backup data centers, and backup keys used within your cloud provider with their built-in encryption service.

This way you can also support different technologies for distributed projects. The local key manager doesn’t necessarily need to be the exact same product as the central one, so long as they can communicate and both meet your security and compliance requirements. We have seen architectures where the central service is a cluster of Hardware Security Modules (appliances with key management features) supporting a distributed set of HSMs, virtual appliances, and even custom software.

The biggest potential obstacle is providing safe, secure access back to the core. Architecturally you can usually manage this with some bastion systems to support key exchange, without opening the core to the Internet. There may still be use cases where you cannot tie everything together, but that should be your last option.

Be flexible: use the right tool for the right job

Building on our previous recommendation, you don’t need to force every project to use a single tool. One of the great things about key management is that modern systems support a number of standards for intercommunication. And when you get down to it, an encryption key is merely a chunk of text – not even a very large one.

With encryption systems, keys and the encryption engine don’t need to be the same product. Even your remote key manager doesn’t need to be the same as the central service if you need something different for that particular project.

We have seen large encryption projects fail because they tried to shoehorn everything into a single monolithic stack. You can increase your chances for success by allowing some flexibility in remote tools, so long as they meet your security requirements. This is especially true for the encryption engines that perform actual crypto operations.

Provide APIs, SDKs, and toolkits

Even off-the-shelf encryption engines sometimes ship with less than ideal defaults, and can easily be used incorrectly. Building a key management service isn’t merely creating a central key manager – you also need to provide hooks to support projects, along with processes and guidance to ensure they are able to get up and running quickly and securely.

  • Application Programming Interfaces: Most key management tools already support APIs, and this should be a selection requirement. Make sure you support RESTful APIs, which are particularly ubiquitous in the cloud and containers. SOAP APIs are considered burdensome these days.
  • Software Development Kits: SDKs are pre-built code modules that allow rapid integration into custom applications. Provide SDKs for common programming languages compatible with your key management service/products. If possible you can even pre-configure them to meet your encryption requirements and integrate with your service.
  • Toolkits: A toolkit includes all the technical pieces a team needs to get started. It can include SDKs, preconfigured software agents, configuration files, and anything else a project might need to integrate encryption into anything from a new application to an old tape backup system (a sketch of such a pre-configured wrapper follows after this list).
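
As referenced in the Toolkits item above, the value of a pre-configured SDK or toolkit is that project teams never make algorithm or key handling decisions themselves. The hypothetical wrapper module below shows the idea: the security team bakes in the approved cipher and the location of the key material, and developers only ever call encrypt() and decrypt(). The file path, the notion of a local agent maintaining it, and all names are assumptions for illustration.

```python
"""crypto_toolkit.py -- sketch of a pre-configured wrapper a central security
team could ship to project teams. The approved algorithm and the location of
the key material (maintained here by a hypothetical local key manager agent)
are baked in, so project code never touches raw keys or cipher choices."""
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical path kept up to date by the key manager's local agent;
# it holds a urlsafe base64 Fernet key.
_KEY_FILE = Path("/run/keymgr/app-data-key")

def _engine() -> Fernet:
    # Read the current key on each call so rotation by the agent is picked up.
    return Fernet(_KEY_FILE.read_bytes().strip())

def encrypt(plaintext: bytes) -> bytes:
    """The only call most project code should ever need."""
    return _engine().encrypt(plaintext)

def decrypt(ciphertext: bytes) -> bytes:
    return _engine().decrypt(ciphertext)
```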

Provide templates and recommendations, not just standards and requirements

All too often security sends out requirements, but fails to provide specific instructions for meeting those requirements. One of the advantages of standardization around a smaller set of tools is that you can provide detailed recommendations, instructions, and templates to satisfy requirements.

The more detail you can provide the better. We recommend literally creating instructional documents for how to use all approved tools, likely with screenshots, to meet encryption needs and integrate with your key management service. Make them easily available, perhaps through code repositories to better support application developers. On the operations side, include them not only for programming and APIs, but for software agents and integration into supported storage repositories and backup systems.

If a project comes up which doesn’t fit any existing toolkit or recommendations, build them with that project team and add the new guidance to your central repository. This dramatically speeds up encryption initiatives for existing and new platforms.

Meet core security requirements

So far we have focused on newer requirements to meet evolving data center architectures, the impact of the cloud, and new application design patterns; but all the old key management practices still apply:

  • Enforce separation of duties: Implement multiple levels of administrators. Ideally require dual authorities for operations directly impacting key security and other major administrative functions.
  • Support key rotation: Ideally key rotation shouldn’t create downtime. This typically requires both support in the key manager and configuration within encryption engines and agents (see the rotation sketch after this list).
  • Enable usage logs for audit, including purpose codes: Logs may be required for compliance, but are also key for security. Purpose codes tell you why a key was requested, not just by who or when.
  • Support standards: Whatever you use for key management must support both major encryption standards and key exchange/management standards. Don’t rely on fully proprietary systems that will overly limit your choices.
  • Understand the role of FIPS and its different flavors, and ensure you meet your requirements: FIPS 140-2 is the most commonly accepted standard for cryptographic modules and systems. Many products advertise FIPS compliance (which is often a requirement for other compliance, such as PCI). But FIPS is a graded standard with different levels ranging from a software module, to plugin cards, to a fully tamper-resistant dedicated appliance. Understand your FIPS requirements, and if you evaluate a “FIPS certified” ‘appliance’, don’t assume the entire appliance is certified – it might be only the software, not the whole system. You may not always need the highest level of assurance, but start by understanding your requirements, and then ensure your tool actually meets them.
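
As a small illustration of rotation without downtime, the cryptography library’s MultiFernet shows the pattern most key managers and agents follow: new writes use the newest key, older keys remain available for decryption, and existing ciphertext is re-encrypted opportunistically. This is a library-level sketch of the concept, not any particular vendor’s mechanism.

```python
from cryptography.fernet import Fernet, MultiFernet

old = Fernet(Fernet.generate_key())          # key currently in service
token = old.encrypt(b"existing record")      # data written before rotation

new = Fernet(Fernet.generate_key())          # freshly issued key
keyring = MultiFernet([new, old])            # first key encrypts, all can decrypt

# No downtime: old ciphertext still decrypts while new writes use the new key.
assert keyring.decrypt(token) == b"existing record"
fresh_token = keyring.encrypt(b"new record")

# Opportunistically re-encrypt older data under the newest key.
rotated_token = keyring.rotate(token)
```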

There are many more technical best practices beyond the scope of this research, but the core advice that might differ from what you have seen in the past is:

  • Provide key management as a service to meet diverse encryption needs.
  • Be able to support distributed architectures and a range of use cases.
  • Be flexible on tool choice, then provide technical components and clear guidance on how to properly use tools and integrate them into your key management program.
  • Don’t neglect core security requirements.

In our next section we will start looking at specific use cases, some of which we have already hinted at.

—Rich

Summary: June 3, 2016

By Adrian Lane

Adrian here.

Unlike my business partners, who have been logging thousands of air miles, speaking at conferences and with clients around the country, I have been at home. And the mildest spring in Phoenix’s recorded history has been a blessing, as we’re 45 days past the point where 100F days typically start. Bike rides. Hiking. Running. That is, when I get a chance to sneak outdoors and enjoy it. With our pivot there is even more writing and research going on than normal, which I wasn’t sure was possible. You will begin to see the results of this work within the next couple weeks, and we look forward to putting a fresh face on our business. That launch will coincide with us posting lots more hands-on advice for cloud security and migrations.

And as a heads-up, I will be talking big data security over at SC Magazine on the 20th. I’ll tweet out a link at @AdrianLane next week if you’re interested.

You can subscribe to only the Friday Summary.

Top Posts for the Week

Tool of the Week

“Server-less computing? What do you mean?” Rich and I were discussing cloud deployment options with one of the smartest engineering managers I know, and he was totally unaware of serverless cloud computing architectures. If he was unaware of this capability, lots of other people probably are as well. So this week’s Tool of the Week section will discuss not a single tool, but instead a functional paradigm offered by multiple cloud vendors. What are they? Google’s GCP page best captures the idea: essentially a “lightweight, event-based, asynchronous solution that allows you to create small, single-purpose functions that respond to Cloud events without the need to manage a server or a runtime environment”. What Google does not mention there is that these functions tend to be very fast, and you can run multiple copies in parallel to scale capacity.

It really embodies microservices. You can construct an entire application from these functions. For example, take a stream of data and run it through a series of functions to process it. It could be audio or image files, or real-time event data inspection, transformation, enrichment, comparison… or any combination you can think of. The best part? There is no server. There is no OS to set up. No CPU or disk capacity to specify. No configuration files. No network ports to manage. It’s simply a logical function running out there in the ‘ether’ of your public cloud.

Google calls its version on GCP Cloud Functions. Amazon’s version on AWS is called Lambda functions (http://docs.aws.amazon.com/lambda/latest/dg/welcome.html). Microsoft calls the version on Azure simply Functions. Check out their API documentation – they all work slightly differently, and some have specific storage requirements to act as endpoints, but the concept is the same. And the pricing for these services is pretty low – with Lambda, for example, the first million requests are free, and Amazon charges 20 cents per million requests after that.
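
To make the model concrete, here is roughly the smallest possible AWS Lambda function in Python: a single handler invoked once per event, with no server, OS, or capacity configuration anywhere in the code. The event fields shown are assumptions for illustration; the actual shape depends on which service triggers the function.

```python
import json

def handler(event, context):
    """Invoked once per event (e.g. an object landing in a storage bucket).
    There is no server or runtime to manage; the platform runs as many copies
    of this function in parallel as the event volume requires."""
    # Hypothetical event field: a record to inspect, transform, or enrich.
    record = event.get("record", {})
    enriched = dict(record, processed=True)
    return {
        "statusCode": 200,
        "body": json.dumps(enriched),
    }
```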

This feature is one of the many reasons we tell companies to reconsider their application architectures when moving to cloud services. We’ll post some tidbits on security for these services in the future. For now, check them out!

Securosis Blog Posts this Week

Training and Events

—Adrian Lane

Thursday, June 02, 2016

Incident Response in the Cloud Age: In Action

By Mike Rothman

When we do a process-centric research project, it works best to wrap up with a scenario to illuminate the concepts we discuss through the series, and make things a bit more tangible.

In this situation imagine you work for a mid-sized retailer which uses a mixture of in-house technology and SaaS, and has recently moved a key warehousing system to an IaaS provider as part of rebuilding the application for cloud computing. You have a modest security team of 10, which is not enough, but a bit more than many of your peers. Senior management understands why security is important (to a point) and gives you decent leeway, especially regarding the new IaaS application. In fact you were consulted during the IaaS architecture phase and provided some guidance (with some help from your friends at Securosis) on building a resilient cloud network architecture, and how to secure the cloud control plane. You also had an opportunity to integrate some orchestration and automation technology into the new cloud technology stack.

The Trigger

You have your team on fairly high alert, because a number of your competitors have recently been targeted by an organized crime ring, which gained a foothold in their environments and proceeded to steal a ton of information about customers, pricing, and merchandising strategies. This isn’t your first rodeo, so you know that when there is smoke there is usually fire, and you decide to task one of your more talented security admins with a little proactive hunting in your environment. Just to make sure nothing bad is going on.

The admin starts poking around, searching internal security data with some of the more recent malware samples found in the attacks on the other retailers. The samples were provided by your industry’s ISAC (Information Sharing and Analysis Center). The analyst got a hit on one of the samples, confirming your concern. You have an active adversary on your network. So now you need to engage your incident response process.

Job 1: Initial Triage

Once you know there is a situation you assemble the response team. There aren’t that many of you, and half the team needs to pay attention to ongoing operational tasks, because taking down systems wouldn’t make you popular with senior management or investors. You also don’t want to jump the gun until you know what you’re dealing with, so you inform the senior team of the situation, but don’t take any systems down. Yet.

The adversary is active on your internal network, so they most likely entered via phishing or another social engineering attack. Searches found indications of the malware on 5 devices, so you take those devices off the network immediately. Not shut down, but moved to a separate network segment that still allows Internet access, to avoid tipping off the adversary that they have been discovered.

Then you check your network forensics tool, looking for indications that data has been leaking. There are a few suspicious file transfers, but luckily you integrated your firewall egress filtering capability with your forensics tool. So once the firewall showed anomalous traffic being sent to known bad sites (via a threat intelligence integration on the firewall), you automatically started capturing network traffic from the devices which triggered the alert. Automation is sure easier than doing everything manually.

As part of your initial triage you got endpoint telemetry alerting you to issues, and network forensics data for a clue to what’s leaking. This is enough to know you not only have an active adversary, but that more than likely you lost data. So you fire up your case management system to structure your investigation and store all its artifacts.

Your team is tasked with specific responsibilities, and sent on their way to get things done. You make the trek to the executive floor to keep senior management updated on the incident.

Check the Cloud

The attack seems to have started on your internal network, but you don’t want to take chances, and you need to make sure the new cloud-based application isn’t at risk. A quick check of the cloud console shows strange activity on one of your instances. A device within the presentation layer of the cloud stack was flagged by your IaaS provider’s monitoring system because there was an unauthorized change on that specific instance. It looks like the time you spent setting up that configuration monitoring service was well spent.

Security was involved in architecting the cloud stack, so you are in good shape. The application was built to be isolated. Even though it appears the presentation layer has been compromised, adversaries shouldn’t be able to reach anything of value. And the clean-up has already happened. Once the IaaS monitoring system threw an alert, that instance was taken offline and put into a special security group accessible only by investigators. A forensic server was spun up, and some additional analysis was performed. Orchestration and automation facilitating incident response again.

The presentation layer has large variances in how much traffic it needs to handle, so it was built using auto-scaling technology and immutable servers. Once the (potentially) compromised instance was removed from the group, another instance with a clean configuration was spun up to share the workload. But it’s not clear whether this attack is related to the other incident, so you pull down the information about the cloud attack and feed it into your case management system. The reality is that this attack, even if related, doesn’t present a danger at this point, so it’s put to the side while you focus on the internal attack and probable exfiltration.

Building the Timeline

Now that you have completed initial triage, it’s time to dig into the attack and start building a timeline of what happened. You start by looking at the compromised endpoints and network metadata to see what the adversaries did. From examining endpoint telemetry you deduce that Patient Zero was a contractor on the Human Resources (HR) team. This individual was tasked with looking at resumes submitted to the main HR email account, and performing initial qualification screening for an open position. The resume was a malicious Word file using a pretty old Windows 7 attack. It turns out the contractor was using their own machine, which hadn’t been patched and was vulnerable. You can’t be that irritated with the contractor – it was their job to open those files. The malware rooted the device, connected to a botnet, and then installed a Remote Access Trojan (RAT) to allow the adversary to take control of the device and start a systematic attack against the rest of your infrastructure.

You ponder how your organization’s BYOD policy enables contractors to use their own machines. The operational process failure was in not inspecting the machine when it connected to the network to make sure it was patched and running an authorized configuration. That’s something to scrutinize as part of the post-mortem.

Once the adversary had presence on your network, they proceeded to compromise another 4 devices, ultimately ending up on both the CFO’s and the VP of Merchandising’s devices. Network forensic metadata shows how they moved laterally within the network, taking advantage of weak segmentation between internal networks. There are only so many hours in the day, and the focus had been on making sure the perimeter was strong and monitoring ingress traffic.

Once you know the CFO’s and VP of Merchandising’s devices were compromised, you can clearly see exfiltration in the network metadata. A quick comparison of file sizes in the data captured once the egress filter triggered shows that they probably got the latest quarterly board report, as well as a package of merchandising comps and plans for an exclusive launch with a very hot new fashion company. It was a bit of a surprise that the adversary didn’t bother encrypting the stolen data, but evidently they bet that a mid-sized retailer wouldn’t have sophisticated DLP or egress content filtering. Maybe they just didn’t care whether anyone found out what was exfiltrated after the fact, or perhaps they were in a hurry and wanted the data more than they wanted to remain undiscovered.

You pat yourself on the back, once, that your mature security program included an egress filter that triggered a full packet capture of outbound traffic from all the compromised devices. So you know exactly what was taken, when, and where it went. That will be useful later, when talking to law enforcement and possibly prosecuting at some point, but right now it’s little consolation.

Cleaning up the Mess

Now that you have an incident timeline, it’s time to clean up and return your environment to a good state. The first step is to clean up the affected machines. Executives are cranky because you decided to reimage their machines, but your adversary worked to maintain persistence on compromised devices in other attacks, so prudence demands you wipe them.

The information on this incident will need to be aggregated, then packaged up for law enforcement and the general counsel, in preparation for the unavoidable public disclosure. You take another note that the team should make fuller use of the case management system to track incident activity, provide a place to store case artifacts, and ensure proper chain of custody. Given your smaller team, that should help smooth your next incident response.

Finally, this incident was discovered by a savvy admin hunting across your networks. So to complete the active part of this investigation, you task the same admin with hunting back through the environment to make sure this attack has been fully eradicated, and no similar attacks are in process. Given the size of your team, it’s a significant choice to devote resources to hunting, but given the results, this is an activity you will need to perform on a monthly cadence.

Closing the Loop

To finalize this incident, you hold a post-mortem with the extended team, including representatives from the general counsel’s office. The threat intelligence being used needs to be revisited and scrutinized, because the adversary connected to a botnet but wasn’t detected. And the rules on your egress filters have been tightened because if the exfiltrated data had been encrypted, your response would have been much more complicated. The post-mortem also provided a great opportunity to reinforce the importance of having security involved in application architecture, given how well the new IaaS application stood up under attack.

Another reminder that sometimes a skilled admin who can follow their instincts is the best defense. Tools in place helped accelerate response and root cause identification, and made remediation more effective. But Incident Response in the Cloud Age involves both people and technology, along with internal and external data, to ensure effective and efficient investigation and successful remediation.

—Mike Rothman

Tuesday, May 31, 2016

Understanding and Selecting RASP: Integration

By Adrian Lane

This post will offer examples of how to integrate RASP into a development pipeline. We’ll cover both how RASP fits into the technology stack, and the development processes used to deliver applications. We will close with a detailed discussion of how RASP differs from other security technologies, along with its advantages and tradeoffs.

As we mentioned in our introduction, our research into DevOps produced many questions on how RASP worked, and whether it is an effective security technology. The questions came from non-traditional buyers of security products: application developers and product managers. Their teams, by and large, were running Agile development processes. The majority were leveraging automation to provide Continuous Integration – essentially rebuilding and retesting the application repeatedly and automatically as new code was checked in. Some had gone as far as Continuous Deployment (CD) and DevOps. To address this development-centric perspective, we offer the diagram below to illustrate a modern Continuous Deployment / DevOps application build environment. Consider each arrow a script automating some portion of source code control, building, packaging, testing, or deployment of an application.

CI Pipeline

Security tools that fit this model are actively being sought by development teams. They need granular API access to functions, quick production of test results, and delivery of status back to supporting services.
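
As a purely hypothetical illustration of that pattern, the short script below runs a security test stage and reports pass or fail back to a supporting service. The test command and the status endpoint are placeholders; substitute whatever your pipeline and tooling actually expose.

```python
import subprocess
import sys

import requests  # assumption: available in the build environment

STATUS_ENDPOINT = "https://ci.example.com/api/security-status"  # placeholder URL


def run_security_stage(commit_sha: str) -> bool:
    """Run a security test command and report the result to a dashboard service."""
    # Placeholder command: substitute your scanner's CLI and arguments.
    result = subprocess.run(
        ["./run_security_tests.sh", commit_sha],
        capture_output=True,
        text=True,
    )
    passed = result.returncode == 0

    payload = {
        "commit": commit_sha,
        "stage": "security-tests",
        "passed": passed,
        "summary": result.stdout[-2000:],  # last chunk of output for context
    }
    requests.post(STATUS_ENDPOINT, json=payload, timeout=30)
    return passed


if __name__ == "__main__":
    sys.exit(0 if run_security_stage(sys.argv[1]) else 1)
```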

Application Integration

  • Installation: As we mentioned back in the technology overview, RASP products differ in how they embed within applications. They all offer APIs to script configuration and runtime policies, but how and where they fit in differs slightly between products. Servlet filters, plugins, and library replacement are applied as the application stack is assembled; these approaches augment an application or application ‘stack’ to perform detection and blocking. Virtualization and JVM replacement approaches augment runtime environments, modifying the subsystems that run your application to handle monitoring and detection. In all these cases, whether on-premise or as a cloud service, the process of installing RASP is pretty much identical to the build or deployment sequence you currently use. (For a feel of where in-process inspection sits, see the sketch after this list.)
  • Rules & Policies: We found the majority of RASP offerings include canned rules to detect or block most known attacks. Typically this blacklist of attack profiles maps closely to the OWASP Top Ten application vulnerability classes. Protection against common variants of standard attacks, such as SQL injection and session mis-management, is included. Once these rules are installed they are immediately enforced. You can enable or disable individual rules as you see fit. Some vendors offer specific packages for critical attacks, mapped to specific CVEs such as Heartbleed. Bundles for specific threats, rather than by generic attack classes, help security and risk teams demonstrate policy compliance, and make it easier to understand which threats have been addressed. But when shopping for RASP technologies you need to evaluate the provided rules carefully. There are many ways to attack a site with SQL injection, and many to detect and block such attacks, so you need to verify the included rules cover most of the known attack variants you are concerned with. You will also want to verify that you can augment or add rules as you see fit – rule management is a challenge for most security products, and RASP is no different.
  • Learning the application: Not all RASP technologies can learn how an application behaves, or offer whitelisting of application behaviors. Those that do vary greatly in how they function. Some behave like their WAF cousins, and need time to learn each application – whether by watching normal traffic over time, or by generating their own traffic to ‘crawl’ each application in a non-production environment. Some function similarly to white-box scanners, using application source to learn.
  • Coverage capabilities: During our research we found uneven RASP coverage of common platforms. Some started with Java or .Net, and are iterating to cover Python, Ruby, Node.js, and others. Your search for RASP technologies may be strongly influenced by available platform support. We find that more and more, applications are built as collections of microservices across distributed architectures. Application developers mix and match languages, choosing what works best in different scenarios. If your application is built on Java you’ll have no trouble finding RASP technology to meet your needs. But for mixed environments you will need to carefully evaluate each product’s platform coverage.
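
To make the in-process integration point concrete, here is a rough Python/WSGI analog of a servlet filter that inspects each request before the application handles it. This is not how any particular RASP product implements detection; the regular expressions are deliberately trivial and purely illustrative. It only shows where this class of product sits relative to your code, and how a monitor-only versus blocking mode might behave.

```python
import re
from urllib.parse import unquote_plus

# Deliberately trivial patterns, for illustration only. Real RASP detection is
# far more sophisticated and context-aware than regex matching.
SUSPICIOUS = [
    re.compile(r"(?i)union\s+select"),
    re.compile(r"(?i)<script\b"),
]


class InlineInspectionMiddleware:
    """WSGI middleware that inspects the query string before the app runs.

    Roughly analogous to a servlet filter: it sits in-process with the
    application and can either monitor or block each request.
    """

    def __init__(self, app, block=True):
        self.app = app
        self.block = block  # monitor-only mode when False

    def __call__(self, environ, start_response):
        query = unquote_plus(environ.get("QUERY_STRING", ""))
        if any(pattern.search(query) for pattern in SUSPICIOUS):
            if self.block:
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked"]
            # Monitor-only: record the event and let the request continue.
            print(f"suspicious request allowed (monitor mode): {query!r}")
        return self.app(environ, start_response)
```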

Development Process Integration

Software development teams leverage many different tools to promote security within their overarching application development and delivery processes. The graphic below illustrates the major phases teams go through. The callouts map the common types of security tests to specific phases within Agile, CI, and DevOps frameworks. Keep in mind that it is still early days for automated deployment and DevOps. Many security tools were built before rapid, automated deployment existed or was widely practiced. Older products are typically too slow, some cannot focus their tests on new code, and others do not offer API support. So orchestration of security tools – basically what works where – is far from settled territory. The time each type of test takes to run, and the type of result it returns, drives where it fits best into the phases below.

Security Tool Chain

RASP is designed to be bundled into applications, so it is part of the application delivery process. RASP offers two distinct approaches to help tackle application security. The first is in the pre-release or pre-deployment phase, while the second is in production. Either way, deployment looks very similar. But usage can vary considerably depending on which is chosen.

  • Pre-release testing: This is exactly what it sounds like: RASP is used when the application is fully constructed and going through final tests prior to launch. Here RASP can be deployed in several ways. It can monitor only, using application tests and instrumented runtime behavior to learn how to protect the application. Alternatively, security tests can be invoked in an attempt to break the application while RASP monitors, performing security analysis and transmitting its results, so Development and Testing teams can learn whether RASP detected the tested attacks. Finally, RASP can be deployed in full blocking mode to see whether security tests were detected and blocked, and how blocking impacted the user experience. This provides an opportunity to change application code or augment the RASP rules before the application goes into production.
  • Production testing: Once an application is placed in a production environment, either before actual customers are using it (via Blue-Green deployment) or afterwards, RASP can be configured to block malicious application requests. Regardless of how the RASP tool works (whether via embedded runtime libraries, servlet filters, in-memory execution monitoring, or virtualized code paths), it protects applications by detecting attacks in live runtime behavior. This model essentially provides execution path scanning, monitoring all user requests and parameters. Unlike technologies which block requests at the network or web proxy layer, RASP inspects requests at the application layer, which means it has full access to the application’s inner workings. Working at the API layer provides better visibility to determine whether a request is malicious, and more focused blocking capabilities than external security products.
  • Runtime protection: Ultimately RASP is not just for testing, but for full runtime protection and blocking of attacks.

Regardless of where you deploy RASP, you need to test to ensure it is delivering on its promise. We advocate an ongoing testing process to ensure your policies are sound, and that you ultimately block what you need to block. Of course you can use other scanners to probe an application to ensure RASP is working prior to deployment, and other tools (such as Havij and SQLmap) to automate testing, but that’s only half the story. For full confidence that your apps are protected, we still recommend actual humans banging away at your applications. Penetration testing, at least periodically, helps verify your defenses are effective.
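
As one example of what ongoing verification can look like, the sketch below sends a few canned attack-shaped requests at a staging endpoint and checks that they are blocked. The URL, parameter name, payloads, and the expectation that a block returns HTTP 403 are all assumptions; adjust them to match how your RASP is configured to respond.

```python
import requests  # assumption: the test runner has network access to staging

STAGING_URL = "https://staging.example.com/search"  # placeholder endpoint
BLOCK_STATUS = 403  # assumption: RASP is configured to return 403 on block

# A handful of attack-shaped inputs; a real suite would use far more variants
# (or drive a tool like sqlmap) to cover the attack classes you care about.
PAYLOADS = [
    "' OR '1'='1",
    "1; DROP TABLE users--",
    "<script>alert(1)</script>",
]


def verify_blocking() -> bool:
    all_blocked = True
    for payload in PAYLOADS:
        resp = requests.get(STAGING_URL, params={"q": payload}, timeout=10)
        blocked = resp.status_code == BLOCK_STATUS
        print(f"{payload!r}: {'blocked' if blocked else 'NOT blocked'} ({resp.status_code})")
        all_blocked = all_blocked and blocked
    return all_blocked


if __name__ == "__main__":
    raise SystemExit(0 if verify_blocking() else 1)
```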

To WAF or not to WAF

Why did the market develop this brand-new security technology, especially when existing technologies – most notably Web Application Firewalls (WAF) – already provided similar functions? Both block attacks on web-facing applications. Both focus on known attack vectors and include blacklists of attack patterns. Some optionally offer whitelists of known (approved) application functions. And both can ‘learn’ appropriate application behaviors. In fact most enterprises, especially those which must comply with PCI-DSS, have already bought and deployed WAF. So why spend time and money on a new tool?

WAF management teams speak of the difficulty of maintaining ‘positive’ security rules, and penetration testers grouse about how most WAFs are misconfigured, but neither was the primary driver of the search for an alternative which produced RASP. Development teams were looking for something different. Most stated their basic requirement was for something that works within their development pipeline. WAF’s lack of APIs for automatic setup, the time needed to learn application behavior, and most importantly its inability to pinpoint vulnerable code modules, were all cited as reasons WAF failed to satisfy developers. Granted, these requests came from more ‘Agile’ teams, more often building new applications than maintaining existing platforms. Still, we heard consistently that RASP meets a market demand unsatisfied by other application security technologies.

It is important to recognize that these technologies can be complementary, not necessarily competitive. There is absolutely no reason you can’t run RASP alongside your existing WAF. Some organizations continue to use cloud-based WAF as front-line protection, while embedding RASP into applications. Some use WAF to provide “threat intelligence”, DoS protection, and network security, while using RASP to fine-tune application security. Still others double down with overlapping security functions, much the way many organizations use layered anti-spam filters, accepting redundancy for broader coverage or unique benefits from each product. WAF platforms have a good ten-year head start, with broader coverage and very mature platforms, so some firms are loath to throw away WAF until RASP is fully proven.

Tomorrow we will close out this series with a brief buyers guide. We look forward to your comments!

—Adrian Lane

Firestarter: Where to start?

By Rich

It’s long past the day we need to convince you that cloud and DevOps are a thing. We all know it’s happening, but one of the biggest questions we get is “Where do I start?” In this episode we scratch the surface of how to approach the problem when you don’t get to join a hot unicorn startup and build everything from scratch with an infinite budget behind you.

Watch or listen:


—Rich

Friday, May 27, 2016

Incident Response in the Cloud Age: Addressing the Skills Gap

By Mike Rothman

As we described in our last post, incident response in the Cloud Age requires an evolved response process, in light of data sources you didn’t have before (including external threat intelligence) and the ability to analyze data in ways that weren’t possible just a few years ago. You also need to factor in that access to specific telemetry, especially on the network, is limited because you no longer control the networks.

But even with these advances, the security industry needs to face the intractable problem that comes up in pretty much every discussion we have with senior security types. It’s people, folks. There simply are not enough skilled investigators (forensicators) to meet demand. And those who exist tend to hop from job to job, maximizing their earning potential. As they should – given free markets and all.

But this creates huge problems if you are running a security team and need to build and maintain a staff of analysts, hunters, and responders. So where can you find folks in a seller’s market? You have a few choices:

  1. Develop them: You certainly can take high-potential security professionals and teach them the art of incident response. Or given the skills gap, lower-potential security professionals. Sigh. This involves a significant investment in training, and a lot of the skills needed will be acquired in the crucible of an active incident.
  2. Buy them: If you have neither the time nor the inclination to develop your own team of forensicators, you can get your checkbook out. You’ll need to compete for these folks in an environment where consulting firms can keep them highly utilized, so they are willing to pay up for talent to keep their billable hours clicking along. And large enterprises can break their typical salary bands to get the talent they need as well. This approach is not cheap.
  3. Rent them: Speaking of consulting firms, you can also find forensicators by entering into an agreement with a firm that provides incident response services. Which seems to be every security company nowadays. It’s that free market thing again. This will obviously be the most expensive, because you are paying for the overhead of partners to do a bait and switch and send a newly minted SANS-certified resource to deal with your incident. OK, maybe that’s a little facetious. But only a bit.

The reality is that you’ll need all of the above to fully staff your team. Developing a team is your best long-term option, but understand that some of those folks will inevitably head to greener pastures right after you train them up. If you need to stand up an initial team you’ll need to buy your way in and then grow. And it’s a good idea to have a retainer in place with an external response firm to supplement your resources during significant incidents.

Changing the Game

It doesn’t make a lot of sense to play a game you know you aren’t going to win. Finding enough specialized resources to sufficiently staff your team probably fits into that category. So you need to change the game. Thinking about incident response differently covers a lot, including:

  • Narrow focus: As discussed earlier, you can leverage threat intelligence and security analytics to more effectively prioritize efforts when responding to incidents. Retrospectively searching for indicators of malicious activity and analyzing captured data to track anomalous activity enables you to focus efforts on those devices or networks where you can be pretty sure there are active adversaries.
  • On the job training: In all likelihood your folks are not yet ready to perform very sophisticated malware analysis and response, so they will need to learn on the job. Be patient with your I/R n00bs and know they’ll improve, likely pretty quickly. Mostly because they will have plenty of practice – incidents happen daily nowadays.
  • Streamline the process: To do things differently you need to optimize your response processes as well. That means not fully doing some things that, given more time and resources, you might. You need to make sure your team doesn’t get bogged down doing things that aren’t absolutely necessary, so it can triage and respond to as many incidents as possible.
  • Automate: Finally, you can (and will need to) automate the I/R process where possible. With advancing orchestration and integration options as applications move to the cloud, it is becoming more feasible to apply large doses of automation to remove a lot of the manual (and resource-intensive) activities from the hands of your valuable team members, letting machines do more of the heavy lifting.

Streamline and Automate

You can’t do everything. You don’t have enough time or people. Looking at the process map in our last post, the top half is about gathering and aggregating information, which is largely not a human-dependent function. You can procure threat intelligence data and integrate that directly into your security monitoring platform, which is already collecting and aggregating internal security data.

In terms of initial triage and sizing up incidents, this can be automated to a degree as well. We mentioned triggered capture, so when an alert triggers you can automatically start collecting data from potentially impacted devices and networks. This information can be packaged up and then compared to known indicators of malicious or misuse activities (both internal and external), and against your internal baselines.
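
The comparison step itself is straightforward to automate. The sketch below hashes the files collected from a flagged device and intersects them with a threat intelligence list of known-bad SHA-256 hashes. The directory layout and indicator format are assumptions; in practice this logic usually lives inside your monitoring or endpoint platform rather than a standalone script.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large artifacts do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def match_indicators(collected_dir: str, indicator_file: str) -> dict:
    """Return {file_path: hash} for collected files matching known-bad hashes.

    Assumes indicators are one SHA-256 hash per line, and that triggered
    collection has already copied suspect files into collected_dir.
    """
    known_bad = {
        line.strip().lower()
        for line in Path(indicator_file).read_text().splitlines()
        if line.strip()
    }
    hits = {}
    for path in Path(collected_dir).rglob("*"):
        if path.is_file():
            digest = sha256_of(path)
            if digest in known_bad:
                hits[str(path)] = digest
    return hits
```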

At that point you can route the package of information to a responder, who can start to take action. The next step is to quarantine devices and take forensic images, which can be largely automated as well. As more and more infrastructure moves into the cloud, software-defined networks and infrastructure can automatically take the devices in question out of the application flow and quarantine them. Forensic images can be taken automatically with an API call, and added to your investigation artifacts. If you don’t have fully virtualized infrastructure, a number of automation and orchestration tools are appearing to provide an integration layer for these kinds of functions.

When it comes time to do damage assessment, this can largely be streamlined due to new technologies as well. As mentioned above, retrospective searching allows you to search your environment for known bad malware samples and behaviors consistent with the incident being investigated. That will provide clues to the timeline and extent of compromise. Compare this to the olden days (like a year ago, ha!) when you had to wait for the device to start doing something suspicious, and hope the right folks were looking at the console when bad behavior began.

In a cloud-native environment (where the application was built specifically to run in the cloud), there really isn’t any mitigation or cleanup required, at least on the application stack. The instances taken out of the application for investigation are replaced with known-good instances that have not been compromised. The application remains up and unaffected by the attack. Attacks on endpoints still require either cleanup or reimaging, although endpoint isolation technologies make it quicker and easier to get devices back up and running.

In terms of watching for the same attack moving forward, you can feed the indicators you found during the investigation back into your security analytics engine and watch for them as things happen, rather than after the attack. Your detection capabilities should improve with each investigation, thanks to this positive feedback loop.

Magnify Impact

It also makes sense to invest in an incident response management system/platform that will structure activities in a way that standardizes your response process. These response workflows make sure the right stuff happens during every response, because the system requires it. Remember, you are dealing with folks who aren’t as experienced, so having a set of tasks for them to undertake, especially when dealing with an active adversary, can ensure a full and thorough investigation happens. This kind of structure and process automation can magnify the impact of limited staff with limited skills.

It may seem harsh, but successful I/R in the Cloud Age requires you to think differently. You need to take inexperienced responders, and make them more effective and efficient. Using a scale of 1-10, you should look for people ranked 4-6. Then with training, a structured I/R process, and generous automation, you may be able to have them function at a level of 7-8, which is a huge difference in effectiveness.

—Mike Rothman

Wednesday, May 25, 2016

Incite 5/25/2016: Transitions

By Mike Rothman

I have always been pretty transparent about my life in the Incite. I figured maybe readers could learn something that helps them in life through my trials and tribulations, and if not perhaps they’d be entertained a bit. I also write Incites as a journal of sorts for myself. A couple times a year I search through some old Incites and remember where I was at that point in my life. There really wasn’t much I wouldn’t share, but I wondered if at some point I’d find a line I wouldn’t cross in writing about my life publicly.

It turns out I did find that line. I have alluded to significant changes in my life a few times over the past two years, but I never really got into specifics. I just couldn’t. It was too painful. Too raw. But time heals, and over the past weekend I realized it was time to tell more of the story. Mostly because I could see that my kids had gone through the transition along with me, and we are all doing great.

transitions

So in a nutshell, my marriage ended. There aren’t a lot of decisions that are harder to make, especially for someone like me. I lived through a pretty contentious divorce as a child and I didn’t want that for me, my former wife, or our kids. So I focused for the past three years on treating her with dignity and kindness, being present for my kids, and keeping the long-term future of those I care about most at the forefront of every action I took.

I’m happy to say my children are thriving. The first few months after we told them of the imminent split were tough. There were lots of tears and many questions I couldn’t or wouldn’t answer. But they came to outward acceptance quickly. They helped me pick out my new home, and embraced the time they had with me. They didn’t act out with me, their Mom, or their friends, didn’t get into trouble, and did very well in school. They have ridden through a difficult situation well and they still love me. Which was all I could have hoped for.

Holidays are hard. They were with their Mom for Memorial Day and Thanksgiving last year, which was weird for me. Thankfully I have some very special people in my life who welcomed me and let me celebrate those holidays with them, so I wasn’t alone. We’ve adapted and are starting to form new rituals in our new life. We took a great trip to Florida for winter break last December, and last summer we started a new tradition, an annual summer beach trip to the Jersey Shore to spend Father’s Day with my Dad.

To be clear, this isn’t what they wanted. But it’s what happened, and they have made the best of it. They accepted my decision and accept me as I am right now. I’ve found a new love, who has helped me be the best version of myself, and brought happiness and fulfillment to my life that I didn’t know was possible. My kids have welcomed her and her children into our lives. They say kids adapt to their situation, and I’m happy to say mine have. I believe you see what people are made of during difficult times. A lot of those times happen to be inevitable transitions in life. Based on how they have handled this transition, my kids are incredible, and I couldn’t be more proud of them.

And I’m proud of myself for navigating the last couple years the best I could. With kindness and grace.

–Mike

Photo credit: “Transitions” from Arjan Almekinders


Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Evolving Encryption Key Management Best Practices

Incident Response in the Cloud Age

Understanding and Selecting RASP

Maximizing WAF Value

Resilient Cloud Network Architectures

Shadow Devices

Building a Vendor IT Risk Management Program

Recently Published Papers


Incite 4 U

  1. Embrace and Extend: AWS is this generation’s version of Windows. Sure, there are other cloud providers like Microsoft Azure and Google, but right now AWS is king of the hill. And there are some similarities to how Microsoft behaved in the early 90s. Do you remember when Microsoft would roll new functions into Windows, and a handful of third-party utility vendors would go away? Yeah, that’s AWS today, but faster. Amazon rolls out new features and services monthly, and inevitably those new capabilities step on third parties. How did folks compete with Microsoft back in the day? Rich reminded me a few months ago that these vendors need their own version of embrace and extend. They have to understand that the gorilla is going to do what they do, so to survive smaller vendors must continually push functionality forward and extend their offerings. Ben Kepes at NetworkWorld asked whether a third-party vendor was really necessary, and then that vendor approached him to explain their plans to stay relevant. Maybe the small fry makes it. Maybe they don’t. But that dynamic is driving the public cloud. Innovation happens within third parties, and at some point, if it’s a universal requirement, cloud providers will either buy the technology or build it themselves. That’s the way it has always been, and it won’t be different this time. – MR

  2. Signatures, exposed: Dan Guido offers a scathing review of the 2016 Verizon Data Breach Report (DBIR here). It’s a bit long but worth the read, as he walks through flaws in the report. In a nutshell, it’s a classic case of overweighting the data you have: signatures. And ignoring the data you don’t have: actual exploit vectors! Worse, some of the vulnerability data is based on false positives, which further skew the results. As in years past, we think the DBIR does provide some valuable insights, and we still encourage you to look through the data and come to your own conclusions. In the meantime, the security PR hype machine will be taking sound bites and trumpeting them as the reason you must hurry up and buy their product, because the DBIR says so! – AL

  3. Jacking up your vendors… You realize that buying security products, and any products for that matter, is a game, right? Those who play the game can get better pricing or additional services or both. Vendors don’t like you to know about the game, but experienced procurement people do. Those who have been on the other side of a slick salesperson learned the game the hard way. Back in my Security Incite days I wrote a companion piece to the Pragmatic CSO about 10 years ago, focused on how to buy security products. Jeremiah Grossman, now that he doesn’t work for a vendor any more, has given you his perspective on how to play the game. His tips are on the money, although I look at multi-year deals as the absolute last tactic to use for price concessions. With the rate of change in security, the last thing I want to do is lock into a multi-year deal on technology that is certain to change. The other issue is being a customer reference. You can dangle that, and maybe the vendor will believe you. But ultimately your general counsel makes that decision. – MR

  4. Of dinosaurs and elephants: Peter Bailis over at Stanford had a wonderful post on How To Make Fossils Productive Again. With cheap compute resources and virtually free big data systems available to anyone with an Internet connection, we are seeing a huge uptake in data analytics. Left behind are the folks who cling tightly to relational databases, doing their best mainframe hugger impersonations. With such a dearth of big data managers (also known as data scientists) available, it’s silly that many people from the relational camp have been unwilling to embrace the new technologies. They seem to forget that these new technologies create new benchmarks for architectural ideals and propel us into the future. Peter’s advice to those relational folks? Don’t be afraid to rethink your definition of what a database is, and embrace the fact that these new platforms are designed to solve whole classes of problems outside the design scope of the relational model. You are likely to have fun doing so. – AL

  5. You can fool some of them, but not Rob: The good thing about the Internet and security in general is that there are very smart people out there who both test your contentions and call you out when you are full of crap. Some are trolls, but many are conscientious individuals focused on getting to the truth. Rob Graham is one of the good ones. He tests the things people say, and calls them out when those claims are not true. If you don’t read his blog, Errata Security, you are missing out. One of his latest missives is a pretty brutal takedown of the guy claiming to have started BitCoin. Rob actually proves, with code and all, that the guy isn’t who he says he is. Or maybe he is, but he hasn’t adequately proven it. Anyhow, without getting into arcane technology, read that post to see a master at work. – MR

  6. When I say it’s you, I really mean me: The folks who work on MongoDB, under fire in the press for some hacked databases, implied that MongoDB is secure, but some users are idiots. Maybe I missed the section in my business management class on the logic and long-term value of calling your customers idiots – they might be right, but that does not mean this will end well. In the big data and NoSQL market, I give the MongoDB team a lot of credit for going from zero security to a halfway decent mix of identity and platform security measures. That said, they have a ways to go. MongoDB is well behind the commercial Hadoop variants like Cloudera, Hortonworks, and MapR, and it lacks the steady stream of security contributions the open source community is building for Hadoop. If the Mongo team would like to protect their idiot users in the future, they could write a vulnerability scanner to show users where they have misconfigured the database! It would be easy, and would show people (including any idiots) their simple configuration errors. – AL

—Mike Rothman