To demonstrate our mastery of the obvious, it’s not getting easier to detect attacks. Not that it was ever really easy, but at least you used to know what tactics adversaries used, and you had a general idea of where they would end up, because you knew where your important data was, and which (single) type of device normally accessed it: the PC. It’s hard to believe we now long for the days of early PCs and centralized data repositories.

But that is not today’s world. You face professional adversaries (and possibly nation-states) who use agile methods to develop and test attacks. They have ways to obfuscate who they are and what they are trying to do, which further complicates detection. They prey on the ever-present gullible employees who will click anything, to gain a foothold in your environment. Further complicating matters is the inexorable march toward cloud services – which moves unstructured content to cloud storage, outsources back-office functions to a variety of service providers, and shifts significant portions of the technology environment into the public cloud. And all these movements are accelerating – seemingly exponentially.

There has always been a playbook for dealing with attackers when we knew what they were trying to do. Whether or not you were able to execute effectively on that playbook, the fundamentals were fairly well understood. But as we explained in our Future of Security series, the old ways don’t work anymore, which puts practitioners behind the 8-ball. The rules have changed, and old security architectures are rapidly becoming obsolete. For instance, it’s increasingly difficult to insert inspection bottlenecks into your cloud environment without adversely impacting the efficiency of your technology stack. Moreover, sophisticated adversaries can use exploits which aren’t caught by traditional assessment and detection technologies – even if they don’t need such fancy tricks often.

So you need a better way to assess your organization’s security posture, detect attacks, and determine applicable methods to work around and eventually remediate exposures in your environment. As much as practitioners whinge about adversary innovation, the security industry has also made progress in improving your ability to assess and detect these attacks. We have written a lot about threat intelligence and security analytics over the past few years. Those are the cornerstone technologies for dealing with modern adversaries’ improved capabilities.

But these technologies and capabilities cannot stand alone. Just pumping some threat intel into your SIEM won’t help you understand the context and relevance of the data you have. And performing advanced analytics on the firehose of security data you collect is not enough either, because you might be missing a totally new attack vector.

What you need is a better way to assess your organizational security posture, determine when you are under attack, and figure out how to make the pain stop. This requires a combination of technology, process changes, and clear understanding of how your technology infrastructure is evolving toward the cloud. This is no longer just assessment or analytics – you need something bigger and better. It’s what we now call Security Decision Support (SDS). Snazzy, huh?

In this blog series, “Evolving to Security Decision Support”, we will delve into these concepts to show how to gain both visibility and context, so you can understand what you have to do and why. Security Decision Support provides a way to prioritize the thousands of things you can do, enabling you to zero in on the few things you must.

As with all Securosis’ research developed using our Totally Transparent methodology, we won’t mention specific vendors or products – instead we will focus on architecture and practically useful decision points. But we still need to pay the bills, so we’ll take a moment to thank Tenable, who has agreed to license the paper once it’s complete.

Visibility in the Olden Days

Securing pretty much anything starts with visibility. You can’t manage what you can’t see – and a zillion other overused adages all illustrate the same point. If you don’t know what’s on your network and where your critical data is, you don’t have much chance of protecting it.

In the olden days – you know, way back in the early 2000s – visibility was fairly straightforward. First you had data on mainframes in the data center. Even when we started using LANs to connect everything, data still lived on a raised floor, or in a pretty simple email system. Early client/server systems started complicating things a bit, but everything was still on networks you controlled in data centers you had the keys to. You could scan your address space and figure out where everything was, and what vulnerabilities needed to be dealt with.

That worked pretty well for a long time. There were scaling issues, and a need (desire) to scan higher in the technology stack, so we started seeing first stand-alone and then integrated application scanners. Once rogue devices started appearing on your network, it was no longer sufficient to scan your address space every couple weeks, so passive network monitoring allowed you to watch traffic and flag (and assess) unknown devices.

Those were the good old days, when things were relatively simple. Okay – maybe not really simple, but you could size the problem. That is no longer the case.

Visibility Challenged

We use a pretty funny meme in many of our presentations. It shows a man from the 1870s, blissfully remembering the good old days when he knew where his data was. That image always gets a lot of laughs from audiences. But the laughter is brought on by pain, because everyone in the room knows it illustrates a very real problem. Nowadays you don’t really know where your data is, which seriously compromises your ability to determine the security posture of the systems with access to it.

These challenges are a direct result of a number of key technology innovations:

  • SaaS: Securosis talks about how SaaS is the New Back Office, and that has rather drastic ramifications for visibility. Many organizations deploy a CASB just to figure out which SaaS services they are using, because it’s not like business folks ask permission to use a business-oriented service. This isn’t a problem that’s going away. If anything, more business processes will move to SaaS.
  • IaaS: Speaking of cloudy stuff, you have teams using Infrastructure as a Service (IaaS) – either moving existing systems out of your data centers, or building new systems in the cloud. IaaS really changes how you assess your environment, breaking most old techniques. Scanning is a lot harder, and some of the ‘servers’ (now called instances) live only for a few hours. Network addressing is different, and you cannot really implement taps to see all traffic. It’s a different world, where you are pretty much blind until you come up to speed with new techniques to replace tricks the cloud broke.
  • Containers: Another new foundational technology, containers bring much better portability and flexibility to building and deployment of application components. Without going into detail about why they’re cool, suffice it to say that your developers are likely working extensively with containers as they architect new applications, especially in the cloud. But containers bring new visibility and security challenges, in part because they are short-lived (they spin up and down automatically, responding to load and other triggers), self-contained (usually not externally addressable) and don’t provide access for traditional scans. They pretty well break existing discovery and assessment processes.
  • Mobility: It seems kind of old hat to even be mentioning the fact that you have critical data on smart devices (phones and tablets), but they expand your attack surface and make it hard to understand where your data is and how those devices are configured.
  • IoT: A little further out toward the horizon is the Internet of Things (IoT). Some argue it’s here today, and with the number of sensors being deployed and smart systems already network connected, they may be right. But either way, if you look even just a year or two out into the future, you can bet there will be a lot more network connected devices accessing your data and expanding your attack surface. So you’ll need to find and assess them.

And we are just getting started. It won’t be long before the next discontinuous innovation makes it harder to figure out where critical data resides and what’s happening with it. To put a bow on the challenges you face, consider a few reasonable bets. We are confident there will be more cloud tomorrow than today. And we are equally confident that more devices will be accessing your stuff tomorrow. That’s pretty much all you need to know to understand the extent and magnitude of the problem.

Challenge Accepted

To again state the obvious, it’s hard to be a security professional nowadays. We get it. But curling up into the fetal position on your data center floor isn’t an option. First of all, you may not even have a data center anymore. And if you do, it might be repurposed as warehouse space or sold off to a cloud provider before long. But even if you have a place to curl up, doing so won’t actually solve any problems.

So what can you do? Remember you cannot manage or protect what you cannot see, so we need to focus on visibility as the first step toward Security Decision Support. Visibility across the enterprise, wherever your data resides, on whatever platform. That means discovery and assessment of all your stuff.

We’re pretty sure you haven’t been able to totally shut off your data centers and move everything to SaaS and IaaS – even though you might want to – so you need to make sure you aren’t missing anything within your traditional infrastructure. You need to continue your existing vulnerability management program.

  • Network, security, databases, and systems: You already scan your network and security devices, all the servers you control, and probably your databases as well (thanks, compliance mandates!), so you’ll keep doing all that. Hopefully you have been evolving your vulnerability management environment, and have some means of prioritizing all the stuff in your environment.
  • Applications: You are likely scanning your web applications as well. That’s a good thing – keep doing it. And keep working with developers to ensure they fix the issues you find before their code is deployed to millions of customers. Obviously, as developers continue to adopt agile methods of building software, you will still need to evangelize finding issues with your application stacks and – given the velocity of software changes – fixing them faster.

That’s the stuff you should already be doing. Maybe not as well as you should (there is always room for improvement, right?), but at least for compliance you are probably already doing something. It gets interesting when discovery and assessment intersect the new environments and innovations you need to grapple with. Let’s look at the innovations above, for a sense of how they change things in the new world.

SaaS

As mentioned, many of you have deployed a CASB (Cloud Access Security Broker) to monitor your network egress traffic and figure out which SaaS services you are actually using. It’s always entertaining to hear about a vendor asking a customer how many SaaS services they think are in use, and hearing back: maybe a couple dozen. And then the vendor (with great dramatic effect) always seems to drop a report on the desk – it’s closer to 1,500.

To be clear, you don’t need a purpose-built device or service to figure out which SaaS is in use – many secure web gateways offer this kind of visibility, as do the DLP solutions which control exfiltration. But whatever the tool, one primary method of discovery is to examine egress traffic.
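
To make the egress approach concrete, here is a minimal sketch – assuming a CSV proxy log export with a dest_host column, and a sanctioned-services list we made up – which tallies destinations so unsanctioned SaaS floats to the top:

```python
import csv
from collections import Counter

# Hypothetical list of sanctioned SaaS domains -- yours will differ.
SANCTIONED = {"salesforce.com", "office365.com", "box.com"}

def saas_inventory(proxy_log_path):
    """Tally outbound requests by destination domain, assuming a CSV
    proxy/egress log export with a 'dest_host' column."""
    counts = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            # Naive reduction to the registrable domain, e.g.
            # eu1.salesforce.com -> salesforce.com
            parts = row["dest_host"].rstrip(".").split(".")
            counts[".".join(parts[-2:])] += 1
    return counts

if __name__ == "__main__":
    for domain, hits in saas_inventory("egress.csv").most_common(25):
        flag = "" if domain in SANCTIONED else "  <-- unsanctioned?"
        print(f"{hits:8d}  {domain}{flag}")
```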

Another kind of discovery and assessment is through each SaaS provider’s API (Application Programming Interface). The more mature SaaS companies understand that visibility is a problem, so they offer reasonable granularity for usage and activity via API. You can pull this information down and integrate it with other security data for analysis. We’ll dig into this analysis in our next post.
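
The endpoint, auth scheme, and response shape in the sketch below are placeholders – every provider’s API is different, so check the vendor documentation – but the pattern is consistent: authenticate, page through activity events, and feed them downstream for analysis.

```python
import requests

# Placeholder endpoint and token -- substitute your provider's real
# activity API and whatever auth scheme it requires.
BASE_URL = "https://api.example-saas.com/v1/activity"
TOKEN = "REDACTED"

def pull_activity(since_iso):
    """Page through a (hypothetical) SaaS activity API, yielding events
    suitable for forwarding to your SIEM or analytics platform."""
    params = {"since": since_iso}
    headers = {"Authorization": f"Bearer {TOKEN}"}
    while True:
        resp = requests.get(BASE_URL, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("events", [])
        cursor = body.get("next_cursor")
        if not cursor:
            break
        params["cursor"] = cursor  # assumed cursor-style pagination

for event in pull_activity("2024-01-01T00:00:00Z"):
    print(event.get("user"), event.get("action"))
```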

IaaS

As your organization moves existing systems and builds new applications in the cloud, you will need to work more proactively to get a sense of which resources actually live there. Unlike SaaS – where someone is presumably connecting to the service from inside your organization, so you have a chance to see the traffic – an egress filter cannot provide much detail about what’s running within or going into a public cloud service.

In this case the API really is your friend. Any tool focused on visibility needs to poll the cloud provider’s API to learn what systems are running in the environment, and then assess them. One caution: cloud providers limit API call rates. You cannot make infinite API calls to any provider, for obvious reasons, so you need to design your IaaS environment with this in mind.
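
As one example of the API-driven approach – AWS via the boto3 SDK here, though other providers offer equivalents – a sketch which inventories running instances, using a paginator so each call stays small and the rate limits stay happy:

```python
import boto3

def inventory_instances(region="us-east-1"):
    """Enumerate EC2 instances via the AWS API. Paginating keeps each
    call modest, which matters given provider API rate limits."""
    ec2 = boto3.client("ec2", region_name=region)
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                yield {
                    "id": inst["InstanceId"],
                    "state": inst["State"]["Name"],
                    "launched": inst["LaunchTime"].isoformat(),
                    "tags": {t["Key"]: t["Value"] for t in inst.get("Tags", [])},
                }

for i in inventory_instances():
    print(i["id"], i["state"], i["tags"].get("Name", "<untagged>"))
```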

We favor cloud architectures which use multiple accounts per application, for many reasons. Working around API limitations is one; minimizing the blast radius of an attack through stronger functional isolation between applications is another. That’s a much larger discussion for a different day, but see our latest video if you’re interested.
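
If you do adopt the multi-account pattern, your inventory tooling needs to hop across accounts. A sketch using AWS Organizations, assuming a cross-account audit role (the SecurityAudit role name is our invention) has been provisioned in every account:

```python
import boto3

def iter_org_accounts():
    """List accounts in an AWS Organization and assume an audit role in
    each, so one inventory job can enumerate every account."""
    org = boto3.client("organizations")
    sts = boto3.client("sts")
    for page in org.get_paginator("list_accounts").paginate():
        for acct in page["Accounts"]:
            if acct["Status"] != "ACTIVE":
                continue
            creds = sts.assume_role(
                # 'SecurityAudit' is an assumed role name for illustration.
                RoleArn=f"arn:aws:iam::{acct['Id']}:role/SecurityAudit",
                RoleSessionName="sds-inventory",
            )["Credentials"]
            yield acct["Id"], boto3.Session(
                aws_access_key_id=creds["AccessKeyId"],
                aws_secret_access_key=creds["SecretAccessKey"],
                aws_session_token=creds["SessionToken"],
            )

for account_id, session in iter_org_accounts():
    print(account_id, "->", session.client("sts").get_caller_identity()["Arn"])
```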

Befriend the Accountants

One general point about cloud services should already be familiar, from many contexts: follow the money. For both SaaS and IaaS, the only thing we can be sure of is that someone is getting paid for any services you use. So whoever pays the bills should be able to let you know – at least at a gross level – which services are in use, with pointers to who can tell you more.

So make sure you are friendly with Accounting. Take them out to lunch from time to time. Support their charitable causes. Whatever it takes to keep them on your side and responsive to your requests for accounting records for cloud services.

Of course an Accounting report is no replacement for pulling information from APIs or monitoring egress traffic. Attackers move fast, and can do a lot of damage in the time it takes a provider to bill you and Accounting to receive and process the invoice – don’t rely on information which lags events by 4-6 weeks. So use this kind of information to verify what you should already know, and to identify the stuff you should know about but perhaps don’t yet.
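
You can even follow the money programmatically. Sticking with AWS as the example (other providers offer billing APIs too), a sketch which pulls a month’s spend grouped by service from Cost Explorer, as a cross-check on your inventory:

```python
import boto3

def spend_by_service(start, end):
    """Query AWS Cost Explorer for spend grouped by service -- a quick
    sanity check on which cloud services are actually in use."""
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},  # e.g. "2024-01-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            yield group["Keys"][0], amount

for service, cost in sorted(spend_by_service("2024-01-01", "2024-02-01"),
                            key=lambda item: -item[1]):
    print(f"${cost:10.2f}  {service}")
```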

Containers

Containers encapsulate microservices which are often not persistent, and cannot really be accessed or scanned by external entities (like vulnerability scanners), so a standalone capability to discover and assess containers won’t really work. You need to build discovery and assessment into your container pipeline instead. First make sure the containers you build are not vulnerable, by integrating assessment into your container build process: any container you spin up should be built from an image without known vulnerabilities.
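
What integrating assessment into the build process might look like – this sketch assumes the open-source Trivy scanner is installed on the build host; substitute whatever scanner your pipeline already uses:

```python
import subprocess
import sys

def gate_image(image_ref):
    """Fail the build if the image carries known HIGH/CRITICAL
    vulnerabilities. Assumes the Trivy CLI is on the build host;
    its --exit-code flag makes pass/fail gating trivial."""
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL",
         "--exit-code", "1", image_ref],
    )
    return result.returncode == 0

if __name__ == "__main__":
    image = sys.argv[1]  # e.g. registry.example.com/app:build-123
    if not gate_image(image):
        sys.exit(f"{image} failed the vulnerability gate -- do not deploy")
```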

Then track usage of your containers to make sure nothing drifts, which requires inserting some kind of technology (an agent or API call) into the build/deploy stage as containers spin up. That technology tracks each container through its lifecycle, reports back to your central repository, and watches for signs of attack. Like most of security, you can’t really bolt this on later. So when you finish buying pizza for Accounting, you might want to have happy hour with the developers – without their participation you’ll have little visibility into your container environment.
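
A sketch of the ‘report back at startup’ idea. The inventory endpoint and environment variables are hypothetical, but the shape is the point: because nothing outside can scan the container, it registers itself.

```python
import json
import os
import socket
import urllib.request

# Hypothetical internal inventory service -- substitute your own.
INVENTORY_URL = os.environ.get("INVENTORY_URL",
                               "http://inventory.internal/register")

def register_container():
    """Report identity at startup so the central repository can track
    this container through its (short) lifecycle."""
    record = {
        "hostname": socket.gethostname(),
        "image": os.environ.get("IMAGE_REF", "unknown"),      # assumed env vars
        "service": os.environ.get("SERVICE_NAME", "unknown"),
    }
    req = urllib.request.Request(
        INVENTORY_URL,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    register_container()  # call from the container entrypoint
```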

Mobility

It has been a while since you could stick your head in the sand and hope that mobile devices would turn out to be a passing fad. Today they are full participants in the IT environment, and innovative new applications are being rolled out to derive business advantage from their flexibility. But as any technology becomes ubiquitous, solutions emerge to address common problems.

Even after consolidation there are dozens of solutions which provide mobile device visibility and assessment. To access corporate data or install purpose-built mobile apps, any device should first be registered with the corporate Mobile Device Management (MDM) environment. These platforms can provide an inventory not just of devices, but also of what is installed on each device. More sophisticated offerings can now block certain apps from running, or stop a device from accessing some networks, based on its configuration and assessment.
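
Most MDM platforms expose that inventory through a REST API. The endpoint and field names below are hypothetical – check your vendor’s documentation – but the integration shape is consistent: pull the inventory, apply posture policy, and feed exceptions onward.

```python
import requests

# Hypothetical MDM endpoint and key -- every vendor's API differs.
MDM_API = "https://mdm.example.com/api/v1/devices"
API_KEY = "REDACTED"

def noncompliant_devices():
    """Pull the device inventory from the MDM and surface devices whose
    posture (jailbreak flag, stale OS) fails policy."""
    resp = requests.get(MDM_API,
                        headers={"Authorization": f"Bearer {API_KEY}"},
                        timeout=30)
    resp.raise_for_status()
    for device in resp.json().get("devices", []):  # assumed response shape
        if device.get("jailbroken") or not device.get("os_current", True):
            yield device

for d in noncompliant_devices():
    print(d.get("owner"), d.get("model"), "-> out of policy")
```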

That’s the good news, but there is still work to be done to integrate that information into the rest of the Security Decision Support stack. You should be able to use telemetry from your MDM environment in your security analytics strategy. For example a person’s mobile device accessing cloud data stores they aren’t authorized to look at, or their desktop performing reconnaissance on the finance network – or even both! – might well indicate a compromise. Your analytics should detect and connect both events across the enterprise. But let’s not get ahead of ourselves – our next post will dive into analytics.

IoT

The problem with most IoT devices is they aren’t your run-of-the-mill PCs or mobile devices. They likely don’t have an API you can poll to figure out what’s going on, nor can you install an agent to monitor all activity. And these devices often appear on network segments which are less monitored and protected, such as the shop floor or the security video network.

Detecting the presence of these devices, assessing their security, and looking for potential misuse all require a different approach, which is largely passive. Your best bet is to monitor those networks, profile the devices on each network, baseline typical traffic patterns, and then watch for devices acting unusually. But this is more involved than just collecting NetFlow records from the shop floor network: IoT devices often use non-standard proprietary protocols, which further complicate discovery and assessment. As you account for these devices in your Security Decision Support strategy, you’ll need to weigh the complexity of identifying and assessing them against the risk they pose.
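
A deliberately simple sketch of the baseline-and-flag approach, using per-device byte counts from flow records. Real IoT monitoring also profiles protocols and peer sets, but the core idea is statistical: learn normal, then flag deviations.

```python
import statistics
from collections import defaultdict

def baseline_and_flag(flows, sigma=3.0):
    """Given (device_ip, byte_count) flow records from a passive tap or
    NetFlow export, baseline per-device volume on history and flag the
    latest sample if it is a statistical outlier."""
    per_device = defaultdict(list)
    for ip, nbytes in flows:
        per_device[ip].append(nbytes)

    alerts = []
    for ip, samples in per_device.items():
        if len(samples) < 10:  # not enough history to baseline
            continue
        history, latest = samples[:-1], samples[-1]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev and (latest - mean) / stdev > sigma:
            alerts.append((ip, latest, mean))
    return alerts

# Nine quiet samples, then a burst -- e.g. a camera exfiltrating data.
flows = [("10.9.0.21", n) for n in
         [900, 950, 1010, 980, 940, 1000, 970, 990, 960, 45000]]
for ip, latest, mean in baseline_and_flag(flows):
    print(f"{ip}: last sample {latest}B vs baseline ~{mean:.0f}B")
```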

(Re)Visitation

Of course we need a few caveats around these concepts. First, emerging technologies are moving targets. Let’s take IaaS as an example. Like other technology providers, cloud providers are rapidly introducing APIs and other mechanisms to provide a view into their environments. Device makers across all device types realize customers want to manage their technology as part of a larger system, so many (but not all, alas) are providing better access to their innards in more flexible ways. That’s the kind of progress you like to see.

But tomorrow’s promise cannot solve today’s problem. You need to build a process and implement tooling based on what’s available today. So build periodic reassessment into your SDS process, similar to how you probably revisit malware detection periodically.

We know reviewing your enterprise visibility approaches can be time-consuming, and expensive when something needs to change. Reversing course on decisions you made over the past year can be frustrating. But that’s the world we live in, and resisting will just cost you more in the long run. If you expect from the get-go to revisit all these decisions, and at times to toss some tools and embrace others, the churn is easier to take. Even more important, managing management’s expectations that this will happen (it’s quite likely) will go a long way toward maintaining your employment.

In summary, the first step toward Security Decision Support is enterprise visibility and understanding the exposure of assets and data, wherever they are. Next we’ll dig into figuring out what’s really at risk by integrating an external view of the security world (threat intel) and more sophisticated analytics of internal security data you collect.
