
Continuous Security Monitoring: Defining CSM

By Mike Rothman

In our introduction to Continuous Security Monitoring we discussed the rapid advancement of attacks, and why that means you can never “get ahead of the threat”. So you need to react faster to what’s happening, which requires shortening the window of exposure by embracing extensive security monitoring. We tipped our hats to both the PCI Council and the US government for requiring monitoring as a key aspect of their mandates. The US government pushed it a step further by including ‘continuous’ in its definition of monitoring. We love the term ‘continuous’, but this one word has caused a lot of confusion among folks responsible for monitoring their environments.

As we are prone to do, it is time to wade through the hyperbole to define what we mean by Continuous Security Monitoring, and then identify some of the challenges you will face in moving towards this ideal.

Defining CSM

We will not spend any time defining security monitoring – we have been writing about it for years. But now we need to delve into how continuous any monitoring really needs to be, given recent advances in attack tactics. Many solutions claim to offer “continuous monitoring”, but all too many simply scan or otherwise assess devices every couple of days — if that often.

Sorry, but no. We have heard many excuses for why it is not practical to monitor everything continuously, including concerns about consumption of device resources, excessive bandwidth usage, and inability to deal with an avalanche of alerts. All those issues ring hollow because intermittent assessment leaves a window of exposure for attackers, and for critical devices you don’t have that luxury. Our definition of continuous is more in line with the dictionary definition:

con.tin.u.ous: adjective \kən-ˈtin-yü-əs\ – marked by uninterrupted extension in space, time, or sequence

The key word there is uninterrupted: always active. The constructionist definition of continuous security monitoring should be that the devices in question are monitored at all times – there is no window where attackers can make a change without it being immediately detected. But we are neither constructionist nor religious – we take a realistic and pragmatic approach, which means accepting that not every organization can or should monitor all devices at all times.

So we include asset criticality in our usage of CSM. Some devices have access to very important stuff. You know, the stuff that if leaked will result in blood (likely yours and your team’s) flowing through the halls. The stuff that just cannot be compromised. Those devices need to be monitored continuously. And then there is everything else. In the “everything else” bucket land all those devices you still need to monitor and assess, but not as urgently or frequently. You will monitor these devices periodically, so long as you have other methods to detect and identify compromised devices, like network analytics/anomaly detection and/or aggressive egress filtering.
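The tiering idea can be sketched in code. A minimal sketch, assuming a hypothetical two-tier model – the tier names, intervals, and function names below are illustrative, not anything prescribed by the CSM process:

```python
from dataclasses import dataclass

# Illustrative policy: 0 means continuous (uninterrupted) monitoring;
# any other value is a periodic assessment interval in seconds.
MONITORING_INTERVAL_SECONDS = {
    "critical": 0,
    "everything_else": 24 * 3600,  # e.g. a daily assessment
}

@dataclass(frozen=True)
class Asset:
    hostname: str
    tier: str  # "critical" or "everything_else"

def scan_interval(asset: Asset) -> int:
    """How often the asset should be assessed; 0 = continuous."""
    return MONITORING_INTERVAL_SECONDS[asset.tier]
```

The point of encoding the policy this way is that the criticality decision, not the tooling, drives the monitoring cadence.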

The secret to success at CSM is in choosing your high-criticality assets well, so we will get into that later in this series. Another critical success factor is discovering when new devices appear, classifying them quickly, and getting them into the monitoring system quickly. This requires strong process and technology to ensure you have visibility into all of your networks, can aggregate the data you need, and have sufficient computational horsepower for analysis.

Adapting the Network Security Operations process map we published a few years back, here is our Continuous Security Monitoring Process.

CSM Process Map

The process is broken down into three phases. In the Plan phase you define policies, classify assets, and continuously discover new assets within your environment. In the Monitor phase you pull data from devices and other sources, to aggregate and eventually analyze, in order to fire an alert if a potential attack or other situation of concern becomes apparent. You will monitor not only to detect attacks, but also to confirm changes and identify unauthorized changes, and substantiate compliance with organizational and regulatory standards (mandates). In the final phase you take action (really determine what action, if any, to take) by validating the alert and escalating as needed. As with all our process models, not all these activities will work or fit in your environment. We publish these maps to give you ideas about what you’ll need to do – they always require customization to your own needs.
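The Monitor and Take Action phases described above can be sketched roughly as follows; the `rules`, `validate`, and `escalate` hooks are hypothetical stand-ins for whatever policy engine and escalation path your organization uses:

```python
from typing import Callable, Iterable, List

def monitor(events: Iterable[dict],
            rules: List[Callable[[dict], bool]]) -> List[dict]:
    """Monitor phase: run each aggregated event through the policy
    rules and collect the ones that fire as alerts."""
    return [event for event in events if any(rule(event) for rule in rules)]

def take_action(alerts: Iterable[dict],
                validate: Callable[[dict], bool],
                escalate: Callable[[dict], None]) -> int:
    """Take Action phase: validate each alert and escalate only the
    confirmed ones; returns how many were escalated."""
    escalated = 0
    for alert in alerts:
        if validate(alert):
            escalate(alert)
            escalated += 1
    return escalated
```

The separation mirrors the process map: detection fires alerts, but nothing reaches a human until validation confirms the alert is worth their time.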

The Challenge of Full Visibility

As we mentioned above, the key challenge in CSM is classifying assets, but your ability to do so is directly related to the visibility of your environment. You cannot monitor or protect devices you don’t know about. So the key enabler for this entire CSM concept is an understanding of your network topology and the devices that connect to your networks. The goal is to avoid an “oh crap” moment, when a bunch of unknown devices and/or applications show up – and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. So we need to be sure you are clear on how to do discovery in this context.

There are a number of discovery techniques, including actively scanning your entire address space for devices and profiling what you find. That works well enough and is how most vulnerability management offerings handle discovery, so active discovery is one requirement. But a full address space scan can have a substantial network impact, so it isn’t appropriate during peak traffic times. And be sure to search both your IPv4 and IPv6 address spaces. You don’t have IPv6, you say? You will want to confirm that – many devices have IPv6 turned on by default, broadcasting those addresses to potential attackers.
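A minimal active-discovery sweep can be sketched with Python’s standard `ipaddress` module, which parses IPv4 and IPv6 blocks alike. The `probe` callable – an ICMP ping, TCP connect, or a call into your scanner’s API – is an assumption here and injected rather than implemented:

```python
import ipaddress
from typing import Callable, List

def active_discovery(cidr: str, probe: Callable[[str], bool]) -> List[str]:
    """Enumerate every host address in a CIDR block, probe each one,
    and return the addresses that responded. Works for both IPv4 and
    IPv6 blocks, since ipaddress handles either notation."""
    network = ipaddress.ip_network(cidr)
    return [str(host) for host in network.hosts() if probe(str(host))]
```

In practice the probe would be rate-limited and the sweep scheduled off-peak, per the bandwidth caveat above.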

You should supplement active discovery with a passive capability that monitors network traffic and identifies new devices from their network communications. Sophisticated passive analysis can profile devices and identify vulnerabilities, but passive monitoring’s primary goal is to find new unmanaged devices faster, then trigger a full active scan on identification. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments, which block active discovery and vulnerability scanning.
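The passive side largely reduces to watching for addresses you have never seen before. In this sketch `observed_pairs` stands in for (MAC, IP) tuples pulled from a traffic capture, and `trigger_scan` is a hypothetical hook that kicks off the full active scan mentioned above – both names are ours, not from any particular product:

```python
from typing import Callable, Iterable, List, Set, Tuple

def passive_discovery(observed_pairs: Iterable[Tuple[str, str]],
                      known_macs: Set[str],
                      trigger_scan: Callable[[str, str], None]
                      ) -> List[Tuple[str, str]]:
    """Flag devices seen on the wire but absent from the asset
    inventory, add them to the inventory, and trigger an active
    scan of each new device."""
    new_devices = []
    for mac, ip in observed_pairs:
        if mac not in known_macs:
            known_macs.add(mac)
            new_devices.append((mac, ip))
            trigger_scan(mac, ip)
    return new_devices
```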

It is also important to visualize your network topology – a drill-down map is worth a million words. Being able to isolate a device, understand where it fits in your topology, and access previous assessments dramatically accelerates the process of discovering the root cause of issues during the validation and escalation phases of the CSM process.

Additional complicating factors for discovery include cloud computing and mobility. With the lack of control and visibility over devices outside the cozy confines of your network perimeter, figuring out which devices have access to critical data stores is increasingly difficult. Cloud computing provides the ability to spin up and take down instances at will without human involvement – perhaps outside your data center. This clearly impacts your efforts at full visibility, so your discovery processes need to be integrated with your cloud consoles to ensure you know about and can assess newly-minted instances.
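Console integration ultimately comes down to a reconciliation step. A sketch, assuming you can already pull instance IDs from the cloud provider’s API (the provider call itself is not shown, and the function name is ours):

```python
from typing import Iterable, List

def unmonitored_instances(console_ids: Iterable[str],
                          monitored_ids: Iterable[str]) -> List[str]:
    """Compare the instances the cloud console reports against those
    the monitoring system already knows about, returning the newcomers
    that still need discovery and assessment."""
    return sorted(set(console_ids) - set(monitored_ids))
```

Run on a schedule (or off console events), this closes the gap between an instance spinning up and it appearing in your monitoring system.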

Similarly, intelligent mobile devices with access to critical enterprise data create easy targets for attackers probing your network. So mobile devices need to be assessed on connection using network security controls, to ensure they have an adequate security posture and access only to authorized data. Later in this series we will discuss specific tactics to discover both cloud-based and mobile devices, but for now suffice it to say this blind spot must be factored into any discovery process.

Our next post will dig into the process of classifying your assets to determine which are most critical, since that is the key success criterion for CSM.

Comments

@dwayne, wholeheartedly agree. Evidently I didn’t do a good enough job of making that point when discussing the “Define Policies” and the “Analyze” aspects of the process. Classifying the assets is the first step, but taking it to that next step (useful monitoring) requires that you have a mechanism to eliminate the noise to focus on the signal - based on the use cases you define as important.

For the purposes of this series, we’ll be talking about the security use case (detecting an attack), then a change monitoring use case (isolating operational errors and closing the loop on change control), and finally a compliance use case. But let’s not get the cart before the horse, eh?

By Mike Rothman


Full visibility is great, as long as you can add filters that help you discriminate. I liken this to your hearing - your ears take it all in, but your brain focuses on (generally) things that represent pleasure, pain, danger, or things that don’t seem to ‘belong.’ 

Adding a strong mechanism for security-oriented discrimination on top of your full visibility is a must - and with the same goal: focusing your organization on the good, the bad, the dangerous, and the suspicious within the sea of stuff you have collected.

By Dwayne Melancon

