Vulnerability Management Evolution: Core Technologies
As we discussed in the last couple of posts, any VM platform must be able to scan infrastructure and scan the application layer. But that's still mostly tactical stuff. Run the scan, get a report, fix stuff (or not), and move on. When we talk about a strategic, evolved vulnerability management platform, the core technology needs to serve more than merely tactical goals – it must provide a foundation for a number of additional capabilities. Before we jump into the details, let's reiterate the key requirements. You need to be able to scan/assess:

- Critical Assets: This includes the key elements in your critical data path; it requires both scanning and configuration assessment/policy checking for applications, databases, server and network devices, etc.
- Scale: Scalability requirements are largely in the eye of the beholder. You want to be sure the platform's deployment architecture will provide timely results without consuming all your network bandwidth.
- Accuracy: You don't have time to mess around, so you don't want a report with 1,000 vulnerabilities, 400 of them false positives. There is no way to totally avoid false positives (aside from not scanning at all), so accuracy is a key selection criterion.

Yes, that was pretty obvious. With a mature technology like vulnerability management the question is less about what you need to do and more about how – especially when positioning for evolution and advanced capabilities. So let's first dig into the foundation of any strategic platform: the data model.

Integrated Data Model

What's the difference between a tactical scanner and an integrated vulnerability/threat management platform? Data sharing, of course. The platform needs the ability to consume and store more than just scan results. You also need configuration data, third-party and internal research on vulnerabilities, research on attack paths, and a bunch of other data types we will discuss in the next post on advanced technology. Flexibility and extensibility are key for the data schema. Don't get stuck with a rigid schema that won't let you add whatever data you need to most effectively prioritize your efforts – whatever that data turns out to be.

Once the data is in the foundation, the next requirement is analytics. You need to set alerts and thresholds on the data, and correlate disparate information sources to glean perspective and help with decision support. We are focused on more effectively prioritizing security team efforts, so your platform needs analytical capabilities to turn all that data into useful information (we sketch a simple version of this below).

When you start evaluating specific vendor offerings you may get dragged into a religious discussion of storage approaches and technologies – you know, whether a relational backend, an object store, or even a proprietary flat file system provides the performance, flexibility, etc. to serve as the foundation of your platform. Understand that it really is a religious discussion. Your analysis should focus on the scale and flexibility of whatever data model underlies the platform. Also pay attention to evolution and migration strategies, especially if you plan to stick with your current vendor as they move to a new platform. That transition is akin to a brain transplant, so make sure the vendor has a clear, well-thought-out path to the new platform and data model. Obviously if your vendor stores the data in the cloud, migration isn't your problem – but don't put the cart before the horse.
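To make the data model and analytics points concrete, here is a minimal sketch in Python of what an extensible schema plus a naive correlation/prioritization pass might look like. Everything here is an illustrative assumption – the field names, the scoring weights, the threshold, and the CVE identifiers are placeholders, not any vendor's actual schema. The open-ended `attributes` dictionaries are the point: new data types (config state, exploit intelligence, attack-path research) can be attached without a schema change, and the hypothetical `priority` function shows why prioritization requires joining data sources rather than reading a raw scan report.

```python
# A minimal sketch of an extensible data model plus a naive prioritization
# pass over it. Field names, weights, threshold, and CVE IDs are all
# illustrative placeholders -- not any vendor's schema.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    criticality: int                                 # 1 (low) .. 5 (crown jewels)
    attributes: dict = field(default_factory=dict)   # open-ended: config state, owner, tags...

@dataclass
class Finding:
    asset_id: str
    cve: str
    cvss: float
    attributes: dict = field(default_factory=dict)   # open-ended: exploit intel, attack paths...

def priority(finding: Finding, assets: dict) -> float:
    """Correlate scan output with asset context -- severity alone is not enough."""
    score = finding.cvss * assets[finding.asset_id].criticality
    if finding.attributes.get("exploit_available"):  # e.g. from a third-party research feed
        score *= 2
    return score

# Placeholder inventory and findings (CVE IDs are examples only)
assets = {"db01": Asset("db01", criticality=5, attributes={"data": "PII"})}
findings = [
    Finding("db01", "CVE-2012-0001", cvss=7.5, attributes={"exploit_available": True}),
    Finding("db01", "CVE-2012-0002", cvss=4.3),
]

ALERT_THRESHOLD = 50.0  # tune per program; an assumption for illustration
for f in sorted(findings, key=lambda f: priority(f, assets), reverse=True):
    score = priority(f, assets)
    label = "ALERT" if score >= ALERT_THRESHOLD else "queue"
    print(f"{label}: {f.asset_id} {f.cve} priority={score:.1f}")
```

The scoring logic is deliberately simplistic; the takeaway is that a rigid scan-report schema can't support this kind of correlation, while a flexible one makes it a simple query.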
We will discuss cloud versus customer premises deployment later in this post.

Discovery

Once you get to platform capabilities, the first job is finding out what's in your environment. That means a discovery process to find devices on your network and make sure everything is accounted for. You want to avoid the "oh crap" moment when a bunch of unknown devices show up and you have no idea what they are, what they have access to, or whether they are steaming piles of malware. Or at least shorten the window between something showing up on your network and that "oh crap" discovery.

There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. That works well enough and remains the main way vulnerability management offerings handle discovery, so active discovery is table stakes for VM offerings. You need to balance the network impact of active discovery against the need to find new devices quickly. Also make sure you can search your networks completely, which means both your IPv4 space and your emerging IPv6 environment. Oh, you don't have IPv6? Think again. You'd be surprised how many devices ship with IPv6 active by default, and if you don't plan to discover that address space as well you will miss a significant attack surface. You never want to hold up a network deployment while your VM vendor gets their act together.

You can supplement active discovery with a passive capability that monitors network traffic and identifies new devices based on their communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities identified, but the primary goal of passive monitoring is to find new unmanaged devices faster. Once a new device is identified passively, you can launch an active scan to figure out what it's doing. Passive discovery also helps with devices that use firewalls to block active discovery and vulnerability scanning. We sketch a simple version of passive discovery below.

But that's not all – depending on the breadth of your vulnerability/threat management program you might want to include endpoints and mobile devices in the discovery process. We always want more data, so we favor discovering all the assets in your environment. That said, when determining what's important in your environment (see the asset management/risk scoring section below), endpoints tend to matter less than databases full of protected data, so prioritize the effort you expend on discovery and assessment accordingly.

Finally, another complicating factor for discovery is the cloud. With instances spun up and taken down at will, your platform needs to both track and assess those transient instances as they appear.
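As an illustration of the passive technique described above, here is a minimal sketch using the open-source Scapy library to watch ARP traffic and flag devices that aren't in the asset inventory. The interface name and the seed inventory are assumptions, and a real platform would queue an active scan and profiling job rather than print a message.

```python
# A minimal sketch of passive discovery: watch ARP traffic and flag
# MAC/IP pairs not already in the asset inventory. Assumes the Scapy
# library (pip install scapy) and root privileges to sniff. The
# interface name and the known_devices seed set are placeholders.
from scapy.all import ARP, sniff

known_devices = {"00:11:22:33:44:55"}  # MACs from your asset inventory (placeholder)

def inspect(pkt):
    if pkt.haslayer(ARP):
        mac, ip = pkt[ARP].hwsrc, pkt[ARP].psrc
        if mac not in known_devices:
            known_devices.add(mac)
            # A real platform would queue an active scan/profiling job here
            print(f"Unknown device: {mac} at {ip} -- queue for active assessment")

# Sniff ARP traffic only; store=False keeps memory flat for long runs
sniff(filter="arp", prn=inspect, store=False, iface="eth0")
```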
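And on the cloud point: here is a minimal sketch, assuming AWS with the boto3 SDK and credentials already configured, of enumerating running EC2 instances so transient instances can be fed into the assessment queue before they disappear. The region and the commented-out scheduler hook are placeholders.

```python
# A minimal sketch of cloud discovery on AWS: enumerate running EC2
# instances so short-lived instances still get tracked and assessed.
# Assumes boto3 (pip install boto3) with credentials configured; the
# region and the queue_scan() hook are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            ip = inst.get("PrivateIpAddress", "n/a")
            print(f"{inst['InstanceId']} {ip} launched {inst['LaunchTime']}")
            # queue_scan(ip)  # hypothetical hook into the scan scheduler
```

Run on a schedule (or triggered by instance-launch events), something like this keeps the asset inventory current even when instances live for only a few hours.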