As we discussed in the Vulnerability Management Evolution introduction, traditional vulnerability scanners, focused purely on infrastructure devices, do not provide enough context to help organizations prioritize their efforts. Those traditional scanners are the plumbing of threat management. You don’t appreciate the scanner until your proverbial toilet is overflowing with attackers and you have no idea what they are targeting. We will spend most of this series on the case for transcending device scanning, but infrastructure scanning remains a core component of any evolved threat management platform. So let’s look at some key aspects of a traditional scanner.
Core Features
As a mature technology, pretty much all the commercial scanners have a core set of functions that work well. Of course different scanners have different strengths and weaknesses, but for the most part they all do the following:
- Discovery: You can’t protect something (or know it’s vulnerable) if you don’t know it exists. So the first key feature is discovery. The enemy of a security professional is surprise, so you want to make sure you know about new devices as quickly as possible, including rogue wireless access points and other mobile devices. Given the need to continuously perform discovery, passive scanning and/or network flow analysis can be an interesting and useful complement to active device discovery.
- Device/Protocol Support: Once you have found a device, you need to figure out its security posture. Compliance demands that we scan all devices with access to private/sensitive/protected data, so any scanner should assess the varieties of network and security devices running in your environment, as well as servers on all relevant operating systems. Of course databases and applications are important too, but we’ll discuss those later in this series. And be careful scanning brittle systems like SCADA, as knocking down production devices doesn’t make any friends in the Ops group.
- Inside/Out and Outside/In: You can’t assume adversaries are only external or internal, so you need the ability to assess your devices from both inside and outside your network. So some kind of scanner appliance (which could be virtualized) is needed to scan the innards of your environment. You’ll also want to monitor your IP space from the outside to identify new Internet facing devices, find open ports, etc.
- Accuracy: Unless you enjoy wild goose chases, you’ll come to appreciate a scanner that minimizes false positives by focusing on accuracy.
- Accessible Vulnerability Information: With every vulnerability found, decisions must be made on the severity of the issue, so it’s very helpful to have information from either the vendor’s research team or other third parties on the vulnerability, directly within the scanning console.
- Appropriate Scale: Adding capabilities to the evolved platform makes scale a much more serious issue. But first things first: the scanner must be able to scan your environment quickly and effectively, whether that is 200 or 200,000 devices. The point is to ensure the scanner is extensible to what you’ll need as you add devices, databases, apps, virtual instances, etc. over time. We will discuss platform technical architectures later in this series, but for now suffice it to say there will be a lot more data in the vulnerability management platform, and the underlying platform architecture needs to keep up.
- New & Updated Tests: Organizations face new attacks constantly, and existing attacks evolve just as quickly, so your scanner needs to stay current to test for the latest ones. Exploit code based on patches and public vulnerability disclosures typically appears within a day, so time is of the essence. Expect your platform provider to make significant investments in research to track new vulnerabilities, attacks, and exploits. Scanners need to be updated almost daily, so you will need the ability to transparently update them with new tests – whether running on-premises or in the cloud.
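To make the discovery bullet above concrete, here is a minimal sketch of active discovery as a plain TCP connect sweep. The function name and structure are my own illustration, not anything from a particular product; real scanners layer ARP/ICMP sweeps, UDP probes, and service fingerprinting on top of this, and complement it with the passive monitoring mentioned above.

```python
import socket
from contextlib import closing

def discover_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    A bare-bones active check: one connect attempt per port, with a short
    timeout so a sweep of unresponsive hosts doesn't stall.
    """
    found = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found
```

Sweeping a subnet is then just a loop over addresses; passive discovery (watching network flows for source IPs you have never scanned) catches the devices that only appear intermittently.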
Additional Capabilities
But that’s not all. Today’s infrastructure scanners also offer value-added functions that have become increasingly critical. These include:
- Configuration Assessment: There really shouldn’t be a distinction between scanning for a vulnerability and checking for a bad configuration. Either situation provides an opportunity for device compromise. For example, a fully patched firewall with an any-to-any policy doesn’t protect much – completely aside from any software vulnerabilities. But unfortunately the industry’s focus on vulnerabilities means this capability is usually considered a scanner add-on. Over time these distinctions will fade away, as we expect both vulnerability scanning and configuration assessment to emerge as critical components of the platform. Further evolution will add the ability to monitor system files for changes and integrity – it is the same underlying technology.
- Patch Validation: As we described in Patch Management Quant, validating patches is an integral part of the process. With some strategic integration between patch and configuration management, the threat management platform can (and should) verify installed patches to confirm that the vulnerability has been remediated. Further integration involves sending information to and from IT Ops systems to close the loop between security and Operations.
- Cloud/Virtualization Support: With the increasing adoption of virtualization in data centers, you need to factor in the rapid addition and removal of virtual machines. This means not only assessing hypervisors as part of your attack surface, but also integrating information from the virtualization management console (vCenter, etc.) to discover what devices are in use and which are not. You’ll also want to verify the information coming from the virtualization console – you learned not to trust anything in security pre-school, didn’t you?
Leveraging Collection
So how do all these capabilities differ from what you already have? It’s all about making 1 + 1 = 3 by integrating data to derive information and drive priorities. We have seen some value-add capabilities (configuration assessment, patch validation, etc.) further integrated into infrastructure scanners to good effect. This positions the vulnerability/threat management platform as another source of intelligence for security professionals.
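One way to picture the 1 + 1 = 3 integration: merge vulnerability findings and configuration findings per asset and rank the results. The field names and the weighting below are illustrative assumptions on my part, not anything prescribed by the platform discussion:

```python
def prioritize(assets, config_weight=2.0):
    """Rank assets by a combined score: the summed CVSS scores of open
    vulnerabilities plus a weighted count of failed configuration checks.

    The weighting is an arbitrary illustration; a real platform would also
    factor in asset criticality, exposure, and exploit availability.
    """
    def score(asset):
        vuln_score = sum(v["cvss"] for v in asset.get("vulns", []))
        config_score = config_weight * len(asset.get("bad_configs", []))
        return vuln_score + config_score
    return sorted(assets, key=score, reverse=True)
```

The point is not the arithmetic – it is that neither data source alone produces this ordering: a lightly vulnerable box with an any-to-any rule can outrank a patched-but-misconfigured one only when both feeds land in the same place.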
And we are only getting started – there are plenty of other data types to incorporate into this discussion. Next we will climb the proverbial stack and evaluate how database and application scanning play into the evolved platform story.
Reader interactions
2 Replies to “Vulnerability Management Evolution: Scanning the Infrastructure”
Mike,
Concerning the “Discovery”, “Inside/Out – Outside/In” and “Cloud/Virtualization Support” paragraphs, maybe you want to take into consideration that the boundaries of the network are disappearing and becoming blurry as more and more enterprise assets are hosted in the cloud and in PaaS (platform as a service) environments. The Amazon EC2 use case is probably a good example to underscore that these functions should take on the challenge of giving the user good visibility into these assets – which are dynamic by nature and have a faster lifecycle than the more traditional assets living behind the firewall – along with their security posture. In other words, the idea of IN and OUT (of the private network) is redefined by the emergence of the cloud.
In this perspective, there is another core technology I would mention: the “asset management” function, which becomes the cornerstone of a threat management platform. Knowing your assets perfectly, in the context of the enterprise organization, enables the creation of relevant reports, risk assessments, and remediation policies. In addition, tracking your assets is also a way to provide better accuracy for the vulnerability reports. Not only do you need a good vulnerability scanning engine that provides highly accurate detection (which can be extended to the idea of reliability that goes along with accuracy), but when it comes to reporting on the trends and evolution of your security posture, you want to make sure the solution can efficiently track the entire lifecycle of a vulnerability for a given host.
Last, but not least, I would recommend talking about open and extensive APIs, which are the enabler for any kind of integration – whether with a third-party solution (like the ones you mention later in your other articles) or with a custom, homegrown integration.
Mike,
You mention the need for ‘Accessible Vulnerability Information’ and ‘New and Updated Tests’, but from your explanation it seems these tests come directly from the vendor. For vulnerability data it’s usually the vendor who packages the policy into a vulnerability test, but that’s not the case with configuration and security-specific tests. You may want to consider an additional core feature: the ability to create/update/manage policies. Every enterprise has a list of suitable patches, as they may have _custom or proprietary_ versions that have been internally vetted. These revisions are at odds with the scanning vendor’s recommendations. And enterprises often have oodles of custom configuration settings, allowed feature add-ons, and accounts that meet their internally established configuration guidelines. SMBs, usually not, but I’ve yet to see a vuln scanner of any type not heavily modified by large enterprise IT.
For ‘Additional Capabilities’, I think you should add that scanners should be able to provide detailed credentialed scans or pure interrogative-mode options. Credentialed scans are faster and more reliable. You may not care to differentiate between scanners that look to _exploit_ vulnerabilities and those that simply look for signs that a device is vulnerable (retrieval of patch revision values and communication protocol variations are both examples), but the first should be considered part of the feature set.
-Adrian