We’ve been through the risks and challenges of infrastructure hygiene, and then the various approaches for fixing the vulnerabilities. Let’s wrap up the series by seeing how these approaches work in practice and how to organize the teams involved to ensure the consistent, successful execution of an infrastructure patch.
Before we dive in, we should reiterate that none of the approaches we’ve offered are mutually exclusive. A patch does eliminate the vulnerability on the component, but the most expedient path to reduce the risk might be a virtual patch. The best long-term solution may involve moving the data layer to a PaaS service. You figure out the best approach on a case-by-case basis, balancing risk, availability, and the willingness to consider refactoring the application.
Quick Win
High-priority vulnerabilities happen all the time, and how you deal with them typically determines the perceived capability and competence of the security team. In this scenario, we’ve got a small financial organization, maybe a regional bank. They have a legacy client/server application handling customer loan data that uses stored procedures heavily for back-end processing. The application team added a front-end web interface in 2008, but it’s been in maintenance mode since then. Yes, 1998 called and wants its application back. All the same, when a vendor alert informs the team of a high-profile vulnerability impacting the back-end database, the security team must address the issue.
The first step in our process is risk analysis. A quick look at threat intelligence shows an exploit in the wild, which means doing nothing is not an option, and with the exploit available, time is critical. Next, you need a sense of the application’s importance: it handles customer loan data, so it’s clearly essential to the business. Since application usage typically occurs during business hours, a patch can happen after hours. The strategic direction is to migrate the application to the cloud, but that will take a while, so it doesn’t figure into this analysis.
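To make the triage step concrete, here’s a minimal sketch of how a team might turn those factors into a priority. The factors, weights, and thresholds are illustrative assumptions, not a standard scoring model.

```python
# Illustrative triage sketch -- the inputs, weights, and thresholds are
# assumptions for this scenario, not a formal risk-scoring standard.
from dataclasses import dataclass


@dataclass
class VulnContext:
    exploit_in_wild: bool       # threat intel confirms active exploitation
    data_sensitivity: int       # 1 (public data) .. 5 (regulated customer data)
    externally_reachable: bool  # reachable through the web front end
    off_hours_window: bool      # can patch without impacting users


def triage(ctx: VulnContext) -> str:
    """Return a rough remediation priority for the vulnerability."""
    score = ctx.data_sensitivity
    if ctx.exploit_in_wild:
        score += 3
    if ctx.externally_reachable:
        score += 2
    if score >= 7:
        # Sensitive, reachable data with an active exploit: mitigate now,
        # then patch at the first safe opportunity.
        if ctx.off_hours_window:
            return "mitigate now, patch at next maintenance window"
        return "mitigate now, emergency patch"
    return "patch in the normal cycle"


# The bank's scenario: exploit in the wild, loan data, web-reachable, off-hours window.
print(triage(VulnContext(True, 5, True, True)))
```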
Next comes short-term mitigation, needed because the exploit is being used in the wild and the database is reachable through the web front end. The security team deploys a virtual patch on the perimeter IPS device, which blocks the attack at the network layer. As a further precaution, the team increases monitoring around the database to detect any insider activity that would evade the virtual patch.
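The virtual patch itself lives on the IPS, but the extra database monitoring can be as simple as watching the audit log for the same activity the perimeter rule blocks, coming from internal sources the IPS never sees. Here’s a minimal sketch; the log location, line format, and the “suspicious” stored procedure name are all illustrative assumptions.

```python
# Monitoring sketch: scan a database audit log for statements matching the
# pattern the perimeter virtual patch blocks, but arriving from inside the
# network. The log path, line format, and procedure name are placeholders.
import re
import sys

AUDIT_LOG = "/var/log/db/audit.log"  # assumed location
# Assume each line looks like: <timestamp> <client_ip> <statement>
LINE = re.compile(r"^(\S+)\s+(\S+)\s+(.*)$")
# Hypothetical signature for the vulnerable stored procedure
SUSPICIOUS = re.compile(r"EXEC\s+sp_legacy_loan_calc", re.IGNORECASE)


def scan(path: str = AUDIT_LOG) -> None:
    """Print an alert for every audit-log entry matching the exploit pattern."""
    with open(path) as log:
        for line in log:
            match = LINE.match(line)
            if not match:
                continue
            timestamp, client_ip, statement = match.groups()
            if SUSPICIOUS.search(statement):
                # In practice this would feed the SIEM; printing keeps the sketch simple.
                print(f"ALERT {timestamp}: suspicious call from {client_ip}: {statement}",
                      file=sys.stderr)


if __name__ == "__main__":
    scan()
```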
The operations team then needs to apply the vendor patch. Given the severity of the exploit and the value of the data, that would typically mean an emergency, high-priority patch. But the virtual patch bought the team time to test the fix and make sure it doesn’t impact the application. Testing showed no adverse impact, so operations applied the patch during the next maintenance window.
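The patch test doesn’t have to be elaborate; a smoke test that exercises the stored procedures the application depends on against a patched test instance and compares results to a known-good baseline will catch most regressions. Here’s a minimal sketch, assuming an Oracle back end and the python-oracledb driver (the scenario doesn’t name the vendor), with a placeholder function name, connection details, and baseline value.

```python
# Smoke-test sketch for validating a database patch in a test environment.
# Assumes an Oracle back end and python-oracledb; the function name, customer
# ID, baseline value, and connection details are placeholders.
import oracledb

# Expected result captured from the unpatched system for the same input.
BASELINE_LOAN_BALANCE = 12_500.00


def smoke_test(dsn: str, user: str, password: str) -> bool:
    """Call a key stored function on the patched instance and compare to baseline."""
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        cursor = conn.cursor()
        result = cursor.callfunc("get_loan_balance", oracledb.NUMBER, ["CUST-1001"])
        return float(result) == BASELINE_LOAN_BALANCE


if __name__ == "__main__":
    ok = smoke_test("testdb.internal:1521/loans", "app_test", "***")
    print("patch smoke test:", "PASS" if ok else "FAIL")
```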
The last step is a strategic review of the process to see if anything should be done differently or better next time. The application is slated to be refactored and moved into the bank’s cloud tenant, but not for 24 months. Does it make sense to increase the priority? Probably not; even if the next vulnerability doesn’t lend itself to a virtual patch, an off-hours emergency update could be done without a significant impact on application availability. Once refactoring begins, it will make sense to move some of the stored procedures to an application server tier and later migrate the data to a PaaS service, reducing both the application’s attack surface and its operational burden.
Organization Alignment
The scenario showed how all of the infrastructure hygiene options can play together to effectively mitigate the risk of a high-priority database vulnerability. Several teams were involved in the process, starting with security, which identified the issue, worked through the remediation alternatives, and deployed the virtual patch and additional monitoring. The IT Ops team played an essential role in testing and applying the database patch. The architecture team weighed in at the end on migrating and refactoring the application in light of the vulnerability.
For a process to work consistently, all of these teams need to be aligned and collaborating to ensure the desired outcome – application availability. However, we should mention another group that plays a crucial role in facilitating the process – the Finance team. Finance pays for things like a perimeter device that deploys the virtual patch, as well as a support/maintenance agreement to ensure access to patches, especially for easily forgotten legacy applications. As critical as technical skills remain to keep the infrastructure in top shape, ensuring the technical folks have the resources to do their jobs is just as important.
With that, let’s put a bow on the Infrastructure Hygiene series. We’ll be continuing to gather feedback on the research over the next week or so, and then we’ll package it up as a paper. Thanks again to Oracle for potentially licensing the content, and keep an eye out for an upcoming webcast on the topic.