When I lived in the Bay Area, each spring we got the same news story on repeat. Like clockwork, every year, year after year, and often from the same reporter. The story was the huge, looming danger of forest or grass fires. The basis for the story was either that above-normal rainfall had created lots of fuel, or that below-average rainfall had dried everything out. For Northern California there really are no other outcomes. They were pretty much saying you’re screwed no matter what. And no one on the editorial staff considered the contradiction, because there it was, every spring, and I guess they had nothing else all that interesting to report.
I am reminded of this every time I read posts about how Oracle databases remain unpatched for one, or *gasp* two whole patch cycles. Every few months I read this story, and every few months I shake my head. Sure, as a security practitioner I know it’s important to patch, and bad things may happen if I don’t. But any DBA who has been around for more than a couple of years has gone through the experience of applying a patch and watching the database crash hard. Then you get to spend the next 24-48 sleepless hours rolling back the patches, restoring the data, and trying to get the entire system working again. And it only cost you a few days of your time, a few thousand lost hours of employee productivity, and professional ridicule.
Try telling a database admin how urgent it is to apply a security patch when they have gone through that personal hell! A dead database tells no tales, and patching it becomes a moot point. And yet the story every year is the same: you’re really in danger if you don’t patch your databases. But practitioners know they could be just as screwed if they do patch. Most don’t need tools to tell them how screwed they are – they know. Dead databases are a real, live (well, not so ‘live’), noisy threat, whereas hackers and data theft are considerably more abstract concepts.
DBAs and IT will demand that database patches, urgent or otherwise, be tested prior to deployment. That means a one or two cycle lag in most cases. If the company is really worried about security, they will implement DAM or firewalls; not because it is necessarily the right choice, but because it means they don’t have to change the patching cycle and increase the risk of IT instability. It’s not that we will never see a change in the patch process, but in all likelihood we will continue to see this story every year, year after year, ad nauseum.
Reader interactions
3 Replies to “Database Patches, Ad Nauseum”
Adrian, you are right that the real trick is consistency in providing the protections. Reversing patches is not a big issue: if you have the “before” and “after” images, you can be pretty much 100% sure of the vulnerability and how to protect against it. More on Oracle CPU dissection is here: http://www.slaviks-blog.com/2009/01/20/oracle-cpu-dissected/
Some vulnerabilities are almost impossible to protect against without a real patch (the Oracle views vulnerability, for example), but 95% of them can be virtually patched.
Also, since there are many evasion techniques (Metasploit and Inguma encode and randomize their payloads), the only reliable way to catch these attacks is to view them from the database’s point of view (from database memory or through instrumentation).
Cheers,
Slavik
Slavik,
Thanks for the comment. You are correct that virtual patching is a way to help fill the gaps between patching cycles. For those not familiar with the concept, virtual patching is a form of activity blocking, implemented as a sub-component of Database Activity Monitoring. When a specific fingerprint or pattern indicates a database query is being used to exploit a vulnerability, the virtual patch stops the query. There are different methods for doing this, everything from TCP resets to intercepting the statement and never passing it to the database.
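To make that concrete, here is a minimal sketch of the “intercept the statement” approach: a filter that compares each SQL statement against a set of exploit fingerprints before forwarding it to the database. The package names and patterns below are invented purely for illustration; real fingerprints would come from the monitoring vendor’s analysis of each vulnerability.

```python
import re

# Illustrative only: hypothetical stand-ins for vendor-supplied exploit
# fingerprints, not real signatures for any published Oracle vulnerability.
VIRTUAL_PATCH_FINGERPRINTS = [
    # e.g. a call to a package known to be exploitable in an unpatched release
    re.compile(r"\bDBMS_EXAMPLE_PKG\s*\.\s*RUN_AS_SYS\b", re.IGNORECASE),
    # e.g. an oversized argument aimed at a buffer-overflow style flaw
    re.compile(r"\bCTX_EXAMPLE\s*\(\s*'[^']{2048,}'", re.IGNORECASE),
]

def is_allowed(sql: str) -> bool:
    """Return True if the statement may be forwarded to the database,
    False if it matches a virtual-patch fingerprint."""
    return not any(pattern.search(sql) for pattern in VIRTUAL_PATCH_FINGERPRINTS)

def handle_statement(sql: str) -> str:
    if is_allowed(sql):
        # In a real deployment the statement would be passed through to the
        # database (or merely logged, if running in monitor-only mode).
        return "FORWARDED"
    # Blocking options range from silently dropping the statement to
    # resetting the client's TCP session; here we simply refuse to forward.
    return "BLOCKED"

if __name__ == "__main__":
    print(handle_statement("SELECT ename FROM emp WHERE deptno = 10"))       # FORWARDED
    print(handle_statement("BEGIN DBMS_EXAMPLE_PKG.RUN_AS_SYS('x'); END;"))  # BLOCKED
```

A common deployment choice is to run new fingerprints in monitor-only mode first, then switch them to blocking once false positives have been ruled out.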
Some companies we speak with have been implementing virtual patching at the WAF level, feeling it is better that the statement never reach the application or database servers, as it may cause other side effects even if the exploit is not successful. Further, a WAF provides a generic platform for a broader set of vulnerabilities, not just database flaws. Still, WAFs are not always capable of the detection accuracy or precise granularity that database-layer tools provide, and an exploit may be launched internally from another compromised platform, avoiding the WAF entirely.
The trick with virtual patching is consistency: when an exploit is discovered, the fingerprint must be created and deployed ASAP. This takes some of the urgency off deploying a patch out-of-band. For those of you who have read my rants on the lack of information in IBM and Oracle patch advisories, this is one of the reasons. If you don’t know what the exploit is, you don’t know how to fingerprint the vulnerability, and you don’t know what to look for. Reverse engineering a patch does not always reveal what the vulnerability in the code was, or how it can be exploited.
-Adrian
A bit late to comment but here goes…
As a DBA for many years I understand this conflict perfectly. On the other hand, closing your eyes and ignoring the dangers of not patching is not an answer. You know, as well as anyone in this space, that a day after Oracle releases a patch you can find PoC code for a lot of the vulnerabilities, some of them no-authentication, full-control exploits. Moreover, analyzing the patch and understanding the vulnerabilities is not such a big issue. We do it. Blackhats do it as well.
There is middle ground here, and it is “virtual patching”. No need to bring the database down, no need to re-test your applications, and it is available even for versions no longer supported by the vendor. Of course, actually patching is better, but virtual patching at least provides a stop-gap until you get around to patching.
Disclosure: I’m from Sentrigo, which offers vPatch, so I’m obviously biased.
Cheers,
Slavik