This morning, database security company Sentrigo released some results from an informal survey they performed at a series of Oracle User Group meetings.
The results highlight that most organizations are not applying Oracle CPUs (Critical Patch Updates) in a timely manner, if at all. Findings include:

– When asked “Have you installed the latest Oracle CPU?”, just 31 people, or ten percent of the 305 respondents, reported that they had applied the most recently issued Oracle CPU.

– When asked “Have you ever installed an Oracle CPU?”, 206 of the 305 OUG attendees surveyed, or 67.5 percent, said they had never applied any Oracle CPU.
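As a quick sanity check on those figures, the percentages follow directly from the raw counts (the counts are Sentrigo's; this trivial Python sketch is mine):

```python
# Sentrigo OUG survey counts, as reported above
respondents = 305
applied_latest_cpu = 31   # installed the most recent Oracle CPU
never_applied_cpu = 206   # never installed any Oracle CPU

pct_latest = 100 * applied_latest_cpu / respondents
pct_never = 100 * never_applied_cpu / respondents

print(f"Applied latest CPU: {pct_latest:.1f}%")    # 10.2%
print(f"Never applied a CPU: {pct_never:.1f}%")    # 67.5%
```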
These findings support my experiences in talking with database administrators and performing informal surveys (hand raising) during conference presentations. Most people seem to patch once a year.
I honestly believe we’ve been dodging database-security bullets for too long. I fully understand how hard it is to test and install a patch to a critical database, but these are also (often) the most important digital assets we own. Oracle will be releasing their quarterly Critical Patch Update, and your security and database teams should be preparing to evaluate the patches, perform a risk assessment, prioritize, and install. While I’ve been critical of how much information they release on updates, I’m a big fan of the quarterly update cycle. It gives enterprises time to prepare for the release and install it in a timely manner, and a quarterly cycle is much more reasonable for databases.
When I ask clients why they don’t patch in a timely fashion, it usually goes like this:
Me: So why haven’t you patched yet?
Them: Our databases are behind a firewall. There’s no Internet access so we only worry about it once a year.
Me: Is the database server firewalled from internal users?
Them: Yes.
Security Guy In Back Of The Room: No, not really.
Me: What would happen if someone wrote a virus that infected a sales guy’s laptop at Starbucks, then scanned and attacked database servers when he came back to work?
Them: Oh. Um. Well, we have antivirus.
Me: Oh. Well, you’ve got that going for you. Which is nice. You using any database security tools? Maybe an activity monitor, inline protection, or an agent?
[insert crickets here]
7 Replies to “Please Patch Your Freaking Database Servers!”
Rani … wow. Thanks for sharing.
Adrian,
FWIW, I can tell you that across the 14 different OUGs where we posed the questions, the results were pretty consistent in terms of response distribution. The respondents represent a pretty good cross-section of large enterprises and SMBs across sectors (finance, telecom, retail, healthcare, education, etc.).
I’m sure that if we’d conducted the survey only among banks and the defense sector, we’d get very different results. But I think they are the anomaly – the norm is what we saw at the OUGs.
And no, none were “executive track” presentations…
67% said they have never installed one? Was this an ‘executive track’ presentation? This is really odd. I understand that Oracle is a bit different from the other DB vendors. The patches and best practices sometimes break normal operations, so I have usually QA’ed them prior to deployment, which means they may lag a quarter for production. But I do install them. And to further DS’s comment above, Oracle has also trained their user community to adopt the ‘.2’ version into production. And as we have seen, even more so outside the US than within, 7.2, 8.2, v9R2, and 10gR2 are considered production, and a lot of the patches were, at the time of release, in place. But still, there are new issues that pop up. Not sure if this is a statistical anomaly or not, but if this does represent the norm, this is not good!
We all know the answer to this one: “Ack, we haven’t rebooted the accounting system in 5 years, how do we know we’ll be able to get it back up after we patch it, and do you want us to add your name to the scathing commentary we give to the board of directors when they ask why we had a week of outage and our stock price was cut in half?”
Not that it doesn’t need to be done; it’s just a little bit harder to stop everything so you can patch the database servers.
Good architecture helps (i.e., 3-tier the thing out so the database really is behind a COUPLE of layers of firewall), and designing the thing right (i.e., a cluster where you can pop one node offline, patch it, bring it up to test, then use it as the start of a new, freshly patched cluster) is invaluable.
Your point is valid, but Oracle trains their users to think this way with the excessively long lead time on patches, the number of unpatched issues, the historically quarterly release cycle, and the occasional tendency for patches to not actually fix the problem. After all, it was Oracle that had to issue a patch for their patch. I think some of this may also be contributed to by the number of Oracle DBs running on Unix systems. I know _many_ Unix SAs who don’t patch… their mission critical systems are just too downtime sensitive to bother.
Maybe Oracle is getting better at things, but they have traditionally sucked^2. Neat features and all, but, IMO, they are to databases what Microsoft was to operating systems pre-XP SP2.
I suspect Microsoft SQL Server DBAs would be much more likely to be current or near current, as patching is part of the DNA.