I just ran across Slashdot’s mention of the Measuring and Monitoring Technical Debt study, funded by a research grant. Its basic conclusion is that a failure to modernize software is a form of debt obligation, and that companies must ultimately pay that debt down. Until the modernization happens, the software degrades toward obsolescence or failure.
From Andy Kyte at Gartner:
“The issue is not just that maintenance keeps on getting deferred, it is that the lack of an application inventory and the absence of a structured review process for the application portfolio means the IT management team is simply never aware of the true scale of the problem,” Mr. Kyte said. “This problem, hidden from sight, is getting bigger every year and more difficult to deal with every year.”
I am on the fence about the research position – apparently others are as well – and I disagree with many of its assertions because the cost of inaction needs to be weighed against the cost of an overhaul. The cost of migration is significant. Retraining users. Retraining IT. New software and maintenance contracts. The necessary testing environments and regression tests. The custom code that must be developed to work with the new software packages. Third-party consulting agreements. New workflow and management system integration. Fixing the bugs introduced with the new code. And so on.
In 2008, 60% of the clients of my former firm were running on Oracle & IBM versions that were 10 years old – or older. They stayed on those versions because the databases and applications worked. The business functions operated exactly as they needed them to – after 2-5 years of tweaking to get them exactly right. A new migration was considered to be another 2-5 year process. Many firms selected bolt-on, perimeter-based security products because there was no way to build security into a platform in pure maintenance mode. And they were fine with that: the business application was designed to a specification that did not account for changes to the security landscape, and depended on network and platform isolation. But the primary system function it was designed for worked, so an overhaul was a non-starter.
Yes, the cost of new features and bug fixes on very old software, when needed, was steep. But that’s just it … there were very few features and bug fixes needed. The specifications for business processing were static. Configuration and maintenance costs were at a bare minimum. The biggest reason “The bulk of the budget cut has fallen disproportionately on maintenance activities” was that they were not paying for new software and maintenance contracts! Added complexity would have come with new software, not with keeping the status quo. The biggest motivator to upgrade was that older hardware/OS platforms were either too slow or beginning to fail. A dozen or so financial firms I spoke with performed this cost analysis and felt that every day they did not upgrade saved them money. It was only in segments that required rapid changes to meet a changing market – retail and shipping come to mind – that corporations benefitted from modernization and new functionality to improve customer experience.
I’ll be interested to see if this study sways IT organizations to modernize. The “deferred maintenance” message may resonate with some firms, but calling older software a liability is pure FUD. What I hope the study does is prompt firms to compare their current maintenance costs against the cost of upgrades and new maintenance – the only meaningful comparison is one performed within a customer’s own environment. That way they can intelligently plan upgrades when appropriate, and be aware of the costs in advance. You can bet every sales organization in the country will be delivering a copy of this research paper to their customers to poke and prod them into spending more money.
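As a back-of-the-envelope illustration of the comparison described above, here is a minimal sketch. The function name and every figure are hypothetical; a real analysis would be done with a firm’s own numbers, discount rates, and risk factors.

```python
def breakeven_year(stay_annual, migration_cost, new_annual, horizon=20):
    """Return the first year in which cumulative 'stay on old platform' costs
    exceed cumulative 'migrate now' costs (one-time migration plus new annual
    maintenance), or None if staying remains cheaper over the whole horizon."""
    stay_total = 0.0
    upgrade_total = migration_cost  # retraining, consulting, testing, etc.
    for year in range(1, horizon + 1):
        stay_total += stay_annual
        upgrade_total += new_annual
        if stay_total > upgrade_total:
            return year
    return None

# Illustrative numbers only: high legacy maintenance makes migration pay off.
print(breakeven_year(500_000, 1_000_000, 100_000))   # upgrade wins in year 3
# Cheap, stable legacy system: migration never breaks even within 20 years,
# matching the "every day we don't upgrade saves money" position.
print(breakeven_year(200_000, 1_500_000, 150_000))   # None
```

The point of the sketch is simply that the answer depends entirely on the firm’s own cost structure, which is why a generic “IT debt” figure is not actionable on its own.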
3 Replies to “IT Debt: Real or FUD?”
I lived through two major military ‘deferred maintenance’ programs: the Air Force Phase II replacement of the Burroughs B3500 systems, and the DoD GCCS replacement of the WWMCCS Honeywell/Bull systems. Both were fairly major projects to replace systems that the government had paid the vendor for years to keep alive, long after the vendor stopped manufacturing them. The WWMCCS modernization program, later GCCS, was a major disaster matched only by the FAA modernization project (another ‘deferred maintenance’ catch-up). Obviously the managers of 10 year old Oracle systems may disagree with me, but I believe that failing to keep up with the current version of OS and database is an effective way to end up being named in the Washington Post.
I think there is a valid point: at some point (especially on proprietary hardware) your software will have an end of life. If that is twenty years away, then you don’t need to act immediately, but you do need to be aware that it is twenty years and not five to ten.
If you are dependent on an aging and irreplaceable hardware stack, then you may need to start acting now to still be functioning in five years’ time.
If you can run on commodity/virtualized hardware, the hardware may not be a driver.
This notion of “deferred maintenance” is pretty typical in times of economic downturn, though typically in the context of physical maintenance and employee support (buildings, payroll, benefits, etc.). It’s unsurprising to me that the same notion might be extended to software projects. As you rightly point out, this would be especially true if the running system appears to be fine (functionally). It does make me wonder, then, if businesses actually see the reluctance to upgrade as a matter of “deferred maintenance” or simply more about maintaining current operating conditions. Unless there’s a compelling financial argument for upgrading, why should they? It always seems to come back to basic cost v. benefit, doesn’t it?