Application Downtime Costs the Global 1000 Billions of Dollars per Year

My first summer internship during graduate school was with a very large IT company in Austin, Texas. My job, besides playing on the intramural golf team, was to interview users and identify what I would now characterize as “pain” related to Information Technology (IT) shortcomings. I have never forgotten what one of the manufacturing department heads told me. He said that IT was nothing more than a “necessary evil” and that he expected manufacturing applications to be down at least 25% of the time, and probably more at the end of the month and end of the quarter, when he and his crew were trying to meet their quotas.

As a side note, some time later this year I plan to write an article about how this company had a yellow line in the shipping dock and if boxes were on the other side of that line, they were counted as shipped.

Anyway, he went on to say that although he didn’t really have any good metrics (my words, not his), he calculated that his manufacturing operations’ inability to deliver products that had already been sold was primarily due to application downtime, not manufacturing issues. I also remember him saying that his crew had so much downtime that he put up a basketball goal to give his workers something to do while they were waiting for IT to get his manufacturing applications back up and running.

Fast-forward 20-plus years, and very little has changed. According to a recent IDC survey, unplanned application downtime costs the Fortune 1000 between $1.25 billion and $2.5 billion every year. The report, conducted by IDC in October and November of 2014, contains critical DevOps metrics collected from 20+ Fortune 1000 organizations and includes best practices for development, testing, application support, infrastructure, and operations teams.

“Our survey indicates that over 40 percent of these companies have a DevOps practice, and another 40 percent are actively evaluating DevOps,” said IDC Vice President Stephen Elliot. “The DevOps teams are deploying new tools, building connections among various stakeholders, and increasing project speed and success, among other benefits that they are delivering.”

The research also brought to light the real costs and impact of outages. On average, infrastructure failure costs large enterprises $100,000 per hour. Critical application failures exact a far steeper toll, from $500,000 to $1 million per hour.

Whether infrastructure or application, 35 percent of respondents reported time-to-repair of one to 12 hours. Double-digit percentages — 17 percent for infrastructure failures, and 13 percent for application failures — measured their time-to-repair in days rather than hours.
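To put the hourly figures and repair times together, here is a back-of-the-envelope calculation in Python. The hourly rates are the survey averages quoted above; the 12-hour outage is simply the top of the reported one-to-12-hour band, chosen here for illustration.

```python
# Rough downtime-cost estimates using the survey's average hourly figures.
# The 12-hour repair time is the top of the reported 1-12 hour band.

INFRA_COST_PER_HOUR = 100_000        # infrastructure failure
APP_COST_PER_HOUR_LOW = 500_000      # critical application failure (low end)
APP_COST_PER_HOUR_HIGH = 1_000_000   # critical application failure (high end)

def downtime_cost(hours, cost_per_hour):
    """Simple linear estimate: total cost scales with hours down."""
    return hours * cost_per_hour

print(downtime_cost(12, INFRA_COST_PER_HOUR))      # 1200000
print(downtime_cost(12, APP_COST_PER_HOUR_LOW))    # 6000000
print(downtime_cost(12, APP_COST_PER_HOUR_HIGH))   # 12000000
```

And these are per-incident figures; repairs that are "measured in days rather than hours" scale the totals accordingly.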

In an article written by Steven Wastie titled “IDC Survey: Downtime Costs Large Companies Billions,” published on the APM Digest site on February 19, 2015, Mr. Wastie indicates, “While the majority of Fortune 1000 respondents have or are considering DevOps practices, it is not a path without challenges. Well over half of those surveyed cite ‘cultural inhibitors’ as the biggest risk for DevOps implementation. More than 40 percent point to fragmented processes, and slightly more than a quarter say lack of executive support are the biggest challenges to DevOps implementation.”

At the same time, expectations for DevOps practices are high. Two-thirds of those surveyed expect DevOps to improve customer experience; almost as many are looking for it to lower IT costs. Other outcomes that place high on the list of expectations include improved productivity, higher profits, and improved IT employee satisfaction. DevOps is expected to accelerate delivery of capabilities to the customer by an average of 15 to 20 percent.

Toward those goals, the initiatives that respondents plan to implement as part of DevOps include automation (60%), continuous delivery (50%), continuous integration (43%), automated testing (43%), and application monitoring/management (43%), among other capabilities. Application management tops the list of new tools DevOps teams are likely to purchase.

Tool additions or replacements are a definite priority, as IT organizations that have tried to custom-adjust their current tools for DevOps practices have a failure rate of 80 percent.

No Single Point of Enterprise-Wide Control

I believe that looking at the problem of application downtime from an application-development-centric standpoint disregards the fact that Global 1000 corporations are run on disparate legacy systems with no single point of control. As a result, no one within the corporate IT or business “chain of command” has an enterprise-wide picture of, or control over, all of the IT platforms that are running the enterprise. Adding new platforms to this mix only exacerbates the problem.

IT Automation platforms from BMC, IBM Tivoli, HP and SMA Solutions (to name just a few of the vendors in this space) enable the enterprise to monitor and “take corrective action” for some of these “IT silos.” And with the introduction of Information Technology Operations Analytics (ITOA), there is more and more real-time information upon which to act to reduce application downtime.

However, there is still no single vendor that can deliver the “Holy Grail” of IT Automation: a single platform that can monitor and automatically control (via workflows) every application on every platform within an organization.

Whichever vendor brings that solution to the marketplace will be the “Next Big Thing”.

About Charles Skamser
Charles Skamser is an internationally recognized technology sales, marketing and product management leader with over 25 years of experience in Information Governance, eDiscovery, Machine Learning, Computer Assisted Analytics, Cloud Computing, Big Data Analytics, IT Automation and ITOA. Charles is the founder and Senior Analyst for eDiscovery Solutions Group, a global provider of information management consulting, market intelligence and advisory services specializing in information governance, eDiscovery, Big Data analytics and cloud computing solutions. Previously, Charles served in various executive roles with disruptive technology start-ups and well-known industry technology providers. Charles is a prolific author and a regular speaker on the technology that the Global 2000 require to manage the accelerating increase in Electronically Stored Information (ESI). Charles holds a BA in Political Science and Economics from Macalester College.