A business continuity strategy is an expensive proposition. In a previous post, we discussed how duplicating data and processes, along with the need for geographic distance and high-speed connectivity, piles up costs, and how a service provider can offer existing facilities, economies of scale and cost certainty.
This time, let’s talk about the cost of *not* having a business continuity strategy. A few numbers, courtesy of the Web:
According to research house Ponemon Institute, a minute of downtime costs the average enterprise $7,900 (all figures US). That figure dates from 2013, and since it was 40 per cent higher than in 2010, we can presume the 2015 figure is considerably higher still.
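To put those Ponemon figures in perspective, here is the simple arithmetic behind them, sketched in Python. The $7,900-per-minute and 40-per-cent numbers come from the post; the implied 2010 figure and the hourly conversion are just derived from them for illustration.

```python
# Rough arithmetic behind the Ponemon figures quoted above.
# Inputs are from the post; the derived numbers are illustrative.

PER_MINUTE_2013 = 7_900      # USD per minute of downtime (Ponemon, 2013)
GROWTH_SINCE_2010 = 0.40     # the 2013 figure was 40% higher than 2010's

# Back out the implied 2010 per-minute cost.
implied_2010 = PER_MINUTE_2013 / (1 + GROWTH_SINCE_2010)

# Convert the 2013 per-minute cost to a per-hour cost.
per_hour_2013 = PER_MINUTE_2013 * 60

print(round(implied_2010))   # → 5643 (implied 2010 per-minute cost, USD)
print(per_hour_2013)         # → 474000 (2013 cost per hour, USD)
```

At nearly half a million dollars an hour, the per-minute number is arguably the more alarming way to state it.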
Of course, there are as many numbers as there are research reports. USA Today pegged the cost at more than $50,000 an hour according to 80 per cent of data centre managers surveyed, while 25 per cent said the cost was more than $500,000 an hour. And not every enterprise is the same: for some, particularly e-commerce firms and those that provide networking services to customers, downtime is more expensive than for others. So, your mileage may vary.
IT operations analytics firm Evolven, based in Jersey City, breaks out the costs of data centre downtime quite thoroughly, based on information from benchmarked data centres. For an average outage—not on an hourly basis, but per outage—Evolven’s top three business costs are:
- Business disruption: $180,000
- Lost revenue: $118,000
- End-user productivity: $96,000
Those are some steep numbers, and they don’t include the cost of detecting the cause, remediating the problem, and IT hours spent, among other things.
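Summing just those three Evolven line items shows how quickly a single average outage approaches $400,000, before any of the excluded costs are counted. A trivial sketch:

```python
# The three Evolven per-outage line items quoted above, summed.
# Figures are from the post; the total deliberately excludes
# detection, remediation, and IT labour, as noted in the text.

costs = {
    "business disruption": 180_000,
    "lost revenue": 118_000,
    "end-user productivity": 96_000,
}

total = sum(costs.values())
print(total)  # → 394000 (USD per average outage, partial total)
```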
The cause of outages is often beyond the control of the enterprise. Ponemon, for example, cites external attacks (34 per cent) and weather (30 per cent) as major contributors to failures. (Apropos of nothing in particular, the Weather Company, the most accurate forecasting organization in the world, with more than two billion sensors at its disposal, gets it right 75 per cent of the time, so if you think you’ve got the weather sussed, have another think.)
But many outages are a result of systemic failure. Ponemon also cites UPS failure (55 per cent), exceeded UPS capacity (46 per cent) and human error (48 per cent) as contributors. Yes, those add up to well over 100 per cent, but outages apparently often involve multiple failures at once. And that’s a concern.
It’s also possibly part of the reason that enterprise data centres have more outages than colocation facilities. Among enterprise data centres surveyed by Uptime Institute, seven per cent reported more than five outages in the previous 12 months. Only three per cent of colocation providers said the same.
My theory is that it’s a matter of focus and experience. The enterprise IT department doesn’t just have to run a data centre; it has to provide end-user support, and because end users are not technologically sophisticated, IT must also handle upgrades, patches, licensing, procurement … the list goes on. Service providers such as Rogers run data centres; some run dozens or even hundreds. That brings a volume of lessons learned and best practices discovered and applied.
So it goes, also, with your best defence against business disruption—your disaster recovery/business continuity (DR/BC) strategy. Keeping it in-house may provide a feeling of superior security and control, but it’s adding another layer to IT’s already complex job. Allocating the responsibility for DR/BC to a service provider—after negotiating a very strict service level agreement (SLA)—allows IT that bit of breathing room so the department can focus more on strategic alignment with the business and what it can do to grow the bottom line.