The cost of business continuity, and why it’s worth every penny

Sponsored By: Rogers

At some point, every enterprise is going to have to put a disaster recovery (DR) and business continuity (BC) plan into action.


An enterprise that thinks its data centre is bulletproof, that it has all the bases covered, hasn’t learned the lessons of a few natural disasters. Hurricane Sandy put enterprise data centres underwater on the East Coast in 2012, just as Katrina did on the Gulf Coast in 2005. And in August 2003, a blackout knocked out power across Ontario and much of the northeastern U.S., in some areas for days.

It’s not just Mother Nature that’s after your data centres. Malware authors are trying to bring down your systems. Ransomware criminals want to hold you hostage, demanding money to decrypt the files they’ve encrypted. Sometimes it’s simple negligence on the inside. It’s irresponsible not to have an enterprise disaster recovery and business continuity strategy. The problem is that it can be a very expensive proposition, and IT has to beg for every budget dollar it gets, just like every other department. That’s why a cloud or managed services solution should be considered.

Three Principles

There are three basic DR/BC principles that allow an enterprise to be brought back online in a timely fashion: duplication of data and processes, geographic distance, and dedicated high-speed connectivity.

Data backup has been a key strategy for enterprises for many years. It hasn’t always been remarkably sophisticated: back up the day’s data to some kind of archival medium, generally LTO (Linear Tape-Open) cartridges, and ship them offsite for safety’s sake. If there’s a problem and the data has to be restored, the tape gets shipped back, mounted on a drive, and the data is read back in. It’s manually intensive and time-consuming, but critical transaction and archival data is preserved.
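
For the technically inclined, here’s a minimal sketch of that end-of-day routine in Python, assuming the simplest possible setup; the directory paths are hypothetical stand-ins for a real backup target such as an LTO drive or a staging volume managed by backup software.

```python
import tarfile
from datetime import date
from pathlib import Path

# Hypothetical locations; in practice the target would be a tape
# device or a staging area managed by dedicated backup software.
DATA_DIR = Path("/srv/business-data")
STAGING_DIR = Path("/mnt/backup-staging")

def nightly_backup() -> Path:
    """Write an end-of-day archive, ready to be shipped offsite."""
    archive = STAGING_DIR / f"backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(DATA_DIR), arcname=DATA_DIR.name)
    return archive

def restore(archive: Path, target: Path) -> None:
    """The reverse trip: unpack the archive once it is back on site."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=target)

if __name__ == "__main__":
    print(f"Archive written to {nightly_backup()}")
```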

But in today’s high-velocity business environment, just having the data backed up isn’t enough. The infrastructure that lets business processes resume as soon as possible has to be duplicated too: applications, data and access all have to be available at a moment’s notice. That adds substantially to your infrastructure costs.

Far, Far Away

Technology has evolved, of course, and two innovations in particular have had an impact on DR/BC—improvements in tiered storage strategies and virtualization. Tiered storage allows the most critical information to be stored nearest—in terms of accessibility—to the data centre infrastructure. With the increase in performance and decreased cost of solid state storage, there has even been a fourth storage tier added to the online/nearline/offline pyramid—Tier Zero storage.
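
To make the tiering idea concrete, here is one rough way such a policy could be sketched in Python, assuming access recency is the deciding factor; the thresholds, tier names and directory path are illustrative only, since real policies also weigh business value and recovery objectives.

```python
import time
from pathlib import Path

# Illustrative thresholds only: (tier label, maximum idle days).
TIER_RULES = [
    ("tier 0 (solid state)", 1),    # touched within the last day
    ("tier 1 (online disk)", 30),   # touched within the last month
    ("tier 2 (nearline)", 365),     # touched within the last year
]

def assign_tier(path: Path) -> str:
    """Pick a storage tier based on how recently a file was accessed."""
    idle_days = (time.time() - path.stat().st_atime) / 86400
    for tier, max_idle in TIER_RULES:
        if idle_days <= max_idle:
            return tier
    return "tier 3 (offline/tape)"

# Hypothetical data directory.
for f in Path("/srv/business-data").rglob("*"):
    if f.is_file():
        print(f"{f}: {assign_tier(f)}")
```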

Virtualization technology allows IT to quickly replicate a compute workload, duplicating and updating the software infrastructure needed to restore operations if a primary virtual machine (VM) image goes down. It’s relatively easy to move VM images around the servers in a data centre.
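
As a rough illustration of that mobility, here’s a minimal sketch using the libvirt Python bindings on a KVM/QEMU setup; the host URIs and VM name are placeholders, and a production migration would also involve shared or replicated storage and far more error handling.

```python
import libvirt  # provided by the libvirt-python bindings

# Placeholder hosts and VM name, for illustration only.
SRC_URI = "qemu+ssh://host-a.example.com/system"
DST_URI = "qemu+ssh://host-b.example.com/system"
VM_NAME = "order-processing"

def move_vm() -> None:
    """Live-migrate a running VM from one host to another."""
    src = libvirt.open(SRC_URI)
    dst = libvirt.open(DST_URI)
    try:
        dom = src.lookupByName(VM_NAME)
        # VIR_MIGRATE_LIVE keeps the workload running during the move;
        # this assumes the VM's disks are visible to both hosts.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    move_vm()
```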

If your system failure involves a single machine, that’s an adequate backup. But if your entire data centre is underwater, shuffling images around inside it doesn’t help.

That’s why geographic distance is an important element of a DR/BC strategy. If your backup is in the same building, or even in the same city, whatever caused your primary system to go down will probably swamp your backup. Your DR/BC facility has to have some geographic distance from your primary one—different power grid, different weather patterns, etc.—to be effective. Again, you’ve added a hit to your expenses, this time on building/leasing, power and cooling.

Connectivity

With hardware, software and real estate all duplicated, you’ve got two separate facilities that need to be connected. What that adds to the bill depends on your strategy and the bandwidth your applications need.
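
To put rough numbers on that, here’s a back-of-the-envelope calculation in Python; the daily change rate, link speed and efficiency figure are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope replication window, with illustrative numbers.
daily_change_tb = 2.0   # data changed per day, in terabytes
link_gbps = 1.0         # dedicated link speed, in gigabits per second
efficiency = 0.7        # protocol overhead, contention, encryption, etc.

bits_to_move = daily_change_tb * 8e12          # terabytes -> bits
effective_bps = link_gbps * 1e9 * efficiency   # usable bits per second
hours = bits_to_move / effective_bps / 3600

print(f"Replicating {daily_change_tb} TB takes about {hours:.1f} hours")
# Roughly 6.3 hours at these numbers; halve the link or double the
# change rate and the window no longer fits comfortably overnight.
```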

Unless you’ve buried your own fibre, you’ll need to work with a service provider. And if you have to work with one anyway, it’s worth looking at how much of the DR/BC equation makes sense to put in the hands of a trusted service provider like Rogers. Service providers have economies of scale, multiple locations, service level agreements (SLAs) and maintenance provisions that can take a lot of the financial load off an organization’s DR/BC strategy. They also offer a degree of cost certainty: if finance doesn’t have to provision for emergency hardware replacement or maintenance, budgeting becomes a much more comfortable process.

So we’ve talked about the costs of having a DR/BC plan. Next time, we’ll take a look at the costs of *not* having one.


Dave Webb
Dave Webb is a technology journalist with more than 15 years' experience. He has edited numerous technology publications including Network World Canada, ComputerWorld Canada, Computing Canada and eBusiness Journal. He now runs content development shop Dweeb Media.