Connecting the boxes

Today’s storage-area networks (SANs) are only the beginning. The next steps will be building long-haul links to tie together isolated SANs, allowing rapid communication among SANs in different data centres that may be many kilometres apart, and using storage virtualization to hide the complexity of these increasingly large and intricate storage networks from the processors that use them.

The business of linking SANs over longer distances is not very large today, says Ron Kline, program director for optical networks for North America at San Francisco research firm RHK Inc., but it is growing. Kline puts the market at “somewhere north of US$200 million” at present.

There are several ways of connecting SANs over longer distances, and all of them use fibre optics. Currently the most common method is dense wavelength division multiplexing (DWDM), says Kline. As telecommunications carriers launch next-generation Synchronous Optical Network (SONET) services, though, “that will become a viable option as well.”

Organizations that want to connect SANs can obtain such services from carriers large or small, Kline says, or they can lease dark fibre, install their own equipment and do it themselves. The do-it-yourself option is most viable for larger organizations with the expertise to handle it, and working with a large carrier is probably the least risky option for others.

DISASTER RECOVERY THE DRIVING FORCE FOR WIDE-AREA SANS

The DWDM and SONET options are expensive, though, and out of reach for all but the largest companies. Kline says “big, data-hungry organizations” like banks and airlines can afford to travel this route, but many others can’t. Options are appearing for medium-sized organizations, though. One is metropolitan-area Ethernet. Another is extending Fibre Channel over long distances on Internet Protocol (IP).

Carriers such as AT&T Corp. are also beginning to provide outsourcing services that address long-haul connections among SANs, Kline says.

The most common reason for linking SANs across longer distances is disaster recovery. Organizations seriously concerned about keeping their businesses running after a disaster destroys the main data centre have for years stored backups off-site. In many cases, these organizations maintained secondary data centres ready to spring into action if the need arose. By mirroring data to a second data centre on a frequent or even real-time basis, they can dramatically reduce the time it would take to get back up and running if some disaster befell the primary data centre.
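In code terms, the idea looks roughly like the following sketch in Python, with hypothetical names such as Site and MirroredVolume: every write lands on the primary data centre and is also shipped to a secondary, so the secondary already holds a current copy when failover is needed. Real SAN replication happens in array firmware and network gear rather than in application code; this is only an illustration of the concept.

```python
# Illustrative sketch only: mirroring writes to a second data centre so it
# can take over after a disaster. All class names here are hypothetical.

class Site:
    """One data centre's copy of a volume, modelled as block -> bytes."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def write(self, block: int, data: bytes) -> None:
        self.blocks[block] = data


class MirroredVolume:
    """Applies each write to the primary site and mirrors it to the secondary."""
    def __init__(self, primary: Site, secondary: Site):
        self.primary = primary
        self.secondary = secondary

    def write(self, block: int, data: bytes) -> None:
        self.primary.write(block, data)      # local write
        self.secondary.write(block, data)    # real-time remote mirror

    def failover(self) -> Site:
        """After a disaster at the primary, the secondary already has the data."""
        return self.secondary


if __name__ == "__main__":
    volume = MirroredVolume(Site("toronto-dc"), Site("montreal-dc"))
    volume.write(0, b"customer-records")
    surviving = volume.failover()
    assert surviving.blocks[0] == b"customer-records"
    print(f"{surviving.name} can serve the data immediately")
```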

“If you’re a large financial firm, you stand to lose over a million dollars for every hour your network is down,” Kline says. That’s a strong motivator for finding ways to speed up recovery.

Alan Freedman, research manager for infrastructure hardware at IDC Canada Ltd. in Toronto, says disaster recovery is the “driving force” for extending SANs across wider areas.

The Sept. 11 terrorist attacks two years ago got many people in North America thinking more seriously about disaster recovery. The power blackout that rolled across Ontario and most of the northeastern U.S. on Aug. 14 has only reinforced the lesson. People have been talking a lot more about disaster recovery in the past 18 to 24 months, Freedman says, and “a lot of companies are actually putting that talk into action and implementing some well-thought-out disaster recovery plans.”

Previously, even if regular backups were sent to an alternate site, it could take a lot of time to prepare that data for use if it was needed, says Rich Clifton, vice-president of the SAN/iSAN business unit of Sunnyvale, Calif.-based Network Appliance Inc. Today, Network Appliance offers an option that lets customers mirror their data from a primary SAN to a remote one and have the data ready for use instantly should the primary SAN go down.

Preparing for a disaster is not the only motivation for linking SANs across data centres, says Ken Steinhardt, director of technology analysis at Hopkinton, Mass.-based EMC Corp.

Another reason is the desire to bring data from different locations together so it can be consolidated more easily for reporting. “The only way to really be able to do that is to be able to get some sort of higher-level consolidated view,” Steinhardt says. One way to do that is to pull data from storage in multiple locations over wide-area links to a central point.

Many large organizations have multiple data centres in different locations, and long-haul links among SANs are a good way to create replicas of data in more than one data centre for a number of purposes, says Brian Truskowski, general manager of storage software at IBM.

Another reason for tying separate SANs together is efficiency, says IDC’s Freedman. With separate storage-area networks in different locations, one may have spare capacity while another is full to bursting. Rather than spend money on adding capacity to the overloaded SAN, an organization might use wide-area links between them to take advantage of the available space somewhere else.

SONET WILL BECOME A VIABLE OPTION FOR LINKING SANS

Long-distance links among SANs can also reduce administration costs by allowing for remote management, says Steinhardt at EMC. For instance, he says, one major transportation company manages SANs at two locations from a single point.

“People are trying to get more out of their installed networks and out of their existing infrastructure,” says Freedman. “A lot of people don’t have capital expenditure budgets these days.”

A similar motive helps drive interest in storage virtualization – which Freedman notes is increasingly being referred to as storage management.

Steinhardt defines virtualization as “fooling some component into thinking that it’s seeing something that’s substantially simpler than what is really under the covers.” Storage virtualization in its earliest and simplest form meant making a number of disks look like one big disk, so that applications would not have to deal with the complexity of multiple storage devices.
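A minimal sketch of that earliest form, with invented class names, might look like the Python below: the application addresses one logical block range, and a thin mapping layer works out which physical disk actually holds each block.

```python
# Illustrative sketch: several physical disks presented as one logical disk.
# Class and method names are hypothetical, not any vendor's product.

class PhysicalDisk:
    def __init__(self, name: str, num_blocks: int):
        self.name = name
        self.num_blocks = num_blocks
        self.blocks = [b""] * num_blocks


class VirtualDisk:
    """Concatenates several physical disks into one logical address space."""
    def __init__(self, disks: list[PhysicalDisk]):
        self.disks = disks

    def _locate(self, logical_block: int) -> tuple[PhysicalDisk, int]:
        # Walk the disks in order until the logical block falls inside one.
        offset = logical_block
        for disk in self.disks:
            if offset < disk.num_blocks:
                return disk, offset
            offset -= disk.num_blocks
        raise IndexError("logical block beyond total capacity")

    def write(self, logical_block: int, data: bytes) -> None:
        disk, physical_block = self._locate(logical_block)
        disk.blocks[physical_block] = data

    def read(self, logical_block: int) -> bytes:
        disk, physical_block = self._locate(logical_block)
        return disk.blocks[physical_block]


if __name__ == "__main__":
    # Two 100-block disks appear to the application as one 200-block disk.
    vdisk = VirtualDisk([PhysicalDisk("array-a", 100), PhysicalDisk("array-b", 100)])
    vdisk.write(150, b"report-data")          # lands on array-b, block 50
    assert vdisk.read(150) == b"report-data"
```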

EACH VENDOR HAS ITS OWN VIRTUALIZATION STANDARD

A layer of storage virtualization software sitting between the application server and the physical storage hides the complexity of the storage systems from the applications, Truskowski says. This way, changes in the physical storage systems don’t affect the applications.

Like wide-area links among SANs, virtualization is interesting partly because of its ability to make better use of existing facilities. It can combine a large number of separate storage devices into a single pool of storage available to many applications. Many businesses are using as little as 20 to 40 per cent of their storage capacity, Truskowski says. “If we could help them drive their utilization rate up by 20 per cent, the typical customer with a SAN would see about a quarter of a million dollars in annual savings.”

But the major problem with storage virtualization today is a lack of common standards and definitions throughout the industry, says IDC’s Freedman. “Every vendor has their own vision of virtualization and their own definition of what virtualization is,” he says. For that reason, interoperability isn’t what it could be. “While virtualization is fairly advanced in a homogeneous environment, in heterogeneous environments it isn’t working as well.”

However, Freedman says, standards bodies are making serious efforts to create effective standards for virtualization.

Truskowski says IBM is talking with competing vendors about information-sharing and interoperability-testing agreements to make various brands of products work together.

Virtualization can complement wide-area connections among SANs, Steinhardt points out, because virtualization can shield applications from the complexity of dealing with two or more SANs and connections among them.

“Wouldn’t it be nice, particularly for a business continuity solution, if I could define different service requirements for different applications?” Steinhardt says. Those requirements might include remote replication of data for key applications. The applications themselves would not be concerned with writing data to a remote backup site — they would simply write to what appeared to them to be local disks, and the SAN would do the rest.
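Conceptually, such a policy-driven layer might resemble this sketch (all policy names and classes are invented for illustration): the application writes to what it sees as local storage, and the layer consults a per-application service policy to decide whether the write is also replicated to a remote site.

```python
# Illustrative sketch of per-application service requirements. The policy
# table, class names and applications are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ServicePolicy:
    remote_replication: bool   # True for key business-continuity applications

@dataclass
class StorageService:
    policies: dict[str, ServicePolicy]
    local: dict[int, bytes] = field(default_factory=dict)
    remote: dict[int, bytes] = field(default_factory=dict)

    def write(self, app: str, block: int, data: bytes) -> None:
        self.local[block] = data                      # what the application "sees"
        policy = self.policies.get(app, ServicePolicy(remote_replication=False))
        if policy.remote_replication:
            self.remote[block] = data                 # handled by the storage layer, not the app


if __name__ == "__main__":
    service = StorageService(policies={
        "payments": ServicePolicy(remote_replication=True),   # key application
        "test-lab": ServicePolicy(remote_replication=False),
    })
    service.write("payments", 0, b"txn-log")
    service.write("test-lab", 1, b"scratch")
    assert 0 in service.remote and 1 not in service.remote
```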
