About 15 miles from its medical center in downtown San Antonio, Christus Health is building a US$23 million data center to house a flood of digital information ranging from patient insurance records to CT scans.
At 48,000 square feet (of which only 7,500 square feet will be used at first), the new facility will dwarf the hospital’s current 4,000-square-foot data center, which has been bursting at the seams for years even though the IT staff has made every effort to virtualize servers and otherwise squeeze more life out of it.
“Imaging data is growing by leaps and bounds as more types of information get digitized,” says Mark Middleton, system director for IT architecture at the Irving, Texas-based health care provider. “We’ve done all the remediation we can but have eaten up all of the electricity, all of the cooling and all the physical space we had.”
Christus Health is a prime example of why, even in a wobbly economy, many organizations are still rebuilding or redesigning their data centers.
Many need to reconfigure their IT operations to save money. Some can’t afford, or get, enough electricity and cooling into their current facilities to handle rack after rack of the latest, densely packed blade servers. Others need more computing, storage or network capacity to handle new applications or to cope with acquisitions. Still others need to improve their disaster recovery capabilities.
Demands Up, Budgets Down
More than half of 27 CIOs and senior IT leaders interviewed by market researcher IDC this spring reported reductions in their budgets for this year. Many also reported that they had consolidated data centers, applications and data, and IDC analyst Henry Morris says the study revealed “a significant shift toward cost reduction rather than revenue generation as a driver of IT investment.”
The lights aren’t completely out in John Nester’s data center, but there are fewer of them burning late at night and on weekends these days.
That’s because remote management tools running on notebook PCs and even BlackBerries allow his staff to remotely monitor servers, networks and storage and to solve problems without going near the data center.
“We can be out of the office and check a server, reset accounts, set up e-mail or do any number of things from our BlackBerries,” says Nester, a data center administrator in the Pennsylvania attorney general’s office. “We get e-mails when there are power issues, when there’s a cooling issue, when there’s a server issue. We can take care of the situation before it even happens.”
His team relies on remote access tools from several vendors, including Microsoft, Citrix Systems, Cisco Systems, American Power Conversion and Rove Inc., to monitor everything from servers and storage to power and cooling. “I can sit at home and manage the entire data center,” Nester says.
At a Gartner Inc. conference in November 2007, more than one-third of the attendees said their newest data centers are seven years old or older, meaning they weren’t designed for the power and cooling needs of today’s high-density servers. Half of the respondents predicted that they would need to expand their data centers over the next three years, with many saying that they expected to go “from one to two data centers, in part to address disaster recovery,” according to Gartner.
Whether they’re building new data centers from the ground up or revamping existing space, data center managers are virtualizing multiple applications onto single physical servers, consolidating many data centers into one or two facilities to save on real estate and other infrastructure costs, and redesigning the physical arrangement of servers to save on cooling costs.
“We’re doing a thermal heat analysis to understand where we have hot spots, and implementing hot and cold aisles” between racks to prevent cool air from being contaminated by exhaust heat, says Jim Lowder, vice president of technology at health care provider OhioHealth in Columbus.
Since starting an overhaul of its data center in 2003, the Pennsylvania attorney general’s office has achieved close to “lights-out” operation across 130 servers, more than 1,000 computers and 20 offices across the state. It uses virtualized servers and remote management software to slash the time and effort required for routine functions. “We can completely rebuild a server and have it ready to roll out in about seven minutes,” Nester says. He has also kept head count and energy use flat since 2005 while more than doubling the number of servers.
For his part, Richard Balentine, now the CIO at Greensboro, N.C.-based NewBridge Bank, says he had to interrupt a planned upgrade of his application infrastructure at Lexington State Bank to cope with the 2007 merger between Lexington State and FNB Southeast that created NewBridge Bank.
The original modernization effort included upgrading applications such as customer relationship management systems and bringing outsourced software back in-house. Among other upgrades, Lexington State installed its first storage-area network, an EMC Corp. system, and upgraded its network.
But in early 2007, the looming merger required Balentine to refocus on consolidating the two banks’ infrastructures, which included reconfiguring FNB’s data center in Reidsville, N.C., as a disaster recovery site. Consolidating and combining bank operations took seven months and included expanding the SAN and deploying new, virtualized servers using software from VMware Inc.
Creating — or reducing the cost of — a disaster recovery site is often part of a data center upgrade plan. For example, once it has moved to its new facility, Christus Health will move its disaster recovery site from an outsourcer into the new data center.
Some vendors, as well as customers, argue that data center managers shouldn’t just look for cost savings as they reconfigure their centers; they should aim to fundamentally transform how they deliver IT services.
The Pennsylvania attorney general’s office has virtualized servers with VMware software, moved from direct-attached storage to network-attached storage from NetApp Inc. and upgraded its network to 10 Gigabit Ethernet. Power capacity and cooling weren’t issues, since the data center was built to handle mainframe needs and is, if anything, too cold rather than too warm. Nester says he is using two-thirds less space than he did before the upgrade.
His staff can remotely and proactively deal with servers and network problems, and deliver new IT services more quickly (see story above). One example is an application that supports Pennsylvania’s revised “Do Not Call” law, which allows consumers to register their cellular and land-line numbers. Staffers delivered the application within 45 days, on time and within budget, Nester says.
“A few years ago, that would have been an eight-month project,” says Assistant CIO Jim Ingalzo.
Built to Last?
In some cases, building and design issues drive decision-making. Hawaii Pacific Health found that it couldn’t bring more power into its main data center because the floor was too weak to support any more uninterruptible power supplies.
Rather than spending $500,000 to reinforce the floor, Hawaii Pacific has kept its electricity usage level by virtualizing more and more of its 300 servers, and it has moved to new blade servers whose power can be raised or lowered as processing loads change.
IT Director Colbert Seto will also upgrade the health care provider’s Hewlett-Packard SANs to support the rollout of a new electronic medical record system for 3,000 concurrent users. The growth in storage demand, he says, threatens to overwhelm any cost savings he might achieve through server virtualization. Seto is counting on the eventual use of de-duplication technologies (which store only changes in previously backed-up data) to reduce future storage growth, along with new policies that reduce how much data users keep in the first place.
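The parenthetical above describes de-duplication only loosely. One common form is fixed-block de-dup: data is split into blocks, each block is hashed, and a block is stored only if its hash hasn’t been seen before. The sketch below is purely illustrative (the block size, the SHA-256 hash and the in-memory store are assumptions for the example, not details of Hawaii Pacific’s actual system):

```python
import hashlib

def dedupe_backup(data: bytes, store: dict, block_size: int = 4096) -> list:
    """Split data into fixed-size blocks, store each unique block once
    (keyed by its SHA-256 digest), and return the list of digests — the
    'recipe' needed to reconstruct the data later."""
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # new blocks stored; duplicates skipped
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its recipe of block digests."""
    return b"".join(store[d] for d in recipe)
```

The payoff is that a second backup sharing most of its blocks with the first consumes storage close to a single copy — which is why de-dup helps most with repetitive backup data.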
Some data center managers find that budgets are so tight that all they can do is play for time. That’s the case for Randi Levin, chief technology officer and general manager of IT for the city of Los Angeles. Like many other IT managers, she’s about to run out of power and cooling at the city’s main downtown data center. But with the city facing a deficit of more than $400 million, it’s unlikely she’ll get the $28 million to $30 million needed to upgrade the facility — even if that were a worthwhile investment for a data center at the bottom of a high-rise building in an earthquake-prone city.
Levin says IBM is in the early stages of a study to find a way to virtualize the 600 servers in the facility down to as few as 30 or 40 physical machines. She says she hopes server virtualization will let her use the existing facility for another two or three years while she develops other long-term options.
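The back-of-the-envelope math behind a consolidation study like IBM’s can be sketched as a sizing calculation: total up the virtual machines’ CPU and memory demand, reserve headroom for failover and load spikes, and size on whichever resource is the bottleneck. All of the per-VM and per-host figures below are invented for illustration and are not from the Los Angeles study:

```python
import math

def hosts_needed(num_vms: int, vm_cpu_ghz: float, vm_mem_gb: float,
                 host_cpu_ghz: float, host_mem_gb: float,
                 headroom: float = 0.25) -> int:
    """Estimate physical hosts for a virtualization consolidation,
    sizing on the bottleneck resource (CPU or memory) after reserving
    a fraction of each host's capacity as headroom."""
    usable_cpu = host_cpu_ghz * (1 - headroom)
    usable_mem = host_mem_gb * (1 - headroom)
    by_cpu = math.ceil(num_vms * vm_cpu_ghz / usable_cpu)
    by_mem = math.ceil(num_vms * vm_mem_gb / usable_mem)
    return max(by_cpu, by_mem)

# Hypothetical: 600 lightly loaded VMs (0.5 GHz, 4 GB each) onto
# hosts with 40 GHz of aggregate CPU and 128 GB of RAM.
print(hosts_needed(600, 0.5, 4, 40, 128))
```

In practice memory, not CPU, is usually the constraint for lightly loaded servers, which is why a real study would also measure actual utilization rather than rated capacity.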
At NewBridge Bank, Balentine believes he has created an infrastructure that could support the bank’s growth to “a $10 billion organization in five years,” and he says that scalability “will be a feather in our cap when we have the opportunity to acquire other banks.”
Middleton hopes his new building will give Christus Health room to grow for the next 10 to 15 years.
For various reasons, and within a range of time frames, the push is on to revamp data centers to save money in the short run — and build new ones to increase efficiency and effectiveness in the long run.
Scheier is a freelance writer in Boylston, Mass. You can contact him at firstname.lastname@example.org.