
Keeping your cool

Welcome to Calgary, the epicentre of Canada’s energy industry. Power is in abundance here – or so you would think. But it doesn’t always mean you can get the power you need when and where you need it. Just ask Matrix Solutions Inc., a fast-growing environmental and engineering consulting firm serving the local oil and gas industry.

After Matrix upgraded to IBM BladeCenter servers earlier this year to accommodate its rocket-like growth from 40 to 180 employees in two years, the company found it couldn’t get the electrical power it needed in its downtown office building to properly support the new computing environment. It could get one 220-volt feed to the server room, but to get a second, for redundancy, would mean installing a new power supply for the whole building, at considerable cost.

To make matters worse, Matrix couldn’t get enough air conditioning ducts to the room either, with the result that the fans on the blade units ran so hard and so constantly that they created a major noise problem for office workers. The company knew the blade servers – pizza box-size units racked one on top of the other – ran hot, but didn’t realize how significant a problem it would be.

“The feeling at the outset was that we could deal with the power and cooling issues well enough to have an adequate environment,” says Adrian Brudnicki, the firm’s IT manager. “But it became apparent very quickly that that wasn’t the case.”

Matrix solved the problem by co-locating its servers at a secure data centre facility just five blocks away, operated by Toronto-based Q9 Networks Inc. It’s something more and more small and medium-size businesses are choosing to do, and often for the same reason. Problems with power and cooling for data centres in office buildings have become endemic.

“It’s a major motivator for our clients,” says Q9 CEO Osama Arafat. “Office buildings were made for human beings, not computers. Humans require far less power and they put out far less heat.”

There are a few more specific reasons power and cooling problems have come to the fore in recent years, says Matt Brudzynski, a senior research analyst at Info-Tech Research Group. Servers are cheaper so companies can afford to install more. They’re smaller and typically packed closer together so they generate more heat – blade servers represent the extreme in this regard. And power supplies are heavier-duty now, typically 450 watts per unit. Blade servers, which require 220-volt service, again pose special problems.
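To put rough numbers on that density problem, here is a minimal back-of-the-envelope sketch in Python. The per-rack server count and the number of racks are assumed figures for illustration only, not data from Matrix or Info-Tech; only the 450-watt power supply rating comes from the paragraph above.

```python
# Back-of-the-envelope rack power and heat estimate.
# WATTS_PER_SERVER follows the 450-watt figure cited above; the
# server and rack counts are illustrative assumptions.
WATTS_PER_SERVER = 450
SERVERS_PER_RACK = 14
RACKS = 2

total_watts = WATTS_PER_SERVER * SERVERS_PER_RACK * RACKS

# Essentially all electrical power drawn by servers ends up as heat,
# so the cooling load in BTU/hr is roughly watts * 3.412.
cooling_btu_per_hr = total_watts * 3.412

print(f"Estimated IT load: {total_watts / 1000:.1f} kW")
print(f"Approximate cooling load: {cooling_btu_per_hr:,.0f} BTU/hr")
```

On these assumptions, even two modestly filled racks demand more than 12 kW of power and roughly 43,000 BTU/hr of cooling – far beyond what a typical office floor is wired or ducted for.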

At the same time, computers have become much more important. “Most companies anymore are basically relying on technology to run every aspect of their businesses,” says Arafat. “It’s not only that [servers] are [packed] more densely, are more powerful and kick out more heat. There are also just many, many more of them.”

Matrix rents two cabinets to house its blade servers and assorted communications gear. Q9, as a reseller for the telecom arm of Enmax, the local power utility, also provides a high-speed Internet connection. It’s a faster connection than Matrix could have brought into its own office, and the price was right, too. The Q9 package also includes 24/7 technical support and network monitoring far better than anything Matrix had when the servers were just down the hall.

A 100-megabit-per-second (Mbps) fibre link connects the Q9 facility to the company’s offices. “To all intents and purposes, it’s the same as the servers being here,” says Mike Morley, a senior consultant in Matrix’s information engineering group. “We’re talking about zero or one millisecond of latency.” The size of the pipe connecting clients to servers was a concern initially, Morley says, but as it turned out nobody really noticed the difference, even though some users had been connected on the local area network at gigabit speeds.
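For readers wondering how a link to a facility five blocks away can add essentially no delay, a quick propagation-delay estimate bears Morley out. The distance and fibre speed factor below are assumptions for illustration, not measurements from Matrix or Q9:

```python
# Rough propagation-delay estimate for a short metro fibre run.
SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBRE_FACTOR = 0.67             # light travels at roughly 2/3 c in glass fibre
DISTANCE_KM = 1.0               # assumed one-way distance, about five city blocks

one_way_seconds = DISTANCE_KM / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
round_trip_ms = 2 * one_way_seconds * 1000

print(f"Round-trip propagation delay: {round_trip_ms:.3f} ms")
# Roughly 0.01 ms -- switching and serialization delays dominate, so
# sub-millisecond latency to a site a few blocks away is entirely plausible.
```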

In other respects, the decision to go to Q9 was a no-brainer. For starters, it was cheaper. “The cost was substantially less than what it would have been to provide adequate power and cooling at our office,” Morley says. Investing in the building’s power supply was never in the cards because the aging structure is slated to be flattened in two or three years and replaced with a skyscraper. But in the absence of a fully redundant power supply, the company would have had to invest heavily in uninterruptible power supply (UPS) equipment. Not now.

The Q9 facility was built from the ground up as a data centre at a cost of $20 to $25 million. It’s essentially a sealed concrete box inside an office high-rise, with separate and fully redundant power and cooling – in fact, more power and cooling than the rest of the building combined, Arafat says. It provides a far better environment than Matrix could have achieved, however much money it threw at an onsite data centre.

Dust was a constant problem in the office, for example, and might eventually have shortened the life of the servers, Morley says. The Q9 facility is dust free. It’s also far more secure than Matrix’s offices, and since much of the data stored on the servers was highly sensitive and confidential, security was always a concern. Not anymore.

Arafat claims co-location saves small and medium-size companies not just a little bit but “tons” of money. “Generally, you’ll see companies saving half of what it would cost to build an in-house data centre with adequate power and cooling – or spending as little as 10% of what it would cost them to do it in-house,” he says.

But co-location is only one option, says Info-Tech’s Brudzynski. He recently completed a report on the energy cost-saving benefits of server virtualization – using software to set up several logical servers on one piece of hardware. Companies typically turn to virtualization to reduce hardware costs and exploit unused server capacity. Energy savings – as much as 50 percent according to the Info-Tech study – are an unexpected bonus. While reducing energy consumption doesn’t necessarily solve all the power problems Matrix and others have faced, it certainly can’t hurt.
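The arithmetic behind that kind of saving is straightforward. The sketch below is illustrative only; the server count, wattage and consolidation ratio are assumptions, not figures from the Info-Tech study:

```python
# Illustrative consolidation arithmetic for server virtualization.
PHYSICAL_SERVERS_BEFORE = 20
AVG_WATTS_EACH = 450          # per-unit power supply rating cited earlier
CONSOLIDATION_RATIO = 4       # assumed virtual machines per physical host

hosts_after = -(-PHYSICAL_SERVERS_BEFORE // CONSOLIDATION_RATIO)  # ceiling division

watts_before = PHYSICAL_SERVERS_BEFORE * AVG_WATTS_EACH
watts_after = hosts_after * AVG_WATTS_EACH
saving_pct = 100 * (watts_before - watts_after) / watts_before

print(f"Physical hosts after consolidation: {hosts_after}")
print(f"Estimated energy saving: {saving_pct:.0f}%")
# These assumptions give 75%; real-world savings tend to be lower because
# busier hosts draw more power per unit, which squares with Info-Tech's ~50%.
```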

Even if you’re what Brudzynski calls a “server hugger” – an IT manager who insists on having completely separate servers for each major application – it may still be possible to reduce the number of physical units (and thus power) by analyzing requirements and simply asking, ‘Do I really need this server?’ “One company we worked with had 47 servers,” he says. “They were able to identify four through this exercise that they could afford to get rid of.”

The other option: bite the bullet and build an enterprise-grade data centre. But getting the power and cooling right can be a daunting task, as Robert Smith, chief technology consultant for the MaRS Discovery District, the downtown Toronto biomedical research community, discovered last year when MaRS was building its Toronto Medical Discovery Tower. The tower is a 400,000-square-foot research facility that also houses the community’s data centre, with over 100 servers and other equipment.

The problem, Smith says, is that traditional building engineers don’t understand data centre power and cooling, and too often won’t listen to people who do. “I’d tell them, ‘Here’s what I need’ and they’d look at me like I had ten heads because that’s not how they do it,” he says. In the end, Smith wrestled crucial parts of the project out of their hands and brought in experts recommended by American Power Conversion (APC) Corp., his power equipment supplier.

The building engineers had wanted to put in two large air conditioning units. That approach not only wouldn’t have provided optimum cooling, Smith says, it offered little in the way of redundancy. The expert solution: four strategically placed units with the same total cooling capacity. If one broke, it would take out only a quarter of the centre’s air conditioning. The units also turn on and off in response to sensors and communicate to optimize air flow. This design not only ensures better cooling, it reduces energy consumption and saves money.
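The redundancy argument is easy to see with a little arithmetic. This sketch, which assumes equal-size units splitting the same total capacity, simply compares how much cooling survives a single unit failure under the two layouts; it is an illustration, not a MaRS or APC specification:

```python
# Compare cooling capacity remaining after one unit fails,
# assuming equal-size units sharing the same total capacity.
def capacity_remaining(units: int, failed: int = 1) -> float:
    """Fraction of total cooling capacity left when `failed` units go down."""
    return (units - failed) / units

for units in (2, 4):
    left = capacity_remaining(units)
    print(f"{units} units: a single failure leaves {left:.0%} of cooling capacity")
# Two large units: one failure removes half the cooling.
# Four smaller units of the same total capacity: the same failure removes a quarter.
```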

Some of the expert analysis of power and cooling requirements came too late. Smith’s consultants discovered that while the data centre had redundant backup generators on the roof, the cable feeds to the room came along the same physical pathway, creating a single point of failure. “This was just missed,” he says. “If there’s a fire or something between us and the roof, we’ll lose full redundancy. When we discovered it, it was too late. We had to suck it up.”

Power and cooling for data centres may once have been a relatively simple engineering problem, but with today’s densely populated server farms running mission-critical applications, it’s both complex and fraught with pitfalls. As Smith says, “As you get into this world, you realize it’s very specialized, and you need to have a specialist involved to be successful.”

If you’re a small or medium-sized business, that pretty much means turning to an outsourcer.
