How to . . . manage your power and cooling

What’s the problem?

Osama Arafat, CEO, Q9 Networks:
The equipment is becoming denser and denser and computers are consuming more power. A computer of five or six years ago would use up something like 50 to 100 VA. We're seeing some of the new ones use 300-plus VA, and that's just for a standard one- or two-U server. This is creating more heat problems, and of course demand in the enterprise is growing substantially. It's not just that the CPUs are hotter and denser; enterprises need more and more of them because there is more reliance on computing.

Jerry Murphy, senior vice-president and service director, Robert Frances Group:
A lot of stuff has been designed to run as fast as possible. People weren't focused on making things run cool; they were focused on making them run as fast as possible.

If I had a data centre where all the equipment ran at 70 per cent (efficiency), and I'm spending $1 million a year on electricity – and to some people that's a small number – $300,000 of the money I'm spending is going right into the toilet. There are little things like that that people didn't pay attention to four years ago.
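
Murphy's arithmetic, as a quick Python sketch. The dollar figure and the 70 per cent efficiency are his illustrative numbers:

    # Back-of-envelope version of Murphy's example: at 70 per cent
    # power-supply efficiency, 30 per cent of the electricity bill
    # never reaches the IT load. Figures are his illustrative numbers.
    annual_spend = 1_000_000   # dollars per year on electricity
    efficiency = 0.70          # fraction of input power doing useful work

    wasted = annual_spend * (1 - efficiency)
    print(f"Wasted spend: ${wasted:,.0f} per year")   # $300,000 per year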

Nauman Haque, Info-Tech Research Group:
With energy costs rising, we've seen a lot of companies concerned about their consumption in the data centre – in particular, cooling. With trends like consolidation and virtualization, and particularly now with blade servers, we're getting a lot more equipment and power density. In the same amount of space, you've got a lot more computing power, and it's causing equipment to get a lot hotter. Cooling is a big issue for IT departments. I've seen a lot of surveys suggesting that 40 or 50 per cent of data centre energy consumption goes to cooling.
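
A rough sketch of what Haque's 40-to-50-per-cent figure implies for total facility draw. The 100 kW IT load is an assumption, and the model treats everything that isn't cooling as IT load:

    # If cooling is 40-50 per cent of total consumption, the facility
    # draws well above the IT load. Assumes, for simplicity, that
    # everything that isn't cooling is IT load; 100 kW is an assumed figure.
    it_load_kw = 100.0

    for cooling_share in (0.40, 0.50):
        total_kw = it_load_kw / (1 - cooling_share)
        cooling_kw = total_kw - it_load_kw
        print(f"cooling at {cooling_share:.0%}: total ~{total_kw:.0f} kW, "
              f"cooling ~{cooling_kw:.0f} kW")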

How do I fix it?

Arafat:
Either you upgrade whatever you have in your enterprise – i.e., putting the proper HVAC systems in place, which is a big challenge – or you outsource. If you have the power systems in place, you need to have back-up generation. Because of the increased power demand, you need many more UPSes to handle the growing load, and without generation you're only looking at a limited amount of time on UPSes alone. If you put the proper air conditioning in, it also has to be backed up. If you ever have a power outage where you do have power back-up but don't have the AC backed up, that's a useless strategy – you're going to have heat problems while your computers are working. It's very simple: you either build the infrastructure on your own or you outsource it to a data centre provider.
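
Arafat's point that UPSes alone only buy a limited window can be sanity-checked with simple arithmetic. The sketch below uses hypothetical battery and load figures and treats VA as roughly equal to watts:

    # Rough UPS runtime estimate. All capacities here are hypothetical,
    # and real runtime also depends on battery age, discharge curve and
    # inverter efficiency.
    battery_wh = 20_000   # assumed usable battery energy
    it_load_w = 30_000    # assumed IT load, e.g. 100 servers at ~300 VA
    ac_load_w = 15_000    # assumed cooling load if the AC is on UPS too

    minutes_it_only = battery_wh / it_load_w * 60
    minutes_with_ac = battery_wh / (it_load_w + ac_load_w) * 60
    print(f"IT load alone: ~{minutes_it_only:.0f} min")   # ~40 min
    print(f"IT load + AC:  ~{minutes_with_ac:.0f} min")   # ~27 min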

Murphy:
You can look at people like AMD and Intel, who are starting to announce (cooler) chips. The typical ones today are running anywhere from 100 to 160 watts per chip. A lot of the new stuff coming out today is 60 watts, but with the same performance. One thing you can do is buy cooler chips. The systems manufacturers – Sun, IBM, HP, Dell – are starting to make the whole system itself, not just the chip, more efficient.

Haque:
With a hot aisle/cold aisle configuration, you orient servers so that the backs are facing each other, so you have a hot aisle then a cold aisle. Enclose those and duct them to improve the efficiency of the air flow.

Simply improving air flow management can increase the cooling capacity by as much as 50 per cent.

Murphy:
Maybe the most important thing is what I call portfolio management. If I look at all the applications on servers or storage, and I can find out that there's an application I don't need, I can get rid of the server that application is running on. A lot of times, when we interview large companies, they literally have thousands of applications, but when you ask which ones are critical to the business, they'll know a dozen of them. If you go through and rationalize those things – if you can get rid of five systems that are each generating five kilowatts of heat – that's 25 kilowatts of heat I can get out of the data centre.
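
Murphy's five-systems example, sketched in Python. The server names are invented; the five-kilowatt figure is his, read as per-system:

    # Murphy's rationalization example: retiring systems whose
    # applications aren't needed removes their heat output with them.
    # Server names are invented; 5 kW is his per-system figure.
    retired_systems_kw = {
        "legacy-hr": 5.0,
        "old-crm": 5.0,
        "stale-reporting": 5.0,
        "orphaned-db": 5.0,
        "unused-portal": 5.0,
    }

    removed_kw = sum(retired_systems_kw.values())
    print(f"Retiring {len(retired_systems_kw)} systems removes "
          f"{removed_kw:.0f} kW of heat")   # 25 kW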

Haque:
When you look at new purchases and new equipment, include energy efficiency in your RFP. The amount of heat generated by servers typically isn't the No. 1 concern among IT staff when they're looking at new purchases.

Power supplies aren't something you're going to be switching out on your existing systems – IT managers will be worried about voiding the warranties and things like that. But certainly, when looking at new servers, they'll try to find ones with more efficient power supplies.

Murphy:
Another thing you see people doing is server and storage consolidation. If I've got 10 servers that are all at 10 per cent utilization, and I can put all of those applications on one server, I can get rid of 90 per cent of my power requirements. That's also why you see a lot of people using software virtualization. Most of the time I don't see that being done on mission-critical systems, but for a lot of internal HR systems and things like that, that kind of consolidation makes a lot of sense from a cooling perspective.
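
A sketch of the consolidation arithmetic. It assumes, as Murphy's 90 per cent figure implicitly does, that each box draws roughly the same power regardless of utilization; the 400-watt figure is an assumption:

    # Consolidation arithmetic: ten servers at ~10 per cent utilization
    # folded onto one host. Assumes each box draws roughly the same
    # power regardless of load; 400 W per server is an assumed figure.
    n_servers = 10
    watts_per_server = 400

    before_w = n_servers * watts_per_server   # ten lightly used boxes
    after_w = 1 * watts_per_server            # one consolidated host
    saving = 1 - after_w / before_w
    print(f"Before: {before_w} W  After: {after_w} W  Saved: {saving:.0%}")
    # Before: 4000 W  After: 400 W  Saved: 90%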

How much is this going to cost?

Arafat:
No one in their right mind would build out this kind of infrastructure on their own unless they're a big bank. It works out that, for about the same price as putting something together internally, you can have a world-class infrastructure by outsourcing it to a company like Q9.

Murphy:
There's the capital cost of, 'If I bought something last year, do I throw that investment away to get a new server?' One of the things people can do tactically is use blanking plates. If you look at a rack, you can only fill it about half full of servers, because a typical raised-floor data centre can cool about 10 kilowatts per rack. If you had a rack full of blade servers, you could go up to 30 kilowatts, and you just physically can't cool that. Instead of leaving an air gap (between servers), you basically seal it with a metal plate – a blanking plate – so you eliminate an eddy of air getting trapped at the bottom.
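
Murphy's rack numbers, sketched with assumed per-U draws (0.4 kW per U for 1U servers; roughly 0.7 kW per U for blades, which is consistent with his 30-kilowatt full rack):

    # How much of a rack a 10 kW cooling budget lets you populate.
    # Per-U power figures are assumptions; ~0.7 kW/U for blades is
    # consistent with Murphy's 30 kW full rack (30 kW / 42U).
    cooling_budget_kw = 10.0
    rack_height_u = 42

    for gear, kw_per_u in (("1U servers", 0.4), ("blades", 0.7)):
        usable_u = min(rack_height_u, int(cooling_budget_kw / kw_per_u))
        print(f"{gear}: ~{usable_u}U of {rack_height_u}U before "
              f"hitting the cooling limit")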

What’s next?

Haque:
Liquid cooling – an old technology from the mainframe days – is coming back into vogue. We're starting to see it with blade servers. Certainly there are vendors starting to offer it – Sun, HP and IBM have all recently announced liquid-cooling systems that are supposed to be better than traditional forced-air methods.

Comment: [email protected]
