
Building your own scaled-down supercomputer is a piece of cake

When I think of a supercomputer, I still envision a monster box, its console ablaze with flashing lights. Alas, supercomputers are much duller today. In fact, according to top500.org, more than half of the most powerful computers on the planet aren’t single machines at all; they’re groups of computers cobbled together into what are known as high-performance computing clusters (HPCCs).

Still, when Dell Computer invited me to build my own supercomputer, my inner geek rejoiced. I’d get to spend a couple of days with engineers in Dell’s labs, putting together an HPCC and benchmarking it.

The folks at top500.org report on the performance of general-purpose systems using a benchmark called Linpack, which exercises the machines by solving a dense system of linear equations. All that means is that the computers or clusters are bashing their little electronic brains out performing as many 64-bit floating-point operations per second as they can. No. 1 on the top 500 list, IBM’s BlueGene/L, which lives at Lawrence Livermore National Laboratory in California, scored 280.6 teraflops.

My supercomputer was a bit more modest: 14 dual-core, dual-processor 2.66 GHz servers, each with 4 GB of RAM, as compute nodes, plus a console node and an admin node, all perfectly ordinary commodity servers. Any IT person worth his or her salt could have set them up in a few hours. Things got a little less ordinary when you looked at how they were connected: to wring the best performance out of what was essentially a very compact network, Dell used InfiniBand, a high-speed communications technology designed for applications such as HPCCs and SANs. The machines also had twin Gigabit Ethernet NICs for less demanding communing.
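For the curious, here is a tiny sketch of the kind of work Linpack would shortly be asking those boxes to do: solve a dense system of linear equations and count the floating-point operations. It's my own illustration in Python/NumPy, not the actual HPL benchmark Dell ran, and the problem size is an arbitrary assumption.

```python
# Minimal Linpack-style sketch (illustrative only, not the real HPL code):
# solve a dense system Ax = b and estimate the floating-point rate.
import time
import numpy as np

n = 4000                                   # problem size (assumption; HPL tunes this)
rng = np.random.default_rng(0)
A = rng.random((n, n))
b = rng.random(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3                 # standard operation count for an LU solve
print(f"Residual: {np.linalg.norm(A @ x - b):.2e}")
print(f"~{flops / elapsed / 1e9:.1f} gigaflops on this machine")
```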

But the really special part was the software that ran the show: Platform Rocks, from Markham, Ont.-based Platform Computing, a customized version of the San Diego Supercomputer Center’s Rocks cluster deployment and management software. It was outrageously easy to install. I installed the master node, including Red Hat Linux, from CD, answered a few questions and was then able to initiate the installation of the software for the compute nodes over the network.

And how did it perform? Did it rival BlueGene? Um, well, not exactly. My humble 28-processor cluster managed to bop along in Linpack at 31 gigaflops. IBM’s cluster had, I kid you not, 131,072 processors and 32,768 GB of memory, so it had a bit of an advantage.
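To put that gap in perspective, a quick bit of back-of-the-envelope arithmetic (my own, using only the figures quoted above) suggests the two clusters aren't so far apart per processor; BlueGene/L's real advantage is sheer scale.

```python
# Back-of-the-envelope comparison using the numbers quoted in the article.
my_cluster = 31e9 / 28              # 31 gigaflops across 28 processors
blue_gene = 280.6e12 / 131_072      # 280.6 teraflops across 131,072 processors
print(f"My cluster: {my_cluster / 1e9:.2f} gigaflops per processor")
print(f"BlueGene/L: {blue_gene / 1e9:.2f} gigaflops per processor")
```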

But my cluster can be had for under $150,000, including implementation services, and you have to add several more zeroes to get even a fragment of BlueGene/L (it, along with two smaller systems, cost US$290 million).

Pity it wouldn’t fit into my carry-on.
