Building your own scaled-down supercomputer is a piece of cake

When I think of a supercomputer, I still envision a monster box, its console ablaze with flashing lights. Alas, supercomputers are much duller today. In fact, according to the Top 500 list, more than half of the most powerful computers on the planet aren't single machines at all; they're groups of computers cobbled together into what are known as high-performance computing clusters (HPCCs).

Still, when Dell Computer invited me to build my own supercomputer, my inner geek rejoiced. I’d get to spend a couple of days with engineers in Dell’s labs, putting together an HPCC and benchmarking it.

The folks behind the Top 500 list rank the performance of general-purpose systems using a benchmark called Linpack, which exercises the machines by solving a dense system of linear equations. All that means is that the computers or clusters are bashing their little electronic brains out performing millions of 64-bit floating-point operations per second. No. 1 on the Top 500 list, IBM's BlueGene/L, which lives at Lawrence Livermore National Laboratory in California, scored 280.6 teraflops. My supercomputer was a bit more modest: 14 dual-core, dual-processor 2.66 GHz servers, each with 4 GB of RAM, as compute nodes, plus a console node and an admin node, also perfectly ordinary commodity servers. Any IT person worth his or her salt could have set them up in a few hours. Things got a little less ordinary when you looked at how they were connected: to wring the best performance out of what was essentially a very compact network, Dell used InfiniBand, a high-speed communications technology designed for applications such as HPCCs and SANs. The machines also had twin Gigabit Ethernet NICs for less demanding communications.
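For the curious, the idea behind Linpack can be sketched in a few lines. This is not the official benchmark (that's the HPL package), just a minimal illustration, assuming NumPy is available: it times the solution of a dense n-by-n system of 64-bit equations and converts the standard Linpack operation count, 2/3·n³ + 2·n², into gigaflops.

```python
import time
import numpy as np

def linpack_style_gflops(n=1000, seed=0):
    """Time a dense n x n 64-bit linear solve and report gigaflops
    using the conventional Linpack flop count (2/3 n^3 + 2 n^2)."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))   # dense coefficient matrix
    b = rng.standard_normal(n)        # right-hand side
    start = time.perf_counter()
    x = np.linalg.solve(a, b)         # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9, x

gflops, x = linpack_style_gflops()
```

A single desktop core running this will land nowhere near a cluster's score, which is exactly why HPCCs spread the work across many processors over a fast interconnect.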

But the really special part was the software that ran the show: Platform Rocks, a customized version of the San Diego Supercomputer Center's Rocks cluster deployment and management product, from Markham, Ont.-based Platform Computing. It was outrageously easy to install. I installed the master node, including Red Hat Linux, from CD, answered a few questions, and was then able to initiate installation of the software on the compute nodes over the network. And how did it perform? Did it rival BlueGene? Um, well, not exactly. My humble 28-processor cluster managed to bop along in Linpack at 31 gigaflops. IBM's cluster had, I kid you not, 131,072 processors and 32,768 GB of memory, so it had a bit of an advantage.
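To put that gap in perspective, a quick back-of-envelope calculation from the figures above (plain arithmetic, not from the benchmark reports) shows the per-processor throughput of the two systems:

```python
# Figures quoted in the article.
dell_gflops, dell_procs = 31.0, 28
bluegene_gflops, bluegene_procs = 280_600.0, 131_072  # 280.6 teraflops

# Gigaflops delivered per processor.
dell_per_proc = dell_gflops / dell_procs              # roughly 1.1
bluegene_per_proc = bluegene_gflops / bluegene_procs  # roughly 2.1
```

In other words, BlueGene/L's edge comes mostly from sheer processor count; per processor, the little Dell cluster held its own surprisingly well.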

But my cluster can be had for under $150,000, including implementation services; you have to add several more zeroes to get even a fragment of BlueGene/L (it, and two smaller systems, cost US$290 million).

Pity it wouldn’t fit into my carry-on.

Lynn Greiner
Lynn Greiner has been interpreting tech for businesses for over 20 years and has worked in the industry as well as writing about it, giving her a unique perspective into the issues companies face. She has both IT credentials and a business degree.
