University of British Columbia, IBM have created a Monster

The University of British Columbia has a Monster on its hands, and it will be using it to prepare the province for forest fires, earthquakes and avalanches.

Monster is the code-name for an IBM eServer xSeries-based Linux supercomputer that will be used by the University of British Columbia’s GeoDisaster Centre. Running Red Hat Linux 6.2 on 264 1 GHz Pentium processors, the system is capable of 170 billion calculations per second.
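
As a rough sanity check on those figures (assuming the quoted “calculations per second” means floating-point operations, which the article doesn’t specify), the aggregate number works out to roughly 640 million operations per second per processor:

```python
# Back-of-envelope arithmetic on the article's quoted figures (an assumption:
# "calculations" is taken to mean floating-point operations).
total_ops_per_second = 170e9   # 170 billion calculations per second
processors = 264               # Pentium CPUs in the cluster
print(f"~{total_ops_per_second / processors / 1e6:.0f} million ops/s per processor")  # ~644
```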

Weather forecasts are made on the machine through grid-point models that use fluid dynamics codes, which break a fluid down into small volumes; the fluid, in this case, is the atmosphere. With a bigger computer, GeoDisaster Centre director Roland Stull said, UBC can use smaller grid cells. That matters in mountainous regions like British Columbia, where covering a wide area at fine resolution creates a very large computational domain.
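
For readers unfamiliar with grid-point models, the toy sketch below shows the simplest kind of calculation they perform: carrying (advecting) a quantity such as moisture across a one-dimensional row of grid cells. It is purely illustrative, not UBC’s actual forecast code, and the wind speed, grid spacing and time step are assumed values.

```python
# Minimal 1-D upwind advection sketch: the simplest grid-point calculation a
# fluid dynamics code performs. Illustrative only; not UBC's model.
import numpy as np

def advect(q, wind, dx, dt):
    """Advance field q one time step under a constant wind (upwind scheme)."""
    c = wind * dt / dx                   # Courant number; must stay below 1
    assert abs(c) < 1.0, "time step too large for this grid spacing"
    return q - c * (q - np.roll(q, 1))   # each cell draws on its upwind neighbour

# A moisture-like blob carried across a 100-cell periodic domain.
q = np.exp(-((np.arange(100) - 20.0) ** 2) / 30.0)
for _ in range(200):
    q = advect(q, wind=10.0, dx=90_000.0, dt=3_600.0)  # 90 km cells, 1 h steps
```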

Right now, the school is running a nested model whose coarsest mesh uses 90-kilometre grid cells. Stull hopes the grid cells will get down to 3.3 km, which will provide better resolution for the images the researchers study. “That way we’ll be able to see differences in rain on one side of a mountain or another, versus the broad brush strokes that we can get now from the computer,” he said.
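
A back-of-envelope sketch shows why that refinement is so demanding (the 1,800 km domain size here is an assumed, illustrative figure, not one from UBC): shrinking the grid spacing multiplies the horizontal cell count by the square of the refinement factor, and because the stable time step shrinks along with it, total cost grows roughly with the cube.

```python
# Illustrative scaling of compute cost with grid spacing; the domain size is
# an assumption, not a UBC figure.
domain_km = 1800.0  # assumed horizontal extent of the coverage area

for spacing_km in (90.0, 30.0, 10.0, 3.3):
    cells = (domain_km / spacing_km) ** 2          # horizontal cells only
    relative_cost = (90.0 / spacing_km) ** 3       # finer grids also need shorter steps
    print(f"{spacing_km:5.1f} km grid: {cells:9.0f} cells, ~{relative_cost:6.0f}x the 90 km cost")
```

Going from 90 km to 3.3 km cells is a refinement factor of about 27, or very roughly 20,000 times the work, which is why each step down in grid size calls for a much bigger machine.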

Before Monster, the research team at UBC used an eight-processor Origin 2000, a four-processor Origin 200, a home-made Beowulf cluster of Intel boxes and a few smaller SGI Unix workstations.

Denis Staples, IBM Canada’s client manager for B.C. higher education and research, said IBM has seen similarly large Linux clusters in other domains, including oil exploration research, chemistry, and the sort of high-energy particle physics being done at CERN (the European Organization for Nuclear Research) in Switzerland.

“I don’t think life sciences and weather are the only domains where this is popular,” he said. “It’s a pretty significant trend in supercomputing across the board. As long as the applications are Linux-friendly and anywhere where pure research is being done, that’s a common trend, because you’ve got university researchers who are close to the Linux movement anyway.”

Every day the forecasters pull in gigabytes of data to use as the initial conditions for their forecasts. After that, the team spends several hours producing graphics and animations of the forecasts. A two-day forecast is finished in about half a day. The new computer will allow the researchers to expand this forecast from just Vancouver and Victoria to the entire province, Stull said. The time pressure is also much different than in other industries that use fluid dynamics codes, such as modelling the flow through a jet engine.

“Most of these people can let the computer take as long as it needs in order to get to the solution,” he said. “In weather forecasting, we don’t have that luxury. We need the forecast to finish before the weather actually happens, otherwise the forecast is useless. We have the need for speed.”

There are several large government forecast centres, such as the Canadian Meteorological Centre in Montreal, the U.S. National Centers for Environmental Prediction in Washington, D.C., and the European Centre for Medium-Range Weather Forecasts. All of these centres have supercomputers, but each tends to run its own version of the fluid dynamics codes for the atmosphere because of the various numerical approximations that have to be made. When they run them, they get slightly different forecasts, but in this case diversity of opinion is helpful to the research.

“We actually like that, because there’s no way of knowing ahead of time which of the forecasts will be better on any given day,” Stull said.

Many of the centres are collaborating to create an “ensemble” of forecasts that would have good statistical properties to improve the overall accuracy of predictions. “The equations that describe the atmosphere aren’t known exactly yet,” Stull said. “It’s better to have many different scientists trying the approaches.”
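
Here is a minimal sketch of the ensemble idea, under the assumption that members are combined by simple averaging (centres may weight them differently in practice): pool several slightly different forecasts, take their mean as the best guess, and use their spread as a measure of confidence. The numbers and centre labels below are invented for illustration.

```python
# Toy ensemble: invented 48-hour precipitation forecasts (mm) from four
# hypothetical centres at three grid points.
import numpy as np

forecasts = np.array([
    [12.0, 14.5,  9.8],   # centre A
    [10.5, 13.0, 11.2],   # centre B
    [13.8, 15.1,  8.9],   # centre C
    [11.1, 12.4, 10.5],   # centre D
])

ensemble_mean = forecasts.mean(axis=0)    # often beats any single member on average
ensemble_spread = forecasts.std(axis=0)   # wide spread flags low-confidence points
print("mean:", ensemble_mean)
print("spread:", ensemble_spread)
```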

Staples said Monster will also ease network administration for UBC through Cable Chaining Technology, which allows the cluster to be managed from a single console.

“As soon as you put lots and lots of 1U packaged servers in a rack environment, then you need everything possible to reduce the cable clutter,” he said. “We think we’re going to get more density in the racks for big Web farms and big research clusters.”

The Monster project is 40 per cent funded by the Canada Foundation for Innovation (CFI), with another 40 per cent from the B.C. Knowledge Development Fund and 20 per cent from endowments to the university.

The machine has been installed and went through its acceptance testing at the end of December. Stull said the school is in the process of porting all of its models to it and hopes to have it at maximum use in about a month.
