Supercomputers Boost Grid Computing

New research suggests that grid computing is on the rise, driven by major supercomputer deployments in the scientific sector

July 15, 2004

The use of grid computing is growing, driven by U.S. research organizations’ spiraling requirements for more processing power and storage.

The Ohio Supercomputing Center (OSC) and the National Center for Supercomputing Applications (NCSA) today placed orders for some seriously heavy-duty kit from Cray Inc. (Nasdaq: CRAY) and Silicon Graphics Inc. (SGI) (NYSE: SGI).

The latest research figures from Evans Data Corp. say that grid computing use has grown 75 percent over the last six months, with the research sector leading the way.

Research organizations’ need for improved server and mainframe performance, which is measured in millions of instructions per second (MIPS), is growing at an alarming rate, according to Evans Data analyst Joe McKendrick. “Their need for storage and processing is expanding quite rapidly,” he says. “At least three out of four organizations double their MIPS requirement every year.”
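As a back-of-the-envelope illustration of how quickly that compounds (a hypothetical sketch, not an Evans Data figure), a shop that doubles its MIPS requirement annually needs 32 times its baseline capacity within five years:

    # Hypothetical illustration: demand that doubles every year.
    # The 1,000-MIPS baseline is an assumed figure, not from the report.
    baseline_mips = 1_000
    for year in range(1, 6):
        print(f"Year {year}: {baseline_mips * 2 ** year:,} MIPS")
    # Year 5 prints 32,000 MIPS -- 32x the baseline.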

But McKendrick believes the commercial sector is now catching on to grid computing, which involves harnessing the computing power and storage of distributed systems. Evans Data’s research found that 37 percent of database developers are either implementing or planning to implement a grid architecture for their firms.

Research bodies such as the NCSA are looking to add new hardware to further their grid strategies, and McKendrick feels that the commercial sector sees the grid as a way to avoid expenditure on new kit. “Companies can’t afford to keep upgrading their systems and hardware,” he says.

“Grid computing offers an intelligent way for companies to better redeploy IT systems within their enterprises.”
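To make the idea concrete, here is a minimal sketch of the scatter/gather pattern that underpins grid computing, written in Python. A local process pool stands in for a pool of networked grid nodes; the function name simulate_chunk and the worker count are illustrative assumptions, and a real grid scheduler would dispatch the tasks to idle machines across a network instead.

    # Sketch of scatter/gather: split a job into independent chunks,
    # farm them out to workers, and combine the partial results.
    # A local process pool stands in for distributed grid nodes here.
    from concurrent.futures import ProcessPoolExecutor

    def simulate_chunk(chunk_id: int) -> int:
        # Placeholder for a compute-heavy unit of scientific work.
        return sum(i * i for i in range(chunk_id * 100_000))

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:  # 4 stand-in "nodes"
            partials = list(pool.map(simulate_chunk, range(8)))
        print(f"Gathered {len(partials)} partial results; total = {sum(partials)}")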

The Champaign, Ill.–based NCSA is looking to boost its computing capacity to 35 teraflops with the acquisition of a 6-teraflop system from SGI, which has been dubbed Cobalt. This is some seriously speedy equipment: One teraflop equals a trillion floating-point operations per second.

Cobalt consists of an SGI Altix system with 1,024 Itanium 2 processors from Intel Corp. (Nasdaq: INTC), running Linux. The system also contains 3 terabytes of globally accessible memory and 370 terabytes of storage that will serve as the NCSA’s shared file system.

Researchers at the NCSA are planning to link Cobalt up with the Teragrid, a project to harness the processing power of nine separate supercomputing sites across the U.S., including the Argonne National Laboratory and Oak Ridge National Laboratory.

The OSC, however, prefers not to give too much away about its plans for the new Cray X1 supercomputer and XD1 clustering system that will be based at its Springfield, Ohio, data center. A spokesman for the OSC confirmed that, although the systems had not been purchased to be part of any specific initiative such as Teragrid, they are likely to be used for “collaborative experiments” with distributed storage and data-intensive applications.

But the OSC’s new systems bear one striking similarity to the new machine at the NCSA: They are all Linux-based. Use of open-source software is growing as organizations move away from the Unix and AIX operating systems for their grid deployments. “The liberal licensing terms of Linux mean that it can be acquired at very low cost,” says McKendrick.

The other big benefit of going open source is Linux’s similarity to the Unix operating system kernel, which means organizations can quickly retrain their Unix and AIX development staff to work with Linux.

— James Rogers, Site Editor, Next-gen Data Center Forum
