Clusters vs Supercomputers

Despite cluster hype, users say supercomputers may be the better fit for some applications

September 28, 2005

OAK RIDGE, Tenn. -- Clusters are grabbing plenty of attention, but “approach with caution” is the message from users at the High Performance Computing User Forum at the Oak Ridge National Laboratory this week.

Over recent months, a number of vendors have been touting clusters as a cost-effective technology for high-performance computing. (See IBM's Cluster Bluster, New HP Clusters Simplify HPC, and Sun Intros Bioinformatics Cluster.) Some users have already turned to clusters of standard, low-cost servers as an alternative to traditional supercomputers. (See Sandia Blasts Off Blade Cluster and Luebeck Looks to Clusters.)

But Michael Resch, the director of the high-performance computing center at the University of Stuttgart in Germany, warns users not to be blinded by all the cluster bluster. “There has been a lot of hype, but it’s not justified,” he says.

While clusters might be a good fit for bioinformatics or particle physics applications, Resch says supercomputers offer much faster processing speeds on other workloads. Thanks largely to their memory subsystems and interconnects, he maintains, supercomputers are ideal for the likes of fluid dynamics and weather forecasting.
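Resch's distinction comes down to how much data an application moves relative to how much it computes. The sketch below is a rough, hypothetical model of that tradeoff; the node speeds, link rates, and workload sizes are illustrative assumptions, not figures from the Stuttgart or GM systems.

```python
# Back-of-envelope model: why interconnect and memory bandwidth can matter more
# than aggregate flops. All numbers below are illustrative assumptions only.

def time_per_step(flops_needed, bytes_exchanged, node_gflops, interconnect_gbps):
    """Rough time (seconds) for one iteration on one node:
    compute time plus time to exchange boundary data with neighbors."""
    compute_s = flops_needed / (node_gflops * 1e9)
    comm_s = bytes_exchanged / (interconnect_gbps * 1e9 / 8)  # Gb/s -> bytes/s
    return compute_s + comm_s

# Hypothetical commodity cluster node vs. hypothetical vector supercomputer node.
cluster = dict(node_gflops=5, interconnect_gbps=1)    # Gigabit-Ethernet-class link
vector = dict(node_gflops=8, interconnect_gbps=100)   # fast custom interconnect

# Workload A: embarrassingly parallel, little data exchanged per step.
# Workload B: fluid-dynamics-style stencil, large halos exchanged every step.
workloads = {
    "compute-bound (e.g. bioinformatics)": (5e9, 1e5),
    "communication-bound (e.g. CFD/weather)": (5e9, 5e8),
}

for name, (flops, nbytes) in workloads.items():
    tc = time_per_step(flops, nbytes, **cluster)
    tv = time_per_step(flops, nbytes, **vector)
    print(f"{name}: cluster {tc:.2f}s/step, vector machine {tv:.2f}s/step")
```

With these made-up numbers, the cluster is roughly competitive on the compute-bound case but several times slower on the communication-bound one, which is the kind of gap Resch is describing.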

Resch has experience with both clusters and supercomputers. In addition to its supercomputer, which uses 576 vector processors from NEC Corp. (Nasdaq: NIPNY; Tokyo: 6701), the University of Stuttgart also has two clusters in its high-performance computing center. One of these uses 400 Intel Corp. (Nasdaq: INTC) PC processors. The second cluster relies on 256 Advanced Micro Devices (NYSE: AMD) Opteron chips and is used by center client Porsche for crash test modeling.

Resch believes that one of the best things about clustering is cost. “The good thing about the cluster is that with a small amount of money, small research groups can get a reasonable amount of performance.”

Sharan Kalwani, a high-performance computing specialist at General Motors Corp., agrees that clusters are not ideal for every type of application. “Clusters work only for a certain class of problem,” he says. “The I/O bandwidth is not there.”

Kalwani, who has used both clusters and supercomputers at GM, tells NDCF that clusters are more appropriate for highly compute-intensive applications that need little I/O. “Always use the right tool for the right job,” he notes.

For its part, GM has taken the supercomputer route for its crash testing and design, unveiling a new IBM Corp. (NYSE: IBM) system last year. This helped push the firm’s supercomputer capacity up from 4,982 gigaflops, or billions of floating-point operations per second, to over 11,000 gigaflops, according to Kalwani. (See GM Buys Major IBM Supercomputer and IBM Speeds GM Crash Tests.)

Clearly, time is money in the automobile industry. With the new supercomputer, GM can get its cars to market within 18 months, Kalwani told attendees at Oak Ridge. This is a stark contrast to nine years ago, when it took a full 48 months to design and launch a car, and Kalwani says GM is looking to push the envelope even further. “I have just been handed my next assignment,” he says. “It’s a year!”

GM’s obsession with time is hardly surprising. “Every month we shave off new car development, it’s [worth] almost $200 million,” explains Kalwani.
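Taking Kalwani's figures at face value, the arithmetic behind that obsession is straightforward; the calculation below is purely illustrative and treats his "almost $200 million" per month as a flat approximation.

```python
# Illustrative arithmetic using the figures Kalwani cites: roughly $200M of
# value for every month shaved off new-car development time.
VALUE_PER_MONTH = 200e6  # dollars, approximate figure from Kalwani

cycle_then = 48    # months, roughly nine years ago
cycle_now = 18     # months, with the current supercomputer
cycle_target = 12  # months, the "one year" assignment

saved_so_far = (cycle_then - cycle_now) * VALUE_PER_MONTH
saved_next = (cycle_now - cycle_target) * VALUE_PER_MONTH

print(f"Value of 48 -> 18 months: ${saved_so_far / 1e9:.1f}B")   # ~$6.0B
print(f"Value of 18 -> 12 months: ${saved_next / 1e9:.1f}B")     # ~$1.2B
```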

Kalwani says GM’s supercomputer system in North America contains more than 2,000 IBM Power 5 processors, as well as 40 terabytes of disk. Dense wavelength-division multiplexing (DWDM) links are used for networking, he adds. In addition to IBM, he says, GM also partners with Sun Microsystems Inc. (Nasdaq: SUNW), Cisco Systems Inc. (Nasdaq: CSCO), Storage Technology Corp. (StorageTek) (NYSE: STK), EMC Corp. (NYSE: EMC), and Brocade Communications Systems Inc. (Nasdaq: BRCD) to support its design and testing efforts.

GM and the University of Stuttgart are not the only organizations that like the idea of supercomputers. The Oak Ridge National Lab itself, for example, has opted to deploy a beast of a supercomputer, citing shortcomings in cluster technology. (See Oak Ridge Plans Petaflop Supercomputer.)

— James Rogers, Site Editor, Next-Gen Data Center Forum
