OAK RIDGE, Tenn. -- Clusters are grabbing plenty of attention, but "approach with caution" is the message from users at the High Performance Computing User Forum at the Oak Ridge National Laboratory this week.
Over recent months, a number of vendors have been touting clusters as a cost-effective technology for high-performance computing. (See IBM's Cluster Bluster, New HP Clusters Simplify HPC, and Sun Intros Bioinformatics Cluster.) Some users have already turned to clusters of standard, low-cost servers as an alternative to traditional supercomputers. (See Sandia Blasts Off Blade Cluster and Luebeck Looks to Clusters.)
But Michael Resch, the director of the high-performance computing center at the University of Stuttgart in Germany, warns users not to be blinded by all the cluster bluster. "There has been a lot of hype, but it's not justified," he says.
While clusters might be a good fit for bioinformatics or particle physics applications, Resch says that supercomputers offer much faster processing speeds. Thanks largely to their memory subsystems and interconnects, he maintains, supercomputers are ideal for the likes of fluid dynamics and weather forecasting.
Resch has experience with both clusters and supercomputers. In addition to its supercomputer, which uses 576 vector processors from NEC Corp. (Nasdaq: NIPNY; Tokyo: 6701), the University of Stuttgart also has two clusters in its high-performance computing center. One of these uses 400 Intel Corp. (Nasdaq: INTC) PC processors. The second cluster relies on 256 Advanced Micro Devices (NYSE: AMD) Opteron chips and is used by center client Porsche for crash test modeling.