
USC's Cluster Buster

Talk all you like about blades and virtualization. When it came time to upgrade its major data center, the University of Southern California (USC) shunned the conventional technical and marketing wisdom and opted for standard servers with the latest processors to boost supercomputing power by 50 percent.

Back in September, USC deployed a total of 360 V20z servers from Sun Microsystems Inc. at its Center for High Performance Computing and Communications. Jim Pepin, USC's CTO, told Byte and Switch that the new servers have already pushed the cluster's performance from 7.1 teraflops (trillions of calculations per second) to 10.75 teraflops.
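For readers who want to check the headline figure, a quick back-of-the-envelope calculation from the article's numbers (7.1 and 10.75 teraflops) confirms that the gain is slightly above 50 percent:

```python
# Sanity check of the reported performance gain, using the
# teraflop figures quoted in the article.
old_tflops = 7.1    # cluster performance before the upgrade
new_tflops = 10.75  # cluster performance after the upgrade

increase = (new_tflops - old_tflops) / old_tflops
print(f"{increase:.1%}")  # prints "51.4%", consistent with the ~50% claim
```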

According to Pepin, the capacity hike is critical to the Center’s work, which ranges from earthquake research to language translation and genetics. “It increases the number and scope of the problems that we can simulate,” he explains. This involves, for example, producing more seismic data from earthquake tests or increasing the sheer volume of USC’s genome work.

With the new Sun servers, which are based on AMD Opteron processors, USC also grew its cluster from 1,716 to 1,830 compute nodes. The Sun machines replaced 256 servers based on Intel Pentium III chips that had previously been part of the cluster.

The Intel-based machines had become obsolete, according to USC execs, and have now been redeployed on smaller projects elsewhere in the center.
