SAN JOSE, Calif. — The U.S. government is relying too heavily on clusters of commodity microprocessors and needs to spend more time and money nurturing new custom supercomputer designs, according to a report released Tuesday (Nov. 8) by the National Research Council.
The government should draft, with broad support from industry and academia, a road map for its supercomputer needs for the next five to 10 years. It should also spend an additional $140 million a year on hardware and software research into custom supercomputer designs, according to the two-year report commissioned by the Energy Department.
Thanks to their compelling price/performance, clusters of servers, typically x86-based, have become mainstream in high-performance computing, accounting for more than half of the systems on the Top500 list. However, such systems won't keep pace with the needs of government and academic researchers, according to the report, called "Getting Up to Speed: The Future of Supercomputing."
"The government can't keep stringing together commodity processors and getting the benefits from that. That's going to slow down," said Susan L. Graham, computer science professor at the University of California, Berkeley, and co-chair of the report.
Slowing improvements in single-thread microprocessor performance and growing memory and network latencies will undercut the benefits of the relatively low-cost clusters. That will lead to a generation of subpar supercomputers, with nothing in the pipeline to take their place, according to the report, which was reviewed by 17 researchers in the field.
"We are extrapolating that by 2020 a computer node can execute a million instructions in the time it takes to communicate with another node," said Marc Snir, head of the computer science department at the University of Illinois and co-chair of the report. "That's not tenable. We don't know how to write algorithms for such machines," Snir added.
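The scale of the gap Snir describes can be sketched with a back-of-the-envelope calculation. The clock rate, instruction throughput, and latency figures below are illustrative assumptions chosen to produce a million-instruction gap, not values taken from the report:

```python
# Back-of-envelope estimate of the compute/communication gap Snir describes.
# All three numbers below are hypothetical, picked for illustration only.

CLOCK_HZ = 10e9     # assumed per-node clock rate: 10 GHz
IPC = 4             # assumed instructions retired per cycle
LATENCY_S = 25e-6   # assumed node-to-node message latency: 25 microseconds

# Instructions a node could execute while one message is in flight
instructions_per_message = CLOCK_HZ * IPC * LATENCY_S
print(f"{instructions_per_message:,.0f} instructions per remote message")
# → prints "1,000,000 instructions per remote message"
```

The point of the arithmetic is that the gap is a product of per-node speed and interconnect latency: as processors get faster while latency stays roughly flat, the number of instructions wasted per communication grows, which is why the report argues commodity clusters alone won't keep pace.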
Snir said the trend threatens to harm the research agenda of scientists across a broad range of fields, from climate science to physics.