What A Super Computer!

You may not realize the myriad ways in which supercomputing can affect midsize and even small businesses.

August 30, 2004


Late last month, Silicon Graphics Inc. (SGI) said it has a deal with the National Aeronautics and Space Administration (NASA) to develop and install a really super computer. The new monster machine will have 10,240 processors arranged in five "nodes," each with an associated 2TB of memory that all of the processors can share simultaneously.

SGI says it's really a gonzo machine. With this super computer, NASA will be able to run theoretical studies that weren't possible before, and it will also give other government agencies time on the machine for their own high-performance computing requirements.

But if you're in the data center of a medium or large corporation, you are probably saying, "So what? How does this information help me keep my users happy, keep my email flowing, keep my databases online?"

While it might not, you ought to know about it nonetheless: machines like this one probably won't become mainstream, but they have capabilities you may well find in the enterprise at some point in the future.

Indeed, smaller versions of this mega monster have been sold to commercial enterprises for what might be called ordinary data-processing applications, although Jeff Greenwald, director of marketing for SGI's Server Group, says the company prefers to concentrate on the high-performance computing arena.

The NASA computer is based on Intel's Itanium 64-bit processors. SGI also has another line of such computers based on the MIPS chip. But the Itanium machine runs an SGI version of Linux, which allows for very high scalability with that chip and operating system, according to Greenwald.

The thing that makes this machine so significant, however, is its memory architecture. Unlike computers that tie a given amount of memory to a processor or group of processors, Greenwald says, this architecture, which SGI calls the NUMAlink interconnect fabric, makes all the memory in the computer available to every processor simultaneously. This, SGI says, greatly reduces memory-to-processor delays; rather than passing data around in messages, processors reach it directly, so transfers happen much more quickly. Moreover, give the system enough memory and you can hold an entire dataset in memory, so there are no disk-to-memory transfers while an application runs, which vastly speeds up execution.
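To make that concrete, here is a minimal sketch, not SGI code, of what a globally shared memory buys a programmer: every thread in a parallel program can read the same in-memory dataset directly, with no message passing and no disk I/O during the computation. It uses plain C with OpenMP (compile with -fopenmp); the array size and the workload are invented for illustration.

```c
/* Minimal shared-memory sketch: all threads work on one in-memory array. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1 << 24)   /* illustrative dataset size, not an SGI figure */

int main(void)
{
    double *data = malloc(N * sizeof *data);
    if (!data)
        return 1;

    /* Load the entire dataset into memory once, up front. */
    for (long i = 0; i < N; i++)
        data[i] = (double)i;

    double sum = 0.0;

    /* Each thread processes its slice of the same shared array;
     * no data is copied or sent between processors. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += data[i];

    printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
    free(data);
    return 0;
}
```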

You can think of the system architecture, which comprises processors, memory, and switches, as being akin to a crystal lattice, in which atoms occupy fixed points, held together by bonding forces. In the case of the NUMAlink, the processors, memory and switches are like the atoms, and the high-bandwidth interconnect can be thought of as the "bond" that holds the whole thing together, explains Andy Fenselau, SGI's director of strategic technical initiatives.

"It's the fourth generation of a technology we started on 14 years ago," he says. "You can think of it as a flexible, switched backplane." With such a scheme, each of the processors in the system can get to the memory it needs when it needs to, eliminating many memory latency problems.

But for a data center, that isn't the thing that really raises eyebrows. "At the data center," says Greenwald, "the person who gets the most excited about this system is the data-center manager, because he gets to manage fewer things." He explains that because each multiprocessor node can access the entire system's global shared memory, the system looks like a cluster of as many computers as there are nodes. For the system NASA is deploying, for instance, "from the management perspective, it's easier to deploy and manage this architecture, which looks like five 2,048-processor nodes," instead of the much more complex setup that competing architectures would require. "That super cluster capability is what [the data-center manager] manages at the system level. That's the 'Aha!'"
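For the curious, here is a small illustrative sketch, assuming a Linux box with libnuma installed (link with -lnuma), of how management tooling might ask a single system image how many memory nodes it presents and how much memory each one holds. This is generic Linux NUMA code, not SGI's management software.

```c
/* Query the NUMA topology that one system image exposes. */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int nodes = numa_num_configured_nodes();
    printf("configured memory nodes: %d\n", nodes);

    for (int n = 0; n < nodes; n++) {
        long long free_bytes;
        long long size = numa_node_size64(n, &free_bytes);
        printf("node %d: %lld MB total, %lld MB free\n",
               n, size >> 20, free_bytes >> 20);
    }
    return 0;
}
```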

There aren't very many organizations that need thousands of processors for number crunching. But the neat thing about this architecture is that it starts small, and it can be tailored by the number of processors per node, nodes per system, and memory per system. So any data center could start small and add on as data sets and requirements (inevitably) grow larger and larger.
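As a back-of-the-envelope illustration of that sizing math, here is a tiny C sketch. The "small start" values are invented; the "NASA-scale" line simply restates the article's figures of five nodes with 2,048 processors and 2TB of memory apiece.

```c
/* Back-of-the-envelope scaling: totals follow from three knobs. */
#include <stdio.h>

static void show(const char *label, int nodes, int cpus_per_node, int tb_per_node)
{
    printf("%s: %d nodes x %d processors = %d processors, %d TB shared memory\n",
           label, nodes, cpus_per_node, nodes * cpus_per_node, nodes * tb_per_node);
}

int main(void)
{
    show("small start", 2, 256, 1);    /* hypothetical entry configuration */
    show("NASA-scale ", 5, 2048, 2);   /* 10,240 processors, 10 TB in total */
    return 0;
}
```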

So why should you care about some supercomputer announcement? Because the technology could make your life a lot easier, if and when it gets to your shop. And in some places, it's already there.

David Gabel has been testing and writing about computers for more than 25 years.
