Cornell University

Its high-performance computing center gets ready to deploy an InfiniBand cluster

September 13, 2002


Cornell Theory Center (CTC), the high-performance computing and research center at Cornell University, is ready to roll on a 16-node Windows server cluster using... wait for it... InfiniBand (see Cornell Center Clusters InfiniBand).

The fabric is expected to be the first Windows-based InfiniBand cluster put into production in a real-world environment.

The announcement is important for a couple of reasons. First, the deployment includes technology from two of the largest original supporters of InfiniBand -- Microsoft Corp. (Nasdaq: MSFT) and Intel Corp. (Nasdaq: INTC) -- both of which have been quietly backing off their commitments to support the architecture (see Microsoft Backs Off InfiniBand and Intel Bails on InfiniBand).

And second, it's a key indicator of how this technology is going to be deployed.

CTC's InfiniBand cluster project consists of 16 servers based on Intel's Xeon processor, connected over an InfiniSwitch Corp. fabric. The HCA (host channel adapter) card supplier is yet to be determined, but is likely to be IBM Corp. (NYSE: IBM) or Mellanox Technologies Ltd., according to Dave Lifka, CTC's CTO. Lifka expects to have the cluster up and running on October 1, 2002. He says Intel's decision not to produce the InfiniBand silicon and cards itself worried him temporarily -- but the company reassured him that, although it's not developing the technology in-house, it continues to support all the startups that are developing it.

"Intel realized that InfiniBand was expensive to develop and would not replace TCP/IP in the data center, but would instead find its place in high-performance computing applications," Lifka says. "It is not the Ethernet replacement they had hoped it would be."

According to Lifka, 10-Gigabit Ethernet doesn't offer the kind of low latency his applications require. Some of Cornell's largest customers are automotive companies for which CTC does crash-test simulations and design analysis. These applications run on parallel clusters and require high bandwidth and low latency, he says. The maximum latency in an InfiniBand cluster is approximately 10 microseconds; Ethernet's latency is roughly an order of magnitude greater, largely because of the overhead of processing TCP/IP.
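For readers unfamiliar with how interconnect latency figures like these are typically arrived at on parallel clusters, the sketch below shows a generic MPI "ping-pong" microbenchmark -- purely illustrative, and not CTC's actual test code. Two processes bounce a small message back and forth; half the average round-trip time approximates the one-way latency quoted above.

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 and rank 1 exchange a small message many times;
 * half the average round-trip time approximates one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char buf[8] = {0};                 /* tiny payload: latency-bound, not bandwidth-bound */
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        /* Divide total time by iterations and by 2 (round trip -> one way). */
        double usec = (MPI_Wtime() - start) / iters / 2.0 * 1e6;
        printf("approx. one-way latency: %.1f microseconds\n", usec);
    }

    MPI_Finalize();
    return 0;
}

Run across two nodes, a test of this kind would report single-digit microseconds over an InfiniBand fabric, versus tens of microseconds or more over TCP/IP on Ethernet of that era -- the gap Lifka describes.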

CTC's InfiniBand project will include 16 of its Dell Computer Corp. (Nasdaq: DELL) 2650 servers at the research center's Manhattan site. The goal over time is to deploy it on 128 dual-processor Xeon servers at CTC's main site upstate in Ithaca, N.Y. "The project starts out at Manhattan, and when we are happy with it and need to scale up, we'll move it to Ithaca," says Lifka.

As these kinds of deployments roll out, the industry can safely assume InfiniBand has found its market. Intel, meanwhile, says that if a clear business case emerges for it to manufacture InfiniBand technology in high volume, "we will revisit it," says Diana Wilson, a company spokeswoman. Currently, though, Intel has no plans to produce its own InfiniBand systems or components.

Jo Maitland, Senior Editor, Byte and Switch
www.byteandswitch.com
