High-Speed Links Favor Ethernet

Interconnect techniques for storage and data center gear highlight Ethernet growth

June 25, 2005

4 Min Read

Ethernet dominates high-speed interconnection in the world's biggest supercomputing sites, and it's likely to keep that momentum.

The unveiling of the world's biggest supercomputers at this week's International Supercomputer Conference in Heidelberg, Germany, showed Gigabit Ethernet is the preferred network in 42.4 percent of the top 500 sites (see Top Supercomputers Revealed).

Myrinet, a packet-based switching technology sold by Myricom Inc., is in second place, used in 28.2 percent of sites. InfiniBand, despite the hype of vocal supporters, occupies just over 3 percent of sites. The rest of the top supercomputers rely on a range of proprietary and even arcane solutions.

Table 1: Top 500 Supercomputer Interconnects

Interconnect         Count   Share %
Gigabit Ethernet     212     42.4
Myrinet              141     28.2
SP Switch            45      9.0
NUMAlink             21      4.2
Crossbar             21      4.2
Proprietary          17      3.4
InfiniBand           16      3.2
Quadrics             13      2.6
Cray Interconnect    7       1.4
Mixed                3       0.6
Fireplane            2       0.4
HIPPI                1       0.2
Giganet              1       0.2
All                  500     100.0

Gigabit Ethernet has risen 11 percent since the last Top 500 list in November 2004; Myrinet use has fallen 22 percent. While use of the other technologies fluctuates, Ethernet appears to be a clear winner in the world's fastest sites.

The increased popularity of supercomputing in commercial networks makes the trend toward Ethernet in the Top 500 worth some reflection (see Invasion of the Coneheads). If Ethernet can accommodate the requirements of these hefty sites, it should be able to grow in grids, storage networks, and clusters elsewhere.
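The share column in Table 1 is simply each interconnect's system count divided by the 500 machines on the list. A quick sketch, using the counts from the table, reproduces those percentages:

```python
# System counts per interconnect, taken from Table 1 above.
counts = {
    "Gigabit Ethernet": 212,
    "Myrinet": 141,
    "SP Switch": 45,
    "NUMAlink": 21,
    "Crossbar": 21,
    "Proprietary": 17,
    "InfiniBand": 16,
    "Quadrics": 13,
    "Cray Interconnect": 7,
    "Mixed": 3,
    "Fireplane": 2,
    "HIPPI": 1,
    "Giganet": 1,
}

total = sum(counts.values())  # 500 systems on the Top 500 list
shares = {name: round(100 * n / total, 1) for name, n in counts.items()}

print(total)                        # 500
print(shares["Gigabit Ethernet"])   # 42.4
print(shares["Myrinet"])            # 28.2
```

The counts sum to exactly 500, and Gigabit Ethernet's 212 systems work out to the 42.4 percent share cited above.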

Evidence that Ethernet continues to drive the bus showed in several high-speed interconnect announcements at this week's show:

  • Myricom, seeing the handwriting on the wall, introduced Myri-10G, a network that adds 10-Gbit/s Ethernet functionality and interoperability to the vendor's Myrinet gear (see Myricom Brings HPC to Ethernet). The company's CEO Chuck Seitz says Myrinet is tapping its Ethernet roots to become fully interoperable with Ethernet, while providing higher speed at lower cost. The new Myri-10G gear, set to ship in September, won't require any changes to the existing Ethernet equipment in a given network.

  • Ammasso Inc. announced a new milestone for its Remote Direct Memory Access (RDMA)-over-Ethernet (iWARP) adapters for server-to-server links. Like a handful of other suppliers, including Chelsio Communications Inc. and Siliquent Technologies Ltd., Ammasso is aiming to tap iWARP's ability to improve Ethernet performance by using RDMA to increase CPU efficiency and throughput.

    While Ammasso's adapters work off the shelf with any Ethernet equipment, the vendor says that taking full advantage of its RDMA capabilities means tweaking existing interfaces with Ammasso's APIs. To simplify matters, Ammasso has certified interoperability with the message-passing interface used by Hewlett-Packard Co. (NYSE: HPQ) in that vendor's technical and cluster computing products (see HP Verifies Ammasso Adapters). This could be a boon, given that HP itself is looking to build up its clustering base (see HP Polishes Its Clusters). Ammasso says it's working with other suppliers on interoperability as well.

  • Level 5 Networks Inc. released its EtherFabric, a device that accelerates Ethernet by removing TCP/IP processing from the operating system kernel and handling it in the application via a special runtime library (see Level 5 Networks). EtherFabric is sold as a two-port, 1-Gbit/s PCI adapter for use in Linux-based blade servers, Web servers, clusters, and storage devices. So far, just one other supplier, Precision I/O, has a similar solution.

These solutions all differ in their levels of interoperability with existing networks. Many of their claims for performance improvements, particularly in areas like latency and bandwidth, remain to be proven. They also aren't the only developments in high-speed interconnection. But unlike news emerging from InfiniBand suppliers like Mellanox Technologies Ltd., they are focused on leveraging Ethernet. And at least one analyst thinks that could ensure their future (see Mellanox Doubles InfiniBand).

"The short answer is one I learned over the last 20 years: 'never bet against Ethernet,'" writes David Passmore, research director at the Burton Group, when asked via email about the prospects of the various interconnects. Passmore maintains that solutions like the ones mentioned above are aimed at key problems holding Ethernet back from high-speed connectivity in data centers and storage. As these problems are solved, Ethernet will squeeze out some other technologies. "Fibre Channel will soon succumb to iSCSI in storage networks," he writes. And InfiniBand will be a "niche technology."

Mary Jander, Site Editor, Byte and Switch
