
Ethernet Has A Goldilocks Problem

We're in the midst of a collision between data center networking and enterprise storage. Convergence is the clarion call from the halls of storage giants like EMC, Brocade, NetApp, QLogic and Emulex, as well as from networking powerhouses like Cisco, Intel and Broadcom. Although everyone seems sure that the future will converge on Ethernet, it is not clear how we will get there. Gigabit Ethernet is too slow for converged I/O, and 10Gbit hardware and cabling remain prohibitively expensive. Proponents of "everything over Ethernet" are stymied when they try to make a cost-based case.

For two decades, Ethernet has advanced along a logarithmic scale. I was a Unix administrator when the transition from 10Mbit Ethernet to 100Mbit Fast Ethernet occurred, but I switched to a career in storage (and gigabit-speed Fibre Channel) shortly afterward. When Gigabit Ethernet arrived, the storage state of the art had already moved forward, with 2Gbit and 4Gbit Fibre Channel seeing wide adoption. The performance advantage was clear: Even wide availability of iSCSI initiators and targets and the low cost of Gigabit Ethernet couldn't push the storage community away from Fibre Channel.

Now, as the drumbeat for converged networking grows louder, we see the real impact of the decision to move Ethernet directly to 10Gbit speed. Data centers that already employ Fibre Channel are moving rapidly to the 8Gbit performance point. These are the prime targets for Fibre Channel over Ethernet, but 10Gbit speed holds little attraction for them. It is only when they move to a converged stack like the Cisco/EMC vBlock that adopting an "everything over Ethernet" strategy makes sense.

At this point, many would suggest falling back on cheaper Gigabit Ethernet and iSCSI. One of the many Ethernet bonding techniques should, in theory, allow for a competitive 4Gbit or 8Gbit Ethernet SAN. However, as an experienced network engineer like Ethan Banks would tell you, none of these techniques is truly flexible in practice. Cisco EtherChannel, for example, should allow four Gigabit Ethernet links bonded together to carry iSCSI traffic that rivals 4Gbit Fibre Channel. But the commonly available load balancing methods for EtherChannel hash on packet header fields, so a given flow is always forwarded across the same member link, regardless of congestion. This means that an iSCSI SAN that relies on Gigabit Ethernet and EtherChannel can never compete with the throughput and quality of service offered by Fibre Channel.
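To see why bonding doesn't help a single iSCSI flow, consider a minimal sketch of hash-based link selection. This is not Cisco's actual EtherChannel algorithm, and the addresses and hash function are purely illustrative; the point is that any scheme which picks a member link from header fields pins one initiator-target conversation to one 1Gbit link.

```python
# Illustrative sketch of hash-based link selection (not Cisco's real hash):
# the member link is chosen from the address pair, so every frame between
# the same iSCSI initiator and target rides the same 1Gbit link, no matter
# how idle the other links are.
import hashlib

MEMBER_LINKS = 4  # four bonded Gigabit Ethernet links

def pick_link(src_ip: str, dst_ip: str) -> int:
    """Hash the source/destination pair to a member link index (0..3)."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % MEMBER_LINKS

# One initiator talking to one target: the same index every time,
# so the flow tops out at roughly 1Gbit/s.
for _ in range(3):
    print(pick_link("10.0.0.11", "10.0.0.99"))

# Only distinct address pairs get spread across the bundle.
print(pick_link("10.0.0.12", "10.0.0.99"))
print(pick_link("10.0.0.13", "10.0.0.99"))
```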

Server-side multipathing drivers can offset this issue to a certain extent, but they exist more for high availability than for high performance. Building a faster-than-gigabit iSCSI SAN using multiple Gigabit Ethernet links and multipath I/O requires an extensive investment in software and a very capable storage array on the other side. And none of these techniques addresses the critical server-to-network link: It is impractical in the extreme to connect each server with four or more separate Gigabit Ethernet cables, not to mention the cost of network interface cards and switch ports. A strategy of multiple Gigabit Ethernet links end-to-end just doesn't make sense.
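As a rough illustration of what that host-side software has to do, here is a hypothetical round-robin multipath sketch. It does not reflect any particular MPIO stack's API; the session names and failover logic are assumptions. It shows both halves of the point above: the initiator, not the network, must rotate I/O across several iSCSI sessions, and the mechanism's real value is surviving a failed path rather than raw speed.

```python
# Hypothetical host-side multipathing sketch: rotate I/O requests across
# several iSCSI sessions (one per Gigabit NIC), skipping failed paths.
from itertools import cycle

class MultipathDevice:
    def __init__(self, paths):
        # Track each path's state; all start out active.
        self.paths = {p: "active" for p in paths}
        self._rotation = cycle(paths)

    def submit(self, io_request):
        # Round-robin across active paths; skip failed ones.
        for _ in range(len(self.paths)):
            path = next(self._rotation)
            if self.paths[path] == "active":
                return f"{io_request} -> {path}"
        raise IOError("all paths failed")

dev = MultipathDevice(["iscsi-session-eth0", "iscsi-session-eth1",
                       "iscsi-session-eth2", "iscsi-session-eth3"])
for i in range(4):
    print(dev.submit(f"write block {i}"))
```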
