Ethernet Has A Goldilocks Problem

Stephen Foskett

January 10, 2011

We're in the midst of a collision between data center networking and enterprise storage. Convergence is the clarion call from the halls of storage giants like EMC, Brocade, NetApp, QLogic and Emulex, as well as from networking powerhouses like Cisco, Intel and Broadcom. Although everyone seems sure that the future will converge on Ethernet, it is not clear how we will get there. Gigabit Ethernet is too slow for converged I/O, and 10Gbit hardware and cabling remains prohibitively expensive. Proponents of "everything over Ethernet" are stymied when they try to make the case on cost alone.

For two decades, Ethernet has advanced in order-of-magnitude steps. I was a Unix administrator when the transition from 10Mbit Ethernet to 100Mbit Fast Ethernet occurred, but I switched to a career in storage (and gigabit-speed Fibre Channel) shortly afterward. When Gigabit Ethernet arrived, the storage state of the art had already moved forward, with 2Gbit and 4Gbit Fibre Channel seeing wide adoption. The performance advantage was clear: Even the wide availability of iSCSI initiators and targets and the low cost of Gigabit Ethernet couldn't push the storage community away from Fibre Channel.

Now, as the drumbeat for converged networking grows louder, we see the real impact of the decision to move Ethernet directly to 10Gbit speed. Data centers that already employ Fibre Channel are moving rapidly to the 8Gbit performance point. These are the prime targets for Fibre Channel over Ethernet, but 10Gbit speed holds little attraction for them. It is only when they move to a converged stack like the Cisco/EMC vBlock that adopting an "everything over Ethernet" strategy makes sense.

At this point, many would suggest falling back on cheaper Gigabit Ethernet and iSCSI. One of the many Ethernet bonding techniques should, in theory, allow for a competitive 4Gbit or 8Gbit Ethernet SAN. However, as an experienced network engineer like Ethan Banks would tell you, none of these techniques yields a truly flexible solution. Cisco EtherChannel, for example, should allow four bonded Gigabit Ethernet links to carry iSCSI traffic that rivals 4Gbit Fibre Channel. But the commonly available load balancing methods for EtherChannel will always forward a given flow across the same member link, regardless of congestion, capping any single iSCSI session at gigabit speed. This means that an iSCSI SAN relying on Gigabit Ethernet and EtherChannel can never compete with the throughput and quality of service offered by Fibre Channel.
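
To make the EtherChannel limitation concrete, here is a minimal Python sketch of hash-based link selection, modeled loosely on address-hash load balancing (the hash function and MAC addresses are illustrative assumptions, not Cisco's actual algorithm). Because the hash of a given source/destination pair never changes, every frame of a single iSCSI session lands on the same 1Gbit member link:

```python
# Illustrative sketch of hash-based link selection in a four-link bundle.
# Not Cisco's implementation: real EtherChannel offers several hash inputs
# (MAC, IP, port), but all of them pin a given flow to one member link.

def select_link(src_mac: int, dst_mac: int, num_links: int) -> int:
    """Pick a member link by XOR-hashing the address pair."""
    return (src_mac ^ dst_mac) % num_links

# A single iSCSI session between one initiator and one target always
# hashes to the same member, so its throughput is capped at 1Gbit even
# though the bundle offers 4Gbit in aggregate.
initiator, target = 0x001B21A0C4D2, 0x001B21B7E810  # hypothetical MACs
links = {select_link(initiator, target, 4) for _ in range(10)}
assert len(links) == 1  # every frame of the flow uses the same link
```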

Server-side multipathing drivers can offset this issue to a certain extent, but they exist more for high availability than for high performance. Building a faster-than-gigabit iSCSI SAN using multiple Gigabit Ethernet links and Multipath I/O requires an extensive investment in software and a very capable storage array on the other side (a round-robin sketch appears below). And none of these techniques addresses the critical server-to-network link: It is impractical in the extreme to connect each server with four or more separate Gigabit Ethernet cables, not to mention the cost of network interface cards and switch ports. A strategy of multiple Gigabit Ethernet links end-to-end just doesn't make sense.

FCoE over 10Gbit Ethernet completely changes the cabling situation, but it requires a massive investment in host adapters, cabling and switches. The transition to FCoE is eased by an architecture that allows existing Fibre Channel storage systems to be migrated forward into the new paradigm, but the server-side and network investment is still significant. 8Gbit Fibre Channel will remain highly competitive in terms of price and performance. Unless one is using FCoE-oriented blade servers like the Cisco Unified Computing System (UCS) or connecting a massive number of servers to the SAN, it's a hard sell.
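
To illustrate the multipathing point above, here is a toy Python sketch of a round-robin path selector, the kind of policy a multipath I/O driver can apply across several iSCSI sessions (the class and path names are hypothetical, not any vendor's API). Unlike a failover-only configuration, it spreads requests across every active gigabit path, which is what makes aggregate throughput beyond 1Gbit possible, at the cost of the software and array support described above:

```python
from itertools import cycle

class RoundRobinMultipath:
    """Toy round-robin selector over several iSCSI paths (hypothetical)."""

    def __init__(self, paths):
        self._paths = cycle(paths)  # endlessly rotate through active paths

    def submit(self, io_request: str) -> str:
        """Dispatch each request to the next path in rotation."""
        return f"{io_request} -> {next(self._paths)}"

# Four Gigabit Ethernet links, each carrying its own iSCSI session.
mpio = RoundRobinMultipath(["eth0:iscsi0", "eth1:iscsi1",
                            "eth2:iscsi2", "eth3:iscsi3"])
for n in range(4):
    print(mpio.submit(f"read block {n}"))  # each I/O takes a different path
```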

This is the root of the convergence quandary, and it is what holds converged networking back at this point. Gigabit Ethernet will never offer enough performance to make a compelling case, even with hardware and cabling that is essentially free. And 10Gbit Ethernet (with FCoE) remains too expensive and too novel. Even as 10Gbit Ethernet hardware drops in price over the next few years, FCoE and converged networking will face tough challenges in enterprise data centers unless a business case is made on factors other than performance and cost.

All this may change with the next generation of Ethernet hardware, however. Although 40Gbit and 100Gbit Ethernet currently aggregate multiple 10Gbit or 25Gbit physical lanes, both present a single logical connection, avoiding the per-flow limits of link bonding. One can imagine 40Gbit Ethernet becoming a "just right" performance increment on the way to 100 gigabits. Although Fibre Channel will likely continue to advance, 40Gbit Ethernet should arrive sooner, cost less and perform better than 16Gbit or 32Gbit Fibre Channel. And, by then, the impact of integrated stacks and blade servers will make the non-cost benefits of converged networking that much more compelling.

This will be the true perfect storm for the "everything over Ethernet" strategy: Performance, cost and features will finally drive enough nails into the Fibre Channel coffin to bury it for good. But this will not happen for years, perhaps even a decade. Until then, Ethernet has a Goldilocks problem.

About the Author

Stephen Foskett

Organizer in Chief, Tech Field Day

Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes the popular Tech Field Day events. A long-time voice in the storage industry, Foskett has authored numerous articles for industry publications, and is a popular presenter at industry events. His contributions to the enterprise IT community have earned him recognition as both a Microsoft MVP and VMware vExpert. Stephen Foskett is principal consultant at Foskett Services.
