FCoE: The Latest Standard We Don't Need

FCoE is either the long-awaited common infrastructure that can run standard network and storage applications or the last gasp of the Fibre Channel industry about to drown in the tsunami that is iSCSI.

Howard Marks

May 25, 2007


Last month a group of vendors, including Brocade, Cisco, Emulex, Intel and QLogic, announced Fibre Channel over Ethernet (FCoE), yet another protocol that encapsulates the Fibre Channel Protocol (FCP) in an Ethernet frame so Fibre Channel data can be carried across 10 Gigabit Ethernet connections. So FCoE joins iSCSI, iFCP and FCIP as one more way to carry storage data across an Ethernet network. As professor Andrew S. Tanenbaum once said, "The nice thing about standards is that there are so many of them to choose from."
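To make the layering concrete, here's a minimal sketch of the idea in Python: an FC frame dropped straight into an Ethernet payload with no IP or TCP in between. The Ethertype value and the simplified framing are assumptions for illustration; the real encapsulation adds its own header and trailer.

import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE; illustrative here

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame directly in an Ethernet frame."""
    # 6-byte destination MAC, 6-byte source MAC, 2-byte Ethertype
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    # Real FCoE also carries a version field plus start-of-frame and
    # end-of-frame delimiters; omitted to keep the layering obvious.
    return eth_header + fc_frame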

The big difference between Fibre Channel over Ethernet and the others is that FCoE eschews IP and sends FC data directly down the Ethernet. Depending on who you talk to, this is either the long-awaited common infrastructure that can run standard network and storage applications--yielding FC behavior and management at Ethernet prices while reducing both world hunger and global warming--or the last gasp of the FC industry about to drown in the tsunami that is iSCSI. I think it's mostly the latter; here's how I see the arguments shaking out.

FCoE's proponents claim that avoiding the computing cost of calculating all those pesky TCP windows and checksums is an advantage. That makes me wonder why storage guys are afraid of TCP. Today's servers are crammed full of multicore, multigigahertz processors and use Gigabit Ethernet chips from Broadcom and Intel that off-load much of the heavy lifting of TCP, so even several gigabits per second of TCP traffic uses just a small percentage of available CPU. If you throw enough cheap computing cycles and bandwidth at a problem, you don't need to tweak your protocols to be especially efficient. Giving up on IP makes FCoE unroutable, limiting its use to links--or at least VLANs--dedicated to storage traffic. Why bother with a new protocol?
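For a sense of what's actually being offloaded, here's the one's-complement Internet checksum that TCP computes over every segment -- a minimal sketch; the Broadcom and Intel chips mentioned above do this work in silicon, not in the host CPU.

def internet_checksum(data: bytes) -> int:
    """One's-complement sum over 16-bit words, as used by TCP/IP."""
    if len(data) % 2:          # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"pesky TCP checksums")))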

So, what would FCoE buy a SAN admin? It allows the use of 10-Gbps Ethernet links, boosting available SAN bandwidth, but very few servers generate more traffic than a 4-Gbps FC link can handle. And of course, important servers that generate that kind of traffic should have two FC HBAs and a multipath driver for reliability. That boosts their available bandwidth to 8 Gbps, and even fewer servers will fill that pipe. Faster storage-to-switch and inter-switch links could be more attractive, but QLogic already has 10-Gbps FC ISLs.
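A back-of-envelope version of that arithmetic, assuming 4GFC's roughly 4.25-Gbaud signaling and 8b/10b encoding (the 4- and 8-Gbps figures above are the nominal rates):

def fc_usable_gbps(line_rate_gbaud: float) -> float:
    """Usable data rate of an FC link after 8b/10b encoding overhead."""
    return line_rate_gbaud * 8 / 10   # 8 data bits per 10 line bits

one_hba = fc_usable_gbps(4.25)   # the nominal "4-Gbps" FC link
two_hbas = 2 * one_hba           # multipathed dual-HBA server
print(f"one HBA: {one_hba:.1f} Gbps; dual HBAs: {two_hbas:.1f} Gbps")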

One of the reasons most large enterprise shops haven't adopted iSCSI is the political squabbling between the storage group, which owns the FC SAN, and the networking group, which owns the Ethernet infrastructure on which iSCSI runs. The storage group doesn't want the network group managing switches on the SAN. I see those same political problems with FCoE for server connections to the SAN. Even if OS vendors develop FCoE initiators, as they have for iSCSI, the servers would still connect to Ethernet switches, and the storage and network groups would still have a turf war over those switches.

Because FCoE relies on Ethernet extensions like jumbo frames and flow-control pause to make Ethernet transport lossless like FC, it doesn't run on just any Ethernet switch. And because Ethernet switches don't provide the fabric services FC switches do, using FCoE on a SAN will require an FCoE blade in a SAN director or special FCoE switches.
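For flavor, here's the kind of per-port preparation that lossless-Ethernet requirement implies, sketched with standard Linux tools; the interface name is a placeholder, and production FCoE gear would handle this in the switch and converged adapter rather than in scripts.

import subprocess

def prepare_lossless_port(iface: str = "eth0") -> None:
    """Enable jumbo frames and 802.3x pause on a Linux NIC (sketch)."""
    # FC frames run to about 2,148 bytes, larger than standard
    # Ethernet's 1,500-byte MTU, so FCoE needs jumbo frames.
    subprocess.run(["ip", "link", "set", "dev", iface, "mtu", "9000"],
                   check=True)
    # Pause-based flow control keeps the link from dropping frames
    # under congestion -- FC's losslessness, approximated on Ethernet.
    subprocess.run(["ethtool", "-A", iface, "rx", "on", "tx", "on"],
                   check=True)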

My friend Howard Goldstein, one of the best storage networking trainers on the planet, points out that the one place FCoE is a natural is SAN extension over carrier Ethernet. Encapsulating FCP in Ethernet for WAN transport makes sense for SAN extension if nowhere else.

So, I'm sticking to FC for high-performance SANs and iSCSI for just about everything else. As far as I'm concerned, you can mark FCoE dead on announcement. When real FCoE products start shipping in 18 to 24 months, all you Fibre Channel fans out there can prove me wrong.

Howard Marks is founder and chief scientist at Networks Are Our Lives, a network design and consulting firm in Hoboken, N.J. Write to him at [email protected].

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
