The Deal On FCoE

Howard Marks

September 9, 2010

3 Min Read

In the 500 or so years it feels like I've been working with computers of one sort or another, I've noticed that shiny new technologies follow similar trajectories. With FCoE, we've reached what Gartner calls the "Peak of Inflated Expectations," and what I call the "ATM Can Fix Everything" moment, named after the timely combination of IBM introducing desktop ATM cards and a Visa commercial in the mid-1990s. As we head towards the "Trough of Disillusionment," we need to take a good hard look at where FCoE is a good fit and where it could be overkill or resume dressing.

Conceptually, FCoE promises to end the back-of-server spaghetti factory through converged networking and to improve Fibre Channel performance and reliability at Ethernet prices. Having spent more than my share of time standing in the blast of hot air from the back of a server rack trying to figure out which of the seven Gigabit Ethernet cables is plugged into vNIC5, I'm sure that most of you can see the benefit of, at the very least, converging data and storage traffic onto a pair of 10Gbps Ethernet connections.

When I ran the numbers in this blog entry back in February, we'd already reached the point where 10Gbps was cheaper than multiple 1Gbps connections for virtual server hosts. I can only imagine the difference is even bigger now.
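If you want to rerun that math against your own pricing, here's a minimal back-of-envelope sketch in Python. Every port count and price in it is a hypothetical placeholder, not a figure from the February post; swap in your own quotes.

    # Back-of-envelope port-cost comparison for a virtual server host.
    # Every price here is a hypothetical placeholder; substitute your
    # own quotes before drawing any conclusions.

    GBE_PORTS = 8                # assumed: typical multi-NIC virtualization host
    GBE_COST_PER_PORT = 250      # assumed: 1GbE NIC port + switch port + cabling
    TEN_GBE_PORTS = 2            # a converged pair of 10GbE links
    TEN_GBE_COST_PER_PORT = 900  # assumed: 10GbE NIC port + switch port

    one_gig_total = GBE_PORTS * GBE_COST_PER_PORT
    ten_gig_total = TEN_GBE_PORTS * TEN_GBE_COST_PER_PORT

    print(f"{GBE_PORTS} x 1GbE:  ${one_gig_total:,}")
    print(f"{TEN_GBE_PORTS} x 10GbE: ${ten_gig_total:,}")
    print("10GbE is cheaper" if ten_gig_total < one_gig_total else "1GbE is cheaper")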

Assuming that everyone should use 10GbE for new deployments, and that 10GbE solves the cable mess problem, the question remains: who should plan for and start piloting FCoE deployments? The easy answers come at the ends of the spectrum.

First, let me state that FCoE is a technology for organizations already running Fibre Channel SANs. If you've been using DAS and are adding shared storage to support server virtualization or disaster recovery planning initiatives, you could use FCoE, and it would work well for you. However, a comparable iSCSI solution (or, for VMware, NFS) will cost less to purchase and install. After all, a dual-port Intel 10Gbps NIC is just $645 at CDW, while an Emulex or QLogic CNA runs over a thousand dollars. The real savings comes in the switch, since Cisco charges $8,000 for the FCoE software option on a Nexus 5000. If you use DCB NICs and switches, you'll get the same lossless transport as FCoE, and recent tests by Demartek and NetApp show similar performance for iSCSI and Fibre Channel or FCoE, with only a two-to-four percent CPU utilization penalty.
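To make that adapter-and-switch arithmetic concrete, here's a rough Python sketch built on the figures above. The NIC price and FCoE license fee come from the paragraph; the $1,200 CNA price and the 40-server count are illustrative assumptions, not quotes.

    # Rough per-switch cost sketch: iSCSI over DCB vs. FCoE.
    # The NIC price and FCoE license are from the text; the CNA price
    # and server count are illustrative assumptions.

    NIC_PRICE = 645      # dual-port Intel 10Gbps NIC (CDW, per the text)
    CNA_PRICE = 1200     # assumed: Emulex/QLogic CNA, "over a thousand"
    FCOE_LICENSE = 8000  # Cisco FCoE software option on a Nexus 5000
    SERVERS = 40         # assumed: hosts attached to one switch

    iscsi_total = SERVERS * NIC_PRICE
    fcoe_total = SERVERS * CNA_PRICE + FCOE_LICENSE

    print(f"iSCSI/DCB adapters:      ${iscsi_total:,}")
    print(f"FCoE adapters + license: ${fcoe_total:,}")
    print(f"FCoE premium per server: ${(fcoe_total - iscsi_total) // SERVERS:,}")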

If you have thousands of servers, multiple Fibre Channel directors and use Fibre Channel management software like SANscreen, VirtualWisdom or Storage Essentials to closely manage your SANs, you're the target market for FCoE. Since QLogic and Emulex CNAs and FCoE switches follow the same management paradigms as their FC counterparts, your tools will continue to work just fine.

While the FC suppliers promise that 16Gbps FC products are coming soon, I remain somewhat skeptical. With 10Gbps, and soon 40Gbps, FCoE as a competitor, 16Gbps FC is going to be a low-volume, and therefore high-priced, product. By the time 32Gbps FC is ready, I'm guessing it might not be worth the R&D to get it to market, as there will be too few ports to amortize the costs.

Those in the middle of the spectrum have a more difficult decision. If someone asks to see your SAN and you point at your one Clariion or Magnitude, I would suggest iSCSI could be in your future. If you have multiple arrays and several FC switches, FCoE might be an easier transition to converged networking than iSCSI, as iSCSI/FC bridges have never been a popular option and may not get the 10Gbps DCB upgrade.

A final note to managers who may read this blog: techies love new technologies and may be asking you to buy the shiny new tech to get the experience and/or training to prepare themselves for their next jobs. Make sure you see proposals for several alternatives, not just the Cisco-FCoE-EMC combo.

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M. and concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
