Wire-Once: Strategy or Pipedream?

Howard Marks

July 12, 2010

With the rise of 10Gbps Ethernet and the converged data and storage network being talked about these days, we are hearing the siren's song of wire-once networking. While I love the idea of wire-once, I've been building networks long enough to have heard this song before. Advocates of technologies from 10Base-T to ATM have all claimed you could wire once and then relax and live the simple life. As the saying goes, if it sounds too good to be true, it probably is.

My favorite wire-once failure story dates back to when 10Base-T was cutting-edge technology and I was consulting with a client. We ran an RFP process to choose a replacement for their existing mixture of coax Ethernet, both thick and thin, and IBM Token Ring. After a winner was selected and our consulting engagement ended, some brainiac in facilities decided it would add value to their headquarters building to wire once with multi-mode fiber.

This decision raised the cost of the project significantly, since fiber-optic hubs had about the same port density as 10Base-T hubs, and all the other parts, from patch panels to NICs, also cost extra. At the time, they thought it was a good investment. By the time they went to upgrade to switched Fast Ethernet, PCs had 100Base-T ports on the motherboard and fiber-optic NICs weren't readily available.

So they bought hundreds of media converters that sat under desks, collected dust, had their cables kicked out, and so on. They sold the building before upgrading to Gigabit to the desktop, which wouldn't have worked over their old FDDI-grade multi-mode fiber anyway.

While I've already started using 10Gbps as my default for server connections in new designs, I'm not convinced that current wiring systems will last the 10 years Cat 5 has, or even the lifetime of the blade chassis and top-of-rack switches we're buying this year. My first concern is that we haven't really standardized on the cables themselves. Assuming that XENPAK, X2, XFP and the like really are dead, we still have SFP+ twinax cables (in two variations, active and passive) and 10GBase-T, plus our choice of several fiber-optic solutions. Like my former client, you could decide that OM3, or OM4, fiber will work for today's and tomorrow's technologies, but that means spending $500-$2,000 on optics and cable for each connection, where an SFP+ twinax cable costs $50-$100. Even if it costs you $100 to install a cable from server to top-of-rack switch (in which case you might want to look at what you pay your data center staff), you could rewire twice with copper and still save real money. That's money you can use to put more memory in the servers.
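To put rough numbers on that claim, here's a back-of-envelope sketch in Python using the price ranges above. The $100 install figure and the assumption of two complete rewires are illustrative, not measurements:

```python
# Back-of-envelope, per-link cost comparison using 2010-era list prices.
fiber_low, fiber_high = 500, 2000    # optics plus OM3/OM4 cable, per connection
twinax_low, twinax_high = 50, 100    # SFP+ twinax direct-attach cable, per connection
install = 100                        # assumed labor to pull one in-rack cable

rewires = 2                          # suppose copper forces two complete rewires
copper_low = (twinax_low + install) * (1 + rewires)     # $450
copper_high = (twinax_high + install) * (1 + rewires)   # $600
fiber_once_low = fiber_low + install                    # $600
fiber_once_high = fiber_high + install                  # $2,100

print(f"Copper, wired three times: ${copper_low}-${copper_high}")
print(f"Fiber, wired once:         ${fiber_once_low}-${fiber_once_high}")
```

Even with copper pulled three times over the life of the rack, its worst case only reaches the best case for wiring once with fiber.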

I'm also concerned about bandwidth. 10Gbps seems like a lot now, but some of today's blade server architectures could run low on uplink bandwidth in a couple of years. By 2012, that half-height blade will have 2-4x the compute power and memory of today's Westmere and Magny-Cours systems. Put eight of those servers in a chassis with eight 10Gbps uplinks and they may be able to overload it, especially in UCS environments where server-to-server traffic goes upstream to the next switch.
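A quick sanity check on that uplink math, again just a sketch; the per-blade traffic figure and the growth multipliers are assumptions, not benchmarks:

```python
# Rough oversubscription check for a blade chassis with eight 10Gbps uplinks.
blades = 8
uplink_gbps = 8 * 10                 # eight 10Gbps uplinks out of the chassis

per_blade_today = 10                 # assumed Gbps a 2010 half-height blade can drive
for growth in (2, 3, 4):             # projected per-blade growth by 2012
    offered = blades * per_blade_today * growth
    print(f"{growth}x growth: {offered} Gbps offered vs {uplink_gbps} Gbps of uplink "
          f"({offered / uplink_gbps:.0f}:1 oversubscribed)")
```

Whether the chassis actually chokes past 1:1 depends on how bursty the traffic is, but the headroom disappears quickly.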

The data center network of tomorrow, with virtual NICs and switches that recognize when a VM moves from host to host, should need far fewer cable changes than today's does. I'm just not sure we're really talking about wiring once.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
