10GigE Tipping Point Reached - Problems Remain
In shopping for new 10gig Ethernet switches for the lab, I've realized that 10gigE pricing has reached the tipping point where the purchase price of two 10gigE connections for a server is about the same as for the six 1gigE ports I frequently need. Unfortunately, with the DCB/CEE/DCE standards not yet ratified and the 10gig connector/optical-module situation bringing truth to the old saying "I love standards - there are so many of them," the decision isn't as easy as I'd hoped.
December 8, 2009
This month I'm moving the DeepStorage labs, sometimes called the Network Computing Purchase, NY Real World Labs, and I figured that since I was re-cabling everything anyway, now would be the right time to refresh the lab's network infrastructure. The Extreme Summit 7i that's been the core of the lab network for years has been a workhorse, but we need 10gigE to push new servers and storage systems to their limits. In shopping for new switches, I've realized that 10gigE pricing has reached the tipping point where the purchase price of two 10gigE connections for a server is about the same as for the six 1gigE ports I frequently need.
A 24-port 10gig switch from Extreme, HP or Arista will set me back around $12,000, or $500/port. Add in $650 for a dual-port card with SR-IOV support and a pair of $100 SFP+ twin-ax cables, and the total cost is $1850. Using the two LOM ports on my servers plus a quad-port Intel or Broadcom gigabit card to connect to a Cisco 3750 adds up to a similar $1875 for six 1gig ports.
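For anyone who wants to check the math, here's a minimal back-of-the-envelope sketch using only the prices quoted above; the $1875 figure for the six-port 1gigE setup is taken as given, since I haven't broken out the quad-port NIC and 3750 per-port prices separately.

```python
# Back-of-the-envelope cost comparison using the prices quoted above.
# The $1875 six-port 1gigE figure is taken as quoted, not rebuilt from parts.

switch_cost = 12000          # 24-port 10gigE switch
switch_ports = 24
per_port = switch_cost / switch_ports        # roughly $500 per switch port

nic = 650                    # dual-port 10gigE adapter with SR-IOV
cables = 2 * 100             # pair of SFP+ twin-ax (direct attach) cables
server_ports = 2             # two 10gigE connections per server

ten_gig_total = nic + cables + server_ports * per_port
print(f"Two 10gigE ports: ${ten_gig_total:,.0f}")    # -> $1,850

one_gig_total = 1875         # two LOM ports + quad-port GigE card + Cisco 3750 ports
print(f"Six 1gigE ports:  ${one_gig_total:,.0f}")    # -> $1,875
```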
While I'd love to upgrade to a Nexus 5000 or Brocade 8000 and add FCoE support, my budget can't handle the additional $15-20K that would add to the cost. The base switches cost just a bit more than a plain 10gigE switch, a premium that's probably worth paying to get DCB features like per-priority pause, but adding FCoE roughly doubles the cost, as storage protocol support comes dear from both vendors.
In addition to a 10gigE switch, I'll also need a 1gig switch with 10gig uplinks so I can fan in the servers I don't upgrade to 10gig right away, or ever. That's where I ran into the real problem with deploying 10gig Ethernet today: too many interface specs. Most of the switches I could afford had CX4 or XFP interfaces, and some the even older XENPAK or X2.
Someday soon 10gigE will be as simple as 1gig, with 10GBASE-T competing with SFP+ passive cables for short haul and SFP+ optics for long runs. 10GBASE-T uses familiar, if higher quality, cables and allows for 100/1000/10G tri-speed ports, but today's 10GBASE-T solutions use 5W or more per port and cost a bit more than SFP+ cables. The other problem is that there's no easy way to connect a 10GBASE-T port on one switch to a CX4, XENPAK, X2, XFP or even SFP+ port on another.
I did get momentarily excited when I saw HP's J9302a direct attach cable for XFP to SFP+ connections. Now I could connect my Foundry FESX-424's XFP ports directly to my new 10gig switch's SFP+ ports. Then I saw the $900 price tag. Guess I'll be buying SR optics for both XFP and SFP+ and connecting them that way. It seems wasteful, but it's actually cheaper.

The good news is we've turned onto the slippery slope of high-tech product maturity: lower prices drive adoption, and higher sales in turn drive prices lower still as vendors amortize R&D over more units and move to volume production. The bad news is you have to pay attention so you don't end up with CX4 network cards or CNAs and a 10GBASE-T switch.
Please post any suggestions or war stories that may come to mind.