As I was adding an Extreme X480 switch to the DeepStorage lab last week, I started to wonder whether boosting switch density to 48 or more ports per rack unit might create more cable management problems than it's worth. Then I got an announcement from Oracle that they were coming out with a 72-port 1U top-of-row switch. So I have to ask you, "Is 48 ports per U too dense, or just a good reason to invest in cable management?"
In my life as a consultant, I wasted a lot of time tracing one of the 96 blue cables that resembled a waterfall pouring out of the front of a server rack. While those of you who practice cable management laugh at the thought of a rack that looks like this, you can go too far. I've seen cable harnesses tie-wrapped so extensively that they became a single rigid object, and I couldn't disconnect a port in the middle. To add insult to injury, the Cat 7 and Cat 7a cables we'll use for 10GBase-T (if PHY vendors can ever get power consumption below 1 watt/port) are just a bit larger and stiffer than the Cat 5e we've been using for 1000Base-T. Luckily, the fiber-optic and SFP+ Twinax options are a bit thinner, if more fragile and less flexible, than good old Cat 5.
Oracle's 72-port switch has 16 QSFP (Quad Small Form-factor Pluggable) ports and eight more conventional SFP+ ports. Since QSFP hasn't caught on yet, users will have to search for 4-into-1 SFP+ octopus (or is it quadropus?) breakout cables or appropriate patch panels. I'm looking forward to blade server chassis with QSFP uplinks so I can wire up all the data for 16 servers on two cables.
Here in the lab, I installed 1U cable managers above and below the X480, so my 1U switch and its cables will take up 3U of rack space. Now, a testing lab is a much more dynamic and heterogeneous environment than a production data center. We're building and tearing down configurations all the time, so I need to be able to see the label on the cables in ports 32 and 37, and to disconnect the cable from a test server to see how the product under test deals with links going down.
As unified networking, using FCoE or an IP storage protocol, takes hold, data center designers can move to a wire-once architecture. With wire-once, and a compulsive initial installer who harnesses, labels, and documents all the connections, high switch port density might make sense. Disclaimer: I would like to thank Extreme Networks for their generosity. The X480 was provided to us at DeepStorage.net on long-term loan without charge.