Kill the Traditional PCIe Slot: A Modest Proposal

Let's get rid of traditional PCIe slots and move all the cable connections to the front of the server. We'll get compelling maintenance and operational advantages. Here's how and why.

Howard Marks

April 2, 2013

3 Min Read

I've been inspired by the work of the Open Compute Project to propose my own change to traditional server form factors. The 2.5-inch PCIe slots now appearing on servers for hot-swap SSDs should replace current PCIe slots entirely. This would let me move all the connections, with the exception of power, to the front of the server.

Moving all the connections to the front of the server has several significant maintenance advantages, especially in large data centers.

First, it eliminates cable management arms. I'm sure someone has a data center full of 1U servers with properly installed cable management arms that allow them to pull a running server out of the rack without disconnecting any cables--but not in any data center I've ever worked in.

In my experience, there's the guy who read that power and Ethernet cables don't mix, so he declared that power cables shouldn't go in the management arm. Or someone was just lazy and used the 5-foot Ethernet cable, because running it through the arm would have meant hunting around for a 7-foot one. Plus, cable management arms sag enough to get tangled with each other and block the airflow from the back of the servers into the hot aisle, causing hot-air re-ingestion.

Second, it eliminates the need to make multiple trips between the back and the front of the server whenever you want to add a NIC or HBA. Consider the steps involved: power down the server, pull it out of the rack, open it up, install the card (which may require you to remove other stuff in the way), push the server back in, connect cables to the new card, and then power the server up again. In a large data center with rows that are 20 or more racks long, and a hot aisle temp of 120 degrees F, going from front to back can take a while and be uncomfortable at best.

If we take a typical 1U server and replace its eight to ten 2.5-inch disk drive slots with slots that are the same size but include PCIe connections as well--like the ones on Dell's newest servers--we could use those slots for more than just the Micron PCIe SSDs that Dell currently sells for them. If Emulex, QLogic and Intel made Fibre Channel HBAs and 10Gbps Ethernet CNAs in this form factor, the vast majority of corporate servers could eliminate conventional PCIe slots altogether.

The server vendors can move the LOM (LAN on Motherboard), IPMI/DRAC/iLO, VGA and USB ports to the front, and we'd just connect power in the back. PCIe cards would then sit in the fresh airflow from the cold aisle rather than breathing air that's already been heated by the disk drives and processors, and that cooler air should improve their reliability.

The connector Dell uses supports SAS, SATA and four lanes of PCIe connection (as does the standardized version from the SSD Form Factor Working Group), so all eight slots could be used for disk drives or I/O cards. I expect most servers would have two disks for boot and one or two I/O cards.
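To make the "one slot, several protocols" point concrete, here's a minimal sketch of how a host can tell what's actually plugged into such a bay. It assumes a Linux box with the usual sysfs layout, and the string checks are rough heuristics of my own for illustration, not an exhaustive device taxonomy:

    #!/usr/bin/env python3
    """Rough illustration: classify block devices by the bus they hang off.

    Assumes a Linux host with sysfs mounted at /sys; the path checks below
    are heuristics for illustration only.
    """
    import os

    SYS_BLOCK = "/sys/block"

    def classify(dev: str) -> str:
        # Resolve the sysfs symlink, e.g.
        #   sda     -> .../pci0000:00/0000:00:1f.2/ata1/host0/.../block/sda
        #   nvme0n1 -> .../pci0000:00/0000:00:03.0/nvme/nvme0/nvme0n1
        path = os.path.realpath(os.path.join(SYS_BLOCK, dev))
        if "/nvme/" in path:
            return "PCIe-attached SSD (talks straight to the root complex)"
        if "/ata" in path:
            return "SATA behind the onboard controller"
        if "/host" in path:
            return "SAS/SCSI behind an HBA"
        return "other (USB, virtio, ...)"

    if __name__ == "__main__":
        for dev in sorted(os.listdir(SYS_BLOCK)):
            print(f"{dev:10s} {classify(dev)}")

The same drive bay can show up as any of those, which is exactly why a multi-protocol connector makes the slot worth more than a plain drive carrier.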

My technical experts tell me that even though the PCIe SIG promotes the current versions of the bus as fully hot swappable, you'd probably have to reboot when installing a new card. That would still be several fewer steps than in today's world and would keep you from running back and forth between the hot and cold aisles, which my mother always told me would make me catch cold.
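For what it's worth, you can at least see whether the platform thinks a given slot is hot-plug capable before you find out the hard way. The sketch below is my own illustration, assuming a Linux host where the kernel's pciehp driver has claimed the slots; it lists the slots exposed under sysfs and flags the ones the OS will let you power on and off without a reboot:

    #!/usr/bin/env python3
    """Illustrative only: list PCIe slots the Linux hotplug core manages.

    Slots the kernel considers hot-plug capable appear under
    /sys/bus/pci/slots/ with a 'power' attribute; everything else still
    gets the power-down-and-reboot routine described above.
    """
    import os

    SLOTS_DIR = "/sys/bus/pci/slots"

    def main() -> None:
        if not os.path.isdir(SLOTS_DIR):
            print("No PCI slot information exposed on this system.")
            return
        for slot in sorted(os.listdir(SLOTS_DIR)):
            slot_path = os.path.join(SLOTS_DIR, slot)
            try:
                with open(os.path.join(slot_path, "address")) as f:
                    address = f.read().strip()   # e.g. 0000:03:00
            except OSError:
                address = "unknown"
            hotplug = os.path.exists(os.path.join(slot_path, "power"))
            state = "hot-plug capable" if hotplug else "reboot required"
            print(f"slot {slot:>4}  {address:12s}  {state}")

    if __name__ == "__main__":
        main()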

Sure, cabinet vendors and cable management vendors like Panduit will have to come up with clever new ways to dress the cables (though the server vendors could help by replacing the little ring handles with ones that swivel to hold cables), but we'd save on all the ball-bearing rails and cable management arms.

So I/O card and server vendors, what do you think? Is this idea crazy like a fox--or just plain crazy? And when can I buy a 2.5-inch CNA?

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
