Howard Marks

Network Computing Blogger



Kill the Traditional PCIe Slot: A Modest Proposal

I've been inspired by the work of the Open Compute Project to propose my own change to traditional server form factors. The 2.5-inch PCIe slots now appearing on servers for hot-swap SSDs should replace current PCIe slots entirely. This would let me move all the connections, with the exception of power, to the front of the server.

Moving all the connections to the front of the server has several significant maintenance advantages, especially in large data centers.


First, it eliminates cable management arms. I'm sure someone has a data center full of 1U servers with properly installed cable management arms that let them pull a running server out of the rack without disconnecting any cables--but not in any data center I've ever worked in.

In my experience, there's the guy who read that power and Ethernet cables don't mix, so he declared that power cables shouldn't go in the management arm. Or someone was just lazy and used the 5-foot Ethernet cable because running it through the arm would have meant hunting around for a 7-foot one. Plus, cable management arms sag enough to tangle with each other and block the airflow from the back of the servers into the hot aisle, causing hot-air re-ingestion.

Second, it eliminates the need to make multiple trips between the front and back of the server whenever you want to add a NIC or HBA. Consider the steps involved: power down the server, pull it out, open it, install the card (which may require removing other components that are in the way), push it back in, connect cables to the new card, and power it back up. In a large data center with rows 20 or more racks long and a hot-aisle temperature of 120 degrees F, going from front to back can take a while and be uncomfortable at best.

If we take a typical 1U server and replace its eight to ten 2.5-inch disk drive slots with slots that are the same size but include PCIe connections as well--like the ones on Dell's newest servers--we could use those slots for more than just the Micron PCIe SSDs that Dell currently sells for them. If Emulex, QLogic and Intel made Fibre Channel HBAs and 10Gbps Ethernet CNAs in this form factor, the vast majority of corporate servers could eliminate conventional PCIe slots altogether.

Server vendors could move the LOM (LAN on Motherboard), IPMI/DRAC/iLO, VGA and USB ports to the front, and we'd just connect power in the back. PCIe cards would then sit in the fresh airflow from the cold aisle, rather than breathing air that's already been heated by the disk drives and processors, and that cooler air would improve their reliability.

The connector Dell uses supports SAS, SATA and four lanes of PCIe connection (as does the standardized version from the SSD Form Factor Working Group), so all eight slots could be used for disk drives or I/O cards. I expect most servers would have two disks for boot and one or two I/O cards.
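
As a rough sanity check on whether four lanes really leave room for the I/O cards I have in mind, here's a back-of-the-envelope calculation. I'm assuming the slots run PCIe 3.0 (if they're wired as PCIe 2.0, roughly halve the slot numbers), and the per-port figures are nominal data rates per direction, so treat this as an illustration rather than a spec-sheet guarantee.

    # Back-of-the-envelope bandwidth check for a 2.5-inch x4 PCIe slot.
    # Assumption: PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding;
    # a PCIe 2.0 slot (5 GT/s, 8b/10b) would offer roughly half as much.

    pcie3_lane_gbps = 8 * 128 / 130          # ~7.88 Gb/s usable per lane, per direction
    slot_gbps = 4 * pcie3_lane_gbps          # ~31.5 Gb/s for an x4 slot

    cards = {
        "dual-port 10GbE CNA": 2 * 10.0,     # 20 Gb/s of line rate per direction
        "dual-port 16Gb FC HBA": 2 * 12.8,   # 16GFC moves ~1600 MB/s (~12.8 Gb/s) per port
    }

    for card, need_gbps in cards.items():
        verdict = "fits" if need_gbps < slot_gbps else "does not fit"
        print(f"{card}: needs ~{need_gbps:.1f} Gb/s, x4 slot offers ~{slot_gbps:.1f} Gb/s -> {verdict}")

By this math, even the dual-port cards fit within a single x4 slot, at least for mainstream corporate workloads.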

My technical experts tell me that even though the PCI-SIG promotes the current versions of the bus as fully hot-swappable, you'd probably have to reboot when installing a new card. That would still be several fewer steps than in today's world, and it would keep you from running back and forth between the hot and cold aisles, which my mother always told me would make me catch cold.
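
For what it's worth, on a Linux box you can see which slots the kernel already treats as hot-plug capable by poking around sysfs. The little sketch below just walks /sys/bus/pci/slots and checks each slot for a 'power' attribute, which only shows up when a hotplug driver such as pciehp controls the slot; which slots appear (if any) depends entirely on the platform and BIOS, so this is an illustration, not a guarantee.

    #!/usr/bin/env python3
    # Sketch: list the PCIe slots Linux exposes in sysfs and flag the ones
    # a hotplug driver (e.g. pciehp) can actually power-control.
    # The paths are standard sysfs; slot names and numbering are platform-specific.

    from pathlib import Path

    SLOTS = Path("/sys/bus/pci/slots")

    def list_slots():
        """Yield (slot_name, pci_address, hotplug_capable) for each exposed slot."""
        if not SLOTS.is_dir():
            return  # this platform doesn't publish slot information
        for slot in sorted(SLOTS.iterdir()):
            address = (slot / "address").read_text().strip()
            # The 'power' attribute only exists when a hotplug driver owns the
            # slot, so its presence is a reasonable proxy for hot-swap support.
            hotplug = (slot / "power").exists()
            yield slot.name, address, hotplug

    if __name__ == "__main__":
        for name, address, hotplug in list_slots():
            print(f"slot {name} ({address}): {'hot-plug' if hotplug else 'fixed'}")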

Sure, cabinet vendors and cable management vendors like Panduit will have to come up with clever new ways to dress the cables (though the server vendors could help by replacing the little ring handles with ones that swivel to hold cables), but we'd save on all the ball-bearing rails and cable management arms.

So I/O card and server vendors, what do you think? Is this idea crazy like a fox--or just plain crazy? And when can I buy a 2.5-inch CNA?

