Whole Rack Servers: A Data Center Alternative?

Blade servers are a demonstrably successful and effective means of server packaging. Despite their proprietary, vendor-specific backplanes and somewhat limited internal storage options, blades are quite popular and represent the fastest-growing segment of the server market. Among respondents to InformationWeek's latest State of Server Technology Survey (full report and data to be published next month), 36% make extensive use of the modular form factor, while another 21% have more limited deployments.

But blades aren't the last word in dense server packaging. A crop of new, so-called shared-infrastructure products targets hyperscale, scale-out computing needs, from hybrid blade/rack-mount designs like Dell's "sled" C8000 and Hewlett-Packard's Scalable System 8 [PDF], which pack eight vertical sleds or half-width compute nodes into a 4U chassis, to microservers that jam dozens of low-power Xeons or hundreds of ARM or Atom CPUs into a 10U package. These still hew to the conventional rack-mount design of discrete systems in a standard rack enclosure, yet logically extending the blade philosophy turns the entire rack into a modular system with a shared power, cooling and cabling backplane for hot-swappable servers. Indeed, this is just the vision behind the Facebook-instigated Open Compute Project and its Open Rack design.

The goals are simple: create an open standard for a flexible, modular rack platform that can hold everything from 1U and half-U servers to storage arrays, power supplies, PDUs and cabling. Facebook started the project as an internal effort to maximize space utilization and serviceability for systems in its new Prineville, Ore., data center. By taking a blank sheet of paper to the entire rack concept, Facebook stole a page from Google's playbook, but unlike the search company, Facebook decided to make its plans public, try to sell the world on the concept of whole rack systems, and ultimately create an OEM ecosystem for servers and other components. And it appears to be succeeding, with executives from Intel, Arista and Rackspace on the Open Compute board; motherboard designs from both Intel and Advanced Micro Devices; and a high-density storage system spec [PDF] already on the drawing board.

Indeed, the project's Open Rack spec goes far beyond just shoehorning "naked" servers (that is, motherboards with minimal packaging) into a standard 19-inch rack; it ditches the existing form factor entirely. By creating a new 21-inch-wide rack specification and designing system boards, modular cable management, and other electrical and mechanical interfaces around whole-rack implementations, the Open Compute specs can achieve high density and easy system replacement. For example, the second-generation motherboard spec calls for two dual-socket, hot-swappable systems with 16 DIMM slots in 1U or 1.5U. With each rack divided into three 13U "power zones" sharing a 3U power supply, this yields 20 diskless, hot-swappable servers, or 12 1.5U systems with six drives shared between two servers, in each rack zone, with room left over for a 2U top-of-rack (ToR) switch.
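
The arithmetic behind those figures is easy to check. Here's a minimal sketch of the zone math in Python, using only the heights and counts quoted above (treat it as an illustration, not a rendering of the Open Rack spec itself):

```python
# Back-of-envelope check on the Open Rack density figures above.
# All heights and counts come from the text, not from the spec.

ZONE_U = 13           # height of one power zone, per the text
PSU_U = 3             # each zone shares a 3U power supply
BOARDS_PER_SLOT = 2   # two dual-socket boards sit side by side

compute_u = ZONE_U - PSU_U                          # 10U of compute space per zone

# Option 1: diskless nodes in 1U slots
diskless = (compute_u // 1) * BOARDS_PER_SLOT       # 10 slots -> 20 servers

# Option 2: 1.5U slots, six drives shared by the two boards in each slot
storage_slots = int(compute_u // 1.5)               # 6 slots (9U used)
storage = storage_slots * BOARDS_PER_SLOT           # 12 servers

print(f"Diskless servers per zone: {diskless}")     # 20
print(f"1.5U servers per zone:     {storage}")      # 12
```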

Density aside, the project's real value is in what the design team calls component disaggregation: decoupling and modularizing server components that have very different useful lifespans. For example, in presentations at last spring's third Open Compute Summit, project leaders noted that while racks and cables easily last 10 or more years and power supplies six, CPUs and disks are often obsolete or dead in four or five years. By making it easier to connect and swap out infrastructure pieces like fans and power supplies, the project aims to maximize the utility of each component in the system and obviate the need for forklift upgrades just to update a server motherboard and CPU.
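
A rough illustration of why that lifespan mismatch argues for disaggregation, using the figures cited at the summit (the 10-year rack life and 4.5-year CPU/disk average are simplifications of "10 or more" and "four or five"):

```python
# How many component generations fit inside one rack's lifetime?
# Lifespans are the rough figures quoted from the Open Compute Summit.

RACK_LIFE = 10.0       # racks and cables: 10+ years
PSU_LIFE = 6.0         # power supplies: ~6 years
CPU_DISK_LIFE = 4.5    # CPUs and disks: obsolete in 4-5 years

# In a disaggregated rack, each part is replaced on its own schedule:
cpu_generations = RACK_LIFE / CPU_DISK_LIFE   # ~2.2 motherboard swaps
psu_generations = RACK_LIFE / PSU_LIFE        # ~1.7 PSU swaps

# In a monolithic server, a CPU refresh retires the chassis, fans and
# power supply along with it: the forklift upgrade the project avoids.
print(f"CPU/disk generations per rack life: {cpu_generations:.1f}")
print(f"PSU generations per rack life:      {psu_generations:.1f}")
```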

Our server survey found 40% of respondents would be very or somewhat interested in the concept once commercial products become available. Yet Drew Schulke, director of Dell Data Center Solutions, offers an important caveat: There may not be many customers who can buy and digest racks and racks of systems at once. For example, one of the Open Rack standards calls for three-rack clusters, meaning one "unit" has capacity for more than 100 dual-socket systems, with more than 300 drives. Using eight-core CPUs and maxing out the DIMM slots, that's easily enough to run 1,500 or more virtual machines.
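
The math behind that virtual machine estimate is straightforward. Here's a sketch, assuming roughly one vCPU per VM (the consolidation ratio is our assumption, not Schulke's):

```python
# Sanity check on the "1,500 or more virtual machines" estimate for a
# three-rack Open Rack cluster. System and core counts come from the
# text; the one-vCPU-per-VM ratio is assumed for illustration.

SYSTEMS = 100            # "more than 100 dual-socket systems" per cluster
SOCKETS_PER_SYSTEM = 2
CORES_PER_SOCKET = 8     # eight-core CPUs, as above

cores = SYSTEMS * SOCKETS_PER_SYSTEM * CORES_PER_SOCKET   # 1,600 cores

# Even without oversubscribing cores, ~1,600 of them comfortably host
# 1,500+ single-vCPU virtual machines.
print(f"Physical cores per three-rack cluster: {cores}")
```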

Commercial Alternatives

So far there are few commercially available whole rack systems. Although VCE sells Vblock systems integrating Cisco UCS blades and switches with EMC VNX storage, and HP has its CloudSystem Matrix, these resemble highly integrated "mainframes for the virtualization era" more than they do modular racks.

IBM is moving closer to the Open Compute model of mix-and-match server, storage and network subsystems in a standard chassis with its Flex System, a rack-based product that comes in 10U and 42U sizes. The design shares power supplies and cabling, but the backplane is, of course, proprietary, meaning it works only with IBM equipment. Much like Open Compute, IBM segments the rack into 10U modules, each capable of hosting 14 dual-socket, 24-DIMM-slot servers. In addition to three available server nodes, Flex System supports a storage node using a variant of IBM's Storwize V7000 with up to 12 3.5-inch drives, plus several networking nodes: a 64-port (42 internal, 22 external) 10 Gigabit Ethernet switch, a 48-port 8/16-Gbps Fibre Channel SAN switch and even a 14-port FDR InfiniBand switch.
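
By the same back-of-envelope logic as the Open Rack math above, those module figures imply a full-rack density along these lines (a sketch; how IBM actually reserves space for switches and PDUs in a 42U configuration is our assumption):

```python
# Rough Flex System density estimate from the figures quoted above.
# Module sizing comes from the text; rack fill is assumed.

RACK_U = 42
MODULE_U = 10
SERVERS_PER_MODULE = 14   # dual-socket, 24-DIMM-slot nodes

modules = RACK_U // MODULE_U                # 4 modules (40U)
servers = modules * SERVERS_PER_MODULE      # 56 dual-socket servers
spare_u = RACK_U - modules * MODULE_U       # 2U left over

print(f"Modules per 42U rack: {modules}")   # 4
print(f"Servers per rack:     {servers}")   # 56
print(f"Spare space:          {spare_u}U")  # 2
```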

While Flex System does one-up IBM's BladeCenter on density and power consumption (which IBM claims is 40% lower), its primary value appears to be the preintegration and configuration of hardware and management software from a single source, not heterogeneous, mix-and-match system flexibility. Sadly, the interests of server customers like Facebook and large incumbent systems houses like Dell, HP and IBM are often at odds, hence the value of trying to stoke an open, modular compute standard. Yet if Facebook and the Open Compute Project can do for hardware what Linux, Apache and other open source projects did for software, the day may be coming when swapping server and switch modules is as easy as replacing a failed hard drive.