
Open Compute Racks: Are We Going to Use Them in Our Data Centers?

Facebook's Open Compute Project is, on the surface, a set of specifications for the servers, racks, cooling and power distribution planned for its data centers. Somewhat surprisingly, the project continues to evolve and produce more specifications. In my view, it's a business initiative that drives cheaper procurement of data center hardware assets for Facebook. Sure, Facebook hopes other companies will use this as a jumping-off point for their own data center initiatives.

How does that benefit Facebook? I speculate that manufacturers all over the world would be delighted to make something that doesn't need an expensive design and validation process, which reduces their cost. Consider that a factory manager can download the specifications and start building a rack. Those racks are likely to have a ready market and, therefore, will sell to a wholesaler. Server enclosures and motherboards are also part of Open Compute. Contract manufacturers who take bids in the spot market might have some spare capacity and produce a run of server motherboards that meet the Open Compute spec--and spare capacity results in lower costs. This is disintermediation and cost reduction on a grand scale, and it could change the middle market for infrastructure.

Today, the recognized server manufacturers like to promote the idea that their products have added features that provide value to the customer. However, the last five years have seen the server market focus on price and shrink to contract manufacturing using off-the-shelf components and merchant silicon from Intel, AMD, Nvidia and a handful of other silicon vendors. In short, names like HP, Dell and Cisco are simply badges on commodity hardware. Let's face it--Intel CPUs and supporting chips are glued to a motherboard based on an Intel reference design. How much differentiation can you achieve when the "engine, drive train and suspension" all come from the same supplier?

Evidence of this trend is clearly visible in the latest HP Gen8 servers, which rely heavily on a marketing message that there are "special" features such as sensors that "do something" no other server can do. While HP has gotten ahead of the commoditization trend here with a focus on physical enclosures, power supplies and sensors, Cisco is differentiating with its fancy network capability in its UCS chassis. But how much are these features really worth?

Racks: Let's start with the racks. The Open Compute rack design builds the power supplies into the rack itself, supplying DC power directly to the devices. Two models--a singlet and a triplet--allow for better purchasing. You can read the rack specification and see that the racks are a nonstandard 21 inches wide, instead of the standard 19-inch mounting width. This is a significant change that would preclude the use of existing enclosures for a lot of equipment.
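
To keep the rack differences straight, here is a tiny Python sketch of the variants as I read the specification. Only the 21-inch bay, the in-rack DC power shelves and the singlet/triplet variants come from the spec; the field names and the compatibility check are my own illustration.

```python
# Minimal data model of the Open Rack variants discussed above.
# The 21-inch equipment bay, in-rack DC power and singlet/triplet variants
# come from the specification; the structure and check are illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class RackVariant:
    name: str
    columns: int            # rack columns sharing the frame
    bay_width_in: float     # equipment-bay mounting width in inches
    in_rack_dc_power: bool  # power shelves feed DC directly to devices

SINGLET = RackVariant("singlet", columns=1, bay_width_in=21.0, in_rack_dc_power=True)
TRIPLET = RackVariant("triplet", columns=3, bay_width_in=21.0, in_rack_dc_power=True)

def mounts_in(rack: RackVariant, equipment_mount_width_in: float) -> bool:
    """Equipment only mounts if it was built for the rack's bay width."""
    return equipment_mount_width_in == rack.bay_width_in

print(mounts_in(SINGLET, 19.0))  # False: standard 19-inch gear needs new enclosures
print(mounts_in(TRIPLET, 21.0))  # True: Open Compute-width equipment fits
```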

Note that Open Compute racks use overhead power. This means a raised floor is no longer required for your data center, which makes a new build much cheaper. Raised flooring is expensive because of its limited availability and its load-bearing construction, which really isn't a requirement anymore. Even the cost of hot/cold aisle containment barriers would be cheaper than the cost of raised flooring. A final point is that raised flooring must be installed at first build and is a large capital investment. Compare this with aisle containment systems that can be installed on a row-by-row or as-needed basis, thus spreading out the capital deployment to meet project-by-project requirements.
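
To make the capital-timing argument concrete, here is a minimal Python sketch. The dollar figures and build-out schedule are placeholders I've invented purely for illustration--they aren't real raised-floor or containment prices.

```python
# Illustrative comparison of up-front vs. incremental capital deployment.
# All figures are hypothetical placeholders chosen only to show the shape
# of the cash flow, not actual raised-floor or containment costs.

RAISED_FLOOR_UPFRONT = 500_000         # hypothetical: whole floor at first build
CONTAINMENT_PER_ROW = 40_000           # hypothetical: one aisle-containment row
ROWS_DEPLOYED_PER_YEAR = [2, 3, 3, 4]  # hypothetical build-out schedule

def cumulative_spend_raised_floor(years: int) -> list[int]:
    # Everything is spent in year one, regardless of how fast rows fill up.
    return [RAISED_FLOOR_UPFRONT] * years

def cumulative_spend_containment(rows_per_year: list[int]) -> list[int]:
    # Spend accrues only as each row of containment is actually installed.
    spend, total = [], 0
    for rows in rows_per_year:
        total += rows * CONTAINMENT_PER_ROW
        spend.append(total)
    return spend

print(cumulative_spend_raised_floor(4))                       # [500000, 500000, 500000, 500000]
print(cumulative_spend_containment(ROWS_DEPLOYED_PER_YEAR))   # [80000, 200000, 320000, 480000]
```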

The Open Compute battery cabinet design removes the need for an expensive (and dangerous) battery room in your data center. Again, capital costs are deferred to an as-needed basis, since batteries are installed in each racklet and sized to the requirements of the node. While the maintenance program will necessarily be more complex, it will also be possible to service batteries at much lower business risk--overall, a positive outcome. The design calls for 45 seconds of battery run time, enough for the generator to start.
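
As a back-of-the-envelope check on what 45 seconds of ride-through implies, here is a small Python sketch. Only the 45-second hold-up time comes from the design described above; the 12 kW rack load and the usable depth-of-discharge figure are hypothetical assumptions of mine.

```python
# Rough sizing of the in-rack battery energy needed to bridge a generator start.
# The 45-second hold-up time is from the Open Compute design discussed above;
# the 12 kW rack load and 80% usable fraction are hypothetical assumptions.

RACK_LOAD_W = 12_000     # hypothetical rack power draw
HOLDUP_S = 45            # battery ride-through until the generator starts
USABLE_FRACTION = 0.8    # hypothetical usable fraction of rated capacity

energy_j = RACK_LOAD_W * HOLDUP_S       # joules actually delivered during hold-up
energy_wh = energy_j / 3600             # convert joules to watt-hours
rated_wh = energy_wh / USABLE_FRACTION  # rated capacity needed at that usable fraction

print(f"Delivered energy: {energy_wh:.0f} Wh")  # ~150 Wh
print(f"Rated capacity:   {rated_wh:.0f} Wh")   # ~188 Wh
```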

This affects networking vendors in several ways: Equipment should become cheaper, since internal power supplies are no longer necessary. Of course, the racks are more expensive, but larger power supplies are inherently more efficient than many smaller units, and the switch to in-rack DC power distribution will bring further efficiencies.
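
A quick way to see why fewer, larger supplies can win is to multiply out the conversion stages. The efficiency percentages below are hypothetical assumptions for illustration, not measurements from the specification.

```python
# Illustrative end-to-end efficiency: many small per-device AC supplies vs.
# a shared rack-level power shelf feeding DC to devices. All efficiency
# percentages and the 10 kW load are hypothetical assumptions.

PER_DEVICE_PSU_EFF = 0.85    # hypothetical small embedded AC/DC supply
RACK_SHELF_EFF = 0.95        # hypothetical larger shared rectifier shelf
DC_DISTRIBUTION_EFF = 0.98   # hypothetical in-rack DC bus-bar losses

def watts_from_wall(load_w: float, *stage_efficiencies: float) -> float:
    """Wall power needed to deliver load_w through the given conversion stages."""
    draw = load_w
    for eff in stage_efficiencies:
        draw /= eff
    return draw

LOAD_W = 10_000  # hypothetical total IT load in the rack

print(f"Per-device supplies: {watts_from_wall(LOAD_W, PER_DEVICE_PSU_EFF):.0f} W from the wall")
print(f"Rack-level DC shelf: {watts_from_wall(LOAD_W, RACK_SHELF_EFF, DC_DISTRIBUTION_EFF):.0f} W from the wall")
```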

It's possible that networking vendors would resist the price reduction (to prevent top-line revenue loss), but, really, they aren't in the business of making power supplies and don't do a good job of producing efficient ones anyway. That said, there are some interesting maintenance problems, since the quality of the power feeding a device directly affects its reliability. Power supplies from recognized vendors are typically over-engineered and over-specified for this reason, but at considerable cost to the customer. It seems likely that vendors would, and should, simply install power sensors that provide monitoring data about power quality, and move on to adding real features that increase networking value.
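
If vendors did move the bulk power supply out and the sensors in, the monitoring side could be as simple as the sketch below. The 12 V nominal bus, the tolerance band and the sample readings are hypothetical, and this isn't any vendor's actual telemetry API.

```python
# Hypothetical sketch of the kind of power-quality monitoring a device could
# expose once bulk conversion moves into the rack. The 12 V nominal bus and
# the +/-5% tolerance band are assumptions, not figures from the article.

NOMINAL_V = 12.0
TOLERANCE = 0.05  # +/-5% band, hypothetical

def check_power_quality(samples_v: list[float]) -> list[str]:
    """Flag any voltage sample that falls outside the tolerance band."""
    low, high = NOMINAL_V * (1 - TOLERANCE), NOMINAL_V * (1 + TOLERANCE)
    alerts = []
    for i, v in enumerate(samples_v):
        if not (low <= v <= high):
            alerts.append(f"sample {i}: {v:.2f} V outside {low:.2f}-{high:.2f} V")
    return alerts

# Hypothetical telemetry readings from an in-device power sensor.
readings = [12.02, 11.97, 11.31, 12.05, 12.71]
for alert in check_power_quality(readings):
    print(alert)
```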
