Open Compute Racks: Are We Going to Use Them in Our Data Centers?

The Open Compute Project from Facebook may serve as a framework that could lead to cheaper, more efficient data center resources. But will we use Open Compute racks?

Greg Ferro

June 25, 2012


Facebook's Open Compute Project is, on the surface, a set of specifications for the servers, racks, cooling and power distribution planned for its data centers. Somewhat surprisingly, the project continues to evolve and produce more specifications. In my view, it's a business initiative that drives cheaper procurement of data center hardware assets for Facebook. Sure, Facebook hopes other companies will use it as a jumping-off point for their own data center initiatives.

How does that benefit Facebook? I speculate that there are manufacturers all over the world that would be delighted to make something that doesn't need an expensive design and validation process, and will thus reduce cost. Consider that a factory manager can download the specifications and start building a rack. Those racks are likely to have a market and, therefore, the products will sell to a wholesaler. Server enclosures and motherboards are also part of Open Compute. Contract manufacturers who take bids in the spot market might have some spare capacity and produce a run of server motherboards that meet the Open Compute spec--spare capacity results in lower costs. This is disintermediation and cost reduction on a grand scale, and could change the middle market for infrastructure.

Today, the recognized server manufacturers like to promote the idea that their products offer added value or features that differentiate them for the customer. However, the last five years have seen the server market focus on price and devolve into contract manufacturing using off-the-shelf components and merchant silicon from Intel, AMD, Nvidia and a handful of other silicon vendors. In short, names like HP, Dell and Cisco are simply badges on commodity hardware. Let's face it--Intel CPUs and supporting chips are glued to a motherboard based on an Intel reference design. How much differentiation can you achieve when the "engine, drive train and suspension" all come from the same supplier?

Evidence of this trend is clearly visible in the latest HP Gen8 servers, which rely heavily on a marketing message that there are "special" features such as sensors that "do something" no other server can do. While HP has gotten ahead of the commoditization trend here with a focus on physical enclosures, power supplies and sensors, Cisco is differentiating with its fancy network capability in its UCS chassis. But how much are these features really worth?

Racks: Let's start with the racks. The Open Compute rack design builds power supplies into the rack and delivers DC directly to devices. Two models--a singlet and a triplet--allow for more flexible purchasing. Read the rack specification and you'll see that the racks are a nonstandard 21 inches wide, instead of the standard 19-inch mounting. This is a significant change that would preclude the use of a lot of existing equipment and enclosures.

Note that Open Compute racks use overhead power. This means a raised floor is no longer required for your data center, which makes a new build much cheaper. Raised flooring is expensive because of its limited availability and load-bearing properties--properties that really aren't a requirement anymore. Even the cost of hot/cold aisle containment barriers would be cheaper than the cost of raised flooring. A final point is that raised flooring must be installed at first build and is a large capital investment. Compare this with aisle containment systems that can be installed on a row-by-row or as-needed basis, thus spreading out the capital deployment to meet project-by-project requirements.
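To make the capital-spreading point concrete, here's a rough back-of-the-envelope sketch in Python. Every figure in it--floor cost, containment cost per row, the build schedule--is a hypothetical placeholder, not a number from any vendor or from the Open Compute specifications.

```python
# Hypothetical comparison of up-front raised-floor capex versus
# row-by-row aisle containment. All figures are illustrative only.

FLOOR_COST_PER_SQ_M = 400          # assumed cost of load-bearing raised floor
ROOM_AREA_SQ_M = 1000              # assumed data hall floor area
CONTAINMENT_COST_PER_ROW = 15000   # assumed cost of containing one aisle/row
ROWS_PER_YEAR = 5                  # rows actually deployed each year
BUILD_YEARS = 4                    # years to reach the full build

raised_floor_capex_up_front = FLOOR_COST_PER_SQ_M * ROOM_AREA_SQ_M

containment_capex_per_year = CONTAINMENT_COST_PER_ROW * ROWS_PER_YEAR
containment_capex_total = containment_capex_per_year * BUILD_YEARS

print(f"Raised floor, all paid in year one: ${raised_floor_capex_up_front:,}")
print(f"Containment, paid per year:         ${containment_capex_per_year:,}")
print(f"Containment, total over the build:  ${containment_capex_total:,}")
```

With these made-up numbers, the raised floor demands the whole $400,000 on day one, while containment spreads a smaller total across the life of the build--which is the shape of the argument, whatever the real figures turn out to be.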

The Open Compute battery cabinet design removes the need for an expensive (and dangerous) battery room in your data center. Again, capital costs are deferred to an as-needed basis, since batteries are installed in each racklet and sized to the requirements of the node. While the maintenance program will necessarily be more complex, it will also be possible to service batteries with much less business risk--overall, a positive outcome. The forty-five seconds of battery runtime in the design is enough time for the generators to start.
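For a sense of scale, the sketch below converts a hypothetical rack load into the energy needed to ride through that window. The 5-kW load and the conversion efficiency are my assumptions; only the 45-second figure comes from the design.

```python
# Rough energy budget for the 45-second battery hold-up time.
# The 5 kW rack load and 95% conversion efficiency are assumptions.

HOLDUP_SECONDS = 45
RACK_LOAD_WATTS = 5000          # assumed IT load on one rack
CONVERSION_EFFICIENCY = 0.95    # assumed in-rack DC distribution efficiency

energy_joules = RACK_LOAD_WATTS * HOLDUP_SECONDS / CONVERSION_EFFICIENCY
energy_watt_hours = energy_joules / 3600

print(f"Energy to ride through {HOLDUP_SECONDS} s: {energy_watt_hours:.1f} Wh")
# About 66 Wh -- a small fraction of a typical battery string's capacity,
# which is why a shared cabinet per racklet can cover generator start-up.
```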

This affects networking vendors in several ways: Equipment should become cheaper, since no power supplies are needed in each device. Of course, the racks themselves are more expensive, but larger power supplies are inherently more efficient than many smaller units, and the switch to DC power distribution in-rack will bring further efficiencies.
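Here's a minimal sketch of why consolidation helps: many small supplies running at partial load tend to sit lower on their efficiency curves than one big rack-level supply running near its sweet spot. The efficiency figures are hypothetical, chosen only to show the shape of the argument.

```python
# Illustrative comparison of many lightly loaded per-device supplies
# versus one shared rack-level supply. Efficiency figures are hypothetical.

DEVICES = 40
LOAD_PER_DEVICE_W = 250

SMALL_PSU_EFFICIENCY = 0.85   # assumed per-device supply at partial load
RACK_PSU_EFFICIENCY = 0.94    # assumed rack-level supply near design load

it_load = DEVICES * LOAD_PER_DEVICE_W
wall_power_small = it_load / SMALL_PSU_EFFICIENCY
wall_power_rack = it_load / RACK_PSU_EFFICIENCY

print(f"IT load:             {it_load / 1000:.1f} kW")
print(f"Per-device supplies: {wall_power_small / 1000:.2f} kW from the wall")
print(f"Shared rack supply:  {wall_power_rack / 1000:.2f} kW from the wall")
print(f"Saved:               {wall_power_small - wall_power_rack:.0f} W per rack")
```

On these assumed numbers, a single rack saves roughly a kilowatt at the wall--and that's before counting the efficiency gained by distributing DC rather than converting AC in every box.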

It's possible that networking vendors would resist the price reduction (to prevent top-line revenue loss), but, really, they aren't in the business of making power supplies and don't do a good job of producing efficient ones anyway. That said, there are some interesting maintenance problems, since the quality of the power into a device can directly affect its reliability. Power supplies from recognized vendors are typically over-engineered and over-specified for this reason, but at considerable cost to the customer. It seems more likely that vendors would, and should, simply install power sensors that provide monitoring data about power quality, and move on to adding real features that increase networking value.
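As a flavor of what those power sensors might report, here's a minimal monitoring sketch. The read_bus_voltage() function, the 12-V nominal bus and the 5% tolerance are all hypothetical stand-ins; no particular vendor API is implied.

```python
# Minimal sketch of polling an in-rack DC bus sensor and flagging
# power-quality excursions. The sensor read is a hypothetical stand-in.

import random
import time

NOMINAL_VOLTS = 12.0    # assumed nominal in-rack DC bus voltage
TOLERANCE = 0.05        # assumed +/-5% acceptable deviation

def read_bus_voltage() -> float:
    """Hypothetical sensor read; replace with the real telemetry source."""
    return random.gauss(NOMINAL_VOLTS, 0.3)

def check_power_quality(samples: int = 10, interval_s: float = 1.0) -> None:
    for _ in range(samples):
        volts = read_bus_voltage()
        deviation = abs(volts - NOMINAL_VOLTS) / NOMINAL_VOLTS
        status = "ALERT" if deviation > TOLERANCE else "ok"
        print(f"{volts:5.2f} V  deviation {deviation:5.1%}  {status}")
        time.sleep(interval_s)

if __name__ == "__main__":
    check_power_quality()
```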

Commodity servers: Open Compute has specifications for AMD and Intel servers, including motherboards, chipsets and the chassis. It's surprising how similar these are to traditional servers in terms of components and selection. A contract manufacturer could produce these without any electronics design team, reducing the cost of production and QA testing. What would a server like this cost? I don't know, but I'd hazard about one-third the price of a conventional server--getting three servers for the price of a single conventional model is serious motivation. With a good virtualization setup, you would have plenty of spare server capacity in case of hardware failures--even more than a single spare. That's a practical option.
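To put the price argument in rough numbers, the sketch below uses the one-third estimate above with a hypothetical budget and server price, to show that the same spend buys enough commodity nodes to carry spares under virtualization.

```python
# Back-of-the-envelope: how many commodity servers the same budget buys,
# using the ~one-third price estimate above. All prices are illustrative.

BUDGET = 100_000                   # assumed hardware budget
CONVENTIONAL_SERVER_PRICE = 6_000  # assumed price of a brand-name server
COMMODITY_RATIO = 1 / 3            # estimate from the article

commodity_price = CONVENTIONAL_SERVER_PRICE * COMMODITY_RATIO

conventional_count = BUDGET // CONVENTIONAL_SERVER_PRICE
commodity_count = int(BUDGET // commodity_price)

# Under virtualization, a couple of spare nodes absorb hardware failures.
SPARES = 2
usable_commodity = commodity_count - SPARES

print(f"Conventional servers for the budget: {conventional_count}")
print(f"Commodity servers for the budget:    {commodity_count} "
      f"({usable_commodity} usable + {SPARES} spares)")
```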

Of course, you might not use these types of servers for every workload in an enterprise data center, but the 80/20 rule suggests that 80% of your servers could be these commodity systems. You might buy these in larger volumes, but it would still be cheaper overall. A lot cheaper.

Will Vendors Support It?

The conventional view is that few enterprise IT teams are willing to adopt radical ideas. In my view, it's certainly practical to use nonstandard racks and power systems, since we already do it today. Products such as IBM mainframes, HP NonStop and EMC VMAX arrays are examples of custom racks and hardware that already infest the physical data center and cause major problems in data center cooling, power distribution and weight management. It's not much more work to consider using Open Compute racks as a new template if they're cheaper and more efficient than other options.

The remaining question is whether vendors will produce products that are compliant. I'd say yes. Vendors will follow the money when there is enough customer demand. Open Compute will almost certainly be adopted by large corporate customers looking to reduce capex for new builds, and they can drive vendor engagement--that would be the first phase. Once products reach the market, availability could drive a second wave of adoption in the much larger market of midsized data centers. Change in the physical data center is seriously overdue, and this is a step in the right direction.

The specifications are published and released into the public domain in the form of openly available documents on GitHub.

About the Author

Greg Ferro

Network Architect & Blogger

Greg has nearly 30 years of experience as an IT infrastructure engineer and has been focused on data networking for about 20, including 12 years as a Cisco CCIE. He has worked in Asia and Europe as a network engineer and architect for a wide range of large and small firms in many verticals. He has been writing about networking for more than 20 years and in the media since 2001.

You can email Greg or follow him on Twitter as @etherealmind. He also writes the technical blog Etherealmind.com and hosts a weekly podcast on data networking at Packet Pushers.
