Greg Ferro

Open Compute Racks: Are We Going to Use Them in Our Data Centers?

The Open Compute Project from Facebook may serve as a framework that could lead to cheaper, more efficient data center resources. But will we use Open Compute racks?

Commodity servers: Open Compute has specifications for AMD and Intel servers, including motherboards, chipsets and the chassis. It's surprising how similar these are to traditional servers in terms of components and selection. A contract manufacturer could produce these without any electronics design team, reducing the cost of production and QA testing. What would a server like this cost? I don't know, but I'd hazard about one-third the price of a conventional server--getting three servers for the price of a single conventional model is serious motivation. With a good virtualization setup, you would have plenty of spare server capacity in case of hardware failures--even more than a single spare. That's a practical option.

Of course, you might not use these types of servers for every workload in an enterprise data center, but the 80/20 rule suggests that 80% of your servers could be these commodity systems. You might buy these in larger volumes, but it would still be cheaper overall. A lot cheaper.
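To make the 80/20 argument concrete, here's a back-of-envelope sketch. The prices are entirely hypothetical placeholders (the article gives no real figures); the only assumptions carried over are the one-third price estimate and the 80/20 split.

```python
# Hypothetical capex comparison -- illustrative numbers only, not real pricing.
CONVENTIONAL_PRICE = 9_000                 # assumed price per conventional server
COMMODITY_PRICE = CONVENTIONAL_PRICE / 3   # article's one-third estimate

def fleet_capex(total_servers: int, commodity_fraction: float = 0.8) -> float:
    """Capex for a mixed fleet where commodity_fraction of the servers
    (the 80/20 rule) are commodity Open Compute boxes."""
    commodity = int(total_servers * commodity_fraction)
    conventional = total_servers - commodity
    return commodity * COMMODITY_PRICE + conventional * CONVENTIONAL_PRICE

baseline = 100 * CONVENTIONAL_PRICE   # 100-server all-conventional fleet
mixed = fleet_capex(100)              # 80 commodity + 20 conventional
print(f"Baseline: ${baseline:,.0f}, mixed: ${mixed:,.0f}, "
      f"saving {1 - mixed / baseline:.0%}")
# -> Baseline: $900,000, mixed: $420,000, saving 53%
```

Even converting only the 80% of workloads that tolerate commodity hardware cuts the hypothetical fleet capex by more than half, which is the "a lot cheaper" the paragraph above is pointing at.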

Will Vendors Support It?

The conventional view is that few enterprise IT teams are willing to adopt radical ideas. In my view, it's certainly practical to use nonstandard racks and power systems, since we already do it today. Products such as IBM mainframes, HP NonStop and EMC VMAX arrays are examples of custom racks and hardware that already infest the physical data center and cause major problems in data center cooling, power distribution and weight management. It's not much more work to consider using Open Compute racks as a new template if they're cheaper and more efficient than other options.

The remaining question is whether vendors will produce products that are compliant. I'd say yes. Vendors will follow the money when there is enough customer demand. Open Compute will almost certainly be adopted by large corporate customers looking to reduce capex on new builds, and they can drive vendor engagement--that would be the first phase. Once products reach the market, availability could drive a second wave of adoption in the much larger market of midsized data centers. Change in the physical data center is seriously overdue, and this is a step in the right direction.

The specifications are published and released into the public domain in the form of openly available documents on GitHub.

Greg has nearly 30 years of experience as an IT infrastructure engineer and has been focused on data networking for about 20, including 12 years as a Cisco CCIE. He has worked in Asia and Europe as a network engineer and architect for a wide range of large and small firms in ...