Greg Ferro | Commentary

Open Compute Racks: Are We Going to Use Them in Our Data Centers?

The Open Compute Project from Facebook may serve as a framework that could lead to cheaper, more efficient data center resources. But will we use Open Compute racks?

Commodity servers: Open Compute has specifications for AMD and Intel servers, including motherboards, chipsets and the chassis. It's surprising how similar these are to traditional servers in terms of components and selection. A contract manufacturer could produce these without any electronics design team, reducing the cost of production and QA testing. What would a server like this cost? I don't know, but I'd hazard about one-third the price of a conventional server--getting three servers for the price of a single conventional model is serious motivation. With a good virtualization setup, you would have plenty of spare server capacity in case of hardware failures--even more than a single spare. That's a practical option.

Of course, you might not use these types of servers for every workload in an enterprise data center, but the 80/20 rule suggests that 80% of your servers could be these commodity systems. Even if you had to buy them in larger volumes to cover spare capacity, the overall spend would still be lower. A lot lower.
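To put a rough number on "a lot cheaper," here is a minimal back-of-the-envelope sketch in Python. The fleet size, the conventional-server price and the spare count are made-up figures for illustration only; the one-third price and the 80/20 split come from the estimates above.

    # Rough cost comparison for a hypothetical 100-server fleet.
    # Assumptions (not from any vendor price list):
    #   - a conventional server costs 6,000 (arbitrary currency units)
    #   - an Open Compute commodity server costs one-third of that (per the estimate above)
    #   - 80% of workloads can run on commodity servers (the 80/20 rule)
    #   - a handful of extra commodity servers are bought as spares

    FLEET_SIZE = 100
    CONVENTIONAL_PRICE = 6000
    COMMODITY_PRICE = CONVENTIONAL_PRICE / 3
    COMMODITY_SHARE = 0.80
    SPARE_COMMODITY_UNITS = 5

    commodity_units = int(FLEET_SIZE * COMMODITY_SHARE) + SPARE_COMMODITY_UNITS
    conventional_units = FLEET_SIZE - int(FLEET_SIZE * COMMODITY_SHARE)

    all_conventional = FLEET_SIZE * CONVENTIONAL_PRICE
    mixed_fleet = (commodity_units * COMMODITY_PRICE
                   + conventional_units * CONVENTIONAL_PRICE)

    print(f"All conventional:                 {all_conventional:,.0f}")
    print(f"80/20 mixed fleet (incl. spares): {mixed_fleet:,.0f}")
    print(f"Saving:                           {1 - mixed_fleet / all_conventional:.0%}")

In this toy example the mixed fleet, spares included, comes in at roughly half the cost of an all-conventional fleet--the kind of gap that makes the 80/20 argument compelling.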

Will Vendors Support It?

The conventional view is that few enterprise IT teams are willing to adopt radical ideas. In my view, it's certainly practical to use nonstandard racks and power systems, since we already do it today. Products such as IBM mainframes, HP NonStop and EMC VMAX arrays are examples of custom racks and hardware that already infest the physical data center, causing major problems with cooling, power distribution and weight management. It's not much more work to treat Open Compute racks as a new template if they're cheaper and more efficient than other options.

The remaining question is whether vendors will produce compliant products. I'd say yes: vendors follow the money when there is enough customer demand. Open Compute will almost certainly be adopted by large corporate customers looking to reduce capex on new builds, and they can drive vendor engagement--that would be the first phase. Once products reach the market, availability could drive a second wave of adoption in the much larger market of midsized data centers. Change in the physical data center is seriously overdue, and this is a step in the right direction.

The specifications are published and released into the public domain in the form of openly available documents on GitHub.

Greg has nearly 30 years of experience as an IT infrastructure engineer and has been focused on data networking for about 20, including 12 years as a Cisco CCIE. He has worked in Asia and Europe as a network engineer and architect for a wide range of large and small firms in ...