Should Your Cloud Go Open Compute?

Facebook's open data center designs offer efficiency and flexibility, but should your enterprise follow the Open Compute Project model? Here are some pros and cons.

Jim O'Reilly

December 24, 2014


Most IT admins planning a cloud installation envy the large cloud service providers' ability to create cost-effective infrastructure. Those providers combine huge buying leverage with minimalist, low-cost designs. Mega-scale CSPs and Internet giants like Facebook have also cut out the middleman by buying hardware directly from the original design manufacturers (ODMs) in China.

Large CSPs such as AWS, Google, and Microsoft Azure have invested heavily in cost-cutting approaches, and most enterprise buyers have neither the cash nor the expertise to match their near-decade of experience. That leaves enterprises in a follow-the-leader role when it comes to infrastructure procurement.

Data center operators typically share little of their infrastructure designs, so Facebook's decision to share server reference designs has garnered a lot of attention. The company has passed several designs to the Open Compute Project (OCP), which it launched in 2011, allowing both vendors and users to piggyback on the work and making multiple-sourced solutions available to the industry.

The OCP portfolio of designs covers AMD, ARM, and Intel motherboards; servers to mount them; storage and network gear; and racks. In all cases, the aim is a bare-bones design approach, with unnecessary components removed. Drives may be fixed, rather than removable, for instance, and metalwork may be as simple as a flat "pizza" plate.

For all this simplification, weight loss, power minimization, and cost reduction, the OCP approach has some drawbacks for the enterprise. To understand why, it's necessary to look at how the hyperscale data centers operate. They work directly with the Chinese ODMs, creating designs interactively. The ODMs started with fully featured Intel and AMD reference designs and removed components that weren't useful, such as PCI slots, the VGA port, and other unused connectors.

The results are targeted designs aimed at specific use cases. These are economical solutions, since even a small buy by a huge cloud company runs to thousands of units. This is where the results diverge from the typical commercial product: Lifecycles tend to be short. The hyperscale cloud companies are agile and evolve rapidly; they may go through four to six designs a year, each in large volume.

But the real difference may lie in defining the support structures around the chosen motherboard. The choice of cooling and power approach has a major effect on operating costs, and here there is little standardization. We see stories about Google using frozen lakes to cool a data center, or running with no chillers at all, but such approaches are a major design challenge for an enterprise to emulate. Again, the issue is cash and resources.
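To make the stakes concrete, here is a back-of-the-envelope sketch of how cooling efficiency drives operating cost, using the industry's PUE (power usage effectiveness) metric. All numbers below are hypothetical assumptions for illustration; actual PUE values and electricity rates vary widely by facility and region.

```python
# Rough annual facility power-cost comparison at two PUE levels.
# PUE = total facility power / IT equipment power, so a lower PUE
# means less overhead spent on cooling and power distribution.
# All figures here are assumed, not measured.

def annual_power_cost(it_load_kw, pue, rate_per_kwh):
    """Total facility energy cost per year: IT load scaled by PUE."""
    hours_per_year = 24 * 365
    return it_load_kw * pue * hours_per_year * rate_per_kwh

it_load_kw = 1000  # 1 MW of IT equipment (assumed)
rate = 0.10        # $0.10 per kWh (assumed)

# Hypothetical PUE values: ~1.8 for a conventional chiller-based
# design, ~1.1 for an aggressive free-cooled hyperscale design.
traditional = annual_power_cost(it_load_kw, pue=1.8, rate_per_kwh=rate)
free_cooled = annual_power_cost(it_load_kw, pue=1.1, rate_per_kwh=rate)

print(f"Traditional: ${traditional:,.0f}/yr")
print(f"Free-cooled: ${free_cooled:,.0f}/yr")
print(f"Savings:     ${traditional - free_cooled:,.0f}/yr")
```

Even at these modest assumed figures, the gap runs to hundreds of thousands of dollars per megawatt per year, which is why cooling and power design matters as much as the motherboard itself.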

The Open Compute Project is stepping up to the plate by defining an "Open Rack," but there are already signs that the design is gaining proprietary embellishments. Intel, for example, has offered a photonic interconnect scheme, which might be a good idea but seems expensive and a potential avenue for vendor lock-in.

Users looking to the OCP to close the cost gap between traditional gear and cloud-targeted solutions are getting just a glimpse of what's required. In many ways, the OCP is serving as a facilitator for the idea of bare-bones systems. The likely evolution of the server market is a rapid increase in comfort with the idea that elegant gold or black server boxes are no longer required.

As experience in installing this type of gear grows, expect enterprise users to go direct and buy current designs from the US outlets of Quanta, Hyve, Wiwynn, ZT Systems, and Supermicro, selecting from a menu of ready-to-go systems. We can also expect a layer of integrators to step in to convert motherboard/component-level solutions into ready-to-install racks and containers.

As economics allow, rack-integrated solutions will enter the midmarket in the form of ready-to-go sub-rack modules. This approach solves much of the issue of finding skilled and experienced integrators. A final step might be for large enterprises to build up their own integration teams and save the cost of external services.

Overall, the hyperscale computing approach works well and is compelling for most enterprises. The OCP will alleviate the skills and learning issues involved in getting on board with the new approach. But in the longer term, savings and increased agility will come from dealing directly with ODMs, using third-party integrators to create ready-to-install solutions.

About the Author(s)

Jim O'Reilly


Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
