Should Your Cloud Go Open Compute?

Most IT admins planning a cloud installation envy the large cloud service providers' (CSPs') ability to create cost-effective infrastructure. The CSPs combine buying leverage at huge scale with minimalist, low-cost designs. Mega-scale CSPs and Internet giants like Facebook have also cut out the middleman by buying hardware directly from original design manufacturers (ODMs) in China.

Large CSPs such as AWS, Google, and Microsoft Azure have invested heavily in cost-cutting approaches, and most enterprise buyers have neither the cash nor the expertise to match their near-decade of experience. That places enterprise buyers in a follow-the-leader role when it comes to infrastructure procurement.

Data center operators typically share little about their infrastructure designs, so Facebook's decision to share server reference designs has garnered a lot of attention. The company has passed several designs to the Open Compute Project (OCP), which it launched in 2011, allowing vendors and users alike to piggyback on the work and making multiple-sourced solutions available to the industry.

The OCP portfolio of designs covers AMD, ARM, and Intel motherboards; servers to mount them; storage and network gear; and racks. In all cases, the aim is a bare-bones design, with unnecessary components stripped out. Drives may be fixed rather than removable, for instance, and metalwork may be as simple as a flat "pizza" plate.

For all this simplification, weight loss, power minimization, and cost reduction, the OCP approach has some drawbacks for the enterprise. To understand why, it's necessary to look at how the hyperscale data centers operate. They work directly with the Chinese ODMs, iterating on designs together. The ODMs started with fully featured Intel and AMD reference designs and removed components that weren't useful, such as PCI slots, VGA ports, and other unused connectors.

The results are targeted designs aimed at specific use cases. They are economical because even a small buy by a huge cloud company runs to thousands of units. This is where the results diverge from typical commercial products: lifecycles tend to be short. The hyperscale cloud companies are agile and evolve rapidly; they may go through four to six designs a year, each in large volume.
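
To make the volume argument concrete, here's a minimal sketch of the amortization math in Python. Every dollar figure is an illustrative assumption, not quoted pricing; the point is only that a one-time design cost spread over thousands of units quickly beats a fully featured commercial server.

```python
# Illustrative: a one-time custom-design cost amortized over order volume.
# Every dollar figure here is an assumed example, not real pricing data.

design_cost = 250_000         # assumed one-time engineering/NRE cost, USD
unit_cost_custom = 1_800      # assumed per-unit cost, stripped-down ODM server
unit_cost_commercial = 2_400  # assumed per-unit cost, fully featured OEM server

for units in (100, 1_000, 10_000):
    # Spread the one-time design cost across the whole buy.
    custom_per_unit = unit_cost_custom + design_cost / units
    winner = "custom wins" if custom_per_unit < unit_cost_commercial else "commercial wins"
    print(f"{units:>6} units: custom ${custom_per_unit:,.0f}/unit "
          f"vs commercial ${unit_cost_commercial:,}/unit ({winner})")
```

At a few hundred units the design cost dominates; at hyperscale volumes it all but disappears, which is why these targeted designs only make sense for very large buys.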

But the real difference may lie in the support structures defined around the chosen motherboard. The choice of cooling and power approach has a major effect on operating costs, and here there is little standardization. We see stories about Google using frozen lakes to cool a data center, or running with no chillers at all, but these are major design challenges for an enterprise customer to emulate. Again, the issue is cash and resources.
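
The scale of the cooling-and-power effect is easiest to see through power usage effectiveness (PUE), the ratio of total facility power to IT power. The back-of-the-envelope sketch below compares a conventional enterprise figure against a chiller-free hyperscale figure; the load, utility rate, and PUE values are all assumed for illustration.

```python
# Illustrative annual energy cost at two PUE levels.
# Load, utility rate, and PUE values are assumed examples, not measurements.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(it_load_kw, pue, usd_per_kwh):
    """Total facility energy cost: IT load scaled up by PUE, priced per kWh."""
    return it_load_kw * pue * HOURS_PER_YEAR * usd_per_kwh

it_load_kw = 500      # assumed IT load for a midsize deployment
usd_per_kwh = 0.10    # assumed utility rate, USD

enterprise = annual_energy_cost(it_load_kw, pue=1.8, usd_per_kwh=usd_per_kwh)
hyperscale = annual_energy_cost(it_load_kw, pue=1.1, usd_per_kwh=usd_per_kwh)

print(f"Enterprise (PUE 1.8): ${enterprise:,.0f}/yr")
print(f"Chiller-free (PUE 1.1): ${hyperscale:,.0f}/yr")
print(f"Gap: ${enterprise - hyperscale:,.0f}/yr")
```

Even at this modest load, the gap runs to hundreds of thousands of dollars a year, which is the prize the hyperscale operators are chasing and the enterprise struggles to match.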

The Open Compute Project is stepping up to the plate by defining an "Open Rack," but there are already signs that the design is gaining proprietary embellishments. Intel, for example, has offered a photonic interconnect scheme, which may be a good idea but looks expensive and is potentially an avenue for vendor lock-in.

Users looking to the OCP to close the cost gap between traditional gear and cloud-targeted solutions are getting just a glimpse of what's required. In many ways, the OCP is serving as a facilitator for the idea of these bare-bones systems. The likely evolution of the server market will be a rapid increase in comfort with the idea that elegant gold or black server boxes are no longer required.

As experience in installing this type of gear grows, expect enterprise users to go direct and buy current designs from the US outlets of Quanta, Hyve, Wiwynn, ZT Systems, and Supermicro, selecting from a menu of ready-to-go systems. We can also expect a layer of integrators to step in to convert motherboard/component-level solutions into ready-to-install racks and containers.

As economics allow, rack-integrated solutions will enter the midmarket in the form of ready-to-go sub-rack modules. This approach solves much of the issue of finding skilled and experienced integrators. A final step might be for large enterprises to build up their own integration teams and save the cost of external services.

Overall, the hyperscale computing approach works well and is compelling for most enterprises. The OCP will alleviate the skills and learning issues involved in getting on board with the new approach. But in the longer term, savings and increased agility will come from dealing directly with ODMs, using third-party integrators to create ready-to-install solutions.