The Open Compute Project has attracted a lot of attention for its effort to bring the open-source model to computing hardware. While the project's goals are laudable, and its backers such as Facebook and Goldman Sachs are impressive, it seems unlikely that the work of the Open Compute Project will have much impact outside of a limited number of boutique buyers.
The primary reason is that the status quo works just fine for the market at large. The efforts of the OCP might have a greater effect if a lack of competition were stifling potential advances or keeping costs high, but that's just not the case.
General-purpose x86 servers have become so powerful, even as their costs have continued to drop, that they more than meet the computing and budget needs of the vast swathe of the server-buying market.
It's worth noting that OCP touts the greater energy efficiency of its server design. The project's website claims a 38% improvement vs. "vanity" servers, meaning those from name-brand vendors. That's a significant gain, particularly as power and cooling costs become more of a factor in datacenters of every size.
However, the "closed" server market is addressing efficiency without necessarily having to embrace openness. A case in point: ARM- and Atom-based systems-on-a-chip (SoCs), which are emerging as a ready-made alternative to their more power-hungry x86 brethren.
As Kurt Marko notes in this Network Computing article, "Both ARM and Intel are releasing new 64-bit products using next-generation process nodes that will substantially improve performance and memory capacity while still fitting in a 5-20 Watt per SoC power budget."
It's easy to point to the huge success of open-source software and presume that we'll see similar liftoff for open-source hardware, but I think that's a poor assumption. The barriers to entry for writing software are considerably lower than for designing and manufacturing hardware.
Sure, a clever engineer could set up a fabrication lab in a basement, get some component parts, and put together a neat motherboard with interesting specifications, but then what? Hardware has to be physically assembled, and that assembly capability has to be enormous if a design is to have any material impact.
By contrast, a chunk of code can be distributed 10,000 times in the blink of an eye. Open-source software can scale much faster, and innovation can spread more quickly, than hardware can hope to duplicate. This ability to scale gives software a considerable advantage when it comes to mass-market influence.
The Web-scale giants behind OCP operate at such scale that every watt they can shave or microsecond they can gain via a specialized design pays back in multiples. But because the designs are so specialized, I don't see them moving en masse to the enterprise or midmarket.
It's possible that some tweaks may trickle down to the Dells and Hewlett-Packards that manufacture systems for the masses, but in my opinion OCP will promote custom designs for highly specific requirements.
There's nothing wrong with this objective, but I think it's a mistake to expect the same impact from open hardware that we saw -- and continue to see -- with open software.
Drew is formerly editor of Network Computing and currently director of content and community for Interop.