Open Compute: Boutique Hardware In A Populist Wrapper

Who needs an open-source server when x86 machines are so powerful and their prices keep falling?

The Open Compute Project has attracted a lot of attention for its effort to bring the open-source model to computing hardware. While the project's goals are laudable, and its backers such as Facebook and Goldman Sachs are impressive, it seems unlikely that the work of the Open Compute Project will have much impact outside of a limited number of boutique buyers.

The primary reason is that the status quo works just fine for the market at large. The efforts of the OCP might have a greater effect if a lack of competition were stifling potential advances or keeping costs high, but that's just not the case.


General-purpose x86 servers have become so powerful, even as their costs have continued to drop, that they more than meet the computing and budget needs of the vast swath of the server-buying market.

It's worth noting that OCP touts the greater energy efficiency of its server design. The project's website claims a 38% improvement vs. "vanity" servers, meaning those from name-brand vendors. That's a significant gain, particularly as power and cooling costs become more of a factor in data centers of every size.
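To put that 38% figure in perspective, here is a back-of-the-envelope sketch of what such a gain could mean in annual power costs. The rack size, electricity price, and PUE below are illustrative assumptions (not from the article), and the calculation simplifies the claim by treating a 38% efficiency improvement as a 38% reduction in energy for the same work:

```python
# Rough annual power-cost savings from a 38% efficiency gain.
# All inputs below are illustrative assumptions, not OCP figures.
RACK_KW = 10.0          # assumed IT load per rack, in kW
PRICE_PER_KWH = 0.10    # assumed electricity price, $/kWh
PUE = 1.5               # assumed power usage effectiveness (cooling overhead)
EFFICIENCY_GAIN = 0.38  # OCP's claimed improvement vs. "vanity" servers

hours_per_year = 24 * 365
baseline_cost = RACK_KW * PUE * hours_per_year * PRICE_PER_KWH
savings = baseline_cost * EFFICIENCY_GAIN

print(f"Baseline annual power cost per rack: ${baseline_cost:,.0f}")
print(f"Annual savings at a 38% gain:        ${savings:,.0f}")
```

Under these assumptions a single 10 kW rack costs about $13,000 a year to power and cool, so a 38% gain is worth roughly $5,000 per rack per year. Multiplied across tens of thousands of racks, that explains why the Web-scale operators care; for a shop running a handful of racks, it's a much smaller prize.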

However, the "closed" server market is addressing efficiency without necessarily having to embrace openness. A case in point: ARM- and Atom-based systems-on-a-chip (SoCs) are emerging as a ready-made alternative to their more power-hungry x86 brethren.

As Kurt Marko notes in this Network Computing article, "Both ARM and Intel are releasing new 64-bit products using next-generation process nodes that will substantially improve performance and memory capacity while still fitting in a 5-20 Watt per SoC power budget."

It's easy to point to the huge success of open-source software and presume that we'll see similar liftoff for open-source hardware, but I think that's a poor assumption. The barriers to entry for writing software are considerably lower than for designing and manufacturing hardware.

Sure, a clever engineer could set up a fabrication lab in a basement, get some component parts, and put together a neat motherboard with interesting specifications, but then what? Hardware has to be physically assembled. The assembly capability has to be enormous if a design is to have any material impact.

By contrast, a chunk of code can be distributed 10,000 times in the blink of an eye. Open-source software can scale much faster, and innovation can spread more quickly, than hardware can hope to duplicate. This ability to scale gives software a considerable advantage when it comes to mass-market influence.

The Web-scale giants behind OCP operate at such a scale that every watt they can shave or microsecond they can gain via a specialized design pays back in multiples. But because the designs are so specialized, I don't see them moving en masse to the enterprise or midmarket.

It's possible that some tweaks may trickle down to the Dells and Hewlett-Packards that manufacture systems for the masses, but in my opinion OCP will promote custom designs for highly specific requirements.

There's nothing wrong with this objective, but I think it's a mistake to expect the same impact from open hardware that we saw -- and continue to see -- with open software.


Drew is formerly editor of Network Computing and currently director of content and community for Interop.
User Rank: Ninja
2/4/2014 | 12:36:57 PM
Cost advantage
The key question raised here: Is it all that big a cost savings to adopt Open Compute designs? Open-source software could attack the soft underbelly of the proprietary model's fat profit margins. In servers, at least, the environment is one of falling prices and margins -- witness IBM exiting the x86 business. Does that make networking equipment, where margins are stronger, a likelier candidate for adoption of these designs?
User Rank: Apprentice
2/4/2014 | 1:10:09 PM
Re: Cost advantage
I see Open Compute's potential more in the area of networking than in servers. Perhaps the networking industry will go the way of the server industry -- toward lower prices and commoditization. It already has to some extent. But there are lots of networking vendors still earning fat profit margins on their systems. We'll see if customers think they're getting enough value from them.
User Rank: Apprentice
2/4/2014 | 1:35:39 PM
Open Compute: Scale-out Hardware In A Cloud Harness
That's a great line of attack, Drew, and all too true for the server market as we know it. But that market is changing rapidly, and Open Compute will end up with a major role in how it changes. The margins are already razor-thin on the general-purpose servers commonly used in enterprise data centers. The new market is for scalable designs where the data center becomes the computer. That is, all the resources in the data center can be linked and directed to act as much larger systems through scale-out techniques. Open Compute started because Facebook needed such a design. Not everyone is Facebook. But on the other hand, every enterprise wants the advantages of the cloud's economies of scale. Some will get them by going to the public cloud, but many will build a private cloud. Right now, financial services is leading the way, as it often does, and financial services likes simplified, easily configured and managed -- and, let me say, open-source -- design.
Drew Conry-Murray
User Rank: Strategist
2/5/2014 | 9:55:50 AM
Re: Open Compute: Scale-out Hardware In A Cloud Harness
Hi Charlie,

You make a good point about the data center essentially becoming the computer. But I think the big hitch at this point is more about software than hardware. There's a lot of orchestration and management that has to happen among the disparate hardware components to achieve the goal of allocating and reallocating resources on the fly. Network and storage vendors are under a lot of pressure to give up some of the control they currently exercise over their pieces of the private cloud stack. We're starting to see that pressure in the form of APIs and protocols that allow more intelligence and decision-making to move into a controller or orchestration layer, but it's just getting started. I think the real fight -- and the real potential for innovation in private clouds -- will come from software, not hardware.
User Rank: Apprentice
2/4/2014 | 3:09:10 PM
OCP shifts design control to the customer
And another thing, Drew. There's no reason why HP, Dell, or Lenovo won't become major manufacturers of OCP designs. They already produce them, grudgingly, when customers ask for them. That's the real point about Open Compute: power over the design shifts from the manufacturer to the consumer, which makes already-slender margins a little thinner. And the private cloud builder knows he's got some say over his next generation of hardware. Facebook and financial services want that. Anyone trying to make over their data center, as Bank of America is, wants some say over the components going into it. I think eventually Open Compute will play a role in creating a baseline, common-denominator set of devices, mass-produced at very large scale for many customers. It hasn't happened yet, I know. But it will.