Open Compute's Trickle-Down Tech
February 06, 2013
Given the degree to which open source software has worked its way into the corporate data center, it's tempting to dream that the "open source hardware" espoused by the Open Compute Project might let us cast off the yoke of vendor lock-in and save money on data center hardware. The reality is that hardware designed to meet the needs of Facebook and Rackspace may not be much use to the average enterprise data center.
The first thing you have to realize about Open Compute designs is that, like the rich, service providers are different. They don't buy servers one or even a hundred at a time; they provision whole data centers all at once. Those servers are also committed to very specific use cases rather than general-purpose computing.
As a result, even if providers have to buy nonstandard Open Racks with centralized 12VDC power supplies, and unique servers to fill them, saving $100 per server is worth it because they're buying 20 rows of racks and servers all at once. And all those servers are going to run shards of the same hyperscale application.
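The scale economics here are easy to check with back-of-envelope arithmetic. The $100-per-server figure and the 20 rows come from the discussion above; the racks-per-row and servers-per-rack numbers below are purely illustrative assumptions.

```python
# Back-of-envelope fleet savings for a service-provider buildout.
SAVINGS_PER_SERVER = 100   # dollars per server, figure cited above
ROWS = 20                  # rows of racks, figure cited above
RACKS_PER_ROW = 18         # assumption for illustration
SERVERS_PER_RACK = 30      # assumption for illustration

servers = ROWS * RACKS_PER_ROW * SERVERS_PER_RACK
total_savings = servers * SAVINGS_PER_SERVER
print(f"{servers} servers -> ${total_savings:,} saved")
```

At that scale, a seemingly trivial per-unit saving adds up to seven figures, which is why a provider will happily tolerate a rack and power standard nobody else uses. An enterprise buying 50 servers sees $5,000, which doesn't justify re-racking a data center.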
The corporate world is much more interested in flexibility and compatibility. When I retire 50 old 2U servers, it's easy to slot in 50 new 2U servers and buy a few spares. I also need those servers to go into my existing 19-inch racks with 208/120VAC power. I was looking at a presentation on the Open Vault (previously known as Knox) storage device. It holds 30 3.5-inch disks in a 2U tool-less enclosure, with slide-in access cards for a SAS expander or a storage server. I thought it would be a great addition to DeepStorage Labs--until I realized I'd need an Open Rack to put it in, because its only power option is an Open Rack's 12V bus bars.
At the recent Open Compute summit, one of the speakers said, "We've been asked how this applies to virtualization. What we're talking about is the opposite of virtualization." In other words, organizations like Facebook, which kicked off the Open Compute Project, aren't trying to run as many different workloads on a single server as they can; instead, they're spreading a single massive workload across many, many servers to optimize that workload for a billion users.
As a result, the "group-hug board" that houses 10 ARM or Atom micro-servers, each on a 20- or 30-square-inch PC board, would be great if you plan to shard your application across thousands of servers, or if you want to offer dedicated hosting servers to the masses. I can't see it working in corporate America, even in private cloud environments.
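The sharding model these boards are built for can be sketched in a few lines: each user is pinned to one of many identical small servers by hashing a stable key. The shard count and user IDs below are hypothetical, not from the article.

```python
# Minimal sketch of application sharding across a fleet of
# identical micro-servers. Shard count is an assumed example.
import hashlib

NUM_SHARDS = 1000  # hypothetical micro-server count

def shard_for(user_id: str) -> int:
    """Stable hash so a given user always lands on the same shard."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same user always routes to the same micro-server, so each
# tiny node only ever holds a predictable slice of the workload.
print(shard_for("alice"), shard_for("bob"))
```

Note the contrast with virtualization: instead of packing many workloads onto one big server, one workload is cut into thousands of predictable slices, each small enough for an Atom- or ARM-class node.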
On the other hand, many corporate data centers are primarily populated with servers that have storage for boot--if they have local storage at all. Wouldn't it be nice, as some of the Open Compute folks were promoting, to design server hardware with the PCIe slots, network connections and the like on the front, and replace the hot-swap drive bays we don't use anyway?
That would allow server maintenance and updates from the cold aisle. Today, to add a network card I have to run back and forth from the cold to hot aisle two or three times. In a large data center, where a row is 20 or more racks long, that adds a lot of movement (though I, for one, could use the exercise). With hot aisles running well over 100 degrees F, working in the cold aisle is also a lot more comfortable.
It will probably be years before most large corporate data centers start adopting Open Rack and the rest of the Open Compute Project's designs. Even then, I expect them to move slowly, perhaps by installing a row or two of Open Racks for a private cloud or VDI implementation.
In the meantime, Open Compute isn't good news for leading server vendors such as HP and the soon-to-be-private Dell. As virtualization reduces the number of servers that corporations buy, vendors look to the service provider market for growth. Some have gone so far as to develop products such as Dell's C-series and Supermicro's FatTwin and MicroCloud high-density servers (some of which, by the way, have front-mounted I/O).
Service providers currently account for 20% to 30% of server sales--a percentage that will only grow along with public cloud services. But every time a business or educational institution shuts down its Exchange environment and goes to Gmail, those server sales shift to Google, and therefore to its ODM suppliers such as Quanta and Wistron, which are rapidly replacing HP, IBM and Dell as vendors to the big service providers. Open Compute designs turn servers into commodities, which can't be good for name-brand vendors.