Howard Marks

Network Computing Blogger



Open Compute's Trickle-Down Tech

Given the degree to which open source software has worked its way into the corporate data center, it's interesting to dream about how the "open source hardware" espoused by the Open Compute Project might let us cast off the yoke of vendor lock-in and save money on data center hardware. The reality is that hardware designed to meet the needs of Facebook and Rackspace may not be much use to the average enterprise data center.

The first thing you have to realize about Open Compute designs is that, like the rich, service providers are different. They don't buy servers one or even a hundred at a time; they provision whole data centers all at once. Those servers are also committed to very specific use cases rather than general-purpose computing.


As a result, if providers have to buy different Open Racks with centralized 12VDC power supplies and unique servers to go in them, it's worth it to save $100 per server because they're buying 20 rows of racks and servers all at once. And all those servers are going to run shards of the same hyperscale application.
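The scale argument above is easy to make concrete with a back-of-envelope calculation. The $100-per-server savings and the 20 rows come from the article; the racks-per-row and servers-per-rack figures below are purely illustrative assumptions:

```python
# Back-of-envelope: why a small per-server saving matters at hyperscale.
# ROWS and SAVINGS_PER_SERVER come from the article; the other two
# deployment figures are assumptions for illustration only.
ROWS = 20                 # rows of racks provisioned at once (article)
RACKS_PER_ROW = 24        # assumed racks in a row
SERVERS_PER_RACK = 40     # assumed high-density Open Rack server count
SAVINGS_PER_SERVER = 100  # dollars saved per server (article)

total_servers = ROWS * RACKS_PER_ROW * SERVERS_PER_RACK
total_savings = total_servers * SAVINGS_PER_SERVER
print(f"{total_servers} servers -> ${total_savings:,} saved")
# -> 19200 servers -> $1,920,000 saved
```

Under these assumed numbers, a $100 difference per box is nearly $2 million on a single build-out, which is why a one-off rack and power design can pay for itself for a service provider while making no sense for a shop buying 50 servers at a time.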

The corporate world is much more interested in flexibility and compatibility. When I retire 50 old 2U servers, it's easy to slot in 50 new 2U servers and buy a few spares. I also need those servers to go into my existing 19-inch racks with 208/120VAC power. I was looking at a presentation on the Open Vault (previously known as Knox) storage device. It holds 30 3.5-inch disks in a 2U tool-less enclosure, with slide-in access cards that hold either a SAS expander or a storage server. I thought it would be a great addition to DeepStorage Labs--until I realized I'd need an Open Rack to put it in, because the only power option is the 12V bus bars in an Open Rack.
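The Open Vault's appeal is its drive density, which is worth quantifying. The 30-drive/2U figure comes from the article; the per-drive capacity below is an assumption (4TB 3.5-inch drives were common around 2013):

```python
# Density sketch for the Open Vault (Knox) enclosure.
# 30 drives in 2U comes from the article; DRIVE_TB is an assumed
# 2013-era 3.5-inch drive capacity, for illustration only.
DRIVES = 30
RACK_UNITS = 2
DRIVE_TB = 4  # assumed per-drive capacity in TB

drives_per_u = DRIVES / RACK_UNITS
raw_tb = DRIVES * DRIVE_TB
print(f"{drives_per_u} drives/U, {raw_tb} TB raw per enclosure")
# -> 15.0 drives/U, 120 TB raw per enclosure
```

At 15 drives per rack unit, that handily beats a typical 12-bay 2U server of the era, which is exactly why the 12V-bus-bar-only power design is so frustrating for anyone with standard 19-inch racks.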

At the recent Open Compute summit, one of the speakers said, "We've been asked how this applies to virtualization. What we're talking about is the opposite of virtualization." In other words, organizations like Facebook, which kicked off the Open Compute Project, aren't trying to run as many different workloads on a single server as they can; instead, they're spreading a single massive workload across many, many servers to optimize that workload for a billion users.

As a result, the "group-hug board" that houses 10 ARM or Atom micro-servers, each on a 20- or 30-square-inch PC board, would be great if you planned to shard your application across thousands of servers, or if you want to offer dedicated hosting servers to the masses. I can't see it working in corporate America, even in private cloud environments.

On the other hand, many corporate data centers are primarily populated with servers that have storage for boot--if they have local storage at all. Wouldn't it be nice, as some of the Open Compute folks were promoting, to design server hardware with the PCIe slots, network connections and the like on the front, and replace the hot-swap drive bays we don't use anyway?

That would allow server maintenance and updates from the cold aisle. Today, to add a network card I have to run back and forth from the cold to hot aisle two or three times. In a large data center, where a row is 20 or more racks long, that adds a lot of movement (though I, for one, could use the exercise). With hot aisles running well over 100 degrees F, working in the cold aisle is also a lot more comfortable.

It will probably be years before most large corporate data centers start adopting Open Rack and the rest of the Open Compute Project's designs. Even then, I expect them to move slowly, perhaps by installing a row or two of Open Racks for a private cloud or VDI implementation.

In the meantime, Open Compute isn't good news for leading server vendors such as HP and the now-private Dell. As virtualization reduces the number of servers that corporations buy, vendors look to the service provider market for growth. Some have gone so far as to develop products such as Dell's C-series and Supermicro's FatTwin and MicroCloud high-density servers (some of which, by the way, have front-mounted I/O).

Service providers currently account for 20% to 30% of server sales--a percentage that will only grow along with public cloud services. But every time a business or educational institution shuts down its Exchange environment and goes to Gmail, those server sales shift to Google, and therefore to its ODM suppliers such as Quanta and Wistron, which are rapidly replacing HP, IBM and Dell as vendors to the big service providers. Open Compute designs turn servers into commodities, which can't be good for name-brand vendors.
