Jim Rapoza



Facebook's Open Compute Summit Pushes Open Hardware For The High End

At the Open Compute Summit this week in New York, Frank Frankovsky, Facebook’s director of hardware design and supply chain, opened the proceedings by saying, "Open source is not just something that you can use to describe software, but also to describe the hardware space."

That is the goal of the Open Compute Project, which aims to spur the development of cheaper servers and more efficient data centers. The project was kicked off by Facebook in April 2011 and has since shared details of the social networking giant's customized server specifications and data center design principles. At the summit, a board of directors was announced that includes members from Facebook, Intel, Arista, Amazon, Goldman Sachs and Dell. But does this project really herald an open source era for hardware? Yes and no.

The Open Compute Project is certainly based on open source principles and guidelines, and it has embraced many practices of successful open source software groups, such as the Apache Software Foundation, including an open model for contributions and project organization.

But people shouldn't expect full-fledged products to emerge from this group the way code comes out of Apache or Mozilla. What the Open Compute Project releases are specifications for data center hardware, covering everything from servers to racks to batteries (but not networking equipment). Companies and vendors can then build products to these open specifications.

Currently, the efforts of the Open Compute Project will be of most interest to companies at the highest end of the data center market--companies such as Facebook and Amazon that design massive data centers, ones that often occupy entire buildings. For example, one of the specifications launched at the summit was for a triplet rack designed to hold Open Compute servers--a rack so large it wouldn't fit in many companies' data centers.

When I asked the Open Compute Project board if and when some of these designs would be useful for the "smaller" data centers found in some businesses (like, you know, those with just 1,000 servers as opposed to 40,000 or 50,000), they said that down the road the specifications would adapt to designs more common in smaller data centers, such as single racks.

Sitting through the sessions at the Open Compute Summit, I saw quite a bit of exciting technology, with real potential for power savings, better and cheaper cooling, and much greater hardware interoperability in the data center. For example, the power supply designs handle utility power with fewer conversion stages, cutting down on waste, and the rack and data center designs avoid air conditioning and other expensive cooling methods, relying instead on regular room venting.
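The power-savings argument comes down to simple arithmetic: every conversion stage between the utility feed and the server multiplies in its own loss, so eliminating stages raises the end-to-end efficiency. A minimal sketch of that math -- the stage efficiencies below are assumed, illustrative numbers, not figures from any Open Compute specification:

```python
def chain_efficiency(stage_efficiencies):
    """Overall efficiency of cascaded power-conversion stages:
    the product of each stage's individual efficiency."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical traditional path: double-conversion UPS -> PDU -> server PSU
traditional = chain_efficiency([0.92, 0.98, 0.90])

# Hypothetical simplified path with one fewer conversion stage
simplified = chain_efficiency([0.96, 0.94])

print(f"traditional: {traditional:.1%}")  # ~81.1% of utility power reaches the server
print(f"simplified:  {simplified:.1%}")   # ~90.2%
```

Even with these made-up numbers, the shape of the result holds: losses compound multiplicatively, so at data center scale, removing a single conversion step can recover several percent of total facility power.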

However, in many ways it felt as if I were looking at a Formula 1 car: the technology is really cool and some of it will eventually trickle down, but the advances in that race car (a massive data center) won't apply much right now to my regular sedan (a typical data center).

