Jim Rapoza


Facebook's Open Compute Summit Pushes Open Hardware For The High End

At the Open Compute Summit this week in New York, Frank Frankovsky, Facebook’s director of hardware design and supply chain, opened the proceedings by saying, "Open source is not just something that you can use to describe software, but also to describe the hardware space."

That is the goal of the Open Compute Project, which aims to spur the development of cheaper servers and more efficient data centers. The project was kicked off by Facebook in April 2011 and has shared details of the social networking giant's customized server specifications and data center design principles. At the summit, a board of directors was announced that includes representatives from Facebook, Intel, Arista, Amazon, Goldman Sachs and Dell. But does this project really herald an open source era for hardware? Yes and no.

The Open Compute Project is certainly based on open source principles and guidelines, and it has embraced many of the practices of successful open source software organizations, such as the Apache Software Foundation, including an open model for contributions and project organization.

But people shouldn't expect full-fledged products to emerge from this group the way code comes out of Apache or Mozilla. What the Open Compute Project releases are specifications for data center hardware, covering everything from servers to racks to batteries (but not networking equipment). With these specifications in hand, companies and vendors can build products that conform to the open hardware designs.

Currently, the efforts of the Open Compute Project will be of most interest to companies at the highest end of the data center market: companies such as Facebook and Amazon that design massive data centers, often occupying entire buildings. For example, one of the specifications launched at the summit was for a giant triplet rack designed to hold Open Compute servers, a rack so large it wouldn't fit in the data centers of many companies.

When I asked the Open Compute Project board if and when some of these designs would be useful for the "smaller" data centers found in many businesses (you know, those with just 1,000 servers rather than 40,000 or 50,000), the answer was that, down the road, the specifications would adapt to designs more common in smaller data centers, such as single racks.

Sitting through the sessions at the Open Compute Summit, I saw quite a bit of exciting technology. There is a great deal of potential for power savings, better and cheaper cooling, and much greater hardware interoperability in the data center. For example, the power supply designs handle utility power with fewer conversion steps, cutting down on waste, and the rack and data center designs avoid air conditioning and other expensive cooling methods in favor of regular room venting.
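
To see why fewer conversion steps matter, here is a minimal back-of-the-envelope sketch in Python. The per-stage efficiency figures and the power chain layout are hypothetical, not taken from the Open Compute specifications; the point is simply that losses compound multiplicatively, so removing a conversion stage saves real power at data center scale.

```python
# Back-of-the-envelope comparison of two power delivery chains.
# All efficiency figures are hypothetical, for illustration only.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power chain; per-stage losses multiply."""
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# A conventional chain: utility AC -> UPS (double conversion) -> PDU -> server PSU
conventional = chain_efficiency([0.94, 0.98, 0.90])

# A simplified chain with fewer conversion steps, in the spirit of the
# Open Compute designs: utility AC -> one conversion near the server
simplified = chain_efficiency([0.95])

it_load_kw = 1000  # a hypothetical 1 MW of IT load

waste_conventional = it_load_kw / conventional - it_load_kw
waste_simplified = it_load_kw / simplified - it_load_kw

print(f"Conventional chain efficiency: {conventional:.1%}")
print(f"Simplified chain efficiency:   {simplified:.1%}")
print(f"Power wasted per 1 MW of IT load: "
      f"{waste_conventional:.0f} kW vs. {waste_simplified:.0f} kW")
```

With these made-up numbers, the conventional chain wastes roughly four times as much power as the simplified one for the same IT load, which is the kind of gain the Open Compute designs are chasing.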

However, in many ways it felt as if I were looking at a Formula 1 car: The technology is really cool and some of it will eventually trickle down, but the advances in that race car (the massive data center) won't apply much right now to my regular sedan (the typical data center).

