Running rows and rows of servers, storage appliances and networking equipment makes a data center's monthly electric bill a whopper. That's why data-center operators are constantly working on ways to improve energy efficiency. A couple of tech companies in Silicon Valley have reported considerable progress in improving energy efficiency, and a national organization shows the average energy efficiency also improving, though some data-center operators are having more success than others.
Energy efficiency of data centers is measured by what's called the power usage effectiveness (PUE) rating, which is the ratio of a data center's total energy consumption to the energy consumed by its servers, storage and networking equipment alone. The difference between the two, everything beyond the computing gear itself, including the lights, cooling equipment, air circulation systems and other items, is collectively called "overhead." If a data center has a PUE of 2.0, that means that for every kilowatt hour of electricity used to generate compute cycles, another kilowatt hour is being consumed by overhead. Ideally, the PUE should be as close to 1.0 as possible.
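The PUE arithmetic described above can be sketched in a few lines of Python; the function name and sample figures here are illustrative, not part of any standard tool:

```python
def pue(it_kwh: float, overhead_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    the energy consumed by the IT equipment (servers, storage, network)."""
    return (it_kwh + overhead_kwh) / it_kwh

# A facility whose overhead matches its IT load rates a PUE of 2.0:
# every kWh of compute is matched by a kWh of overhead.
print(pue(1000.0, 1000.0))   # → 2.0

# A facility spending only 140 kWh of overhead per 1,000 kWh of
# compute rates a PUE of 1.14.
print(pue(1000.0, 140.0))    # → 1.14
```

The same formula run backwards gives the overhead share: a PUE of 1.83 means overhead energy is 83% of the computing equipment's energy.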
According to the Uptime Institute, which tracks these statistics, the average PUE for data centers surveyed in 2011 was 1.83, meaning overhead consumed 83% as much electricity as the computing equipment itself. That figure comes from a global survey of 525 data center operators, 71% of whom were in North America.
Google recently reported a PUE of just 1.14 at the end of 2011, averaged across all its data centers, meaning overhead energy usage was only 14% of computing electricity consumption. "I'm delighted to see that," says Joe Kava, senior director of data center construction and operations at Google, who blogged about the numbers March 26. The 2011 PUE is an improvement over the 1.16 rating of 2010 and the 1.22 rating of 2008, when Google first began tracking energy usage.
Data-center operators have a number of different levers at their disposal to manage energy consumption, Kava says, but the biggest energy hog in the overhead is cooling the equipment: more than 70% of the energy savings opportunity lies in the cooling systems.
“The biggest and easiest way you can do that is to turn up the temperature in the data center just like in your house,” he notes.
Another way to save on cooling energy is to be more precise about where the cool air is directed, he adds. Oftentimes, server racks are only partly filled, so it makes no sense to send cool air to racks that have gaps in them. The answer is to install blanking plates to cover those gaps.
Another option is to use outside air as much as possible, especially in cool or temperate climates. HP opened a data center last year in Fort Collins, Colo., and during certain times of the year can bring in cool Rocky Mountain air to chill the servers, says Doug Oathout, VP of green IT at HP. He says the PUE at the Fort Collins facility averages 1.35, versus the corporate average for all HP data centers of 1.55. HP operates another data center in Wynyard, England, that is located on the shore of the North Sea. Drawing in cool sea air gives the Wynyard data center a PUE of just 1.19.
“You want to design to take advantage of the most free cooling based on the natural environment that you are in,” he states.