Cloud Data Centers: Power Savings or Power Drain?
A new model from Lawrence Berkeley National Laboratory suggests that moving data center applications to the cloud saves energy, but is the cloud itself necessary?
February 7, 2014
In a six-year span, one major laboratory has been the principal source of two contradictory reports on the power consumption of data centers. The most recent report concludes that moving to the cloud saves energy, yet close inspection of the data reveals that virtualization -- not the cloud itself -- is behind the power reduction.
In 2007, the U.S. Environmental Protection Agency warned Congress that the power draw from data centers appeared to be doubling every five years. At that rate, data centers would be adding stress to the nation's power grid faster than population growth.
Then last June, a study from Lawrence Berkeley National Laboratory (LBNL) proclaimed that businesses moving their e-mail, productivity, and CRM applications to SaaS providers could reduce their power consumption by as much as 87%.
The Cloud Energy and Emissions Research (CLEER) Model, funded in part by Google and introduced by LBNL in its June report, used survey data submitted by U.S. businesses to estimate the power savings achievable through SaaS. Using the model, researchers extrapolated the following: If all U.S. businesses were to shift their critical business applications to cloud service providers, enough energy would be saved each year to power the entire city of Los Angeles.
Obviously, that's a somewhat different picture than the power precipice painted by the EPA, using data supplied in 2007 by the very same Berkeley Lab. Dale Sartor, who leads LBNL's Building Technologies Applications Team, told Network Computing that the data suggests the EPA's dire predictions did not pan out after all. Though LBNL's final word on the subject has yet to be published, annual data center electricity use for 2011 appears closer to the low end of the 2007 forecast, at about 82 billion kWh rather than 122 billion kWh.
The declining economy in the latter half of the prior decade had a significant impact, Sartor said. And although data center power usage continues to rise, he explained, the computational efficiency of processors is improving at an even greater rate.
The CLEER Model (with which Sartor's team was not involved) is open for public experimentation on the lab's website. IT professionals can key in their own data center specifications and energy use parameters and see reasonable estimates of the energy savings they can expect. LBNL principal investigator Dr. Lavanya Ramakrishnan told Network Computing that these estimates are based on survey data submitted by data center administrators, not on direct observations by LBNL researchers.
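To get a feel for the arithmetic behind estimates like these, consider the back-of-envelope sketch below, which compares annual energy use for an on-premises deployment against a cloud deployment using two common levers: server consolidation (fewer, busier machines) and power usage effectiveness (PUE). To be clear, this is not the CLEER model's actual methodology; every parameter and figure here is hypothetical, chosen only to show how savings approaching 90% can fall out of consolidation plus facility efficiency.

```python
# Back-of-envelope annual energy comparison: on-premises vs. cloud.
# Illustrative only -- these parameters are hypothetical and are not
# drawn from the CLEER model or its survey data.

HOURS_PER_YEAR = 8760

def annual_energy_kwh(servers: int, watts_per_server: float, pue: float) -> float:
    """Facility energy for one year: IT load (kW) times PUE times hours."""
    it_load_kw = servers * watts_per_server / 1000.0
    return it_load_kw * pue * HOURS_PER_YEAR

# Hypothetical on-premises fleet: many lightly utilized servers in a
# facility with a mediocre PUE of 2.0.
on_prem = annual_energy_kwh(servers=100, watts_per_server=300, pue=2.0)

# The same workload consolidated onto far fewer, busier servers in a
# cloud facility with a PUE of 1.2.
cloud = annual_energy_kwh(servers=12, watts_per_server=350, pue=1.2)

print(f"On-premises: {on_prem:,.0f} kWh/year")    # 525,600
print(f"Cloud:       {cloud:,.0f} kWh/year")      # ~44,150
print(f"Savings:     {1 - cloud / on_prem:.0%}")  # ~92%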
"Our results indicate substantial primary energy savings if U.S. businesses shift common software applications to the cloud," the CLEER case study report states. Yet where would these savings come from? The input data for the CLEER model comes solely from surveys, said Dr. Ramakrishnan, and it's impossible to drill down into specifics.
It is virtualization, not the cloud model itself, that provides the savings, Sartor postulated.
"In a typical cloud data center, you have virtualized servers," he said. "There's no reason why an enterprise data center can't adopt the same technologies that the cloud providers have. You basically virtualize your servers, and offer data center services rather than racks."
"That's the potential of [almost] 90 percent savings moving to the cloud," he added. "But you could have gotten exactly the same savings moving to a virtualized environment at your own data center."
Sartor is presently involved in the development of the Center of Expertise for Energy Efficiency in Data Centers, which will provide insight and expertise to both enterprises and governments in crafting data center energy policy.