Microsoft, Facebook Make Green Data Center Push

Technology giants announce plans to use wind power for their data centers as the movement towards renewable energy and data center efficiency gains steam.

This story was updated on Nov. 21, 2013

Data center wars are nothing new. But where previous battles have focused on the size, speed and sophistication of each new data center, the latest efforts to re-arm have instead centered on tackling environmental concerns.

Whether it’s turning to renewable energy sources or coming up with innovative approaches to power creation and data center efficiency, one company after another keeps raising the bar on reducing the carbon footprint of these IT nerve centers.

The most recent salvos came this month, when Facebook and Microsoft announced significant commitments to powering data centers with the wind.

First, Microsoft said it would buy all of the power produced over the next 20 years by RES Americas’ 110-megawatt Keechi wind project, 70 miles northwest of Fort Worth, Texas. The project’s 55 turbines will reportedly feed into the same grid that powers Microsoft’s data center in San Antonio, though their output will not be enough to meet all of the facility’s power needs.

Meanwhile, Facebook took things a step further with its announcement in a blog post that its planned data center in Altoona, Iowa, would be 100 percent wind-powered.

That achievement was made possible when MidAmerican Energy, which announced in May that it was investing $1.9 billion to expand its wind generation facilities, agreed to take over construction and operation of the planned Altoona wind farm from RPM Access, the wind farm developer and operator that had been working with Facebook on the project. At about the same time, MidAmerican shelved plans for a nuclear power plant, a decision a company spokesperson described as coincidental in timing and unrelated to the wind project.

MidAmerican’s wind project will result in 138 megawatts of wind power, or “more than what our data center is likely to require for the foreseeable future,” as Vincent Van Son, Facebook’s data center energy manager, wrote in the post.

Facebook’s and Microsoft’s approaches -- buying rights to renewable power generated near their data centers -- could become a popular model for data centers and utilities alike, said Andy Lawrence, an analyst with 451 Research.

“They are helping to provide a return on investment for renewables, and this will encourage further investment,” Lawrence said via email. “Their contribution to the overall cause is quite significant, especially in providing leadership.”

[Read about Seattle city officials' plan to recycle waste heat from nearby data centers to provide sustainable heat and hot water to buildings in "Seattle's Plan To Warm City With Data Center Waste Heat."]

Microsoft is also looking to lead beyond its own data centers by working to make its customers’ data centers more energy efficient. Specifically, the company revealed in a recently published white paper that it is experimenting with incorporating fuel cells directly into server racks to make them self-powering. By effectively putting a tiny power plant on each rack, the design minimizes the amount of power that is wasted as electricity moves through the grid; Microsoft estimates the approach could double the power efficiency of data centers.

Lawrence said fuel cells are gaining traction as a way to power data centers, but that Microsoft’s approach represents a departure from previous efforts, such as Google’s introduction of batteries at the rack level a few years ago.

“We think fuel cells will play an increasing role in the data center, especially where grid connectivity is unstable or where power is expensive,” Lawrence said. “It is probably too early to say if rack level fuel cells will be adopted, but it is certainly innovative.”

Elsewhere, IBM received a patent for a technique that would enable cloud computing data center operators to dynamically redistribute workloads onto systems that are better utilized or that simply use less power. IBM claims the invention will enable operators to simplify cloud deployments and reduce their cost.
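
The patent itself goes well beyond this article, but the general idea behind power-aware workload placement can be sketched in a few lines of Python. The example below is purely illustrative and is not IBM’s patented method: it assumes each host’s power draw scales roughly linearly between an idle and a peak wattage, and it places each incoming workload on the host where the estimated increase in power draw is smallest. The host names, wattages, and the linear power model are hypothetical.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    idle_watts: float        # power drawn when the host is idle
    peak_watts: float        # power drawn at full utilization
    capacity: float = 1.0    # normalized CPU capacity
    used: float = 0.0        # capacity already allocated

    def power_at(self, used: float) -> float:
        # Hypothetical linear power model between idle and peak draw.
        return self.idle_watts + (self.peak_watts - self.idle_watts) * (used / self.capacity)

def place(workload_cpu: float, hosts: list[Host]) -> Host:
    """Place a workload on the host where it adds the least power draw."""
    candidates = [h for h in hosts if h.used + workload_cpu <= h.capacity]
    if not candidates:
        raise RuntimeError("no host has spare capacity for this workload")
    best = min(candidates,
               key=lambda h: h.power_at(h.used + workload_cpu) - h.power_at(h.used))
    best.used += workload_cpu
    return best

if __name__ == "__main__":
    fleet = [Host("efficient-1", idle_watts=60, peak_watts=200),
             Host("legacy-1", idle_watts=150, peak_watts=400)]
    for i, cpu in enumerate([0.3, 0.5, 0.4]):
        chosen = place(cpu, fleet)
        print(f"workload {i} ({cpu:.0%} CPU) placed on {chosen.name}")

A greedy heuristic like this ignores migration costs and performance constraints, which is where the caveats below come in.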

Lawrence said, however, that before companies agree to start moving workloads around to maximize energy efficiency, they would want assurances that service levels won’t suffer. And that, he said, would require integration of the technology with underlying systems, as well as implementation of policy and control systems.

“This is a good development, but it is only part of the picture,” Lawrence said.

Then again, sometimes a part of the picture can make a big difference. Case in point: Portugal Telecom’s new data center in Covilha, Portugal. The facility, which opened in September, includes a number of energy-conservation features, such as using external air as a coolant, that are common in new data centers.

But its rainwater collection system, which forms a moat around the main building, represents an innovative approach to water conservation, “which few have paid enough attention to,” Lawrence said.


