As 2013 opens with new prospects for data center operations, we'll see fresh takes on some old themes, especially energy efficiency. Rising power costs and pressure from environmental groups will push data center designers toward new technologies that cut traditional energy needs. But that's not all; here are five important trends you can expect to gain strength in 2013.
1. Location Drives Energy Efficiency
There is one data center concern that overwhelms all others: the need for energy efficiency. At one time, energy costs were viewed as a given, minor next to the expense of hardware purchases and the labor of operations. But as hardware became more efficient and automated procedures more prevalent, energy's share has steadily risen to about 25% of total operating costs, and it now sits close to the top of the list of concerns.
In addition, a clash is building between environmentalists on one side and smartphone and tablet users and data center operators on the other. As the evidence for global warming mounts, the unbridled growth of computing in its many forms is coming under attack as a wasteful contributor to the problem. Indeed, such an attack was the theme of a landmark New York Times story published Sept. 22, "The Cloud Factories: Power, Pollution and the Internet."
[ Our analysis of the New York Times story was one of InformationWeek's top 12 stories of 2012. Catch up on the other 11 at Best Of InformationWeek 2012: 12 Must-Reads. ]
This clash will take place even though data center builders are showing a remarkable ability to reduce the amount of power consumed per unit of computing executed. The traditional enterprise data center uses just under twice as much electricity as it needs to do the actual computing. The extra amount goes to run cooling, lighting and systems that sustain the data center.
A measure of this ratio is PUE, or power usage effectiveness: total facility power divided by the power delivered to the IT equipment doing the computing. An ideal PUE would be 1.0, meaning all the power brought to the data center is used for computing -- probably not an achievable goal. The traditional figure hovers near 2.0, but Google showed it could build multiple data centers that operated at a PUE of 1.16 in 2010, reduced to 1.14 in 2011.
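To make the arithmetic concrete, here is a minimal sketch of the PUE calculation. The kilowatt figures are hypothetical, chosen only to reproduce the ratios cited above, and are not measurements from any particular facility.

```python
# A minimal sketch of the PUE arithmetic. The kilowatt figures below are
# illustrative only, not actual measurements from any specific data center.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A traditional enterprise data center: roughly as much overhead power
# (cooling, lighting, power distribution) as computing load.
print(pue(total_facility_kw=2_000, it_equipment_kw=1_000))  # 2.0

# A highly optimized facility: only a small fraction of extra power.
print(pue(total_facility_kw=1_140, it_equipment_kw=1_000))  # 1.14
```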
Each hundredth of a point cut out of the PUE represents a huge commitment of effort. As Jim Trout, CEO of Vantage Data Centers, a wholesale data center space builder in Santa Clara, Calif., explained, only difficult gains remain. "The low-hanging fruit has already been picked," he said in an interview.
Nevertheless, Facebook's construction of a new data center in Prineville, Ore., illustrated that the right location can drive energy consumption lower. The second-biggest energy hog, just below the electricity used in computing, is the power consumed for cooling. Facebook built the facility east of the Cascades, close to cheap hydropower, and by using a misting technique with ambient air it can cool the building without an air-conditioning system.
That approach drove the PUE at Prineville down to 1.09, but few enterprise data centers can locate in the high, dry plains of eastern Oregon, where summer nights are cool and winters cold. "These are ideal conditions for using evaporative cooling and humidification systems, instead of the mechanical chillers used in more-conventional data center designs," conceded Daniel Lee, a mechanical engineer at Facebook, in a Nov. 14 blog post.
Most enterprise data centers remain closer to expensive power and must operate year-round in less-than-ideal conditions. Facebook also built a new data center in Forest City, N.C. (60 miles west of Charlotte), where summers are warm and humid, and attempted to use the same ambient-air technique there. To Lee's surprise, during one of the three hottest summers on record, the misting method worked as well, although at higher temperatures and humidity. Instead of needing 65-degree intake air, the facility operates with air as warm as 85 degrees; and instead of a maximum of 65% relative humidity, it can function at 90%. That most likely required increasing the flow of fan-driven air. Nevertheless, a conventional air-conditioning system, with its power-hungry condensers, would have driven the Forest City PUE far above the Prineville level. A simplified sketch of how that widened operating envelope plays out appears below.
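The sketch below shows, under stated assumptions, how widening the intake-air envelope translates into fewer hours of mechanical cooling. The temperature and humidity thresholds come from the figures quoted above; the control logic itself is hypothetical and far simpler than what a real data center management system would use.

```python
# Hypothetical sketch: deciding whether ambient conditions fall inside the
# free-cooling (evaporative/misting) envelope, so mechanical chillers can
# stay off. Thresholds reflect the figures cited in the article; the logic
# is illustrative, not Facebook's actual control system.

def can_use_evaporative_cooling(ambient_temp_f: float,
                                relative_humidity: float,
                                max_temp_f: float = 85.0,
                                max_rh: float = 0.90) -> bool:
    """Return True if outside air can cool the facility without chillers."""
    return ambient_temp_f <= max_temp_f and relative_humidity <= max_rh

# A humid 82-degree Carolina afternoon:
# under the older, tighter envelope (65 degrees F / 65% RH), chillers kick in;
# under the widened envelope, free cooling still suffices.
print(can_use_evaporative_cooling(82.0, 0.80, max_temp_f=65.0, max_rh=0.65))  # False
print(can_use_evaporative_cooling(82.0, 0.80))                                # True
```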
The equipment and design used to achieve that PUE are available for all to see. In 2011, Facebook initiated the Open Compute Project, with the designs and equipment specifications of its data centers made public. Both Prineville and Forest City follow the OCP specs.
Thus, Facebook has set a standard likely to be emulated by more and more data center builders. In short, 2013 will likely be the year the Open Compute Project's original goal is put into practice: "What if we could mobilize a community of passionate people dedicated to making data centers and hardware more efficient, shrinking their environmental footprint?" wrote Frank Frankovsky, Facebook's director of hardware design and supply chain, in an April 9 blog post.
Google is another practitioner of efficient data center operation, using its own design. For a broad mix of older and new data centers, it achieved an overall PUE of 1.14 in 2011, with a typical modern facility coming in at 1.12, according to Joe Kava, VP of Google data centers, in a March 26 blog. In 2010, the overall figure was 1.16.