Power Surge

The heat is rising--and costs, too--as tightly packed servers consume gobs of electricity. And the problem could get worse before efforts to contain it catch up, as server deployments rise

February 27, 2006

Nine million servers hum in computer rooms across the United States, driving our information-obsessed, transaction-fueled economy every second of every day. It's an astonishing display of computer-processing power--and an insatiable electricity hog that's become a huge expense for many companies.

If racks and racks of Unix, Windows, and Linux servers deliver megaflops of computational speed, megawatts of power consumption are the price businesses pay. Data center electricity costs are soaring as companies deploy growing numbers of servers, consuming ever more power, and, in the process, throwing off heat that needs to be cooled using still more juice.

The problem could get worse before efforts to contain it catch up. Data center electricity costs are already in the range of $3.3 billion annually, and the number of servers in the United States will jump 50% over the next four years, IDC predicts. The data center utility bill exceeds the cost of acquiring new computers for some companies. And it can cost more to cool a data center than it does to lease the floor space to house it. Edward Koplin, a principal at engineering firm Jack Dale Associates, estimates the average annual utility cost for a 100,000-square-foot data center has reached $5.9 million.
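Koplin's number is easier to digest on a per-square-foot basis. The back-of-envelope sketch below assumes a commercial electricity rate of about 8 cents per kilowatt-hour (our assumption, not a figure from Koplin) to show what a $5.9 million annual bill implies about round-the-clock power draw:

```python
# Back-of-envelope check of Koplin's figure (a sketch; the $0.08/kWh
# electricity rate is an assumed value, not from the article).
annual_bill = 5_900_000      # dollars per year (Koplin's estimate)
floor_space = 100_000        # square feet
rate = 0.08                  # assumed dollars per kWh

cost_per_sqft = annual_bill / floor_space          # ~$59 per sq ft per year
kwh_per_sqft = cost_per_sqft / rate                # ~738 kWh per sq ft per year
avg_watts_per_sqft = kwh_per_sqft * 1000 / 8760    # ~84 W per sq ft, around the clock

print(f"${cost_per_sqft:.0f}/sq ft/yr -> ~{avg_watts_per_sqft:.0f} W/sq ft continuous draw")
```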

Ironically, state-of-the-art computers are part of the problem. Blades, the fastest-growing segment of the server market, can be packed into a smaller space than rack-mounted servers. As density increases, however, the amount of heat produced by blades and their processor cores rises, and you have computing's double whammy--pay once to power servers and a second time to cool them.

The more servers, the bigger the problem, and that's got Web powerhouses such as Google and Yahoo working furiously to find solutions. In a paper published three years ago, Google engineers foresaw the challenge, calculating that an 80-unit rack of midrange servers, each with two 1.4-GHz Pentium III processors, required about 400 watts of electricity per square foot. The dilemma: Most data centers couldn't handle more than 150 watts per square foot.
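The arithmetic behind that warning is simple. In the sketch below, the per-server wattage and rack footprint are illustrative assumptions chosen to be consistent with the roughly 400 watts per square foot the Google engineers reported, not values taken from their paper:

```python
# Rack power density vs. what a typical data center floor can support.
# Per-server draw and rack footprint are illustrative assumptions.
servers_per_rack = 80        # from the Google paper cited above
watts_per_server = 125       # assumed draw of one dual Pentium III server, watts
rack_footprint = 25          # assumed floor space per rack, square feet

rack_power = servers_per_rack * watts_per_server   # 10,000 W
density = rack_power / rack_footprint              # 400 W per square foot
facility_limit = 150                               # W/sq ft most centers could handle

print(f"Rack draws {rack_power/1000:.0f} kW -> {density:.0f} W/sq ft "
      f"vs. a {facility_limit} W/sq ft facility limit")
```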

That was three years ago, yet Google principal engineer Luiz Andre Barroso, one of the authors of the report, is still worried. "Although we're generally unhappy with the power efficiency of today's systems, our concerns are more with trends than with today's snapshot," says Barroso via E-mail. Power efficiency is now a design goal for many tech vendors, and Google wants the industry to "execute aggressively," he says. New computer models, however, must be both energy- and cost-efficient, he warns. "A solution that has superior power efficiency but worse overall cost efficiency is unlikely to be competitive," Barroso says.

Meanwhile, technology vendors are reinventing themselves as air-cooling specialists to bring data centers-turned-saunas under control. Hewlett-Packard last month introduced its first environmental-control system, which uses water cooling to lower temperatures. "Data centers, compared to Moore's law, have been fairly slow-moving animals and haven't changed much in the last 20 years," says Paul Perez, VP of storage, networking, and infrastructure for industry standard services at HP. "Moore's law is running smack into the wall of physics."

Feel The Heat

At Pomona Valley Medical Center, the problem reached a melting point when the data center temperature spiked to 102 degrees, causing hard drives to go on the blink. The medical center had been centralizing servers scattered across the 426-bed facility, and as the heat rose in its data center, two 5-ton air conditioners couldn't keep up. "We had box fans hanging from the ceiling. It was the most ridiculous thing you've ever seen just to try to move air around that room," CIO Kent Hoyos says.

Temps typically hovered at a balmy 92 degrees in Pomona Valley's data center, but when one air conditioner failed, the room jumped 10 degrees. The medical center lost several computer drives and had to rebuild a lab system, at a cost of about $40,000.

Pomona Valley has invested in new cooling equipment to avoid further meltdowns. The Liebert XDV system--designed for data centers without much floor space--mounts on top of server racks and uses a liquid coolant that vaporizes into a gas within the cooling module. Using 20 of these "above-the-rack" coolers, Pomona Valley was able to bring in the equivalent of 44 tons of air conditioning--the standard measure of cooling capacity--with room for more as the IT department adds servers. The data center has stabilized at 64 degrees.
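A ton of cooling is defined as 12,000 BTU per hour, or roughly 3.5 kilowatts of heat removal, so the new capacity converts into more familiar units as a quick sketch (the even per-cooler split is our simplification):

```python
# Convert Pomona Valley's 44 tons of cooling into kilowatts of heat removal.
# 1 ton of refrigeration = 12,000 BTU/hr; 1 kW is about 3,412 BTU/hr.
tons = 44
btu_per_hr = tons * 12_000                 # 528,000 BTU/hr
kw_of_cooling = btu_per_hr / 3_412         # about 155 kW of heat removal
per_cooler = kw_of_cooling / 20            # about 7.7 kW per above-the-rack unit

print(f"{tons} tons -> {kw_of_cooling:.0f} kW total, ~{per_cooler:.1f} kW per cooler")
```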

Cooling server installations directly is a growing trend, served by new products such as CoolFrame from Egenera and Liebert, a cooling system that attaches to a blade server rack, and American Power Conversion's InfraStruXure system, which provides uninterruptible power, power distribution, and cooling for rack-mounted servers (see related story).

Spiraling Costs

Supersize electric bills are a related problem. Businesses paid about 20% more for electricity last year than they did in 2004, according to IDC, with rates jumping more than 40% in some parts of the country. Rackspace, which provides hosted computing services from five data centers, saw its utility bill increase 65% in 2005. "We went up by about $1 million, and this year we will likely continue to climb in that manner," VP of engineering Paul Froutan says. Long term, he adds, "we don't expect utility rates to drop, and power consumption per server will continue to increase."

Hot under the collar: Pomona Valley CIO Hoyos needed to cool down after data center temps hit 102 degrees.

Photo by Dave Strick

The cost of data center floor space is inconsequential for Rackspace compared with the cost of operating and cooling its data centers. It locates them in areas such as San Antonio, Dallas, and suburban Virginia, where real estate is relatively cheap. "When we look to build a new facility, our primary concern is what we will be paying per watt, not per foot," Froutan says.

Yahoo houses up to 200,000 servers in 27 data centers. "It used to be that you wanted the fastest processor and to process as much as possible in a small footprint," CIO Lars Rabbe says. "But everyone has realized that power pricing, and having low-power CPUs and low-power systems in general, are becoming more important. That changes the whole dynamic of how you manage your data centers."

Based on engineering consultant Koplin's estimate of data center electricity costs, Yahoo could be paying $6 million annually to run its 160,000-square-foot Santa Clara, Calif., data center alone. Rabbe and VP of operations Kevin Timmons suggest Yahoo needs around 200 watts per square foot to power its data centers. By our estimate, that puts Yahoo's total annual utility bill for its data centers in the range of $20 million to $50 million; company officials declined to get into those details.
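The range in our estimate can be reproduced with back-of-envelope arithmetic on the one number Yahoo does acknowledge, its server count. In the sketch below, the average per-server draw (including cooling overhead) and the electricity rate are our assumptions, not Yahoo figures:

```python
# Rough bounds on Yahoo's annual data center utility bill (a sketch; the
# per-server draw including cooling overhead and the rate are assumptions).
servers = 200_000                 # "up to 200,000 servers" across 27 data centers
watts_low, watts_high = 150, 350  # assumed average draw per server incl. cooling, W
rate = 0.08                       # assumed dollars per kWh
hours = 8_760                     # hours per year

for watts in (watts_low, watts_high):
    kwh = servers * watts / 1000 * hours
    print(f"{watts} W/server -> ${kwh * rate / 1e6:.0f}M per year")
```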

Yet million-dollar utility bills are real, and they can dwarf property costs. Space that's leased for $12 to $20 per square foot can cost $60 per square foot to cool, Koplin says. Modern computing hardware requires about 3 square feet of cooling infrastructure for every square foot of floor space devoted to computers, he says. That's six times the ratio of 10 years ago.

That trend explains why Yahoo keeps its data centers spacious, with a relatively low density of machines on the floor. "We max out on the power long before we max out on the square footage," Rabbe says.

Relief?

Power-efficient computers promise some relief. NewEnergy Associates, which develops software and provides consulting to electric and gas companies, has begun replacing its power-hungry systems with newer servers from Sun Microsystems equipped with Advanced Micro Devices processors optimized for performance per watt. The company has found it can replace seven older servers with one dual-processor Sun Fire X4200 server and use virtual machines to retire even more servers. The result is an 84% reduction in heat generation.
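An 84% cut is roughly what a simple consolidation model predicts. The sketch below is illustrative only; the per-server wattages are assumptions, not NewEnergy measurements:

```python
# Illustrative consolidation math (wattages are assumptions, not NewEnergy's
# measurements): replace seven older servers with one dual-processor box.
old_servers = 7
old_watts = 400          # assumed draw of each aging server, watts
new_watts = 450          # assumed draw of one dual-processor Sun Fire X4200, watts

before = old_servers * old_watts     # 2,800 W of power, and roughly as much heat
after = new_watts                    # 450 W
reduction = 1 - after / before       # about 84%

print(f"{before} W -> {after} W, a {reduction:.0%} reduction in power and heat")
```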

More companies need to take similar steps to replace the thousands of inexpensive but inefficient servers that populate their data centers. "With the advent of cheap computing, people started saying, 'Hey I can get all these CPUs for only $10,000, so I'll try things I never considered before,'" says Neal Tisdale, VP of software development at NewEnergy. Heating problems escalated when companies installed large numbers of servers during what Tisdale calls the "gigahertz arms race." Server manufacturers engaged in "real silliness by building and shipping machines optimized to win benchmarks but not for operational efficiency," he says.

Yahoo's power-hungry data center

Multicore processors and virtualization offer the greatest hope for electricity-sucking, heat-generating data centers. Dual-core processors that AMD and Intel have introduced over the past two years decrease the total power required by the processing cores while increasing computational output by placing two cores in the same physical footprint (see sidebar, "Chip Speed Vs. Power Demand").

AMD's Opteron processors come in three heat grades, letting customers choose between raw performance and the best performance per watt. Intel's high-volume Xeon is rated at 110 watts, while a low-voltage, single-core version is available at 55 watts. Intel plans to introduce a mass-market 95-watt Xeon this quarter, as well as a 33-watt device based on 32-bit technology developed for its mobile PC platform. On Intel's road map for the second half of the year is the Woodcrest processor, which the company says will provide a nearly 300% performance-per-watt improvement over the current Xeon.
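Performance per watt is just throughput divided by power draw, which makes such claims easy to sanity-check. In the sketch below, the throughput and wattage figures are hypothetical placeholders chosen to show how a near-300% gain would be computed, not Intel benchmark results:

```python
# How a performance-per-watt claim is computed (all figures here are
# hypothetical placeholders, not vendor benchmark results).
def perf_per_watt(throughput, watts):
    """Generic efficiency metric: work done per watt of power drawn."""
    return throughput / watts

baseline = perf_per_watt(throughput=100, watts=110)   # current chip, arbitrary units
successor = perf_per_watt(throughput=290, watts=80)   # hypothetical next-generation chip

improvement = successor / baseline - 1                # fractional gain over baseline
print(f"Performance per watt improves by {improvement:.0%}")  # ~299% in this example
```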

Chips In Action

The Tokyo Institute of Technology used Sun Galaxy servers with more than 5,000 of AMD's dual-core Opteron processors to create, at 100 teraflops, the largest supercomputer in Japan. If the college had been forced to use single-core processors, the data center would have needed to be twice the size, and it would have generated almost twice the heat, says Satoshi Matsuoka, professor in charge of research infrastructure at the institute's global scientific information and computing center.

When Dr. Jeffrey Skolnick joined the Georgia Institute of Technology last year as director of the Center for the Study of Systems Biology, he understood that a planned $8.5 million supercomputer, to be used for genome-mapping calculations, would have to be built under tight constraints on space and cooling. Skolnick was initially told he would need a data center with a 3-foot raised floor that would require hundreds of tons of air conditioning. "If that was the case, this was going to be a nonstarter," he says.

The university was able to lease a 1,300-square-foot facility with a 13-inch raised floor, but space was tight. The solution, provided by IBM, came in the form of dual-core Opteron servers and a chilled-water cooling system called Cool Blue. Inside the door of the unit, which is attached to the rear of a server rack, are sealed tubes filled with circulating chilled water that can remove up to 55% of the heat generated by a fully populated rack. "Without those technologies, you start doubling everything from square footage to utility costs and getting a lot less computing power back in exchange," Skolnick says.

The next advance will be quad-core processors, which Intel has said it will introduce in early 2007 and AMD plans to introduce by the end of next year.

Other vendors, like Azul Systems, offer systems based on processors with low clock speeds but multiple processing engines. Azul's Vega chip has 24 processing cores.

But "you can't get away from the laws of physics," admits Randy Allen, corporate VP of AMD's server and workstation division. "You give us more power and we can deliver more performance." AMD's low-power Opterons are for customers willing to trade unbridled performance for improved performance per watt, he says.

Like Gas

Cooling and power are the No. 1 facilities issue among members of AFCOM, an association of data center professionals. According to a recent survey of 200 AFCOM members, data centers are averaging more than one serious outage per year, with more than half caused by not having enough in-house power to run the centers. One in five data centers runs at 80% or more of its power capacity.

"The cost of power is going up dramatically, and they're just eating it and accepting it," AFCOM founder Leonard Eckhaus says. "It's like buying gas for your car. You have no choice. You have to pay."

A January conference in Santa Clara, Calif., sponsored by AMD, the Environmental Protection Agency, and Sun, attempted to raise industry awareness of cooling and energy issues. Andrew Fenary, manager of the EPA's Energy Star program, says the agency can help coordinate meetings among server makers, cooling-equipment manufacturers, microprocessor vendors, and data center managers to work on strategies for identifying problems and creating solutions. Fenary expects meetings to be held during the year to focus on technical metrics and develop ways to quantify the efficiency and power consumption of servers. The plan is to create a metric, possibly under the EPA Energy Star banner, that will let buyers more readily identify the efficiency of computer products.

But such metrics won't eliminate the "perverse incentives" that persist in some parts of the industry, says Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory and a consulting professor at Stanford University. For example, some data center managers never even see the huge electricity bills their systems are generating. And, when steps are taken to reduce energy consumption, Koomey says, money saved "can just get sucked up into the corporate nexus and is then used for other general purposes."

Getting the situation under control is even more urgent because the industry is on the verge of an infrastructure build-out that has been on hold for several years, Koomey says. "How are we going to address the expansion coming in the next few years?" he asks. "If you don't pursue some type of energy-conscious strategy, you're betting that the price of energy is going to either stay the same or go down. I wouldn't want to bet my company's future on those odds."

Technologists are becoming acutely aware of the vicious cycle: high-performance servers, deployed in ever greater numbers and at ever higher densities, consume power and generate heat that must then be cooled with still more power. The beads of sweat on their foreheads? It's an industry running to keep up with its own power-hungry creation.

--With Thomas Claburn

Illustration by Nick Rotondo

Continue to the sidebars:
Chip Speed Vs. Power Demand,
and Uncool Data Centers Need To Chill Out
