Step Into The Future

New technology, security, and reliability requirements are changing the data-center infrastructure, taking the heat off servers -- and putting it on IT administrators.

April 18, 2005

Internet search-engine company Ask Jeeves Inc. is looking at a major data-center redesign in the next few years. The company, which IAC/InterActiveCorp recently revealed plans to acquire, operates almost a dozen data centers ranging from 25,000 to 100,000 square feet, altogether housing about 10,000 servers. Based on current and projected growth rates, the company's data-center footprint over the next five to seven years could expand by a factor of 10 and require an additional 50,000 to 75,000 servers. "Trying to leverage tens of thousands of servers in a fabric is only going to increase heat concerns," says Dayne Sampson, VP of information technology. "The ramifications of having to cool this sort of density have a number of implications, particularly cost."
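
To put that projection in perspective, here is a rough back-of-envelope sketch in Python using only the figures cited above (a tenfold footprint in five to seven years, roughly 10,000 servers today, 50,000 to 75,000 more to come); the per-year growth rates are derived for illustration, not numbers from Ask Jeeves.

```python
# Back-of-envelope growth math for the Ask Jeeves projection cited above.
# Inputs come from the article (10x footprint growth over 5-7 years,
# roughly 10,000 servers today, 50,000-75,000 more expected); the implied
# annual growth rates are simply derived from those figures.

current_servers = 10_000
added_servers_low, added_servers_high = 50_000, 75_000
footprint_multiplier = 10

for years in (5, 7):
    # Compound annual growth rate needed to reach a 10x footprint.
    cagr = footprint_multiplier ** (1 / years) - 1
    print(f"10x footprint in {years} years implies ~{cagr:.0%} growth per year")

for added in (added_servers_low, added_servers_high):
    total = current_servers + added
    print(f"{added:,} additional servers -> {total:,} total, "
          f"{total / current_servers:.1f}x today's server count")
```

Even on the slower seven-year path, that works out to roughly 40% growth in floor space every year.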

Each year, $20.6 billion is spent on the electrical and mechanical infrastructure that supports IT in the United States, according to a new report, "The Data Center Of The Future: What Is Shaping It?" from InterUnity Group. Its survey of 161 data-center professionals, conducted in conjunction with AFCOM, a leading association for the industry, shows that Ask Jeeves isn't the only company facing potentially expensive change in the face of new computing technologies and that heat is just one area of growing concern. Nearly 60% of those surveyed believe new equipment is being acquired without adequate concern for power or cooling requirements. But businesses also are looking to boost reliability and security as they prepare for major upgrades to their data centers, which about half the companies in the survey believe will take place within the next three years.

The changes afoot aren't for the faint of heart. "It's actually a very scary proposition to think about," says Sampson, who serves as both IT director and data-center manager. Improving operational efficiencies, he says, is going to require a "paradigm shift in the way we cool equipment."

In addition to the technology concerns, some companies also will require a shift in the way their data-center managers, who traditionally have operated under the auspices of the facilities department, interact with CIOs and other high-level IT executives over evolving infrastructure plans. At a growing number of companies, the data-center manager is part of the IT department, reporting to the director of IT or the CIO. Still, three-quarters of respondents to InterUnity's survey say they're concerned about a lack of involvement in the planning and procurement of new equipment. While most respondents believe their data centers are more reliable and better protected than three years ago, half still have big concerns about reliability, in large part because of the increased need for power and cooling in a world of power-dense servers and switches, and tightly packed cluster and grid environments.

Many data centers now in operation were built quite recently, during the late 1990s. In undertaking the survey, "we expected to find a fairly static five- or 10-year horizon," says Richard Sneider, an InterUnity director. "What we found was that there are changes happening over the next three years that will dramatically alter how data centers need to be set up."

Cooling supercharged servers is probably the most dramatic of those changes. Liquid cooling, or some improved form of it, is inevitable, says Steve Madara, VP and general manager for the cooling business at Liebert Corp., a provider of environmental control systems, even though most companies eliminated such systems long ago because of the space-eating size of the devices and the potential for leaks. Many companies hope to forestall moving to liquid cooling by making better use of air-cooling designs, specifically by getting the cooling as close to the heat sources as possible. Alternating hot and cool aisles of servers, raising floor levels beyond the standard 18 to 24 inches to improve cool-air flow, and using overhead cooling to supplement raised-floor cooling are all viable strategies. Data centers under construction are better able to accommodate 3-foot raised floors, or even to dedicate entire floors below the hardware to cooling, than existing data centers trying to retrofit those designs, Madara points out.

Virginia Polytechnic Institute and State University worked with Liebert to get the cooling right for its computing cluster, one of the 10 fastest supercomputing grids in the world, built from 1,100 Apple Xserve servers with dual 2.3-GHz processors. The vendor designed a cooling system that sits on top of the server racks, which are arranged in alternating hot and cool aisles and dissipate a heat load of about 350 watts per square foot in the data center, says Kevin Shinpaugh, director of research for cluster computing at Virginia Tech. "If we tried to do it with normal floor AC units, we wouldn't have been able to complete the project," Shinpaugh says.
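
To give a feel for what a 350-watt-per-square-foot load means, here is a minimal sketch; the 2,000-square-foot room size and the 75-watt-per-square-foot figure for conventional raised-floor cooling are illustrative assumptions, not numbers reported by Virginia Tech or Liebert.

```python
# Rough heat-load arithmetic around the 350 W/sq ft figure cited above.
# The 2,000 sq ft cluster-room area and the 75 W/sq ft "conventional
# raised-floor" reference point are illustrative assumptions only.

heat_density_w_per_sqft = 350          # from the article
assumed_room_sqft = 2_000              # assumption for illustration
conventional_w_per_sqft = 75           # assumed typical raised-floor capacity

total_heat_kw = heat_density_w_per_sqft * assumed_room_sqft / 1_000
conventional_kw = conventional_w_per_sqft * assumed_room_sqft / 1_000

print(f"Cluster room heat load: {total_heat_kw:.0f} kW")
print(f"Conventional floor cooling at {conventional_w_per_sqft} W/sq ft: "
      f"{conventional_kw:.0f} kW")
print(f"Shortfall without supplemental (rack-top) cooling: "
      f"{total_heat_kw - conventional_kw:.0f} kW")
```

Under those assumptions, supplemental rack-top cooling has to absorb most of the load, which is consistent with Shinpaugh's point that floor AC units alone would not have sufficed.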

Ask Jeeves' Sampson isn't convinced such configurations are commercially feasible. "That level of cooling has never actually been proven out in any kind of real-world installation," he says. "You might be able to do that on a small footprint, but trying to do that in a 100,000-square-foot data center is a different issue. Secondly, the cost of that infrastructure would be enormous."

To help combat potential problems, leading blade-server vendors Dell, Hewlett-Packard, and IBM have begun offering assessment services that assist customers in designing their blade environments to achieve the best thermal characteristics.

At consumer-goods manufacturer Newell Rubbermaid Inc., the move toward the data center of the future already has begun. Blade servers are in place and will continue to replace existing servers. "We'll end up with small mainframes because the new servers will be so dense and so hot," says Paul Watkins, data-center and network analyst at Rubbermaid. The blade architecture packs more servers into a single rack than ever before, and those racks demand more airflow than ever before. "We're fitting two servers where there was one so we can continue to grow," he says.

Yet that also means there's less space for air to move through, which will become a more pressing problem as more blade servers are brought in. Watkins looks forward to breakthroughs in liquid cooling that might alleviate the problem, such as integrating these units right into the server, where they won't consume precious data-center space. "And then it will come down to the density of the server meeting a price point," he says.
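
A short sketch of why doubling the server count per rack squeezes airflow: the 300-watt-per-server draw and the 20-degree Fahrenheit supply-to-return temperature rise below are assumed values, not figures from Rubbermaid, and the airflow estimate uses the standard sensible-heat approximation (CFM ≈ 3.16 × watts ÷ ΔT in °F).

```python
# Why packing two servers where there was one roughly doubles the airflow
# a rack needs. Per-server wattage and the 20 degree F temperature rise are
# assumptions; the airflow estimate uses the standard sensible-heat
# approximation CFM ~= 3.16 * watts / delta_T_F.

watts_per_server = 300     # assumed draw per 1U/blade server
delta_t_f = 20             # assumed supply-to-return air temperature rise

def rack_airflow_cfm(servers_per_rack: int) -> float:
    """Approximate cooling airflow a rack needs, in cubic feet per minute."""
    rack_watts = servers_per_rack * watts_per_server
    return 3.16 * rack_watts / delta_t_f

for servers in (20, 40):   # "two servers where there was one"
    print(f"{servers} servers/rack -> ~{servers * watts_per_server / 1000:.1f} kW, "
          f"~{rack_airflow_cfm(servers):,.0f} CFM of cool air")
```

Doubling the rack's server count doubles both the heat and the required airflow, which is exactly the squeeze Watkins describes.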

Watkins expects to be able to have input into how his company might take advantage of those future breakthroughs, since he has never felt left out in the cold when it comes to decisions about the IT architecture that will affect data-center operations. While 27% of InterUnity's survey respondents were concerned about the lack of communication between IT and facilities departments, and 26% cited poor communications with senior management as an issue, Rubbermaid's operations are structured in a way that fosters dialogue. Senior-level executives say what they want to accomplish, and Watkins and his fellow IT staffers aim to make it work. Rubbermaid has a flat organization, so anyone can raise a concern or an improvement with Watkins' boss, the IT manager, who "can get straight up to the CIO," Watkins says.

Similarly, at investment company the Vanguard Group, communication is key to optimized data-center operations. "We have full-day sessions within IT" about data-center issues, says Bob Yale, principal of technology operations at Vanguard. Spending and technology issues go all the way up to chairman and CEO John Brennan, who has a well-deserved reputation for being hands-on with technology and the costs of running it. It's not uncommon for the capital costs associated with boosting reliability, availability, and security requirements to start at $100 million.

At Vanguard, security is a priority, requiring the company to automate checking for vulnerabilities and looking for unapproved changes to baseline configurations, says John Samanns, principal of technology operations, architecture, and planning. The company also uses proactive monitoring tools and pays third-party service providers to test the Vanguard infrastructure for vulnerabilities, Samanns adds. "We keep a dashboard and have a line of sight to the chairman about security," he says.

Recent legislation such as the Sarbanes-Oxley Act makes it imperative that companies not risk losing data, or even risk downtime that could jeopardize access to information in a timely fashion, says Michael Fluegeman, VP at Syska Hennessy Group, a data-center consulting company. "You can't operate without a rock-solid infrastructure," he says. So companies "are digging deeper and deeper and spending more and more money to make these facilities more robust and safe from every possible threat of downtime or loss of critical information."

To help with reliability, Randy MacCleary, a VP and general manager of Liebert's uninterruptible-power-supply business, expects to see as many as three-quarters of data centers introducing dual-bus UPS systems within the next five years. Only about a third of data centers now include these systems, which permit one UPS bus and its associated distribution system to be shut down for maintenance while the load continues to be supplied by the second UPS bus. Another step to improve reliability is to create partitioned data-center environments, he says. For example, in a 10,000-square-foot data center, a company could create four 2,500-square-foot sections, with each section addressed by a separate dual-bus UPS system. If for some reason one section were to fail, the three remaining sections would be able to carry the data center forward with minimal disruption.
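
A minimal sketch of the partitioned dual-bus layout MacCleary describes, assuming a 10,000-square-foot floor split into four 2,500-square-foot sections, each fed by its own A and B UPS buses; the failure scenarios below are hypothetical.

```python
# Sketch of the partitioned dual-bus UPS layout MacCleary describes:
# a 10,000 sq ft floor split into four 2,500 sq ft sections, each fed by
# its own A and B UPS buses. The failure scenarios below are illustrative.

from dataclasses import dataclass

@dataclass
class Section:
    name: str
    sqft: int
    bus_a_up: bool = True
    bus_b_up: bool = True

    @property
    def powered(self) -> bool:
        # Dual-bus design: the section stays up as long as either bus is up.
        return self.bus_a_up or self.bus_b_up

sections = [Section(f"S{i + 1}", 2_500) for i in range(4)]

# Scenario 1: take one UPS bus in section S1 down for maintenance.
sections[0].bus_a_up = False
# Scenario 2: section S2 loses both buses (a whole-partition failure).
sections[1].bus_a_up = sections[1].bus_b_up = False

total = sum(s.sqft for s in sections)
live = sum(s.sqft for s in sections if s.powered)
print(f"Floor still powered: {live:,} of {total:,} sq ft "
      f"({live / total:.0%}); the rest of the data center keeps running")
```

Taking one bus down for maintenance leaves its section fully powered, and even a whole-section failure strands only a quarter of the floor.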

Despite such challenges, nearly half the companies in the survey say they're not looking to utility computing to offload them. Vanguard, which recently surpassed $800 billion in assets, says utility computing is a possibility in the future, but right now it's doing just fine running two data centers, with more than 100,000 square feet of raised floor and more than 1,000 Unix and Windows systems plus mainframes, for what it cost to run one data center in 2001. Vanguard doubled its capacity without raising costs chiefly through automation and server consolidation. "We've been able to grow our multifunction business without increasing costs for our data center," Samanns says. "We have redundancy of the data center at no cost to shareholders."

Not every company runs its data center as efficiently as the servers in those centers run their most important applications. But now's the time for that to change so these data centers are ready for the future, whatever it may bring.

-- with Martin J. Garvey

Illustration by Steve Lyons

Continue to the sidebars:
CPU Cool: Getting Faster But Not More Power-Hungry
and Utility Interest: The Model's Catching On
