
The Cold, Green Facts

Many organizations (and, until recently, vendors) dismiss the notion of managing the power consumption of storage systems as either impractical or inconsequential. Both notions are wrong. In data-intensive industries especially, the power, cooling, and floor space consumed by storage systems easily rival what servers use. Further, total storage capacity is projected to grow 50% annually for the foreseeable future, so savings on storage power and cooling are anything but inconsequential.
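To see why 50% annual growth makes storage power hard to ignore, here's a minimal back-of-the-envelope sketch in Python; the starting capacity and watts-per-terabyte figures are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope projection: 50% annual storage growth and the
# power draw that follows it. All starting values are assumptions.

GROWTH_RATE = 0.50        # 50% annual capacity growth (from the article)
START_CAPACITY_TB = 500   # assumed installed capacity today, in terabytes
WATTS_PER_TB = 10.0       # assumed draw per terabyte (illustrative only)

capacity_tb = START_CAPACITY_TB
for year in range(6):
    watts = capacity_tb * WATTS_PER_TB
    print(f"Year {year}: {capacity_tb:8.0f} TB, ~{watts / 1000:5.1f} kW")
    capacity_tb *= 1 + GROWTH_RATE

# At 50% a year, capacity (and the power behind it) roughly doubles
# every 21 months: 1.5 ** 1.71 is about 2.
```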

As with servers, storage efficiency comes through better management and consolidation. Unfortunately, storage management remains an oxymoron for most enterprises. Sun Microsystems estimates that only 30% of enterprise storage systems are used effectively--an alarming statistic, especially coming from a storage vendor. Implementing a storage resource management system--and actually using it--is about the only way to recover that dead 70% of storage, and the benefit is unquestionable (see Savings Through Storage Management).

CHANGING BEST PRACTICES

The good news is that for most organizations, the pressure to remodel or build new data centers can be alleviated through improved server and storage hygiene. But even as you get more out of existing data centers, new challenges threaten long-held best practices. As racks fill with 1U servers and blade systems, perforated floor tiles on a raised floor no longer supply enough cold air to the systems in the rack. In facilities built in the last decade, a typical raised-floor cooling system can remove about 7 kilowatts of heat per rack. Even today, most data centers won't draw that much power per rack, but some racks draw far more: a fully loaded rack of blade servers can pull 30 kilowatts or more, and only specialized, localized cooling can handle that sort of per-rack load.
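The per-rack arithmetic is easy to run yourself. The sketch below compares an assumed blade configuration against the roughly 7 kW a raised floor can handle; the blade count and per-blade wattage are hypothetical round numbers, not vendor specs.

```python
# Compare a dense blade rack's heat load against what raised-floor
# cooling can remove. Blade counts and wattages are assumptions.

RAISED_FLOOR_KW = 7.0      # typical raised-floor capacity per rack (article)

BLADES_PER_CHASSIS = 16    # assumed
CHASSIS_PER_RACK = 4       # assumed
WATTS_PER_BLADE = 470      # assumed per-blade draw under load

rack_kw = BLADES_PER_CHASSIS * CHASSIS_PER_RACK * WATTS_PER_BLADE / 1000
print(f"Fully loaded blade rack: {rack_kw:.1f} kW")   # ~30 kW
print(f"Raised-floor shortfall:  {rack_kw - RAISED_FLOOR_KW:.1f} kW")
```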

In the past, the advice was to spread out the load: put blade servers and other high-powered gear in with lower-consumption storage and networking systems, or simply leave racks partially empty. That's still good advice for those who can pull it off, but increasingly the geometry of the data center doesn't allow it--spreading the load can push the average power draw per rack beyond what most facilities can deliver. The answer then is to pull those high-demand systems back together and use rack-based or row-based cooling to augment the room-based air conditioning.
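A quick worked example shows how spreading the load can overrun the room average. The rack counts and per-rack wattages below are assumptions chosen to make the arithmetic visible, not measurements.

```python
# Why "spread it out" stops working: mixing blade gear into every rack
# can push the room-wide average past the cooling plant's per-rack
# budget. All counts and wattages here are illustrative assumptions.

ROOM_BUDGET_KW_PER_RACK = 5.0   # assumed room-level cooling budget
TOTAL_RACKS = 100               # assumed

ordinary_kw, ordinary_racks = 3.0, 90   # assumed mixed server/storage racks
blade_kw, blade_racks = 30.0, 10        # assumed dense blade racks

total_kw = ordinary_kw * ordinary_racks + blade_kw * blade_racks
average_kw = total_kw / TOTAL_RACKS
print(f"Average draw per rack: {average_kw:.1f} kW "
      f"(budget {ROOM_BUDGET_KW_PER_RACK:.1f} kW)")
# 5.7 kW average exceeds the 5.0 kW budget -- consolidating the blade
# racks and cooling them with rack- or row-based units keeps the other
# 90 racks within what the room can deliver.
```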

Rack-based cooling systems are available from a number of vendors. Two with very different approaches are IBM and HP. IBM's eServer Rear Door Heat eXchanger replaces the back door of a standard IBM rack. The door uses a building's chilled water supply to remove up to 55% of the heat generated by the racked systems.
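To put that 55% figure in context, here's a small sketch of the heat balance for a dense rack fitted with such a water-cooled door; the 30 kW rack load reuses the blade figure above, and everything other than the article's 55% is an assumption.

```python
# Heat balance for a rack with a water-cooled rear door that removes
# up to 55% of the rack's heat (the article's figure for IBM's
# eServer Rear Door Heat eXchanger). The rack load is an assumption.

RACK_LOAD_KW = 30.0            # assumed fully loaded blade rack
DOOR_REMOVAL_FRACTION = 0.55   # up to 55%, per the article

to_chilled_water = RACK_LOAD_KW * DOOR_REMOVAL_FRACTION
to_room_air = RACK_LOAD_KW - to_chilled_water
print(f"Removed by rear door:  {to_chilled_water:.1f} kW")  # 16.5 kW
print(f"Left for room cooling: {to_room_air:.1f} kW")       # 13.5 kW
# The remaining 13.5 kW is still roughly double the ~7 kW a raised
# floor handles, so a door alone may not close the gap for the
# densest racks.
```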