Jim O'Reilly
Commentary
The Evolving PUE of Containerized Datacenters

A number of breakthroughs in server and storage designs have made containerized datacenters highly energy efficient.

When I first started designing containerized datacenters, I looked at them as ways to tidy up many of the problems bedeviling the IT department, such as acceptance testing, installation disruption, cabling, and overall performance. As things evolved, we found ways to reduce inefficiencies in power utilization.

In turn, this led to x64 server designs that are more efficient still, and the effort spent improving unit cooling led to the realization that the traditional approach of providing a chilled-air environment was fading into the history books.

A lot of factors make this improvement in power usage effectiveness (PUE) possible. The advent of multi-core x64 CPUs keeps the power profile of a typical 1U server constant while boosting compute performance tremendously. Lower-power memory matches the CPU, so current servers deliver much better GFLOPS/watt. At the same time, power supplies achieve much better efficiencies today, reducing the heat load in the server box.
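To make the GFLOPS/watt point concrete, here is a minimal sketch; the throughput and wattage figures are invented for illustration, not taken from any real server:

```python
def gflops_per_watt(gflops: float, watts: float) -> float:
    """Performance-per-watt: the efficiency metric cited above."""
    return gflops / watts

# Hypothetical 1U servers at the same ~500 W power profile:
older = gflops_per_watt(gflops=50.0, watts=500.0)   # single-core era
newer = gflops_per_watt(gflops=500.0, watts=500.0)  # multi-core x64
print(f"older: {older:.2f} GFLOPS/W, newer: {newer:.2f} GFLOPS/W")
```

The point is that performance climbs while the denominator stays flat, so efficiency scales with the core count rather than the power budget.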

The largest breakthrough came from the realization that cooling had to be decoupled from the server engine in a container. We tried water cooling (it leaks!), roof refrigeration, and sidewall refrigeration. In the process, it became obvious that we could drop the chilling if we used bigger fans on the servers, and with racks full of the same structure, we could use 5-inch or bigger fans for cooling. (A techy aside: airflow scales roughly with the square of the fan diameter, so a 5-inch fan versus a ¾-inch fan is no contest!)
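The square-law aside can be checked with a quick sketch. The ¾-inch and 5-inch diameters come from the aside above; the area-only scaling is a deliberate simplification that ignores RPM, blade design, and static pressure:

```python
def airflow_ratio(d1_inches: float, d2_inches: float) -> float:
    """Approximate airflow ratio of two fans, assuming airflow scales
    with swept area, i.e. the square of the diameter. A rough guide
    only: RPM, blade design, and static pressure are ignored."""
    return (d1_inches / d2_inches) ** 2

ratio = airflow_ratio(5.0, 0.75)  # 5-inch fan vs 3/4-inch fan
print(f"roughly {ratio:.0f}x the airflow")
```

Even this crude estimate shows why a rack standardized on large fans needs no chiller assist: the airflow gap is better than an order of magnitude.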

At the same time, a bit of study showed that in many places in the U.S. and EU the number of really hot days is small, and in the northern states even those days are not that hot. The idea of designing containerized server farms for a zero-chill environment is catching on.

[Read about the deal the NSA inked for cooling its future datacenter in Maryland in "NSA Plans To Cool New Datacenter With Wastewater."]

x64 COTS servers that handle 40C inlet temperatures are available for containers. Typically, a cluster of ½-U servers is cooled by large fans and shares redundant power, although there are other configurations. Note that these are not blade servers; the infrastructure and packaging complexity of blades would limit operation to lower temperatures. The same is true of legacy-style mainframes and minicomputers.

40C (104F) is good for many northern sites, but it would be better to have some extra margin. The sticking point is the disk drive, which typically has a 55C ambient spec. It is possible to live with these drives and still move to 45C (113F); I’ve done that for military applications, but it carries design trade-offs.

SSDs rated at 70C and a raft of new hard drives specified at 60C or 65C are solving that problem, so even servers with drives can reach the 45C target.

An alternative is to move all the storage into its own racks or containers. This fits the virtualized cloud model really well, and it allows the servers to reach 45C inlet easily. The storage units tend to run cool, and airflow across the drives is better optimized, so they too can run at 45C.

So what does this do to PUE? The “garage” housing the containers adds very little overhead: with no chilling load, it’s down to lighting and personnel spaces. The container power load is almost entirely computer gear. The overhead is mainly cooling fans, and the large-fan design reduces that tremendously. The net result is that PUEs for the container are less than 1.10 in a zero-chill environment.
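A sub-1.10 figure follows directly from the definition of PUE (total facility power divided by IT equipment power). A minimal sketch, with load figures that are illustrative assumptions rather than measured values:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power.
    1.0 is the theoretical ideal (every watt reaches the IT gear)."""
    return total_facility_kw / it_load_kw

# Assumed loads for one container in a zero-chill site:
it_load = 400.0   # kW of servers, storage, and network gear
fans = 30.0       # kW for the large shared cooling fans
lighting = 8.0    # kW for "garage" lighting and personnel spaces
print(round(pue(it_load + fans + lighting, it_load), 3))
```

With overhead under 10% of the IT load, the ratio stays below 1.10; a chilled-air plant, by contrast, can easily add 50% or more on top of the IT load.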

There are other tricks to reduce power usage in the servers, including using large shared supplies to deliver 12V DC to the server clusters.

Some of these ideas, such as the larger fans, can be applied to single racks of servers, though exhaust airflow ducting is required. The bottom line is that we are moving towards much greener datacenters.

What's the next step? Late in 2014, we’ll see servers where the memory and CPUs are tightly coupled in modules. When that happens, there will be a big boost in performance and a drop in server power draw. Since much less gear will do the job, total datacenter power will go down, though, perversely, PUE may rise slightly, because the fixed overhead becomes a larger fraction of a smaller IT load.
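That perverse PUE effect is simple arithmetic: the overhead stays roughly fixed while the IT denominator shrinks. A quick sketch with assumed figures:

```python
def pue(total_kw: float, it_kw: float) -> float:
    """PUE: total facility power / IT equipment power."""
    return total_kw / it_kw

overhead = 38.0  # kW of fans and lighting, assumed roughly fixed

before = pue(400.0 + overhead, 400.0)  # today's IT load
after = pue(250.0 + overhead, 250.0)   # tighter modules, less gear
# Total power fell from 438 kW to 288 kW, yet the ratio worsened.
print(round(before, 3), round(after, 3))
```

This is a reminder that PUE measures overhead proportion, not total consumption: a site can get greener in absolute terms while its PUE drifts up.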

We’ll see a lot of microservers too. These are low-power, high-density servers that could match the cooling profile needed.

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC ...