Data Center Efficiency Plateaus

By Kurt Marko | Commentary

A recent survey shows that efficiency is no longer a major priority for data center operators. Meanwhile, public cloud services and DCIM software are on the rise.

The latest Uptime Institute Data Center Industry Survey reveals some interesting trends, including a reduced focus on data center efficiency.

According to Uptime's survey, which queried 1,000 data center facilities operators, IT managers and senior executives from around the globe, efficiency, as measured by median responses (so we're not talking about behemoths like Amazon, Google or Facebook), has plateaued and is no longer considered an urgent priority. Only half of North American respondents said they consider efficiency very important.

Uptime's data shows that the initial gains in PUE (power usage effectiveness, the ratio of total facility power to IT power and the standard metric for data center efficiency), which improved dramatically from 2007 to 2011, were largely the result of easy fixes like properly isolating hot and cold aisles, installing blanking panels in unused rack segments and upgrading old power distribution equipment to more efficient models. Now, however, improvements are much harder and more costly to come by. Thus, most operators consider a PUE of 1.65 (the average in this year's survey) good enough, even as the mega colocation centers and cloud operators race to see who can edge closer to the ideal level of 1.0.
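
To make the metric concrete, here is a minimal sketch of the PUE arithmetic in Python. The 1 MW IT load is a hypothetical example, not a figure from Uptime's survey:

```python
# Minimal sketch of the PUE arithmetic. PUE = total facility power
# divided by IT equipment power; 1.0 is the (unreachable) ideal.
# The 1 MW IT load below is illustrative, not an Uptime figure.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return total_facility_kw / it_load_kw

def overhead_kw(it_load_kw: float, pue_value: float) -> float:
    """Power going to cooling, power distribution, lighting, etc."""
    return it_load_kw * (pue_value - 1.0)

it_load = 1000.0  # hypothetical 1 MW IT load
for p in (2.0, 1.65, 1.2):
    print(f"PUE {p:.2f}: {overhead_kw(it_load, p):,.0f} kW of non-IT overhead")
# PUE 2.00: 1,000 kW; PUE 1.65: 650 kW; PUE 1.20: 200 kW
```

At a 1 MW IT load, the difference between the survey's 1.65 average and a hyperscaler-class 1.2 is roughly 450 kW of overhead, which explains why the giants keep chasing the metric while the median operator has stopped.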

An easy fix can be borrowed from every homeowner trying to cut a summer electric bill: just crank up the thermostat. Only 7% of respondents operate data centers at temperatures above 75 degrees Fahrenheit, even though ASHRAE, the professional society of HVAC engineers, says 80 degrees is a reasonable upper bound.
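
As a rough illustration of why the thermostat matters, the sketch below applies the commonly cited rule of thumb of about 4% cooling-energy savings per degree Fahrenheit of setpoint increase; both that constant and the 400 kW cooling load are assumptions for illustration, not survey data:

```python
# Illustrative setpoint arithmetic. The 4%-per-degree-F savings factor
# is a widely cited industry rule of thumb, not an Uptime figure, and
# the cooling load is a made-up example.

SAVINGS_PER_DEG_F = 0.04    # assumed fractional cooling savings per degree F
cooling_load_kw = 400.0     # hypothetical cooling load
current_setpoint_f = 72
proposed_setpoint_f = 78    # still below ASHRAE's 80-degree upper bound

degrees_raised = proposed_setpoint_f - current_setpoint_f
savings_kw = cooling_load_kw * SAVINGS_PER_DEG_F * degrees_raised
print(f"Raising the setpoint {degrees_raised} F saves ~{savings_kw:.0f} kW of cooling load")
# Raising the setpoint 6 F saves ~96 kW under these assumptions.
```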

Another drag on efficiency is the prevalence of zombie servers, the survey indicated. "According to Uptime Institute’s estimates based on industry experience, around 20% of servers in data centers today are obsolete, outdated or unused," the report said. Uptime estimates that for every 1U zombie unplugged, operators save about $2,500 a year in energy, OS licenses and hardware maintenance.
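
The back-of-the-envelope math, using the two figures Uptime cites above (a 20% zombie rate and $2,500 per year per unplugged 1U server), is straightforward; the fleet size here is a made-up example:

```python
# Savings estimate built from Uptime's quoted figures: ~20% of servers
# are obsolete or unused, and each unplugged 1U unit saves roughly
# $2,500/year in energy, OS licenses and hardware maintenance.

ZOMBIE_FRACTION = 0.20           # Uptime's industry estimate
SAVINGS_PER_1U_PER_YEAR = 2_500  # dollars, per Uptime

fleet_size = 800  # hypothetical server count
zombies = int(fleet_size * ZOMBIE_FRACTION)
annual_savings = zombies * SAVINGS_PER_1U_PER_YEAR
print(f"~{zombies} likely zombies, ~${annual_savings:,}/year if decommissioned")
# ~160 likely zombies, ~$400,000/year if decommissioned
```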

[Uptime's study also indicated that data centers are becoming the domain of service providers as smaller enterprises increasingly outsource their data center operations. Read Kurt Marko's analysis in "Data Center Study: The Big Get Bigger."]

In addition to data center efficiency trends, the Uptime report highlights three data center technologies that are poised for explosive growth: public cloud services, data center infrastructure management (DCIM) software and prefabricated modular data centers. While we'd agree on the first two, we have our doubts about modulars.

Public cloud growth is a no-brainer, as nearly every survey, including ours, shows that enterprise resistance -- fueled by a combination of protectionism, security and performance FUD, and immature management software -- is rapidly crumbling. Only 20% of the respondents to InformationWeek's State of Cloud Computing Survey have no plans to use a cloud service provider. Uptime finds global cloud adoption still rather low at 28%, but large companies are twice as likely as smaller ones (as defined by the total number of operated servers) to deploy public cloud services.

In contrast, private cloud seems to have hit a brick wall, with deployment actually falling in Uptime's survey. Either it's harder than people think, or smaller companies figure: why bother re-architecting for a private cloud when they can rent a ready-made one from AWS or Rackspace?

According to the survey, 38% of respondents use DCIM software, which Uptime defines as a facility-wide system that catalogs assets, collects usage statistics and records operational status. Using some homegrown spreadsheets and open source monitoring tools doesn't qualify, although we would argue that something like Nagios is a long way from DIY Perl scripting and includes many DCIM features. Uptime's number seems high, but the respondent demographics skew large, with 82% managing more than one site and 42% in the business of data center hosting as a colocation or cloud service provider.
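
To ground Uptime's definition, here is a toy sketch of the three functions it names (an asset catalog, usage statistics and operational status) as a single record type. This is an illustration, not any vendor's actual data model:

```python
# Toy DCIM-style asset record covering the three functions in Uptime's
# definition: cataloging assets, collecting usage statistics and
# recording operational status. Purely illustrative.

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    ONLINE = "online"
    DEGRADED = "degraded"
    OFFLINE = "offline"

@dataclass
class Asset:
    asset_id: str                        # catalog: what it is
    rack: str                            # catalog: where it lives
    rack_unit: int
    status: Status = Status.ONLINE       # operational status
    power_draw_w: list[float] = field(default_factory=list)  # usage stats

    def record_power(self, watts: float) -> None:
        self.power_draw_w.append(watts)

    def average_power_w(self) -> float:
        if not self.power_draw_w:
            return 0.0
        return sum(self.power_draw_w) / len(self.power_draw_w)

srv = Asset("web-01", rack="A3", rack_unit=17)
srv.record_power(310.0)
srv.record_power(295.0)
print(f"{srv.asset_id}: {srv.average_power_w():.1f} W average, {srv.status.value}")
```

A real DCIM suite layers discovery, facilities sensors and reporting on top, but the core record is this same trio of identity, telemetry and status, which is also why tools like Nagios blur the line.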

Of course, only large operators can justify the cost of DCIM tools. Even among the smaller companies in Uptime's sample, 72% report spending over $100,000 on DCIM tools, while 17% of the largest spend $400,000 or more.

I take issue with Uptime's prediction regarding prefab modular data centers. Those semi-truck shipping containers made into tightly packed computer rooms are a clever idea whose time has come and gone. When first introduced more than five years ago, modulars offered superior energy and space efficiency to conventional facilities, but with some significant downsides. First, you needed to redesign data center facilities to look more like a mobile home park -- with concrete pads and utility drops -- than a self-contained warehouse. Second, with such tight quarters, if -- or make that when -- a modular's cooling system so much as hiccups, the temperature spike could roast everything inside within minutes.
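
The thermal-runaway worry is easy to quantify with first-year physics: with the cooling offline, the air heats at dT/dt = P / (m * c_p). The sketch below assumes a 40-foot container with roughly 65 cubic meters of free air and a 200 kW IT load; both numbers are illustrative:

```python
# Rough estimate of how fast a densely packed modular heats up when
# cooling fails: dT/dt = P / (m * c_p). The container volume and IT
# load are assumed values for illustration.

AIR_DENSITY = 1.2         # kg/m^3 at room temperature
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

air_volume_m3 = 65.0      # assumed free air volume of a 40-ft container
heat_load_w = 200_000.0   # assumed IT load with cooling offline

air_mass_kg = air_volume_m3 * AIR_DENSITY
rise_c_per_s = heat_load_w / (air_mass_kg * AIR_SPECIFIC_HEAT)
print(f"Air temperature rises ~{rise_c_per_s:.1f} C per second")
# ~2.6 C/s for the air alone. The thermal mass of the hardware slows
# the real rise considerably, but "minutes" is the right order of magnitude.
```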

According to Uptime's own data, modular adoption is tepid, with only 8% of data center operators having deployed them and another 8% considering them. The majority of respondents (53%) have no interest. Even among large operators, only 15% have modular deployments.

Comments
Lorna Garey | 8/20/2013 7:57:20 PM | re: Data Center Efficiency Plateaus
Re "large companies are twice as likely as smaller ones (as defined by the
total number of operated servers) to deploy public cloud services" -- that's surprising. The cliche is the F500 with sunk data center costs dragging its feet while nimble startups run100% in the cloud.
Aaron_Rallo | 8/16/2013 11:59:38 PM | re: Data Center Efficiency Plateaus
Many data centers simply lack the visibility to truly understand their power usage and easily measure, analyze and fix areas of inefficiency. PUE is a great starting point for measuring energy efficiency, but deep-level insight into power usage at the individual server or application level can reveal significant energy-savings opportunities. See how automatically right-sizing server capacity based on IT workload generates energy savings on a monthly basis yet still allows you to meet peak demand: http://bit.ly/1cTyiwN
kmarko | 8/16/2013 7:38:10 PM | re: Data Center Efficiency Plateaus
Thanks for the clarification vis-à-vis modulars. While your points about construction efficiency and different construction methods (prefab housing model vs. cargo container model) are valid, I still contend that when it comes to overall facility efficiency, flexibility and scalability, modulars are an inferior choice to the types of megacenters built by the likes of Apple, Facebook, Switch, DuPont Fabros, et al. As I pointed out, Microsoft thinks differently, but it is in the minority. YMMV.
DanielB551 | 8/16/2013 12:48:29 PM | re: Data Center Efficiency Plateaus
Hi Kurt,

I am an analyst with 451 Research, a sister company of Uptime. Our definition of prefabricated modular datacenter does indeed include containers; however, it also covers any other prefabricated form factor. There are vendors that have developed PFM designs that very closely resemble a brick-and-mortar datacenter facility.

Even if there were no intention of mimicking a traditional build, there are many steel-frame and enclosure-based options available that can create contiguous data halls, grey space and support areas. In fact, there is at least one vendor that can join up containers to create larger spaces. We currently track about 40 vendors, and other than a few containerized ones, I have yet to encounter two identical offerings. That makes pragmatic decisions more complex, of course, but the number of designs is vast.

Bottom line: the speed and predictability (cost, performance) of prefab deployments vs. traditional builds should overcome any downsides and misconceptions over time. Prefab doesn't necessarily satisfy all use cases (retrofitting an existing non-purpose-built building for colocation comes to mind), but traditional DC construction projects are not the way forward. They simply don't make sense any more.

Your point about thermal runaway is also valid for high-density containers/rooms, but the design/cost overhead of adding additional critical venting should be marginal.