For many, the answer is uncertain, and for good reason. While application specialists and IT architects have been packing data centers with blade systems and 1U and 2U servers, it's only the generous power and cooling headroom inherited from the mainframe era that keeps most facilities functioning adequately. But don't count on data center designs of the previous century to do the job much longer.
For most enterprises, existing data centers aren't full, and in aggregate, existing cooling and electrical systems are often sufficient for the overall load they carry. The immediate problem is that today's high-density systems can require far more localized power and cooling than is typically available at any single rack. For example, using HP's online configuration tool, we were able to build a single rack of HP ProLiant BL p-Class blade servers that would require over 30kW of power and would weigh in at over a ton. (We picked on HP here because it provides some of the nicest power requirement calculators, but every other blade system vendor tells a similar story.)
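Back-of-the-envelope arithmetic shows how quickly blade density outruns a conventional rack budget. The sketch below totals power and weight for a hypothetical fully populated rack; every per-unit figure is an illustrative assumption, not HP's published specification.

```python
# Rough rack-level power and weight estimate for a fully populated
# blade rack. All per-unit figures below are illustrative assumptions.

BLADES_PER_ENCLOSURE = 16      # assumed blades per enclosure
ENCLOSURES_PER_RACK = 6        # assumed enclosures in a 42U rack
WATTS_PER_BLADE = 300          # assumed draw per blade, watts
ENCLOSURE_OVERHEAD_W = 400     # assumed fans, interconnects, management
KG_PER_BLADE = 6.5             # assumed weight per blade, kg
KG_PER_ENCLOSURE = 65.0        # assumed empty-enclosure weight, kg

def rack_power_watts():
    """Total electrical load for the rack, in watts."""
    per_enclosure = BLADES_PER_ENCLOSURE * WATTS_PER_BLADE + ENCLOSURE_OVERHEAD_W
    return ENCLOSURES_PER_RACK * per_enclosure

def rack_weight_kg():
    """Total rack payload weight, in kilograms (excluding the rack frame)."""
    per_enclosure = BLADES_PER_ENCLOSURE * KG_PER_BLADE + KG_PER_ENCLOSURE
    return ENCLOSURES_PER_RACK * per_enclosure

print(f"Rack power: {rack_power_watts() / 1000:.1f} kW")   # 31.2 kW
print(f"Rack weight: {rack_weight_kg():.0f} kg")           # 1014 kg
```

Even with conservative per-blade numbers, the totals land above 30kW and a metric ton, in line with what the configuration tool reported.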
In practice, such a configuration is difficult to deploy. According to Neil Rasmussen, CTO of uninterruptible power supply (UPS) and cooling giant APC, "Not only will you need to size the UPS to power the data equipment, but you'll also need to include the air handling units. In the 10 to 30 seconds required for a backup generator to kick in, a 30kW system will overheat itself." To avoid the additional cost of battery backup for AC units, Rasmussen says a single rack's power budget shouldn't exceed 15kW.
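The overheating claim can be sanity-checked with simple thermodynamics: with the air handlers offline, the rack's full electrical load dumps into the surrounding air as heat. The sketch below estimates the resulting temperature rise during a generator transfer; the air volume and transfer time are illustrative assumptions, and the model ignores recirculation and heat absorbed by equipment and walls.

```python
# Estimate air temperature rise while cooling is offline during a
# generator transfer. Volume and timing below are illustrative assumptions.

AIR_DENSITY = 1.2           # kg/m^3, air at roughly room conditions
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K), specific heat of air

def temp_rise_c(load_watts, seconds, air_volume_m3):
    """Temperature rise (deg C) if load_watts of heat dumps into a
    sealed volume of air for the given number of seconds."""
    heat_joules = load_watts * seconds
    air_mass_kg = AIR_DENSITY * air_volume_m3
    return heat_joules / (air_mass_kg * AIR_SPECIFIC_HEAT)

# A 30 kW rack, a 20-second generator transfer, and roughly 15 m^3
# of air around the rack (assumed values):
print(f"{temp_rise_c(30_000, 20, 15):.1f} deg C rise")
```

Under these assumptions the nearby air warms by tens of degrees in well under half a minute, which is why the air handlers must ride on the UPS along with the IT load.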