This Old Data Center

We've gathered the know-how you need to get your data center in shape to meet current and future demands.

May 20, 2005


[Sidebar: Top 5 Clues Your Data Center Needs Upgrading]

Laying the Foundation

Although much data-center planning and implementation is specific to each enterprise, a few universal truths hold. First, the small stuff matters. Even the simplest technology, like cabling, deserves your full attention. Second, any upgrade request will likely generate the question: Should we do this ourselves or outsource? To answer, examine your core competencies and weigh total costs of data-center operation against services provided by the likes of Equinix, Rackspace Managed Hosting and Savvis Communications.

Off-loading IT functions may be a hot trend--the worldwide IT outsourcing market is predicted to grow 7.9 percent yearly to reach $429.2 billion by 2008, according to Gartner--but it's not always the best choice (for insight, see "Why Outsource?"). If you do outsource your data center, a comprehensive SLA (service-level agreement) is a must; "Craft a Data-Center SLA So It Meets Business Needs" provides tips.

If you maintain your data-center services in-house, periodic upgrades are a constant, as is dealing with the heat--and we don't mean the local building inspector. Rather, the laws of physics say expansion and contraction cannot co-exist at the same time, in the same place, on the same object. But that's what we're asking our data centers to do: Expand to include more CPU and processing power to drive business applications, yet take up less space and reduce overhead.

Before Y2K, servers drew about 50 watts of electricity. Now they typically use 250 watts. This is more than an increase in electrical requirements--more heat is generated, too. But at the chip level, vendors are packing more performance into smaller spaces, so the power density-to-performance ratio is improving overall. For example, AMD now squeezes two microprocessors onto a single piece of silicon without consuming extra power compared with its single-core predecessors (see "AMD Ups the Ante"). Likewise, Intel's Hyper-Threading adds a second set of architectural registers so one physical processor appears as two logical processors, and Sun's Niagara uses a multicore, multithreaded design to improve performance. But to cut power consumption further, we'll have to wait for resource controls that automatically slow processors and fans to half- and quarter-speed when they're not in use.

We also now pack more boxes into more racks, and power draw and heat generation climb steeply. The logical solution would be to add space or turn up the air conditioning. But adding square feet or increasing power to keep things cool erases the savings most companies were trying to achieve by shrinking platforms. At the system level, vendors are looking at better cooling methods for individual servers and at building racks with water and chemical cooling. But those solutions add costs, too.

Our recommendation is to watch for smart new offerings like the Egenera BladeFrame, which efficiently brings together CPU, storage and network resources on demand. Innovative building design can help as well; more on that next as Curt Franklin gives us a look at the new physical data center, with an eye on security. --Sean Doherty

The drivers behind the big trend in data-center system design sound like an ad for the construction crew's fast-food lunch: faster, denser and hotter. Blade servers increase your processing-per-foot ratio but suck up power and generate serious heat. NAS (network-attached storage) systems pack many terabytes of storage--and thousands of watts of heat generation--into similarly dense packages, as do 10 Gigabit Ethernet network switches. Add to these loads the requirements of high-availability network components, VoIP (voice over IP), information security and flexible deployment requirements, and what you need is a high-demand, heavy-load, quickly reconfigurable data center. No problem, right?

As we'll discuss later, blade servers are the future. But as vendors increase processing density in rack enclosures by putting as many as 60 servers in a standard 42U rack, they also increase the power demands of those racks. As an example, the Dell 1855 blade server will pull a maximum of 4,437 watts. With six chassis per rack, that's 26,622 watts per rack. Just for comparison, on a 208V service, that's roughly a 128-amp load--and the standard modern house has a 200-amp panel. Meanwhile, a rack of 42 1U servers (we'll use the Dell 1850 as an example) pulls 23,100 watts at 110 volts--a load, again, in the neighborhood of an average suburban house's entire electrical service!
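For readers who want to run these numbers against their own gear, here's a minimal back-of-the-envelope sketch in Python using only the figures quoted above; the 550-watt per-server figure for the 1U case is simply the 23,100-watt rack total divided by 42 servers, not a verified vendor specification.

```python
# Back-of-the-envelope rack power math, using the figures quoted in the text above.

def rack_load(watts_per_unit: float, units_per_rack: int, volts: float):
    """Return (total watts, amps) for a rack of identical units on a given service voltage."""
    total_watts = watts_per_unit * units_per_rack
    return total_watts, total_watts / volts

# Blade scenario: six chassis per rack, each drawing a maximum of 4,437 W, on 208 V service.
watts, amps = rack_load(4437, 6, 208)
print(f"Blade rack: {watts:,.0f} W, {amps:.0f} A at 208 V")   # ~26,622 W, ~128 A

# 1U scenario: 42 servers at roughly 550 W each on 110 V service.
watts, amps = rack_load(550, 42, 110)
print(f"1U rack:    {watts:,.0f} W, {amps:.0f} A at 110 V")   # ~23,100 W, ~210 A
```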

The good news about blades is that they make cable management much easier, with a single set of KVM (keyboard, video, mouse) switches and power cables for each chassis of as many as 10 server blades. As KVM switches have become more popular, cable-management problems have eased, with Cat5 replacing the "octopus" cables previously used to carry control signals. In addition, IP KVMs enable remote management of multiple servers. These devices are finding a home in conventional server installations as well--administrators are eager to assume single- and remote-console control over widely dispersed systems. A similar function is available to routers, switches and other serial-console appliances through console servers from Cyclades, Digi, Raritan and others. But remember: With any IP control system, security is a significant concern. Authentication and encryption features should be comparison points for products in this category, as should programming capabilities for creating complex login and automated control scripts.
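To give a flavor of the scripting these console servers and IP KVMs invite, here's a minimal sketch that pulls device information over SSH with the widely used Paramiko library; the host name, account and command are hypothetical, and for exactly the security reasons noted above, a real deployment would rely on key-based authentication and host-key pinning rather than anything hard-coded.

```python
# Minimal remote-console sketch over SSH using Paramiko. The host, account and command
# are placeholders; use key-based authentication and host-key pinning in production.
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # pin known host keys for real use
client.connect(
    "console01.example.com",                                   # hypothetical console server
    username="netops",
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),
)

stdin, stdout, stderr = client.exec_command("show version")    # command for the attached device
print(stdout.read().decode())

client.close()
```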

Speed and Density

The pressure to keep server and appliance densities high (and put as many devices as possible into 1U or 2U packages) has led to what we call the sandwich effect--when you mash down on the top, what's inside has to come out, either on the sides or the back. Vendors have chosen the back. Although they've stuck to the 19-inch standard width, they've increased the length from 36 inches to 42 inches to 46 inches and more, requiring organizations to expand the depth of their rack enclosures. In server rooms with many rows of rack enclosures, this is a, ahem, growing problem: A few extra inches of depth in each row shrinks aisle space to the minimum allowed by fire code and cuts the number of rows you can place in the room.

Designers also are beginning to think about the move to 10 Gigabit Ethernet for network backbones. All current 10 Gigabit Ethernet implementations are fiber-optic-based, but Ortronics and other manufacturers have begun touting connectors and patch panels rated for 10 GigE copper. In addition to considering standards and connector mechanical quality, moving 10 GigE over CX4 cables means changing the way cables are laid in cable trays--the neat, parallel cable bundles that work so well for Fast and Gigabit Ethernet produce significant cross-talk between cables at 10 GigE speeds. Separate 10-GigE copper runs where possible; where you can't, lay them in a random sequence so they cross one another at shallow angles rather than lying alongside one another.

At the same time, as fiber-optic cables have grown in popularity, MT-RJ connectors have replaced SC connectors and allowed patch-panel densities to increase. This increase has been in step with optical switch port density increases to serve the expanding fiber coverage, all of which leads back to--yes, rack density and heat buildup. At the end of a data center's life, fiber may have an additional advantage: With its smaller diameter and greater data densities, it can be easier to pull out, as is required by more and more leases and building codes when major renovations or tenant changes take place.

Although water-cooled computers are less common than they once were, heat problems haven't evaporated. With flexibility an overriding concern, cooling is moving away from giant floor units that require a raised floor to rack-specific air-conditioning units, such as the Liebert RackCooler for individual racks and the American Power Conversion (APC) NetworkAir for individual rows of rack enclosures (see our Buyer's Guide, on page 85, for more on new racks). Spot-cooling products don't eliminate the need for room air-conditioning units, but they do let designers reduce the size of the whole-room units, as individual rack- or row-cooling systems pick up the thermal load introduced by new equipment.

Finally, the changing nature of physical-plant requirements tied to the rise of VoIP shouldn't be underestimated. Although reliability expectations for the data center have increased tremendously, the convergence of data and voice is taking those expectations to a new level. As the two media have merged, data centers, and their attached remote wiring closets, have begun to demand reliability levels, expressed in environmental controls and power continuity, that match earlier telephone requirements. This has meant, in many cases, increasing the size of UPS battery packs, with the resultant increase in floor loading. In some cases, companies have had to file hazardous-materials statements with local emergency-services organizations.

Obviously, planning for long-duration emergencies means having generator backup, with diesel or natural gas power; your choice should depend on the reliability of the supply. To make ISP connections truly redundant, where possible, ensure that building entry points are on opposite sides of the structure to minimize the possibility of "backhoe fade," which results when a worker with a large entrenching tool meets a data cable. We've heard horror stories over the past 10 years that have reinforced the importance of having connections with two separate ISPs; these redundant connections provide a measure of protection against both local and upstream network emergencies.

Security, and the way it changes to meet evolving needs, also is at the forefront of the data-center evolution. As information security becomes information assurance, more organizations are moving to fiber optics because fiber's immunity to electrical interference makes it resistant to surreptitious tapping. Fiber-optic cable is sufficiently difficult to tap without interrupting service that social-engineering hacks have become the most common fiber-attack vector, according to two companies that asked to remain anonymous (for obvious reasons). Employees mistakenly think a service provider has sent in a repair crew, don't question the schedule or credentials, and allow tapping to take place under their noses.

Data-center security increasingly is being incorporated into the infrastructure. Environmental sensors and rack-mounted monitoring cameras are common, and network access and physical access are being linked through two-factor authentication in the form of smart cards tied to back-end network directories that store both data and physical-plant authorization levels. The challenge: Combining physical and data security means integrating more than separate systems. It means merging two different professional cultures. Negotiations over where access databases should be stored and who may access and modify fields within those databases--not to mention discussions on the implementation of policies on switches, firewalls and other IT infrastructure components--have resulted in multiple, redundant, out-of-band control networks for different administrative functions. Whether all the management networks run through a single infrastructure separated into virtual LANs or on separate switching and cabling plants will determine how much space within racks and cable trays is dedicated to administration.

Security considerations also dictate how video and environmental signals are distributed from server rooms and remote wiring closets. The way IT and security responsibilities are usually split, information from cameras and sensors on equipment racks goes over a dedicated coaxial-cable network to the network operations center, while information from cameras in the parking lot and halls is routed to security guards, again over a dedicated coax network. In the new model, video signals and instrumentation data are routed across an IP network to as many locations as have security responsibility for the area. Within the data center this creates another dedicated network that must be racked and managed, but that's really the story of the entire data-center infrastructure: Individual components are smaller and perform a wider range of functions, but the range of tasks and the number of components have continued to grow.

Backhoe fade--modern digital communications brought low by low-tech forces like a broken air conditioner or a tree falling on power lines--is symbolic of the problems faced by any data-center designer. Packing as much speed and as many features as possible into the racks while keeping attackers and the forces of heat and power loss at bay is the challenge that will keep you evolving your design well into the future.--Curtis Franklin Jr.

Data centers go through cycles. The conventional glass house comprised mainframe and midrange computers serving up VMs (virtual machines) for customers. That model gave way to minicomputers and a client-server architecture. Now, VM technology is making a comeback, as data centers look to virtual servers and storage. Whether this VM cycle will continue through grid and utility computing architectures, only time will tell. Regardless, data centers must support and maintain increasingly resource-hungry business applications through the 21st century, and that means network backbones must be upgraded to 10 Gigabit Ethernet.

Any new structured wiring plan should spec 10 Gigabit Ethernet over UTP (unshielded twisted pair) only where it can be contained within its 100-meter limit; beyond that, you must add fiber. Extend your UTP wiring with Category 6 and review the TIA (Telecommunications Industry Association) 568B.2 standard (see "Your Data Center 911-Style" for more on cabling).

If you have the budget for new hardware, consider our Best of Interop Grand Prize Winner, Foundry Networks' BigIron RX-4, RX-8 and RX-16 10 Gigabit Ethernet-ready routers. These machines support up to a million unique IPv4 routes in the hardware FIB as well as a redundant switch fabric. Cisco's Catalyst 6500 with additional network and line cards and Nortel Networks' Ethernet Routing Switch 8600 also are worth considering. But the BigIron RX Series claims the grand prize by being able to send 2.9 billion packets per second in a seven-foot rack of 192 10-Gbps ports or 1,152 1-Gbps ports.

Convergence Costs

Data centers must also expand and contract their network capabilities to deliver voice and video in real time. Most data centers have a three-tier architecture covering the edge, an aggregation layer and a core. The next move is toward simpler designs that support a two-tier architecture: edge and core. This will reduce costs and, by cutting latency and jitter, improve reliability for VoIP and other real-time multimedia applications.

If you think your network is contracting more than it is expanding, it's time for a QoS (quality of service) strategy. Consider turning on traffic prioritization and queuing (IEEE 802.1p/Q) or a ToS (type of service)-based QoS for Layer 3. Many intermediate devices support these schemes, and it's only a matter of enabling them across your network. For additional convergence support, Cisco's ISR (Integrated Service Router) Series includes QoS and onboard voice-processing support for VoIP services.
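As a small illustration of the host side of that strategy, the sketch below marks a UDP socket with DSCP EF (the per-hop behavior commonly used for voice) so DSCP-aware switches and routers can prioritize the traffic; the destination address is hypothetical, and the network gear still has to be configured to honor the marking.

```python
# Mark outgoing UDP packets with DSCP EF (46) so QoS-enabled gear can prioritize them.
# The ToS byte carries the DSCP value in its top six bits. Destination is illustrative.
import socket

DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2   # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)   # supported on most Unix-like stacks

sock.sendto(b"voice-frame", ("192.0.2.10", 16384))   # stand-in for an RTP media stream
```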

To fully support digital convergence, many 10 Gigabit switches, including Extreme Networks' BlackDiamond 8800 and Foundry's FastIron SuperX, also support IEEE 802.3af (Power over Ethernet) at the edge. You'll also need PoE to power phones and other network devices in the data center, such as surveillance cameras, Wi-Fi access points and environmental monitors.

If you don't have the resources to support edge switches for VoIP, check out midspan controllers from the likes of ADC and PowerDsine. These devices can support more than 15 watts per port, powering IP telephony, security cameras, APs and more. They come in rack units that can be daisy-chained for management. ADC says its TrueNet Midspan Power-over-Ethernet Controller can even detect VoIP devices predating the 802.3af standard and supply the appropriate power over Ethernet.
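Before hanging phones and cameras off a midspan, it pays to total the per-port power the unit must reserve. The sketch below uses the standard 802.3af class allocations; the 370-watt unit capacity and the device mix are assumptions for illustration only.

```python
# Rough PoE budget check against an assumed midspan capacity.
# Per-class allocations follow IEEE 802.3af (PSE side); capacity and device mix are illustrative.

CLASS_ALLOCATION_W = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4}

def poe_budget_ok(port_classes, capacity_w):
    needed = sum(CLASS_ALLOCATION_W[c] for c in port_classes)
    print(f"Reserved {needed:.1f} W of {capacity_w:.0f} W available")
    return needed <= capacity_w

# Hypothetical 24-port midspan with a 370 W budget: 18 class-2 phones, 6 class-3 cameras.
poe_budget_ok([2] * 18 + [3] * 6, 370.0)   # 18*7 + 6*15.4 = 218.4 W -- fits
```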


Big Pipes

Upgrading your backbone to 10 Gigabit Ethernet may be the least of your concerns. Once that's done, you may be all gigged up with no place to go. A big problem for data centers is the cost and viability of finding truly redundant Internet providers--if you don't have sufficient bandwidth reserved in dark fiber, get it.

If you do a decent volume of business with your ISP, it may be willing to fulfill your bandwidth requirements on demand, assuming dark fiber is available. Otherwise, you'll have to acquire it--which will add a layer of complexity to your data center's ability to expand and contract.

Many devices will help maximize Internet and WAN bandwidth by routing, switching, caching and accelerating traffic. These devices, which often are content-aware and can direct traffic to meet service-level demands, are critical for multimedia applications, such as VoIP and unified messaging. Don't cut corners here. If your Internet pipe goes, so do your customers.

If you keep your applications in sight, designing the turns and joints in your plumbing is easy. Look for content-aware devices that can route, switch and even filter traffic from Layer 2 to 7 as well as off-load CPU-intensive transactions, including SSL and even TCP. Content switches like F5's Big-IP and Netscaler's 9950 cache application requests, insert client headers, filter and redirect HTTP traffic to targeted servers. They can even stop traffic carrying malicious content or viruses. If SSL-enabled Web service traffic is clogging pipes, SSL accelerators will increase performance by off-loading encryption tasks to dedicated silicon.
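The decision logic inside these boxes is straightforward even if the hardware isn't. Here's a toy sketch of the kind of Layer 7 routing a content switch performs; the pool names, addresses and URL patterns are invented for illustration and aren't drawn from any vendor's configuration.

```python
# Toy Layer 7 routing decision of the sort a content switch makes in hardware.
# Pools, addresses and URL patterns are illustrative only.

BACKEND_POOLS = {
    "static":  ["10.0.1.11", "10.0.1.12"],   # image and CSS servers
    "app":     ["10.0.2.21", "10.0.2.22"],   # application servers
    "ssl-off": ["10.0.3.31"],                # SSL-terminating accelerator tier
}

def choose_pool(path: str, is_tls: bool) -> str:
    if is_tls:
        return "ssl-off"                     # hand encrypted sessions to the offload tier
    if path.startswith(("/images/", "/static/")):
        return "static"
    return "app"

print(choose_pool("/images/logo.gif", False))   # -> static
print(choose_pool("/checkout", True))           # -> ssl-off
```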

Data centers serving branch offices should make the most of their WAN links with dedicated accelerators in-house and at the remote location. Peribit's SM-500 and SM-250 make a good pair to speed Exchange, CIFS, HTTP and other traffic types between sites (see Peribit Sequence Mirror Maximizes WAN Pipes).

If TCP transactions are taking up too many CPU cycles, you can accelerate those, too. Alacritech's programmable STA2000 Internet Protocol Processor (IPP) ASIC off-loads TCP/IP transaction processing and can accelerate both network and storage traffic. The IPP technology is built into Alacritech's multiport Gigabit SEN2000 NIC and the SES2000 iSCSI Accelerator. Both cards provide failover for high availability.

As for redundancy, if you've done your homework, you've provided dual links to avoid backhoe fade. If you choose multiple ISPs, find out whether they are using the same POP (point of presence). If they are, you have a single point of failure. --Sean Doherty

The data center is a big service sundae, loaded with high-calorie, interwoven systems and network-connected applications on various operating systems, topped by a thick, gooey layer of power, cooling and physical security. Not all IT services are equal, of course--some demand redundant failover and disaster recovery, while less precious systems sit quietly in the corner. But for a data center to pull its weight, all these interrelated services must be maintained, and functions must be repeatable and sustained. This is the stuff of 2 a.m. panic.

These interrelated consumption relationships aren't difficult to understand in a vacuum. Block system diagrams can explain how each component depends on another, and which versions of OSs work together. But we don't work in a world of block diagrams. Configuration details shift over time, and the data center's shared environmental and human support services are complex. Without data-center-management coordination, the efficiencies achieved through sharing are quickly replaced with extended downtime and a longer MTTR (mean time to restore).

To track and coordinate change within the data center, you must manage device configurations and dependencies. This means having documentation, inventories and audits in place. Easier said than done, right?

Best practices outlined by the IT Infrastructure Library (ITIL) manage configuration creep under a four-pronged approach: two management practices, configuration and change management; a database, the configuration management database (CMDB); and a recommendation that "strongly suggests" (ITIL talk for, "You'd be crazy not to!") using technology to automate change- and configuration-management procedures and CMDB updates. In other words, buy or build some configuration-management software (for the lowdown on ITIL, see "IT Best Practices With ITIL").

Guarding the gate of system configuration change is the ITIL change-management process, aka production control, operations control board, or when you need that last-minute, just-before-vacation change, the ever-popular change police. The ITIL's change-management piece plays a pivotal authorization role in the configuration-change workflow, and core to this role is the analysis of systems impact.


The configuration-management ITIL best practice specifies monitoring configurations and inventories for change. In a perfect world, rigid change-management processes would keep the border so carefully regulated there wouldn't be any need to verify changes, but that's not the reality. Emergencies happen, vacations get taken and changes get made without all the necessary steps and blessings. And when a production system is down, getting it up takes precedence over protocol. It's necessary, therefore, to actively monitor and audit the data center and network systems for changes, to log changes that are found, to notify appropriate operations and support personnel when changes occur, and to ensure these adjustments fit your change-management policies.

All this logging, notifying and monitoring are best done automatically, of course. Enter configuration-management software, which we review on page 59.

Configuration Software

We found that basic monitoring, logging and notification are only part of the functionality provided by configuration-management software. These systems also automatically download running and start-up configurations; get hardware and OS information; and schedule uploads of configurations, OS upgrades and patches to targeted device groups. These groups are created using static lists of automatically discovered systems or network devices, or by using SQL queries to generate a dynamic list when the batch update is to occur.

More advanced offerings include policy-management and workflow features. Policy management defines configuration rules for a group of devices. Rules are then checked for device compliance. If a device's configuration is noncompliant, notification and logging occur. It's even possible to execute a job that updates the configuration to a previous compliant version. This automatic remediation worries most operations personnel, so many vendors let you require operator input as a protection.
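To make the policy-compliance idea concrete, here's a minimal sketch that flags devices missing required configuration lines; the device names and rules are hypothetical, and a real product would pull running configurations over SSH or SNMP and feed its findings into the CMDB and notification workflow rather than printing them.

```python
# Minimal policy-compliance sketch: flag devices whose configuration lacks required lines.
# Device names and rules are illustrative only.

REQUIRED_LINES = [
    "service password-encryption",
    "logging host 192.0.2.50",
    "ntp server 192.0.2.60",
]

def missing_rules(config_text: str) -> list:
    """Return the required lines not present in this configuration."""
    present = {line.strip() for line in config_text.splitlines()}
    return [rule for rule in REQUIRED_LINES if rule not in present]

def audit(configs: dict) -> None:
    for device, text in configs.items():
        gaps = missing_rules(text)
        status = f"NONCOMPLIANT, missing {gaps}" if gaps else "compliant"
        print(f"{device}: {status}")          # a real tool would log and notify instead

audit({
    "core-sw-1": "service password-encryption\nlogging host 192.0.2.50\nntp server 192.0.2.60\n",
    "edge-rtr-7": "logging host 192.0.2.50\n",
})
```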

Common workflow features include access control by grouping devices and users. For users, access can be limited by device and/or what functions can be performed by the configuration-management software on that person's behalf. For example, a lower-level admin can download a configuration but not perform upgrades. Additional workflow features include requiring peer review and management authorization and notification before configuration updates. The tightest workflow comes with integration into third-party service desks, letting a service-desk workflow kick off a configuration update automatically, and having the configuration software notify the service desk when it's done.

Implement configuration and change management on the systems and network devices most important to your organization. We don't mean just crucial servers--equally important are critical shared facilities, like HVAC, electrical power distribution units, room security, Halon and telecommunications access links. These systems can be difficult to monitor automatically, so give them extra consideration during your audit and change-control processes. Also realize that personnel dedicated to change and configuration duties aren't just overhead. Without change and configuration management, you'll pay later when it comes time to diagnose and fix problems.

Finally, it's natural to focus on systems in a data center, but network configurations are just as likely to change and thus also require change management and configuration-management software. Most organizations would benefit from a unified systems- and network-configuration-management process, but unfortunately, no configuration-management software does both. This will change as vendors build a single interface for the ITIL-prescribed CMDB updates and service-desk integration, but they tell us that functionality is at least a few years off. Part of the reason lies in the different skill sets required to manage system and network configurations, as well as the organizational reporting and budget divisions between the two groups. AlterPoint, Opsware and other vendors are trying to combine systems- and network-configuration management. Until they finish, the only option is a divide-and-conquer approach of implementing two separate products.

Most enterprise data centers are going to get a bigger bang out of controlling their system changes by limiting access and automating provisioning. Network-configuration management is expensive and best suited to service providers, which will gain more efficient use of network bandwidth, and to large WAN-connected enterprises. How large? If you manage between 50 and 100 routers, you have enough pain to offset the $60,000 to $80,000 price tag for full-fledged network-configuration software. --Bruce Boardman

For the first time in years, the server market is entering a period of true innovation. Blade servers and increasing Ethernet speeds are among the improvements that will help you service the needs of tomorrow's business. The advent of blades especially will mean changes for IT: Your data center will get denser, time spent on standalone systems will shift to time spent on logical systems, and you may experience KVM problems if you purchase before IP KVMs become standard fare in every blade server. But you will be able to tell management that your server infrastructure is more agile. Not only will you get buzzword points, you'll be telling the truth--the nature of a well-designed blade infrastructure is such that the pain from server failure and required upgrades will be minimal, and the rate of failure will be no more than it would be on dedicated, non-RAID'd 1U servers.

If yours is like most organizations, you've got servers of all sizes and ages, from old Pentiums and Pentium IIs that still do the job to 1U and 2U quad-processor Xeon and Opteron boxes, mainframes, and AIX/HP-UX/SPARC boxes that support the lifeblood of information flow across your organization. But no matter how well the older machines work, you must put them out to pasture before you're affected by OS end-of-life issues, lack of interoperability with newer hardware and risks of failure under increasing traffic loads. Fortunately, products like Microsoft Virtual Server and VMware let you run applications that require older versions of an OS or older hardware on a newer system by virtualizing the underlying hardware, meaning you can cram four "old faithful" boxes into one server when the time comes to replace them, provided the apps aren't too CPU-intensive. This isn't a perfect solution, but for many applications that run only on out-of-date hardware, it's worth considering.

The good news is that blade servers are here, manageable and versatile. If you need a new Windows machine, throw in a blade, give it disk space, and install Windows. When you need a Linux machine, throw in a blade right next to that first one, give it disk space, and install your favorite Linux distribution.

Of course, there are gotchas with blade servers--they're still an emerging technology--but the benefit of fitting as many as 14 dual-processor machines into 4U of space is worth a small amount of hassle. Think about that: A 43U rack of 1U servers could be replaced by 16U of blades with no significant loss of computing power--assuming your 43U rack is full of dual-processor boxes, that is.

Needless to say, a full rack of blade servers consumes significantly more power than the equivalent rack space of 1U and 2U servers. Rack vendors recommend three-phase 220-volt power to racks for just this reason. Power per CPU is generally lower in a blade server, but you're increasing density by a factor of at least 3.5, making overall consumption in each full rack much higher. Cable management, already a problem in most data centers, gets even uglier as you try to cram six to 14 connections into the space of four. Luckily, you won't have to connect everything separately: You'll run video and power once per 4U blade-server chassis, so it's just a matter of Ethernet links.

In addition, disk space may be a problem in many scenarios. A smart blade-server designer won't put dedicated disks on each blade. Why introduce something with moving parts to an otherwise solid-state blade? Vendors that do will have a higher rate of returned blades, and the competition will be quick to point out that failure rate to potential customers. In a high-density environment, the chance of an individual blade failing is pretty small, but multiply that chance by the number of blades you can cram into a single rack, and you'll see significant annual failure rates; add a spinning hard disk to each blade, and the rate climbs further. Sure, it's easier to replace a blade server than a 1U box, and you would have had disks in the machines if they were all 1U servers, but the ease of provisioning and the reduced per-server costs of blades mean you're likely to have more of them than you would have had 1U servers.
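The failure math is easy to sketch: if each blade has some small annual failure rate and failures are independent, the odds that at least one blade in a rack fails in a given year grow quickly with density. The 2 percent per-blade rate below is an assumption for illustration, not a measured figure.

```python
# Chance that at least one unit in a rack fails during the year, assuming independent
# failures. The 2% per-blade annual failure rate is an illustrative assumption.

def p_any_failure(annual_failure_rate: float, units: int) -> float:
    return 1 - (1 - annual_failure_rate) ** units

AFR = 0.02
for units in (1, 14, 60):
    print(f"{units:>2} blades: {p_any_failure(AFR, units):.1%} chance of at least one failure per year")
# 1 blade: 2.0%; 14 blades: ~24.6%; 60 blades: ~70.2%
```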

So why do blade server vendors put disks on each blade? They're attempting to model the environment you're replacing. It's a good transition strategy, but long term it only adds to ongoing expenses. We envision a day when there will be no hard disks on individual blades (more on this later in our storage section).

All that said, the benefits of blades stretch well beyond replacing old boxes with a compact system that is under warranty and has OS support. Good blade-server-management software will let you add blades to a "pool," making application scaling a simple matter of ensuring you have enough per-CPU licenses. Of course, it's rarely that simple this early in the rollout of blade servers, but that's where we're headed. Some high-end vendors, including IBM and Hewlett-Packard, are doing blade grouping and SLA management correctly today, and some tier-two vendors, including 3Up Systems, are nipping at their heels (see "A Real Blade Runner").

Running Out the Big Dogs

Replacing an old PII is one thing, but what about the quad-processor megaserver that houses your core database? Or the AIX box that sits quietly in a corner processing thousands of transactions per second?

Technology is advancing on them, too. Itanium and Opteron are revitalizing the high-end standardized-architecture machine, while HyperTransport and PCI Express are bearing down on your shiny new server to make it the old faithful of tomorrow.

You can't stop evolutionary change in the server market, but you can plan for it. Have you ever kept more than 5 percent of your servers more than five years, and if so, after five years were they still doing what you purchased them for? For many of us, the answer is no. For Intel-based servers, technology advances and software requirements outstrip the state of the art in five years, so we find that the machine we bought as "the killer box" is now under someone's desk.

When you buy a server, consider it a short-term investment. Be realistic--immediately budget to replace it within five years. Don't tie those cables down with monster ties, either. Heck, most data centers are in the process of replacing those monster KVM cables with Ethernet cables, or will be in the near future.

Five-Year Plan

So what will the typical data-center server landscape look like in five years? We foresee evolutionary, rather than revolutionary, changes. You'll have blade servers sitting next to conventional 1U boxes. The blade servers will have external storage, implying some form of storage array; our money's on iSCSI. You'll still be struggling to keep up with the density of processing and storage power you've accumulated.

The main difference will be in the blades. Replacing a server won't require downtime. You'll simply put the new, more powerful blade into a chassis, tell it to boot from a copy of the old server's image, then power down the old blade. Take it out, play with it, reprovision it, whatever.

Days of configuration work will be reduced to hours. You'll still have OS problems when upgrading by replacement. Presumably you're replacing the old blade with something more powerful, so you might have to build a new image. But once you have the image built, all blades of the same model require only a disk-to-disk copy of the image. Some systems let multiple machines boot from the same image, but we don't recommend this for general use--too much chance of conflict.

Beware KVM pitfalls: All blade chassis come with built-in KVMs, but surprisingly, many aren't IP-enabled unless you pay extra. Without that IP capability, you must physically go to the chassis and select the active blade. Although we're certain this is just another speed bump on the long road to blade-centric data centers, it's a major operational inconvenience you'll have to live with unless you ensure your new systems have IP KVM capabilities. Find out what your vendors offer, whether IP KVMs are available, and whether you have to pay a premium for this functionality.

Have we converted you into a blade believer? Now you must make the sale to those holding the purse strings. It's expensive to start a blade-server setup because you must pay for a chassis that has a KVM, an IP switch and (possibly) a Fibre Channel switch built into it. But individual blades are significantly less expensive than 1U servers, so the per-CPU price goes down as you populate the chassis, and we expect blade-server prices to continue to drop. --Don MacVittie

Direct-attached storage won't leave the data center, but up-and-coming technologies make DAS less and less appealing for general data-center use.

For example, 10 Gigabit Ethernet combined with PCI Express and iSCSI serves the needs of blade servers and standalone servers so well that it may soon be cost-effective to implement iSCSI storage within the large data center--something we wouldn't have considered in the Fast Ethernet or even Gigabit Ethernet worlds. You may even find yourself with a mix of Fibre Channel and iSCSI devices--booting from iSCSI and running critical apps on FC because of your existing investment.

Panasas has jumped the iSCSI gun, selling "object storage" aimed at Linux high-performance computing clusters and moving some of the brains down to the disk level. Disk vendors have been itching to make drives more intelligent, but their reseller relationships (read: the business needs of those resellers) have limited their ability to move things down the stack. Panasas took advantage of this relationship-induced paralysis and built a system whose controllers (and more) represent themselves as disks, thus putting more intelligence at the "disk" level. The system handles RAID at what could be considered the drive level, scales with clustered Linux systems, and interfaces using iSCSI over Gigabit Ethernet. We haven't (yet) had a chance to test the system, but for scalability, Panasas could be to Linux clusters what EqualLogic and Intransa are to general-purpose iSCSI implementations.

We've tested blade servers with iSCSI boot capabilities in our labs, and though they work commendably, they're still on the bleeding edge. iSCSI performs admirably, and the vendors mentioned have taken steps to make it more scalable, but the iSCSI world is waiting for 10 Gigabit Ethernet, which will open the market to much more competition.

In the shorter term, you must worry about what you have, how to keep it running and even how to expand it beyond its current limits. For now, 4-Gbps FC switches, some with 10-Gbps uplinks, are mainstream. If you need that kind of volume but are still waiting for 10 Gigabit Ethernet to make iSCSI viable, look at what some of the iSCSI-only vendors offer: varying numbers of 1-Gbps connections to give you the bandwidth you need.

The SAN Stands Alone

Standalone storage is a different beast than DAS from a management and utilization perspective. A storage area network is unlikely to be a panacea; storage arrays create their own problems and bottlenecks. If two high-volume applications--even ones using different LUNs--hit the same array, they're reading and writing to the same disks, and you'll see performance bottlenecks in a system that would have performed very well in a DAS environment. That's just one of many problems that come up with centralized storage, and one of the easiest to fix (move one of the apps off those specific disks, using volume-management software).


Our point is that centralized storage is great for management and storage utilization, but it won't reduce your head count. After all, centralizing storage requires maintaining cables and switches your organization didn't use before. Centralized storage will, though, save on downtime if it's implemented correctly. It'll also let you give each server exactly what it needs without having to put a 30-GB drive into a machine that needs less than 10 GB, and it will make disk replacement a much simpler chore. Aside from saving on downtime because the array keeps you running (in degraded mode) while a disk is out, you don't have to rip the server apart to replace the disk. Just pop it out of the array and replace it (assuming it's hot-swappable and RAID 5--safe assumptions in enterprise-class equipment, not safe assumptions in lower-end iSCSI gear).

CDP PDQ

The storage world is advancing rapidly in other areas. Replication and continuous data protection (CDP) software and appliances may make your redundant data centers more flexible by keeping them up to date in real time. This, of course, might open the door to viruses that infect your core operating environment being replicated to your redundant data center, but that's a topic for another article.

We're storing more data than ever before, and the idea that "disk is cheap" tends to lead to a proliferation of storage wherever centralized storage management is deployed. For most data centers, this proliferation means more racks of disk arrays, but there are things to watch for as the number of drives in the data center continues to rise.

First is the physical infrastructure. How much weight can your raised floor withstand? Racks can take a lot, but can the tiles you've put down? Along those same lines, if your data center isn't in the basement, how much weight can be crammed into a 10-foot area safely? Better find out before you stack those storage-array racks side by side. It's unlikely that you'll drop arrays through the floor, but do you want to introduce permanent structural damage?

Virtualization technology--both server and storage--will provide easier logical management of configurations but won't reduce the physical complexity of your data center. In fact, it may increase physical complexity as repurposing becomes easier, and therefore, more likely, and more blades and storage are added to handle performance peaks.

RAID, too, is bearing down on you, demanding more space in the data center. The best information systems are useless if you can't access your organization's precious data, so uptime, reliability and recoverability demands are driving RAID to ever greater heights. RAID 10, which makes a complete copy of every piece of data written, is a more robust and speedier approach to data integrity than RAID 5, but it requires twice the disk space. With disk prices dropping, the ability to lose several drives without losing data, and better performance than today's parity-based arrays, it's a safe bet that higher-end environments will move to RAID 10. For a critical application whose disk appetite keeps growing, some organizations will pay for double the disk space to ensure uptime and some will make do with less capacity, but most will eventually move these applications to the more robust, faster technology.
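The capacity trade-off is simple to work out. The sketch below compares usable space for the same shelf of drives under RAID 5 and RAID 10; the drive count and size are illustrative, and real arrays reserve additional space for hot spares and metadata.

```python
# Usable capacity for the same set of drives under RAID 5 vs. RAID 10.
# Drive count and size are illustrative; arrays also reserve space for spares and metadata.

def usable_raid5(n_drives: int, drive_gb: int) -> int:
    return (n_drives - 1) * drive_gb    # one drive's worth of capacity goes to parity

def usable_raid10(n_drives: int, drive_gb: int) -> int:
    return (n_drives // 2) * drive_gb   # every block is mirrored

n, size_gb = 12, 300                    # twelve 300-GB drives
print(f"RAID 5:  {usable_raid5(n, size_gb):,} GB usable")    # 3,300 GB
print(f"RAID 10: {usable_raid10(n, size_gb):,} GB usable")   # 1,800 GB
```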

Your FC switch is also about to get a lot smarter. A number of vendors are working on moving applications onto the switch. This has several implications for the data center. First is a reduction in the number of servers required to support storage, but beyond that, your switch could be doing compression, encryption, virtualization--even document management. In the months ahead, we'll examine what it means to have a switch capable of running arbitrary code. We think this market will boil down to what makes sense. Encryption on the switch is limited because the data would be unencrypted from the switch down to the client. Why not just protect your data from end to end (disk to server) instead?

Compression from one switch to another could hold promise for replication over low-bandwidth connections to remote data centers or from remote offices. We do foresee that any processing moved to the FC switch will likely increase the density of switches in your data center. Although you'll probably see an uptick in the number of FC switches needed to get the job done, they'll still all be from the same vendor because the FC industry seems to have made a suicide pact under which no one fully supports anyone else's switches. We figure they're all waiting for the iSCSI sword of Damocles to fall.

It is worth noting that some Fibre Channel switch vendors have noticed the sword overhead and are starting to cooperate. We hope to see an expansion of this trend to the larger FC community, but we're not holding our breath. In fact, it's arguable that "intelligent FC switches" are just another vendor lock-in ploy. When evaluating FC switches, discerning whether your prospective vendors allow third-party OEMs to develop applications for their gear or if they insist on "customizing" every application will tell you a lot about their commitment to standards.

Of course, more storage and hosts of any kind increase the complexity of your network, both physical and logical. On the physical front, Tier 1 blade-server vendors are building FC switches into their chassis, meaning each chassis requires only a single connection on your existing infrastructure switches. However, if you use any of the "value added" features of your FC switch infrastructure, make certain the switch provided with your blade server is from the same vendor as your existing infrastructure. Some vendors offer a potpourri of support for FC switches, but the real answer here is full-out interoperability among FC vendors--something we do not foresee in the near future. So when shopping for blade servers, find out which FC switch brands are available if those blades require FC connections.

Growing Drains

Right now the storage market is in growth mode. We're reasonably certain that if you look long enough, you can find a storage vendor that includes a built-in coffee maker for its appliances, free of charge. This means that the options available to us will be varied and inventive, albeit not always as modular as we'd like. Just remember when expanding your data-center infrastructure that consolidation will happen eventually. If your company leases switches, you're generally looking at a three-to-five-year cycle; those that buy should consider the same cycle. Balance price, performance and the long-term corporate viability of vendors when making major infrastructure purchases. --Don MacVittie

Just as most homeowners constantly upgrade their living space, research shows that most IT shops are in the process of upgrading their data centers. In our poll of Network Computing readers, nearly half (47 percent) of the respondents said they have recently upgraded their data centers, and 28 percent said they plan to do an upgrade in the next 24 months.

Many enterprises also are building new data centers. Large companies, for example, are moving away from the monolithic, single-site computer facility and building smaller, geographically dispersed data centers that can work together. The number of enterprises with a single-site data center dropped from 51 percent in 1999 to about one third (33 percent) in 2004, according to a study by International Network Services. The trend toward multiple-location, centrally integrated data centers increased by about 9 percent during that time frame, INS said.

Small and midsize companies also are joining the ranks of data-center ownership. "With the introduction of blade servers and other smaller-footprint technologies, and the growing need for 24/7 business operations in companies of all sizes, we're seeing a lot more interest from smaller companies," says David Sjogren, president of Strategic Facilities, a consulting firm that helps enterprises plan, build and test new data centers. "There are not many 50,000-to-100,000 square-foot data centers anymore. Most of the ones we're building are smaller."

So if technology's footprint is shrinking, why are so many companies building or renovating their data centers? What's driving all this home improvement? The answers vary from enterprise to enterprise, but one of the biggest drivers is also the simplest: physical growth of the data center.

In our survey, nearly three quarters (72 percent) of respondents said they expect processor counts in their data centers to grow by at least 11 percent over the next five years. Approximately 18 percent said they expect processor growth to be in the 26 percent to 50 percent range, and 10 percent said they are planning to increase processor count by a whopping 51 percent to 100 percent by 2010. In a separate survey question, nearly half (48 percent) said "server consolidation" is one of the chief drivers behind data-center renovation.

"There is real value in server consolidation, both from the perspective of cost savings and in improving the efficiency of backup and recovery," says Darko Dejanovic, vice president and CTO of Tribune Publishing, which recently completed a major server consolidation project at the Chicago Tribune (see "Extra! Data-Center Project a Success," page 77). "Our cost savings on the project were well into the millions of dollars."

But though enterprises see the value of server consolidation, many underestimate the data-center-space requirements to support them, Sjogren says. As we noted earlier, blade servers, in particular, pack more processing power into an increasingly smaller number of racks, and some companies have begun to experience problems in keeping the consolidated servers properly powered and cooled.

"You can't take these huge racks of servers and put them next to each other, plugged into the same circuit breaker," Sjogren says. "There has to be some separation of the units, both in terms of heat and power, and that may mean more square footage than some people anticipate."

Storage, too, is a major driver behind the renovation trend. In our survey, some 58 percent of respondents cited "storage management" as a primary reason for doing a data-center upgrade. The rapid evolution of SAN and iSCSI technology, combined with new regulatory requirements for thorough maintenance of business records under the Sarbanes-Oxley Act, is causing many enterprises to rethink their storage strategies. In many cases, these new technologies and strategies require additional space in a clean, cool environment--and that means expanding (or at least rearranging) the data center.

The Resources Information Technology Program Office (RITPO), a part of the U.S. Department of Defense that provides IT support to the military health-care benefits system, is one example. In 2004, RITPO centralized its storage in a SAN environment built around a SunFire 15K system that helped reduce the need for regionalized servers. The new environment paved the way for server consolidation, but it also created a need for expanded storage facilities to support the SAN.

On the business side, many enterprises are looking to data-center renovation as a means of controlling costs as well as making more efficient use of technology. In our survey, more than 38 percent of respondents cited "cost cutting" as a chief reason for upgrading. In addition to making more efficient use of server and storage resources, enterprises are looking for ways to consolidate staff, as well as office space and utilities costs.

"We're already saving money in air conditioning and electrical utilities costs, as well as making better use of our people," said Melind Dere, one of the project managers for the server consolidation project at the Chicago Tribune.

The Data Center's Shifting Face

Clearly, there are many reasons to consider a data-center makeover. But in many cases, the "new look" is vastly different from the sprawling, glass-house environment of yesterday's mainframe computer rooms. The layout and composition of today's data center is shifting, and many IT planners are struggling to keep up.

For one thing, enterprises have a lot more choice when it comes to site selection. In the past, many companies segregated the data center from the rest of the business, housing it in special facilities--and sometimes a dedicated building--to ensure security, power redundancy, fire suppression and a host of other resources. Although this approach is optimal, it's too expensive for many companies, so many enterprises are locating their data centers in standard office buildings, even multitenant buildings. In our survey, 61 percent of respondents cited "cost" as one of the most important factors in selecting a site for a data center.

Some companies even choose to house their data-center equipment far from headquarters and operate it remotely, says Colin Rankine, an analyst at Forrester Research. "It's not important that hardware resources and human resources are collocated," Rankine says, "but it is important that support staff resources are centralized under one roof."

Just as data-center locations are changing, so is the size of the typical data-center owner. A decade ago, only the largest companies built separate data centers, primarily because of the special needs of the mainframes they housed. Thanks to e-business, however, many small companies want high-availability setups to support 24/7 global businesses, just as larger companies do. As a result, it's not unusual to see smaller data centers--in the 250-to-5,000-square-foot range--going up at locations all over the world. In fact, more than 80 percent of the respondents to our survey said they are supporting data centers of less than 5,500 square feet.

"We're doing more business with smaller companies than we have in the past," says Strategic Facilities' Sjogren. "The 7x24 requirement means that things like redundant power, HVAC and fire suppression are important in companies of all sizes."

Larger companies, too, are building smaller, regional data centers that can be networked and managed centrally. Twenty-eight percent of respondents to our survey said they operate multiple data centers. The Tribune Co., which owns the Chicago Tribune, the Los Angeles Times, Newsday and several other daily newspapers in major metropolitan areas, recently centralized its mainframe operations at its Melville, N.Y., data center, but it maintains local data centers at each of the daily newspaper operations. Some of these regional sites are sharing some processing, and in the long run, they may act as failover sites for one another.

"The limitation right now is in the WAN connection," Dejanovic says. "You can't get dark fiber at long distances, and there aren't many gigabit-speed choices, either." Network Computing readers agree: "Bandwidth management" was cited as the No. 1 problem confronting data centers.In addition to shifts in the size and location of data centers, there's significant change in their makeup as well. In the 1990s, data centers devoted most of their space to the computers and storage systems themselves--generally large IBM mainframes, tape drives and communications processors, all with big footprints. But with the introduction of blade servers and smaller storage units, computer systems now require less square footage per MIP than ever before.

However, the technology for protecting data-center availability continues to grow. Modern heating and air conditioning systems, diesel-powered UPS systems, fire-safety equipment and other systems continue to improve their efficiency, but they aren't shrinking much. And current wisdom around power redundancy requires servers to be plugged into completely separate, redundant circuits, often hogging more space than IT professionals may expect.

"The ratio of computer and storage space to HVAC and electrical space is now close to 1:1 in most new data centers," Sjogren says. "That comes as a surprise to some IT people, but if you don't have cooling and electricity, then you don't have computers." Gartner estimates that enterprises' power density requirements are growing at about 10 percent to 15 percent per year. "Significant power increases as well as floor space will be required in fewer than five years," the research firm said in a 2004 report.What's It Gonna Cost Me?

So how much should you plan to spend on your new or expanded data center? Cost figures are elusive, partly because IT people don't want to disclose them and partly because costs vary widely from region to region. The cost of square footage, electricity and equipment installation clearly crosses a wide spectrum, with downtown Manhattan at one extreme and rural Oregon at the other.

American Power Conversion recently published a white paper that puts the TCO of a high-availability data center at $120,000 per rack. About half of that cost is capital expense: racks (3 percent), cooling equipment (8 percent), power equipment (36 percent) and building improvements (8 percent). The other half is operating expense: electricity (19 percent), data-center service (17 percent) and the cost of owning or leasing the space itself (9 percent).

Although APC's numbers are clearly designed to support its case for investing in more efficient power equipment, they do provide some insight on the breakdown of costs for maintaining the data-center facility itself. Obviously, these figures don't include IT operational costs, such as staffing and computer equipment, which can change the equation dramatically.
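Plugging APC's percentages into its $120,000-per-rack figure gives a quick sense of where the money goes; the sketch below simply restates the white paper's published breakdown and adds nothing of our own.

```python
# Per-rack TCO breakdown using the percentages quoted from the APC white paper above.

TCO_PER_RACK = 120_000   # dollars

BREAKDOWN = {   # category: percentage of total TCO
    "racks": 3, "cooling equipment": 8, "power equipment": 36, "building improvements": 8,
    "electricity": 19, "data-center service": 17, "space (own or lease)": 9,
}

for item, pct in BREAKDOWN.items():
    print(f"{item:<22} {pct:>3}%  ${TCO_PER_RACK * pct / 100:>9,.0f}")
print(f"{'total accounted for':<22} {sum(BREAKDOWN.values()):>3}%")   # 100%
```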

In the long run, the business case for building or expanding a data center is probably best made by demonstrating a clear benefit, rather than focusing exclusively on costs. For example, if a data-center upgrade can be shown to increase system availability, then that increase can be measured against the cost of downtime. Similarly, if you can measure the money saved by consolidating resources at a single location--and eliminating redundant, underutilized facilities--you'll have a good case for selling it to upper management.

"Take it from somebody who's been there," Dejanovic says. "The savings are there."--Tim Wilson

In "This Old Data Center," five Network Computing technology editors give us their take on the current and future data center. Sean Doherty looks at the emergence of 10 Gigabit Ethernet and explains how to support convergence technologies like VoIP and videoconferencing with QoS and content-switching technologies. In the server and storage area, Don MacVittie envisions more compact, denser data-center systems that make use of blades as well as new bus technology in PCI Express and Hypertransport. In addition, Don forecasts the demise of DAS (direct-attached storage) and speculates on the future of storage (read: iSCSI).

Data-center upgrades, however, go beyond infrastructure, servers and storage. In the not-too-distant future, we'll control processor and fan speeds to reduce electrical and cooling requirements. When not in use, CPUs and fans will run at lower speeds. Until then, Curt Franklin says, data-center managers should watch out for the dreaded "sandwich effect."

Finally, Bruce Boardman wades through the network-management glue that holds the old and the new data centers together, and Tim Wilson supplies a realistic view of data-center operations today and charts tomorrow's trends toward centralizing locations and consolidating resources to control costs.

Data centers grew up to house, support and run business applications for customers. And that's how data-center upgrades should be approached--with your business-application requirements in hand for bandwidth, CPU, memory and storage.

Changes in business applications directly impact hardware upgrades and how new technologies are incorporated in the data center. Client-server computing still predominates, but Web services is becoming the darling of the data center. And IBM isn't wrong when it advertises that middleware is everywhere. As a result, data centers must continue to support two-tier (edge and core) and three-tier (edge, middleware, core) architecture in the short term. And that means maintaining many servers or CPUs in a distributed architecture.

In the long term, grid computing's promise of utility or on-demand computing is inviting, but applications will require rewriting. Don't look for enterprises to tackle that in the near future.

--Lori MacVittie

