Pitching Blades

Server vendors are hyping a new form factor that offers less cabling, more efficient use of space and a highly configurable data center. We tested six diverse blade devices to see how they measure up.

June 9, 2003


We asked Cubix Corp., Dell Computer Corp., Hewlett-Packard Co., IBM, NEC Corp., RLX Technologies and Sun Microsystems to participate. Dell and RLX sent one entry each, while HP shipped us four devices. Sun and Cubix declined to participate, both saying their products would not be ready by our test date. And despite initial interest, IBM did not submit its product for testing. NEC sent us its Express5800/ft, which didn't fit our definition of a blade for this review, but is a nice little box (see "Sweet Box, Big Price").



[Chart: Intel Iometer Sequential Test]

As we put the blade servers through their paces at our University of Wisconsin-Madison Real-World Labs®, the question foremost in our minds was: "Is working with blades easier than dealing with a stack of 1U or 2U servers?" The answer is an unqualified yes. The drastic reduction in cabling alone is a clear win: Instead of a set of power, network and KVM cables for each server, there's just one set per blade chassis.

We also found that managing blades is simple, even compared with managing standalone servers. For example, HP's management software can be set for "rip-and-replace" mode. That is, when we removed a blade server from the chassis and placed similar hardware into the slot, the management software detected the new device and sent an image to the blade automatically. Try that with a standalone server.

Everyone's a Winner

We really took a shine to HP's BL20p G2 (Generation 2), which packs in the features and performance you'd expect from an enterprise-class server. It also includes HP's management software, which is more mature than the software from Dell and RLX. However, we did not select an Editor's Choice. Because each vendor is targeting a slightly different market segment, the devices we tested did not lend themselves to an apples-to-apples comparison. For example, some blades had only 100-Mbps network cards, while others had gigabit NICs. And the processors installed in the blades varied from a low-power mobile processor to a high-end P4 Xeon. Devices are listed alphabetically by vendor.

Dell is targeting the midrange server market with its single blade server. The 1655MC blade server is aimed at Web serving, thin-client computing, small file/print serving and network infrastructure services (DNS, DHCP and domain controllers, for example). Dell includes its OpenManage server-management application, which sports a rapid install service to deploy images to the blades.

Dell PowerEdge 1655MC

The Dell 1655MC blade server system is a 3U unit that holds as many as six dual-processor blades. Each blade has a dual 10/100/1000 network card and one or two SCSI hard drives with an embedded hardware RAID controller. One downside: The hard drives are not hot-swappable; the entire blade must be removed from the chassis before you can get at them.

Unlike the HP p series, the Dell 1655MC packs its vital components (including power supplies) into the unit. The 1655MC also was the only blade server we tested with a built-in KVM; an uplink port lets it connect to Dell's KVM-over-IP product, allowing remote access to each blade's console. This is the only way to reach the blade consoles remotely, unless you have terminal services or VNC (virtual network computing) software installed, because the embedded RAC (Remote Access Controller) on the chassis cannot access the blade consoles. The RAC did let us power-cycle each blade and turn it on or off.

What also sets Dell's product apart from rivals is that each blade has a USB port. This makes it easy to attach a floppy or CD-ROM drive for installing an OS or loading software onto the blade. Unlike the HP e series, the port does not need an adapter. Dell also includes an integrated network switch, which is optional on the models from HP and RLX. To use both 10/100/1000 network connections on each blade, a second, optional four-port network switch is required.



[Chart: Vendors at a Glance]

The Dell blade system is managed via two Web-based components. On a server dedicated to managing the Dell systems, administrators use the Dell IT Assistant and OpenManage Remote Install Server, or RIS (not to be confused with Microsoft's RIS in Windows 2000). The Dell IT Assistant is used primarily for monitoring the system; it also can distribute BIOS updates and keep track of the hardware inventory. The more interesting piece is RIS, which is used to capture and deploy OS images to the blade servers. But RIS can capture and deploy images only to 1655MC blades, while HP's RDP software can deploy images to nonblade servers as well.

We had no problem getting the RIS software to capture and deploy Windows 2000 Server images to the blades. We did run into a few issues with Red Hat Linux, though. The initial install of Red Hat 7.3 put the 2.4.18-3 version of the kernel on the system, and the OpenManage software wanted a newer kernel on the box before it would let us install the management components. Once we got the kernel version straightened out, we captured and redeployed the Linux image with the RIS software.

Dell PowerEdge 1655MC, Dell Computer Corp., (800) 289-3355, (512) 338-4400. www.dell.com

HP submitted four blades, three of which use the same chassis. These entries run the gamut from low-end and ultradense to feature-rich, high-end quad-processor blades. All the blades use the same management software, Rapid Deployment Pack (RDP), which supports Windows 2000 and Windows 2000 Server as well as Red Hat and SuSE Linux. The RDP software is licensed per managed device at an additional cost; in contrast, Dell's management software is included free with its blade server. HP's SmartStart and CIM7 software are included with the servers at no charge.

HP has two lines of blade servers, the e series and the p series. HP's BL10e blades use a 3U chassis that can hold as many as 20 blades and includes integrated power supplies and an optional network switch. The p-series devices all use the same 6U chassis, which requires a separate 3U power module. Two models are available, both requiring a 220-volt power source; each can power multiple p-series chassis and comes with four or six redundant power-supply modules.

HP ProLiant BL20p G2

We were impressed with the BL20p G2. The G2 stands for second generation, and this dual-Xeon blade packs quite a punch: It turned in the highest number of transactions per second in our Spirent Communications WebAvalanche test (see "How We Tested Blade Servers").

The BL20p blades have two hot-swappable SCSI hard drives and an embedded hardware RAID controller with RAID Levels 0 and 1. Besides fast dual processors, the BL20p offers three 10/100/1000 network cards. An additional network card is dedicated to HP's iLO (Integrated Lights-Out), which lets administrators remotely access the server console, power-cycle the server and check the health of the system. The iLO boards included on the blade-server line do allow access to the OS' GUI-based console.

You can add Fibre Channel storage to the updated G2 blade server with the FC pass-through interconnect module. Other available modules include a patch panel that provides access to each of the network interfaces and a Gigabit Ethernet switch. Each chassis requires two interconnect modules for network access and FC connections for the blade servers.

One downside of the p-series blade servers is the lack of floppy or CD-ROM drives and USB ports. We had to use a "virtual" drive on a computer connected through the iLO management interface. This made installing software from CD more difficult, requiring a second computer or placing the installation files on the network first. Both HP's e-series line and Dell's 1655MC have USB ports to which a floppy or CD-ROM drive can be connected.



[Chart: Intel Iometer Database Test]

Using iLO for remote control of a Windows 2000 server also took some practice because we had a hard time keeping the mouse in sync: Where the iLO interface thought the mouse pointer should be and where it actually was in the remote session often didn't jibe, making it hard to move the mouse near the edges of the remote-controlled server's window. Refreshing the browser window usually fixed the problem, but not always, making iLO unusable at times. The RDP software also has a remote-control agent that worked better, but that approach requires the server to be in a working state; iLO does not, because it works at the hardware level.

HP's RDP management software really shone, making it a breeze to deploy the OS. RDP let us do scripted installs of both Linux and Windows. Once a blade server was completely configured, RDP captured that image for faster OS deployment. Because RDP is aware of the blade chassis, we could set it to detect when an unconfigured blade is placed into a slot and deploy a preset image to the blade. So if a blade server fails, just rip it out and put in a new one, and the management software will redeploy the image without administrator intervention. If we also wanted our applications redeployed, they had to be installed on the blade before the image was captured. Images are stored on the computer running the RDP management software.

RDP also made it easy to perform simple configuration changes to each node. For example, in the RDP interface, we set the workstation name or the network settings for each NIC with just a few clicks. RDP then contacted the remote blade, made the configuration changes and restarted the blade. RDP also kept a history of all configuration changes made to each node.

One thing we found a little odd about the p-series blade servers is that the chassis needs a separate power unit. Two models are available: One houses four power supplies, the other six. Each unit can supply power to three p-series chassis. All the other blades we tested, including HP's e series, have integrated power supplies. This can make price comparisons deceptive: The chassis prices of the HP p series and the Dell seem close, but you'll need to buy a power-supply module for the HP p series, which adds a few thousand dollars.

ProLiant BL20p G2. Hewlett-Packard Co., (800) 888-9909, (650) 857-1501. www.hp.com

HP ProLiant BL20p

The first-generation HP BL20p blade is very similar to the G2 blade. The main differences are a slower processor, three 10/100 (rather than Gigabit) network connections and no Fibre Channel connectivity. The G1 blade is managed by the same RDP software and has the same iLO capabilities. For applications that don't demand as much network bandwidth or processor power, the G1 blade is a good option at a lower price.

ProLiant BL20p. Hewlett-Packard Co., (800) 888-9909, (650) 857-1501. www.hp.com

HP BL40p

For shops that require more horsepower, the BL40p is up to the task. It's a quad-processor Xeon MP box featuring five (yes, five!) 10/100/1000 network adapters. For storage, the BL40p has four hot-swappable hard drives, making RAID Level 5 an option. It also supports a Fibre Channel connection via the proper interconnect module. For SAN connectivity, the BL40p has two PCI slots available, providing a redundant connection to your SAN hardware. The downside to the BL40p is its size: only two blades per 6U chassis, for a total of 12 blades per 42U rack (two power modules are required per rack, taking up a total of 6U). But these blades still allow more servers per rack than HP's 4U, 5U or 7U rack-mount servers. The BL40p also sports a hefty price tag compared with the BL20p models.

Why would anyone invest in the BL40p rather than a traditional quad-processor rack-mount unit? First, the RDP management software gives you rip-and-replace functionality you can't get with a standalone system. There's also the potential for less cabling in the rack, though in the end that depends on the interconnect options put into the blade chassis--for example, whether the switch module is purchased instead of the patch panel.

ProLiant BL40p. Hewlett-Packard Co., (800) 888-9909, (650) 857-1501. www.hp.com

HP BL10e

At the low end from HP is the BL10e, a 900-MHz PIII mobile-processor blade with a single laptop-size 40-GB IDE hard drive. This line of blade servers is meant for applications that don't need much CPU power, or for shops that need a large number of servers or workstations. Of the servers tested, the BL10e was the poorest performer, but that's primarily because it uses the slowest CPU. Unlike the p-series blades, the BL10e blades don't have the option of iLO management; instead, the chassis carries the iLO functionality. The downside is that remote control of each blade is limited. When running Linux, we could get a console session via iLO; for blades running Windows, HP provides a limited text-based interface for a common set of tasks, such as viewing the process list, Windows services and network configuration.

On the positive side, we could add a KVM to each blade via an adapter that plugs onto the front. The adapter also provides two USB ports. Of course, to use the keyboard and mouse in Windows, the adapter must be plugged into the blade before it boots into Windows. We also could use terminal services or software like VNC to administer the boxes remotely.

ProLiant BL10e. Hewlett-Packard Co., (800) 888-9909, (650) 857-1501. www.hp.com

RLX Technologies' blade line is geared toward low-end to midrange servers. For companies that need a large number of servers--say, an ISP hosting Web sites or network-infrastructure (DNS/DHCP) services--RLX has a solution for you.

RLX ServerBlade 1200i, RLX System 300ex chassis

The RLX 1200i blade system is similar to the HP BL10e line in that it uses the lower-power PIII mobile processor. Unlike the BL10e, however, the RLX 1200i can hold two IDE hard drives on each blade, each as the master device on its own IDE chain. Our blades came with a single 40-GB drive, but two 20-GB drives are available, making software RAID 0 or 1 possible--there's no hardware RAID support on the 1200i.

One thing we missed on the RLX device is video access to the console, though we found a workaround. The 1200i does perform text-based console redirection over the embedded serial port (also accessible via the management software); even the BIOS POST and boot process can be viewed over the serial connection. For blades running Linux, this isn't a big deal because the serial connection can be used to reach the Linux text console and log in. For Windows, RLX provides a text-based terminal connection and a number of command-line utilities for managing a Windows blade. The OS install can be done via a scripted install--no GUI console needed. Between this and accessing the Windows blade over a terminal services connection, we could do everything we needed to on the Windows blade.



[Chart: NWC Custom Test]

All RLX systems are managed via a single management blade (purchased separately). The good news is that one management blade can manage an entire rack of blade units. Each blade has three 10/100 network adapters: one for management and two for general use--RLX labels them the "private" and "public" network links. Because RLX systems can hold as many as 167 1200i blades per rack, the company needed an easy way to identify each blade uniquely. On the management network, each node is assigned an IP address based on its rack number, chassis number and position in the chassis. For example, when we wanted to manage the blade in Slot 5 of chassis No. 4 in rack No. 0, its IP address was 10.0.4.5.

We set the chassis and rack numbers via a DIP switch at the front of the chassis. Administrators need to get very familiar with this IP numbering scheme because management is done via a Web browser pointed at the management blade.

The management software also sets the machine name based on the blade's location in the chassis, so a Windows blade might take on the computer name RLX-0-4-5. This renaming feature can be disabled so the administrator can set the computer name.
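To make the location-based scheme concrete, here's a minimal sketch of the mapping in Python. The function names are ours, and we're assuming the 10.&lt;rack&gt;.&lt;chassis&gt;.&lt;slot&gt; and RLX-&lt;rack&gt;-&lt;chassis&gt;-&lt;slot&gt; patterns generalize from the single example above.

```python
# A minimal sketch (ours, not RLX's software) of the location-based
# addressing and naming scheme described above. We assume the patterns
# 10.<rack>.<chassis>.<slot> and RLX-<rack>-<chassis>-<slot> generalize
# from the single example in the text.

def management_ip(rack: int, chassis: int, slot: int) -> str:
    """Management-network address for the blade at this location."""
    return f"10.{rack}.{chassis}.{slot}"

def default_hostname(rack: int, chassis: int, slot: int) -> str:
    """Default computer name the management blade assigns."""
    return f"RLX-{rack}-{chassis}-{slot}"

# The blade in Slot 5 of chassis No. 4 in rack No. 0, as in the example:
print(management_ip(0, 4, 5))     # 10.0.4.5
print(default_hostname(0, 4, 5))  # RLX-0-4-5
```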

Managing the RLX system was easy. The Web interface provided a graphical representation of the system that let us drill down to each blade; from the image, we could see which LEDs were active on each system and what state it was in. The Web interface also provided a wealth of detailed information, including current voltage levels, temperatures and vital stats on the hardware, OS and software on each blade.



Intel Iometer Max I/O Test
click to enlarge

Unique to the RLX management software is its ability to produce trend graphs of various components. The graphs are off by default, but we could turn them on to watch items such as critical temperatures, Ethernet traffic, processor utilization and disk utilization. It was easy to view the same graph (critical temperatures, for example) for all blades on a single Web page, making comparisons a breeze.

The RLX management software also made it easy to "provision" the OS out to the blades. Like HP's RDP software, it can perform scripted, unattended installs of Linux or Windows. Once a blade was configured, the image could be captured for faster redeployment. It was also very easy to customize the images; for example, it took just a few mouse clicks in the Web interface to change the Red Hat Linux 8.0 install to use the ext3 file system instead of ext2.

RLX System 300ex. RLX Technologies, (866) 759-9866, (281) 863-2100. www.rlx.com

James E. Drews is a network administrator for the CAE Center of the University of Wisconsin-Madison. Write to him at [email protected].

How We Tested Blade Servers

We performed several system tests to analyze the servers' disk I/O and Web performance. Intel's Iometer 2003.02.15 (sourceforge.net/projects/iometer) provided a performance validation of the hard-disk subsystems by generating I/O workloads and performing a variety of simulated tasks. Spirent Communications' WebAvalanche tests provided raw Web performance numbers under an untuned copy of Microsoft IIS.

Iometer Tests

We tested with three basic Iometer tests: database, maximum throughput and maximum I/O rate. The database test used a 2-KB transfer request size, with a mix of 67 percent reads and 33 percent writes. This proportion represents a typical database workload.

On the maximum throughput test, we used a 64-KB transfer request, and on the maximum I/O rate test, a 512-byte transfer request. For both tests, we set the read/write distribution to 100 percent read and the random/sequential distribution to 100 percent sequential.



[Chart: Intel Iometer Max I/O Test]

We also created a custom test, which we ran three ways: with a 512-byte transfer request size and distributions set at 100 percent read and 100 percent sequential; with a 2-KB transfer request, a 67 percent read/33 percent write distribution and 100 percent random access; and with a 64-KB request, back to 100 percent read and 100 percent sequential.
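For quick reference, here are the access specifications above summarized as plain Python data. This is our own notation for the parameters, not an Iometer configuration file, and the random/sequential mix for the standard database test isn't specified above.

```python
# Summary of the Iometer access specifications described above.
# Our own reference notation -- not an Iometer .icf configuration file.
IOMETER_TESTS = [
    # (test name, transfer request size in bytes, % read, % random)
    ("Database",          2 * 1024,  67, None),  # 33% writes; random mix not stated
    ("Max throughput",   64 * 1024, 100,    0),  # 100% sequential reads
    ("Max I/O rate",           512, 100,    0),  # 100% sequential reads
    ("Custom, 512-byte",       512, 100,    0),
    ("Custom, 2-KB",      2 * 1024,  67,  100),  # 33% writes, fully random
    ("Custom, 64-KB",    64 * 1024, 100,    0),
]

for name, size, pct_read, pct_random in IOMETER_TESTS:
    rand = "unspecified" if pct_random is None else f"{pct_random}% random"
    print(f"{name}: {size}-byte transfers, {pct_read}% read, {rand}")
```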

Spirent WebAvalanche

We used Spirent's WebAvalanche 4.0 to measure transactions per second under an untuned Microsoft IIS (see results in chart, left). We set up this test with a 10-second ramp-up time and 100 users per second. File sizes were set at 7 KB to 10 KB. If a server blade had a 10/100/1000 network connection, a single adapter was used; with multiple 10/100 network connections, adapters were teamed in a load-balanced configuration for increased performance. Thus we were almost always processor-limited in our tests, rather than network I/O-limited.
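To show why the wire wasn't the limiting factor, here's a back-of-the-envelope calculation of the transaction ceiling bandwidth alone would impose. The arithmetic is ours and wasn't part of the test procedure; it assumes roughly 8.5-KB average responses plus a nominal allowance for protocol overhead.

```python
# Rough network ceiling for the WebAvalanche test: the most 7-KB to 10-KB
# responses per second a link could carry if bandwidth were the only limit.
# Assumptions (ours): ~8.5-KB average payload plus ~500 bytes of HTTP/TCP/IP
# overhead per transaction, and no other traffic on the link.

AVG_TRANSACTION_BYTES = 8.5 * 1024 + 500

def tps_ceiling(link_mbps: float) -> float:
    """Upper bound on transactions/sec imposed by raw link bandwidth."""
    bytes_per_second = link_mbps * 1_000_000 / 8
    return bytes_per_second / AVG_TRANSACTION_BYTES

print(f"2 x 100 Mbps, teamed: ~{tps_ceiling(200):,.0f} TPS ceiling")
print(f"1 Gbps:               ~{tps_ceiling(1000):,.0f} TPS ceiling")
```

When measured rates sit well below these ceilings, the processor, not the network, sets the pace--which matches what we saw.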

As server consolidation becomes a viable option, it's likely you'll see some movement in vertical markets as vendors take advantage of the concept and offer their own versions. The Database Area Network, or DAN, is one such technology; it was recently introduced by Savantis in the form of its dbSwitch.

The DAN concept borrows from both server consolidation and SAN technology: It reduces the number of database servers and, thus, TCO; affords high availability where there may have been none before; and offers centralized management.

Mission-critical databases are likely to be served by a primary database server and a backup database server. That's not always the case for less critical databases, because the hardware and software costs of providing high availability for noncritical applications are difficult, if not impossible, to justify. But by consolidating database servers into a DAN, all applications share a pool of database servers managed by a single device--in this case, the Savantis dbSwitch.

The dbSwitch is responsible for management, monitoring and notification for all database servers residing within the DAN. If a single database instance fails, the switch can move the instance to another server within the DAN in a matter of minutes, ensuring that the applications using the database don't suffer unacceptable downtime.

This is not a load-balancing solution. The dbSwitch acts like a proxy: Requests to databases are handled by the switch, which transparently directs each request to the correct database server in the DAN. Administrators can move database instances from one server to another if desired, but the dbSwitch will relocate an instance on its own only if that instance fails.
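To illustrate the proxy-with-failover idea in the abstract, here's a toy sketch in Python. It's our conceptual illustration only--not Savantis' dbSwitch implementation or API--showing a switch that maps database instances to servers, routes each request to its instance's current home and relocates an instance only when it fails.

```python
# Toy illustration of the DAN proxy concept described above: map database
# instances to servers, forward each request to the instance's current home,
# and reassign an instance to a spare server only when it fails.
# Our conceptual sketch -- not Savantis' dbSwitch implementation.

class ToyDatabaseSwitch:
    def __init__(self, placement: dict, spares: list):
        self.placement = placement  # instance name -> server currently hosting it
        self.spares = spares        # idle servers available in the pool

    def route(self, instance: str) -> str:
        """Return the server that should receive requests for this instance."""
        return self.placement[instance]

    def handle_failure(self, instance: str) -> str:
        """Relocate a failed instance to a spare server in the pool."""
        new_home = self.spares.pop(0)
        self.placement[instance] = new_home
        return new_home

switch = ToyDatabaseSwitch({"orders": "db1", "reports": "db2"}, ["db3"])
print(switch.route("orders"))           # db1
print(switch.handle_failure("orders"))  # db3 -- instance moved after a failure
print(switch.route("orders"))           # db3
```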

Savantis' solution is available only for Oracle, though the company says it will support other databases in the near future. As a general concept, the DAN is an indication that server consolidation can be performed on a per-application basis to provide cost savings and, in the process, offer features that make the investment in this type of technology worthwhile. --Lori MacVittie

NEC Express5800/ft

When we sent out the call for blade servers, we were on a bit of a fishing trip, seeing what we would reel in. In our invitation, we specified only that we were looking for servers in a blade format. The box NEC sent did come with four "blades," but all are part of a single computer: The two processor blades and two I/O blades can be removed or added at any time, and if you remove one of each, the computer still runs. Although the device did not meet the criteria for our tests, we decided to dig a little deeper and discovered a cool, albeit pricey, little box.

The NEC Express5800/ft is a 100 percent hardware-redundant server using Intel x86 processors. Two of the blades are a set of redundant CPU modules; two are a redundant set of I/O modules that include the hot-swappable hard drives, network connections and a couple of PCI slots. Our system came with a floor-standing chassis, but a rack-mount option is available. The NEC Express5800/ft supports only Windows, though NEC says it plans to support Linux in the future; the previous version of this server does support Linux, if that's needed today.

The CPU modules run in lockstep when both are present. Indicator lights on the front let you know if hardware fails in either module, and the remaining working module continues to run. For the disk subsystem, each I/O blade contains a set of disks, and Veritas Volume Manager for Windows software keeps the two blades in sync. In our tests, we were able to remove both a disk blade and a processor blade, and the system acted as if nothing had happened.

If you're trying to achieve that elusive 99.999 percent uptime for a system and worried that a hardware failure will throw a wrench in your plan, the NEC Express5800/ft is worth a look.

NEC Express5800/ft, $29,893 as tested. NEC Corp., (800) 338-9549. www.nec.com
