Dell Serves Up a Winner

Dell's PowerEdge 6650 gets our nod for its sleek design and competitive price.

September 9, 2002


Spare memory is exactly what it sounds like: extra RAM to which the system can copy data if it detects too many correctable errors, or an uncorrectable error, in the current memory bank. The system administrator can then swap the failing memory out during scheduled downtime or nonpeak hours. Memory mirroring is a technique in which the system keeps a copy of every piece of data in identical banks. If the primary bank fails, the system makes a fast, seamless transition to the mirrored bank. The catch is that mirroring doubles your memory costs and halves the total amount of memory you can put in the system. DDR RAM is about twice the price of standard SDRAM--about $250 to $400 per gigabyte if you buy it online--so costs can add up quickly (see "Server Chips Ahoy!").
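To make the mirroring trade-off concrete, here is a quick back-of-the-envelope sketch in Python. It uses the street prices above; the 8-GB usable target is just an example, not a configuration we tested.

```python
# Memory mirroring stores every byte twice, so you must buy 2x the
# capacity you can actually use. Prices are the article's $250-$400/GB
# street range for DDR RAM; the 8-GB figure is illustrative only.
def mirrored_memory_cost(usable_gb, price_per_gb):
    installed_gb = usable_gb * 2  # mirroring doubles installed RAM
    return installed_gb * price_per_gb

for price in (250, 400):
    print(f"8 GB usable, mirrored, at ${price}/GB: "
          f"${mirrored_memory_cost(8, price):,}")
```

At the high end of the price range, 8 GB of mirrored memory means paying for 16 GB of DDR -- the "costs add up quickly" point in miniature.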

All three servers also have USB ports, which are standard on desktop PCs. Although these ports can be very useful for attaching things such as USB KVM (keyboard, video, mouse) equipment, they also raise security concerns. Anyone can pick up an 80-GB USB hard drive for about $200, attach one to a server and help himself or herself to a chunk of company data. Obviously, physical security is critical.

We found as many differences as similarities in the units we tested. The IBM and Dell servers came with three 36-GB, 15,000-RPM SCSI hard disks; the HP unit had half that storage: three 18-GB 15,000-RPM SCSI hard disks. HP said it couldn't provide us with the 36-GB drives we requested. The ProLiant's hard disk subsystem problems, however, stemmed from cache algorithms, not size. The IBM unit was considerably larger than the other two units--7U rather than 4U--but the extra space was put to good use.

The Dell PowerEdge 6650 won our Editor's Choice award. This solid machine produced good performance numbers across the board and offers better usability and maintenance features than the competition. Best of all, its $31,885 list price, which includes the OS, is $5,200 less than IBM's and $5,800 less than HP's. IBM's server is solid, but it lacks some of Dell's rackmount serviceability features. The HP ProLiant DL580 G2's disk-performance issues keep us from recommending that machine.

Dell's PowerEdge 6650 packs power and punch in a small package. This well-designed little server is 4U tall and has a ton of features, such as a front-loading processor drawer, that make it optimal for rackmounting. The most efficiently constructed of the three servers we tested, the PowerEdge 6650 is competitively priced, includes good management software and has excellent stability.

Dell has revamped its design. Beyond its new industrial look, the PowerEdge has features that make the insides highly accessible from the front and top of the unit. True, we had a hard time keeping a straight face while imagining Dell's young spokesman, Steven, saying, "Dude, you're getting Active Bezel," and picturing him extolling the virtues of the server's minimalist gray faceplate, with a neon-blue Dell light that turns to a red exclamation point when there's a problem. But there's a lot to say about what's hidden underneath. The bezel snaps off easily, revealing the floppy drive, the CD-ROM or optional DVD-ROM drive and the power switch. A one-line LCD displays text-based error messages and numeric error codes. With the system shut down, the CD-ROM/floppy drive assembly can be removed easily by snapping off the Active Bezel and opening a small latch on the left side of the assembly. Inside, there is also space for as many as five 72-GB drives--a 360-GB capacity.

Server Features

Dell put serious thought into the server's internal design. Even getting inside the box is easy--simply turn a thumbscrew on the machine's back and take off the two lockable top covers. To service the memory cards and floppy disk/CD-ROM drive controller, for example, you flip butterfly levers on the cards to raise them out of the connectors without removing them from the system. This technique reduces the possibility of electrostatic-discharge damage. To get at the processor tray, you next remove the six hot-plug fans in the center of the system.

Once the covers are off, you can actuate two levers on the front of the system to slide out the processor tray. This easy-to-use design has some advantages over more traditional methods of processor removal. You can end up with all the fans lying about, but this is a minor inconvenience. The processors and heat sinks are held down by a large, hinged metal cover. The heat sinks simply sit on the Xeon MP processors, which are secured into ZIF (zero-insertion force) sockets below the heat sinks.

The PowerEdge 6650's two 900-watt power supplies use large, nonstandard power cords (the HP system does too). This lets the unit take in 110-volt standard power, even fully loaded. As with so many of the other components, a lever releases each power supply. The dual independent power sources allow you to plug the server into more than one power grid, for fault tolerance. Unfortunately, Dell's clever designers faltered here: The power supplies must be removed from the top, rather than from the front or back of the unit. IBM makes its power supplies replaceable from the rear, and HP's can be taken out of the front.

The PowerEdge 6650 has eight expansion slots. One is a legacy 32-bit PCI 2.2 slot, for backward compatibility. Three of the remaining PCI slots have a PCI-X bus to themselves; the other four share two PCI-X buses. The onboard NICs also share a PCI-X bus. Although this machine has more slots than the much larger IBM system, the Grand Champion HE chipset provides only six PCI-X buses to work with, so you must make sure the aggregate bandwidth your cards need does not exceed what those buses can deliver.

Server Performance

Dell was the only competitor to include two onboard Gigabit Ethernet NICs, which reside on the rear multi-I/O card. This card also contains the two USB ports, the serial port, and the PS/2 keyboard and mouse ports. The rear card is convenient, but you can't hot-swap it. You can set the NICs for failover and load balancing, but if both NICs fail, you cannot swap them on the fly; to add hot-swappable, removable NICs, you'll need to disable the onboard ones. Dell says it put the NICs onboard to provide more usable PCI slots. For all but the most mission-critical applications, the onboard NICs will serve nicely.

The PowerEdge 6650 comes with Dell OpenManage and OpenManage Server Assistant, which aid in installing, configuring and managing the server. Server Assistant does a slick job and makes installing the server's OS a breeze. OpenManage lets you perform common management tasks and sends out notifications via e-mail. It also lets you monitor and manage other Dell machines on your network, provided they have the appropriate software instrumentation loaded. You can get the Dell Remote Access Card (DRAC) for $699 with a modem or $499 without. This card provides out-of-band management to keep track of applications, server health and remote setup.

In our tests, the PowerEdge 6650 performed very well, winning three out of four of our Intel Iometer tests. Although the unit finished last in our Caw Networks Web Avalanche test and NetIQ Chariot NIC-performance test, we were satisfied that the server's performance was well within acceptable variance for testing--especially since the differences in performance from best to worst on both tests are truly insignificant (see performance table).

Dell PowerEdge 6650, $31,885 as tested, Dell Computer Corp., (512) 338-4400, (800) 289-3355.


IBM eServer xSeries x255

In these days of "less 'U' is better," the x255 takes the old-school route, with prodigious disk space in a 7U rack-mount box. The largest of the three machines we tested, IBM's eServer xSeries x255 is a decent machine, particularly for shops that are deploying to remote sites or require maximum internal storage. If you have rack space issues or a weak back, this is not the machine for you, but it's still a solid machine from a solid company.

The top comes off of the box by pulling up a simple lever, sliding the cover back and lifting it off. Inside, the LightPath diagnostic panel, positioned near the separate system processor, indicates what failed on the system. The processor cover comes off with two simple pull-up clips. The processors are held down with thumbscrews, which remind us of old-fashioned milk canisters, positioned on the heat-sink towers. The thumbscrews are spring-loaded to keep proper pressure on the CPUs for cooling; however, they make it very difficult to change the processors.

The eServer's memory card comes up and out with butterfly levers. A simple, spring-loaded slide switch allows easy removal of the fans. IBM solved its power-draw problem by distributing the load across four 370-watt power supplies, and it was the only unit we tested that uses standard power cables. Getting a power supply out is easy, provided you have good access to the back of the machine.

The front of the machine has normal, as opposed to slim-line, CD-ROM and floppy drives. Though it's easy enough to remove the drives--by squeezing the two clips on either side of each one--IBM's system for replacing these drives is clearly inferior to Dell's and HP's. The drives use standard cabling rather than pluggable bays, so you have to take the top and processor covers off to reconnect the data and power cables. The system has space for three 5.25-inch, half-height devices and one 3.5-inch floppy drive bay, plus the 12 hot-swap bays for hard disks. The IBM server also sports five PCI-X slots and one legacy PCI 2.2 slot--fewer than Dell's, but adequate.

IBM has included an advanced version of the old ISMP (Integrated Systems Management Processor), called the ASMP (Advanced Systems Management Processor). This discrete processor, whose excellent technology IBM migrated from its mainframes to its PCs, monitors the Intel side of the machine and collects data. The ASMP has a few new features, along with all the old ones that made us like the ISMP. Both control the LightPath diagnostics tools, but the ASMP also integrates with IBM's Director server-management software, and adds SNMP traps and remote booting.

IBM's standard software is under active development. IBM Director, for example, offers a sophisticated way to control and monitor your IBM servers in conjunction with the ASMP. Servers connect via built-in RS-485 ports to provide out-of-band management. Director must be set up on a separate server, but it provides a wonderful way to support IBM's systems. You can also get a separate $499 management board, which provides not only all the ASMP's functionality but also Ethernet out-of-band management, remote system control and a number of other features. Companies planning to deploy this machine remotely should consider adding this card. The eServer also includes ServerGuide to aid in OS and machine setup. We used ServerGuide to build our test x255 and found the process smooth and untroubled.

The IBM system performed adequately in all our tests. It turned in middle-of-the-pack Iometer test results, but edged out HP and Dell in our Chariot test because the unit's NIC outperformed the others. In the Web Avalanche test, the IBM came in a close second, with 4,212 transactions per second, using an untuned copy of Microsoft Internet Information Server.

The unit comes with a three-year warranty, provided by IBM's giant Global Services division.

IBM eServer xSeries x255, $37,085, IBM Corp., (404) 238-1234, (800) IBM-4YOU.

Hewlett-Packard Co. HP ProLiant DL580 G2

By the time we finished testing the HP ProLiant DL580 G2, a by-product of HP's wholesale digestion of Compaq, we had to reach for the Pepto-Bismol. The ProLiant DL580 G2 obviously was developed by Compaq prior to the merger--the next generation of the Compaq DL580 we were once fond of. But this version is disappointing, even keeping in mind the issues we had with the preproduction unit.

The charcoal gray DL580 G2 can handle up to four UltraSCSI 160 hard drives. The floppy and CD-ROM (or optional DVD-ROM) drives can be removed with the press of a button, for quick replacement or upgrade.

HP's dual-source power supplies were the easiest to replace, thanks to a simple clip-and-lever mechanism and their location on the front of the machine--an improvement over the earlier DL580's handle-trigger mechanism. Fully loaded, the machine can run on 110-volt power using the larger, nonstandard power cables that come with the system, making power-distribution tasks much simpler. As with the Dell PowerEdge, this unit can be hooked to two power grids to ensure uptime. On the back of the machine is the normal assortment of PS/2 connectors, a serial port and a video port, plus two USB ports and an iLO port for accessing the integrated Lights-Out management. Ethernet connections require an internal PCI-X NIC.

A cover with a slick sliding mechanism gives you access to the rear two-thirds of the machine, which holds the components you would most likely need to access. Here, you'll find the memory card, processors and fans, all of which can easily be removed. The memory card has two butterfly-style handles that help remove the card and hold it in place. The processors come out by flipping a switch on the ZIF sockets. The processors' voltage regulators, in slots next to the processors, can be pulled out. The internal 5i RAID controller's battery and cache RAM module can be taken out to allow easy movement to another machine in the event of failure. Although you should rarely need to get to the front portion of the machine, you can reach that section by actuating the top cover, pushing a lever and sliding the front cover off. It's not as slick as the back section, but it's more than adequate.

The machine came with the familiar SmartStart utility to load a variety of operating systems. SmartStart is easy to use and useful in system-load and -configuration tasks. The integrated Lights-Out management card is an excellent tool for remote maintenance. It provides for remote text console, system-wellness monitoring, SSL-encrypted communications and automatic configuration for use right out of the box. Because it reduces time needed to tend the server, this feature effectively cuts TCO (total cost of ownership).

The DL580 G2 had some issues, particularly with the Iometer linear 64-KB read test, which shows maximum hard disk subsystem performance. The ProLiant DL580 G2 performed at about 29 MB per second on this test. We were concerned that the smaller 18-GB, Seagate-manufactured hard disks caused the problem, but learned that their performance characteristics are comparable to those of the 36-GB Fujitsu and Seagate drives in the IBM x255 and the Dell 6650. Rather, the problems have to do with the nature of the cache algorithms of the integrated 5i controller. HP claims the algorithms are designed for real-world applications rather than lab tests; however, we have to question this level of performance tuning. With a 64-KB stripe on the RAID controller and a linear read of a properly defragmented hard disk, this unit is destined to underperform.

The DL580 G2 did win the test that maximizes I/O operations, by a substantial margin. However, considering the strange behavior Iometer exhibited in the tests and the dismal performance in the sequential-read test, we take this number with a grain of salt. In the other two tests, NWC Custom and Database, which simulate real-world performance, the DL580 G2 came in last.

In our NetIQ Chariot tests, the DL580 G2 competed well, and in our Web Avalanche test it outperformed the competing machines, though by an insignificant margin.

The ProLiant DL580 G2 was also the loudest machine we tested. HP responded that the server meets "all national and international standards for noise emissions." We don't doubt that, but the machine still howls.

The DL580 G2 comes with a three-year warranty with next-day, on-site response. You can purchase extended warranties with up to four-hour response time for critical applications.

HP ProLiant DL580 G2, $37,747, Hewlett-Packard, (650) 857-1501, (800) 282-6672.

Steven J. Schuchart Jr. covers storage and servers for Network Computing. Previously he worked as a network architect for a general retail firm, a PC and electronics technician, a computer retail store manager and a freelance disc jockey. Send your comments on this article to him at [email protected].

We performed several system tests to analyze the servers' disk I/O, Ethernet performance and Web performance (see test results). Intel's Iometer tests validate the performance of the hard disk subsystems by generating an I/O workload and performing a variety of simulated tasks. NetIQ Corp.'s Chariot tests Ethernet NIC performance, in both send and receive modes. Caw Networks' Web Avalanche tests provide raw Web performance numbers under an untuned copy of Microsoft IIS.

In all but the Intel Iometer disk-performance tests, results were almost too close to call, with less than a 2 percent difference between the best and the worst performance. However, the Iometer tests did uncover some strengths and weaknesses. In the database test, for example, the Dell PowerEdge 6650 outperformed IBM's eServer xSeries x255 by about 25 percent across the board. The HP ProLiant DL580 G2 showed anomalous results in both the maximum I/O test--more than double the throughput of the IBM and Dell servers--and the sequential test--less than 30 percent of the throughput of its competitors. HP pinpointed the ProLiant's RAID controller as the underperformer, but was unable to explain the high maximum I/O score.

Intel Iometer Tests

We used three of the Iometer suite's basic tests--the database, maximum throughput and maximum I/O rate tests. We also customized a test. The database test used a 2-KB transfer request size with a mix of 67 percent reads and 33 percent writes. This proportion represents a typical database workload.
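Iometer sets these parameters directly in its access specifications. As a rough illustration of what the database mix looks like as an access pattern, the Python sketch below (not Iometer itself, and purely illustrative) generates 2-KB operations with a 67 percent read, 33 percent write split:

```python
import random

# Illustrative sketch of an Iometer-style access mix (this is NOT
# Iometer): the database test uses 2-KB transfers, 67% reads and
# 33% writes. Seeded RNG keeps the generated mix reproducible.
def make_database_mix(n_ops, read_pct=67, xfer_bytes=2048, seed=1):
    rng = random.Random(seed)
    return [("read" if rng.random() < read_pct / 100 else "write",
             xfer_bytes)
            for _ in range(n_ops)]

ops = make_database_mix(10_000)
reads = sum(1 for kind, _ in ops if kind == "read")
print(f"{reads / len(ops):.0%} reads across {len(ops)} 2-KB operations")
```

A real benchmark would issue these operations against a disk at random or sequential offsets and time them; the point here is only the shape of the workload.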

On the maximum throughput test, we used a 64-KB transfer request, and on the maximum I/O rate test, a 512-byte transfer request. On both tests we set the read/write distribution to 100 percent read and the random/sequential distribution to 100 percent sequential.

We performed our custom test in three ways. The first included a 512-byte transfer request size and distributions set at 100 percent read and 100 percent sequential. The second was a 2-KB transfer request with a 67 percent read-33 percent write distribution and 100 percent random. The third was a 64-KB request set back to 100 percent read and 100 percent sequential.

NetIQ Chariot Test

NetIQ Chariot's Ethernet test employed Cisco Systems' 3550 12-port copper Gigabit Ethernet switch, Extreme Networks' Summit7i 24-port copper Gigabit Ethernet Switch, two Cisco 7200 VXR routers and two Cisco 2948G-L3 10/100 switches. The storage network included a Qlogic SANbox-8 Fibre Channel switch, nStor Technologies' NexStor 18F Fibre Channel array with 18 18-GB Fibre Channel Seagate drives; a Qlogic QLA-2300 Fibre Channel host adapter and three Maxtor 5,400-rpm ATA/100 hard disks.

In addition to the devices under test, this test included an HP Compaq DL 580 server and a Dell Precision Workstation 410.

Caw Networks Web Avalanche

We used Caw Networks Web Avalanche 4.0 to measure transactions per second under an untuned Microsoft IIS. We set up this test with a 10-second ramp-up time and 100 Caw users added per second. File sizes were set to 7 to 10 KB. Other settings included HTTP 1.1, an MTU of 1500 and two retries on retransmit.

Our test bed for this suite included two Dell PowerEdge 2500s with 512 MB of memory, 36 GB of disk storage and two 600-MHz Pentium III processors; 12 Dell OptiPlex GX1 workstations with 256 MB of memory, 9 GB of disk storage and one 600-MHz Intel Pentium III processor; and eight generic 1U workstations with 256 MB of memory, 10 GB of disk storage and one 600-MHz Intel Celeron processor.

The Xeon MP processor is Intel's latest model for the midrange server market. Among its features is HyperThreading, which was introduced in 2001. HyperThreading lets a single processor core execute two threads simultaneously. To the OS and the application, a single Xeon MP processor looks like two processors. In a normal multiprocessor environment, the multiprocessor-aware OS takes individual tasks from each application, called threads, and sends them to an available processor. For example, the machine can handle threads from Microsoft IIS and from Exchange at the same time.

Because each processor looks like two, the upper layers of the OS can remain the same and operate as if they were in a normal multiprocessor environment. The split from one physical processor to two virtual processors happens at the silicon level. Each virtual processor shares the execution resources of the single core and keeps track of which thread it is processing. Intel claims this resource sharing can increase processing power by as much as 40 percent. Intel does not claim, however, that HyperThreading is a replacement for multiple processors; it's a complementary technology designed to get the most out of each processor.
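The arithmetic the OS sees is simple, and this hypothetical Python helper just makes the doubling explicit (Xeon MP processors of this era had one core per socket):

```python
# With HyperThreading, each physical core presents two logical
# processors to the OS. The 2002-era Xeon MP is single-core, so the
# socket count is all that matters; this helper is purely illustrative.
def logical_processors(sockets, cores_per_socket=1, threads_per_core=2):
    return sockets * cores_per_socket * threads_per_core

# A four-way Xeon MP server like those reviewed here:
print(logical_processors(4))  # the OS schedules threads across 8 CPUs
```

Remember, though, that those eight logical processors share four cores' worth of execution resources, which is why Intel frames HyperThreading as a complement to, not a substitute for, additional physical processors.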

These chips also feature a three-level cache system, all integrated on the chip, for increased performance. The Level 1 cache comprises an 8-KB data cache and an execution trace cache that holds about 12,000 decoded micro-ops in program order; the trace cache helps reduce the time the processor spends recovering from incorrectly predicted branches in code. The 256-KB Level 2 cache, called the Advanced Transfer Cache, is considerably faster than the Intel Pentium III Xeon's L2 cache and transfers 32 bytes of data to the processor core with each clock cycle. The object is to prevent idle clock cycles in the core. The L3 cache--either 512 KB or 1 MB--is tied to the system memory clock and reduces memory latency by storing large data sets right on the processor chip.

This chip also includes Intel's latest iteration of Streaming SIMD (Single Instruction, Multiple Data) Extensions, called SSE2. This technology provides special processor instructions that reduce the number of instructions required for video and image processing and similar tasks. The Xeon MP also sports the Rapid Execution Engine, which performs basic arithmetic and logic operations in half a clock cycle, greatly reducing the time those operations take.

All three servers we tested feature ServerWorks' Grand Champion HE chipset. This solid foundation provides many of the servers' base features.

The GC-HE chipset supports up to 64 GB of main memory using the DDR200 standard, along with a 400-MHz front-side bus. It supports four-way multiprocessing with 6.4-GBps bandwidth on the memory bus. The chipset also offers some interesting main-memory protection features, such as Chipkill (an advanced error-checking and -correcting technology), mirroring, spare memory and hot-plug memory board support.

GC-HE supports up to six independent, hot-swappable PCI-X buses. PCI-X is the latest generation of PCI, operating at 64-bit data width and 133 MHz, compared with PCI 2.2's 32 bits and 33 MHz. The bigger, better bus has enough capacity for bandwidth-hungry cards such as Gigabit Ethernet, 2-Gbps Fibre Channel and 10-Gbps Ethernet. Right now, the GC-HE chipset provides only 100-MHz support for the PCI-X bus. This is unfortunate for the servers we tested, but not unexpected, considering the youth of the technologies involved. The GC-HE chipset also provides many utilitarian and legacy features, such as PCI 2.2 support, a four-port USB 1.0 interface, dual ATA-100 support, and ACPI power and event management.
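The peak numbers behind that PCI-versus-PCI-X comparison are easy to reproduce. A minimal sketch, using nominal clock figures and ignoring protocol overhead:

```python
# Peak bus bandwidth = bus width in bytes x clock rate. Nominal clock
# values are used (PCI's spec clock is actually 33.33 MHz), and real
# throughput is lower once bus protocol overhead is counted.
def peak_bandwidth_mbps(width_bits, clock_mhz):
    return (width_bits // 8) * clock_mhz

print(peak_bandwidth_mbps(32, 33))   # PCI 2.2: 132 MB/s nominal
print(peak_bandwidth_mbps(64, 133))  # PCI-X at full spec: 1,064 MB/s
print(peak_bandwidth_mbps(64, 100))  # PCI-X as GC-HE runs it: 800 MB/s
```

Even at 100 MHz, a single PCI-X bus offers roughly six times the headroom of PCI 2.2, which is why a Gigabit Ethernet or 2-Gbps Fibre Channel card is comfortable on PCI-X but would saturate a legacy slot.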


