Review: Host Bus Adapters

Of the three cards we tested, our Editor's Choice won for its superior processing capabilities, pricing and features.

August 17, 2004


Don't connect your servers to an iSCSI SAN without dedicated iSCSI host bus adapters. The $500 to $600 you spend on an HBA will take a load off the server's CPU and give you the storage speed of Gigabit Ethernet. Although prices have remained about the same, this year's accelerators are easier to install and configure than the first-generation devices we tested last year.

The iSCSI platform provides a reasonably priced alternative to 2-Gbps Fibre Channel for block-level access to large storage arrays. This hybrid of SCSI and TCP/IP is simple yet high-performance, and it has tremendous growth potential, particularly now that 10-Gbps Ethernet--a natural fit for iSCSI--is on the horizon.
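To make the encapsulation concrete, here's a minimal Python sketch (ours, not any vendor's code) that packs a SCSI READ command into the 48-byte iSCSI Basic Header Segment defined by RFC 3720. In practice this PDU would ride inside an ordinary TCP stream, and a real initiator would first negotiate a login and session, all of which we skip here.

```python
import struct

def scsi_command_pdu(lun: int, task_tag: int, cmd_sn: int,
                     cdb: bytes, read_len: int) -> bytes:
    """Pack a simplified iSCSI SCSI Command PDU header: the 48-byte
    Basic Header Segment of RFC 3720. A real initiator also performs
    login/session negotiation and may add digests; all omitted here."""
    opcode = 0x01                      # SCSI Command (initiator opcode)
    flags = 0x80 | 0x40                # Final (F) + Read (R) bits
    return struct.pack(
        ">BBxx"                        # opcode, flags, 2 reserved bytes
        "B3s"                          # TotalAHSLength, DataSegmentLength
        "Q"                            # LUN (8 bytes)
        "III"                          # task tag, expected xfer len, CmdSN
        "4x"                           # ExpStatSN, left zero here
        "16s",                         # SCSI CDB, zero-padded to 16 bytes
        opcode, flags,
        0, b"\x00\x00\x00",            # no AHS, no immediate data segment
        lun << 48,                     # simple peripheral-device LUN encoding
        task_tag, read_len, cmd_sn,
        cdb.ljust(16, b"\x00"),
    )

# A READ(10) CDB requesting one 512-byte block at LBA 0 ...
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 1, 0)
# ... wrapped in a PDU that would travel inside a TCP connection:
pdu = scsi_command_pdu(lun=0, task_tag=1, cmd_sn=1, cdb=read10, read_len=512)
assert len(pdu) == 48                  # the fixed-size BHS
```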

True, high-performance FC still dominates mass storage, but its speed comes at a price: By our estimates, you'll pay 50 percent more in TCO for an FC SAN than for an iSCSI SAN. Even FC HBAs cost more. QLogic's 2310F 2-Gb FC HBA has a street price of $1,030, for example, while the company's Gigabit Ethernet iSCSI card lists at $679. Further, FC components from different vendors often are incompatible, and the complex process of configuring and maintaining FC SANs often requires adding admins to your staff or buying outside support.

An accelerator isn't mandatory, of course, but encapsulating SCSI commands and data and handling the TCP/IP stack is CPU-intensive. The HBA's processing power lets you reap iSCSI's rewards by off-loading that overhead. Our previous tests showed that though iSCSI can run on fiber or copper Gigabit Ethernet hardware, reasonable performance requires a dedicated, accelerated iSCSI HBA: Overall CPU utilization dropped from more than 20 percent for bidirectional transfers to less than 5 percent, and remained at that level for all but the most computationally intensive operations.
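To see why off-loading matters, consider the era's rough rule of thumb that software TCP/IP processing costs about 1 Hz of CPU per bit per second moved. The back-of-envelope sketch below applies that heuristic (an assumption on our part, not a vendor spec) to a dual 2.6-GHz test server like ours.

```python
# Back-of-envelope estimate of software-iSCSI CPU cost, using the
# rough industry heuristic of ~1 Hz of CPU per bit/s of TCP throughput.
# The 1 Hz/bps figure is a rule of thumb, not a measurement.

link_bps = 1_000_000_000        # Gigabit Ethernet line rate
cpu_hz = 2 * 2_600_000_000      # dual 2.6-GHz test server

sw_cost_hz = link_bps * 1.0     # ~1 Hz per bit/s handled in software
print(f"Software stack at line rate: {sw_cost_hz / cpu_hz:.0%} of total CPU")
# -> roughly 19%, in line with the >20% we measured without an HBA

hba_util = 0.05                 # measured ceiling with an iSCSI HBA installed
freed_ghz = (sw_cost_hz / cpu_hz - hba_util) * cpu_hz / 1e9
print(f"Cycles freed by the HBA: about {freed_ghz:.1f} GHz")
```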

We asked seven vendors to send us iSCSI HBAs for testing in our Green Bay, Wis., Real-World Labs®. Three accepted our challenge: Returning players Adaptec and Alacritech sent an updated version of the 7211C iSCSI card and the SES 1001 iSCSI Accelerator, respectively, while newcomer QLogic submitted its QLA4010 SANblade, which was still in development during our last tests. We also invited Astute Networks, Emulex, Intel and Xiran. Emulex's product was still in OEM qualification, Intel didn't respond, and Xiran was closed by its parent company, SimpleTech, on June 16. Astute Networks says its multiport cards work as iSCSI targets in the storage device, rather than as initiators in the server.

[Chart: Performance Analysis]

For these tests, we set up a dedicated Gigabit Ethernet iSCSI network. We started with an Adaptec SANblock 2-Gbps FC enclosure holding 14 73-GB drives. From the SANblock, we set up a 1-Gbps FC connection to a McData 1620 Internetworking Switch, which provided the iSCSI link and handled the fabric translation from FC. We connected the McData unit over Gigabit Ethernet fiber (our 1620 has fiber Gigabit Ethernet ports only) to a Dell 5224 PowerConnect switch, which provided the Gigabit Ethernet copper connection for the iSCSI cards installed in our Dell 2650 test server, a dual 2.6-GHz Xeon machine. We removed all drives from the server and created three identical Windows 2000 Server boot drives, patched to current levels. This let us configure each card on a separate system disk and avoid cross-contamination.

In our previous test of iSCSI HBAs, we learned that because most iSCSI ASICs use parallel processing, you must provide multiple iSCSI targets to achieve reasonable efficiency. This is a result of the off-loading process and is not an issue in FC networks or with iSCSI traffic running on conventional NICs. But it should be a consideration for anyone evaluating iSCSI: An HBA attached to a single-target array probably won't perform optimally. Therefore, the finishing touch for our test bench was to configure the SANblock's 14 drives as four identical 135-GB RAID 5 arrays, leaving two drives as hot spares. We then partitioned each array and formatted it as NTFS, creating four LUNs that we designated as targets and loaded with our 5-GB Iometer data file.
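The arithmetic behind that layout is worth a quick check. The sketch below works through the drive counts and the decimal-versus-binary gigabyte gap that shrinks a nominal 146 GB of RAID 5 capacity toward the roughly 135 GB the operating system reports.

```python
# Sanity-check the SANblock layout: 14 x 73-GB drives carved into
# four RAID 5 arrays plus two hot spares.
total_drives, spares, arrays = 14, 2, 4
drives_per_array = (total_drives - spares) // arrays   # -> 3

drive_gb = 73                                # decimal GB, as marketed
raid5_usable = (drives_per_array - 1) * drive_gb       # one drive's worth of parity
print(f"{drives_per_array} drives per array, {raid5_usable} GB usable each")
# -> 3 drives per array, 146 GB usable each (nominal)

# Expressed in binary gigabytes, the way Windows reports capacity:
print(f"= {raid5_usable * 1e9 / 2**30:.0f} GiB")
# -> ~136, close to the 135-GB arrays we configured (formatting
#    overhead accounts for the remainder)
```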

This generation of HBAs, with graphical interfaces rather than command-line configuration, was far easier to install and manage than the last. Adaptec's iSCSI HBA management appears as a simple system tool in the Control Panel, and QLogic provides a Java-based configuration application for setup. Alacritech uses Microsoft's native iSCSI initiator, which lets any Windows computer with an Ethernet NIC use iSCSI-enabled storage devices.

We changed the Iometer protocol used previously, increasing the maximum transfer size to 2 MB and adding a test to measure concurrent bidirectional read and write performance (see "Test Bed," page 13). To achieve full efficiency, which demands multiple IP streams, we created a topology in which four workers were each assigned a LUN. Under this setup, with its multiple data streams, the workers could each perform different tasks.

There was little difference in performance during our dedicated read-only and write-only tests, which involved transfer sizes of 64 KB, 1 MB and 2 MB; Adaptec's HBA took a small lead over the cards from Alacritech and QLogic. Big differences surfaced when we moved to bidirectional and 512-byte I/O tests, with Alacritech's card outperforming its competitors every time. Alacritech's HBA also won our NWC Custom test, a nasty matrix of concurrent reads, writes and file sizes designed to hammer the system with intense but realistic access patterns.
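For readers without Iometer handy, the sketch below mimics the shape of that four-worker topology in plain Python: four threads, each bound to its own target (here just a local file standing in for a LUN), each streaming 1-MB transfers independently. It illustrates the multiple-stream layout only; the paths and counts are placeholders, and it's no substitute for a real load generator.

```python
import threading, os

# Four workers, one per target, mirroring our Iometer topology.
# TARGETS are placeholder paths standing in for the four iSCSI LUNs.
TARGETS = [f"target{i}.bin" for i in range(4)]
TRANSFER = 1 * 1024 * 1024          # 1-MB transfer size, as in our tests
TRANSFERS_PER_WORKER = 64

def worker(path: str) -> None:
    # Sequential writes of TRANSFER bytes: one independent stream per
    # target, the pattern a parallel iSCSI ASIC needs to stay busy.
    block = os.urandom(TRANSFER)
    with open(path, "wb") as f:
        for _ in range(TRANSFERS_PER_WORKER):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # push the data past the page cache

threads = [threading.Thread(target=worker, args=(t,)) for t in TARGETS]
for t in threads: t.start()
for t in threads: t.join()
print("4 workers x", TRANSFERS_PER_WORKER, "transfers complete")
```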

As for CPU utilization, QLogic's HBA had the lowest usage requirements, closely followed by Adaptec's card. Alacritech's consistently posted 1 percent to 2 percent more CPU usage for the same tasks. But don't hold that against it. Although the CPU-usage difference when not using an HBA is dramatic, the difference among the cards is quite small; each card claimed less than 5 percent of the CPU's power for all but a couple of the 512-byte IOP tests. Plus, we noted a correlation (though not necessarily cause and effect) between higher CPU usage and improved transfer rates. For example, with bidirectional transfers the Alacritech card's CPU utilization ran less than 2 percent higher than the others', yet that translated into a speed increase of 20 to 30 MBps. That gain is exceptional for a mere 2 percent hike in CPU utilization.
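One way to make that trade-off concrete is to divide the throughput gained by the CPU spent; the quick calculation below uses round figures from our charts.

```python
# Throughput gained per point of CPU spent, using round numbers
# from our bidirectional results.
extra_cpu_pct = 2        # Alacritech's added utilization, worst case
extra_mbps = 25          # midpoint of the 20-to-30-MBps gain we saw

print(f"{extra_mbps / extra_cpu_pct:.1f} MBps per extra CPU point")
# -> 12.5 MBps of bidirectional throughput per 1% of CPU: a bargain
```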

Speaking of Speed

In our real-world test bed, we replicated a typical mixed-fabric SAN environment with multiple targets built on Ultra320 RAID 5 arrays. For maximum iSCSI bandwidth, consider a partnered, dual-card configuration; only Alacritech's platform, which can bond multiple cards, supports duplexing, failover and load balancing using iSCSI redirect.

[Chart: CPU Utilization]

With four workers addressing four targets, the equipment achieved write speeds of 97 MBps and bidirectional speeds of 130 MBps on 1-MB transfers, nowhere near the almost line-rate transfer speeds shown at independent test labs. But despite the discrepancy between our numbers and vendors' high-end promises, we're still satisfied with iSCSI HBAs and consider them a reasonable, economical alternative to FC for many apps.

Although the linear read and write tests showed only a 5 percent to 8 percent variance between HBAs, the Alacritech SES1001T iSCSI Accelerator displayed superior processing capabilities, as shown by its strong bidirectional and IOps performance. That, combined with the card's pricing and features, earned it our Editor's Choice award. A table comparing the cards' features can be found above.

By loading the SES1001T with valuable features and eliminating superfluous ones, Alacritech has shown it's been paying attention to the progress and needs of the iSCSI market. The card offers impressive data-center-level features, including link aggregation and failover. It also has Ethernet consolidation capabilities that let you use the iSCSI link to communicate with switches and storage devices that support in-band management, obviating a separate management connection.

Also available in a Gigabit Ethernet fiber version, the SES1001T (copper) is a low-profile, 66-MHz/64-bit bus-mastering PCI card with 16 MB of on-board memory and a single RJ-45 Gigabit Ethernet port. It comes with an optional half-height mounting bracket and is a great choice for Windows 1U servers starved for interior space.

To connect this card, we installed Alacritech's Windows hardware driver and Microsoft's iSCSI Software Initiator 1.04a. From within the initiator software, we assigned an IP address to the card, then identified and configured the iSCSI target. One minor problem arose during configuration: The dynamic disks we created for testing would not reactivate automatically after a system restart. This is a known issue with the Windows 2000 start-up sequence and can be fixed with a registry edit.

This card thrived under load. Its best performance came when we pushed it to the max, lending some credibility to the extreme line-speed numbers the company has reported using highly optimized configurations. For all its grace under pressure, however, the SES1001T lagged a little behind the others on simple read operations. Plus there's the matter of increased CPU usage. The accelerator's only other downside was its lack of Unix/Linux support.
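Those connection steps can also be scripted. The hedged sketch below drives iscsicli.exe, the command-line companion that ships with Microsoft's initiator, from Python; the portal address and target IQN are placeholders, and the quick-connect verbs shown should be verified with iscsicli /? on your own build.

```python
import subprocess

# Placeholder values -- substitute your own portal IP and target IQN.
PORTAL_IP = "192.168.1.50"
TARGET_IQN = "iqn.2004-08.com.example:sanblock-lun0"

def run(*args: str) -> None:
    # iscsicli.exe ships with Microsoft's iSCSI Software Initiator;
    # the Q* "quick" verbs accept defaults for everything unspecified.
    print(">", " ".join(args))
    subprocess.run(["iscsicli", *args], check=True)

run("QAddTargetPortal", PORTAL_IP)   # register the discovery portal
run("ListTargets")                   # enumerate IQNs behind the portal
run("QLoginTarget", TARGET_IQN)      # log in; the LUNs appear as disks
```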

Regardless, Alacritech's single-ASIC design, based on its patented SLIC (session-layer interface control) Data Offload Architecture, is extremely powerful. The card leaves plenty of room for growth because the microcode for its 1000x1 ASIC can be reprogrammed through a driver upgrade.

SES1001T iSCSI Accelerator, $559. Alacritech, (877) 338-7542, (408) 287-9997. www.alacritech.com

The QLA4010C is a full-height, short form-factor PCI-X card with 32 MB of DRAM and a QLogic ASIC. It's the only HBA we tested that runs at 133 MHz, rather than the standard 66 MHz.

This card installed like a dream. QLogic's GUI is well-designed and as easy to use as the Microsoft initiator. In fact, the QLogic interface for managing the initiator and targets seemed to sidestep the lost-dynamic-disk problem we hit with the Alacritech setup.

Under test, the card's performance was close to Adaptec's, but in overall transfer speed it finished third. To its advantage, the QLA4010C consistently posted the lowest CPU utilization, even on the occasional test in which its transfer speed exceeded the competition's. The only other factor that worked against QLogic was cost: a street price $100 higher than its closest competitor's.

Uniquely, the QLA4010C can boot directly into Windows or Linux from SAN storage; the other vendors have only promised this. The capability lets you run servers without the added cost and support involved in mounting direct-attached storage in each machine.

QLA4010C SANblade iSCSI HBA, $679. QLogic, (800) 662-4471, (949) 389-6000. www.qlogic.com

Last year's Editor's Choice, the 7211C must have undergone a few changes, because we ran into some problems this time around.

The 7211C is a full-height, short form-factor 66-MHz PCI-X card with a TOE ASIC, a Storage Protocol Accelerator, an Intel 80200 processor for iSCSI functions and 512 MB of DRAM. Although the new card's circuit-board design was identical to the earlier 7211C's, the main components contained several generational changes.

Our problems started early, when the first card Adaptec sent refused to connect to the switch at Gigabit Ethernet speeds. After calling Adaptec repeatedly and fiddling with the drivers and the new BIOS config utility, we received two more cards. The second card installed correctly but refused to recognize, or even reformat, the existing partitions on our RAID 5 LUNs. Finally, the third card brought success, but not before we had reformatted the partition with the Adaptec card and reloaded the data.

The GUI is a substantial improvement over the former command-line interface, but strangely, Adaptec's documentation still insists that you load iConfig.exe, though the program doesn't exist. From a performance standpoint, the Adaptec did well, winning the read tests and a few others; but its overall performance still came in second, just ahead of QLogic. CPU utilization for the 7211C was consistently low but not quite as low as QLogic's.

Pricing placed this card in the middle of the pack as well, but the difficulties we experienced in setup and configuration caused it to finish third overall. In fairness, Adaptec was extremely responsive, and aside from those setup problems, its card is a stable performer.

ASA-7211C iSCSI HBA, $559. Adaptec, (408) 945-8600. www.adaptec.com

Steven Hill owns and operates ToneCurve Technology, a digital imaging consulting company. Write to him at [email protected].

An economical alternative to Fibre Channel, iSCSI gives you block-level access to large storage arrays. To maximize the platform's power, equip your server with an iSCSI HBA (host bus adapter) to off-load processing from the CPU.

We tested three iSCSI HBAs: Adaptec's 7211C iSCSI card, Alacritech's SES 1001 iSCSI Accelerator and QLogic's QLA4010 SANblade. All three reduced our test system's CPU usage and improved iSCSI performance, and all were easier to install and configure than the generation of cards we tested last year. Our Editor's Choice, the Alacritech SES 1001, turned in the best numbers, had the best price and came with extra features, such as load balancing and failover.
