Don't Sink Your IP SAN

Storage vendors are taking the plunge into iSCSI, but CPU utilization is a big concern. Adaptec's 7211C adapter floats to the top for its consistent performance and competitive price.

May 12, 2003


You can tackle the CPU utilization problem a couple of ways. As always, there's the classic brute-force method: Servers with more horsepower have more CPU headroom and therefore are less affected by iSCSI. Simple, but expensive.



Chart: Maximum I/O 512-Byte Sequential Read

Our suggestion: Hardware, baby. Fibre Channel HBAs handle the FC protocol; likewise, to get the most out of your iSCSI-connected servers, you'll need a specialized adapter that will rein in CPU utilization.

These devices work in two ways. Some off-load the TCP stack onto hardware, relieving the system CPU of that burden. Adapters using this method are called TOE (TCP Offload Engine) cards, and they look like any other NIC to the operating system, though you have to add an external iSCSI initiator--the software drivers that make iSCSI work. Microsoft is slated to have one available next month.

The second method is similar in approach to the TOE card, but with a twist: iSCSI HBAs appear to the operating system to be SCSI cards, and they come with their own iSCSI initiators--in iSCSI parlance, the "iSCSI driver."

For our tests, we gathered iSCSI adapters from Adaptec, Alacritech and Intel Corp. in our Green Bay, Wis., Real-World Labs®. Invitations went to Emulex Corp. and QLogic Corp. as well, but iSCSI products from those companies were not ready in time for our test window. We tested two iSCSI HBAs, the Adaptec iSCSI Adapter 7211C and the Intel Pro/1000 T IP Storage Adapter, and one TOE card, the Alacritech 1000x1 Copper Single-Port Server and Storage Accelerator (we pity the advertising agency that has to come up with a catchy jingle for that).

Our test setup used a Cisco MDS 9216 Fibre Channel switch with an iSCSI blade, a Nishan 3000 Series Storage Switch, a Cisco Catalyst 3550 Gigabit Ethernet switch, and a Eurologic SANbloc 2 Fibre Channel RAID enclosure with 14 73-GB drives. The cards were tested in three identical Dell 2650 2U servers, each with 1 GB of memory and two 2.6-GHz Xeon processors.

Overall, we found these products a little raw. For example, many ease-of-use and convenience features are missing. Command-line implementations are common. But after talking with vendors, we believe that many of these early look-and-feel issues will soon be resolved. Moreover, even without management bells and whistles, iSCSI is not terribly hard to use.

Our individual results were interesting, to say the least. The Alacritech card performed well in every test--provided it had two targets (see "Multiple Target Woes"). But its overall CPU utilization was a bit higher than that of its rivals. The Adaptec card, meanwhile, didn't fare quite as well in the throughput and read tests, but it turned in a strong overall performance, with very good CPU-utilization numbers. For its part, the Intel Pro/1000 T IP Storage Adapter did well in most tests and aced CPU utilization, but it had read-performance problems regardless of the number of targets.

The wild card was our NWC Custom test, which we designed to simulate real-world access patterns. The Adaptec 7211C scored very well in this test and was simply the most consistent performer, posting CPU-utilization scores that won outright or came very close in every test. Throw in a very competitive price (all pricing is MSRP) and you have an Editor's Choice award winner.

Adaptec iSCSI Adapter 7211C (Copper)



We installed this sweet little card in a Dell 2650 and had no problem getting it up and running. Driver installation was simple as well. The 7211C is a standard PCI card that fits a 64-bit, 66-MHz slot. Like the Intel Pro/1000 T IP Storage Adapter, it runs its own iSCSI initiator and looks like a standard SCSI card to the system.

After installation we dove into the configuration interface. Unfortunately, it was like traveling back in time to the 1980s--the configuration utility is a command-line executable that we had to copy by hand to the hard disk. Self-explanatory text menus greeted us, reminiscent of our long-past dial-up BBS days. Although this configuration utility was better than the beta Microsoft iSCSI drivers we had to use for the base NIC and the Alacritech card, it fell well below our expectations. When we queried Adaptec, the company said it plans to have an easy-to-use GUI configuration utility available next month--not a moment too soon.

Adaptec did include a configuration binary for Linux but said it has no plans for a Linux GUI configuration utility. Our Network Computing Linux guru, technology editor Lori MacVittie, tells us that will suit Linux wonks just fine.

Other than the aforementioned multiple-target issue, testing on the Adaptec 7211C went smoothly. It wasn't the fastest card in our multiple-target read tests, but its performance was acceptable. On our write tests, the 7211C showed very good performance, and it was downright stingy in CPU utilization. The 7211C scored well in both the IOps and 2-KB database tests, and in our custom test it won outright on throughput and was within a hair of the Intel Pro/1000 T IP Storage Adapter for CPU performance.

The Adaptec 7211C demonstrated good throughput and great CPU performance, all at an attractive price.

Adaptec iSCSI Adapter 7211C, $660. Adaptec, (408) 945-8600, (800) 442-7274. www.adaptec.com

Alacritech 1000x1 Copper Single-Port Server and Storage Accelerator

We really liked Alacritech's 1000x1 card. A key benefit: It functions as a standard NIC, and that gives this card an edge for repurposing. Should you decide not to use it for iSCSI, it's suitable anywhere you would use a standard NIC, providing CPU relief for almost any TCP task.

The card comes in a low-profile configuration with a spare bracket, so it fits both low-profile slots and normal PCI slots. Like all the cards we tested for this article, it's a standard PCI 64-bit, 66-MHz card. Installation was as easy as inserting the card, booting and installing the drivers.

Sadly, getting iSCSI to run was not so easy, but it wasn't Alacritech's fault. The company does not provide an iSCSI driver with its card, so we did our testing with the beta Microsoft iSCSI driver, which lacks a GUI. The command-line experience was truly miserable (we do expect Microsoft to have a full GUI interface for this driver next month).

Throughput performance for the Alacritech card was very good. CPU utilization under iSCSI was not as impressive. Compared with the baseline Broadcom Ethernet NIC embedded in the Dell 2650 we used for testing, the Alacritech's CPU utilization was excellent. Compared with the Adaptec and Intel iSCSI HBAs, however, it was consistently higher--sometimes almost double. While still leaps and bounds better than the embedded Broadcom NIC, the Alacritech simply couldn't compete with the added iSCSI-specific off-load capabilities that the Intel Pro/1000 T and the Adaptec 7211C brought to the table.

Higher CPU utilization and a significantly higher price cost the Alacritech 1000x1 the crown, but we wouldn't hesitate to recommend this adapter if you're worried about investment protection--you can always repurpose the card for use in non-iSCSI applications.

Alacritech 1000x1 Server and Storage Accelerator, $999 (copper), $1,399 (fiber). Alacritech, (408) 287-9997. www.alacritech.com

Intel Pro/1000 T IP Storage Adapter

The Intel Pro/1000 T IP Storage Adapter is a likable card, but its read-performance problems bumped it down to the bottom of the pile.

The Pro/1000 T comes as a standard PCI 64-bit, 66-MHz card and is physically a bit larger than the other two cards we tested. It uses an Intel 80200 processor based on the company's XScale architecture and Wind River's TINA (Tornado for Intelligent Network Acceleration) TCP off-load technology. It also features upgradeable flash memory, making it a breeze to update the firmware.

The Pro/1000 T led the pack in ease of use. Installation was smooth, and the graphical control-panel configuration utility was simple, clean and intuitive. Setting up targets, along with any security they required, was no problem.

Difficulties arose, however, when we started testing. The Pro/1000 T performed very well in write tests, but read tests yielded a puzzling 55-MBps transfer rate. Using two targets didn't improve the situation, so we called Intel. We captured packet traces with Ethereal, sent them to the company and tried several variations, with no luck. Intel claims it gets much better performance in its laboratories, along the lines of 70 MBps. We were unable to achieve that rate, and even if we had, that score would still trail its rivals.

On the plus side, we were impressed with the Pro/1000 T's CPU-utilization scores, which were neck-and-neck with Adaptec's across the board.

Intel Pro/1000 T IP Storage Adapter, $695. Intel Corp., (800) 628-8686, (408) 765-8080. www.intel.com

Steven J. Schuchart Jr. covers storage and servers for Network Computing. He previously worked as a network architect for a general retail firm, a PC and electronics technician, a computer retail store manager and a freelance disc jockey. Write to him at [email protected].


The archenemy of iSCSI is Fibre Channel, which comes in 1-Gbps and 2-Gbps flavors, but iSCSI's ace is its use of Ethernet and IP. This benefits you in two ways: first, price--Ethernet ports are comparatively inexpensive; second, familiarity--Ethernet is well understood, while Fibre Channel requires a significant knowledge investment. Another plus for iSCSI is its ability to connect via WAN links. Your application's latency sensitivity will affect how big (that is, expensive) a pipe you'll need, but keeping your SAN safely tucked away in your data center while servicing remote locations is a big draw.
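For a rough sense of how much latency matters, the short calculation below (our own back-of-the-envelope sketch with assumed window sizes and round-trip times, not vendor figures) shows the throughput ceiling a single TCP connection hits over a WAN link.

```python
# Back-of-the-envelope: TCP throughput over a WAN link is capped by
# window_size / round_trip_time, regardless of raw link speed.
# The window size and RTT values below are illustrative assumptions.

def tcp_throughput_ceiling(window_bytes: int, rtt_seconds: float) -> float:
    """Return the maximum sustainable throughput in MB per second."""
    return window_bytes / rtt_seconds / 1_000_000

# A standard 64-KB TCP window over links with increasing round-trip times.
for rtt_ms in (1, 10, 50):
    ceiling = tcp_throughput_ceiling(64 * 1024, rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:3d} ms -> at most {ceiling:6.1f} MB/s per connection")
```

At 50 ms of round-trip delay, a standard 64-KB window caps each connection at a little over 1 MBps no matter how fat the pipe is--which is why latency-sensitive applications need careful planning before you stretch iSCSI across a WAN.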

Of course, there's a reason Fibre Channel has been holding off iSCSI--raw speed. With a top end twice that of current Ethernet/iSCSI, shops with heavy storage loads will likely stick with Fibre Channel for now. However, we do expect iSCSI to make headway in small, single-purpose SANs that don't require huge throughput. With that in mind, we hammered on the new crop of iSCSI adapters from Adaptec, Alacritech and Intel. All three entries were capable cards--albeit a little rough around the edges--but Adaptec's iSCSI 7211C took the crown thanks to its strong, consistent performance and competitive price.

In our Green Bay, Wis., Real-World Labs®, we fired up three Dell 2650s with 2.6-GHz Intel Xeon processors and 1 GB of RAM, and loaded each with its own SCSI subsystem for the Microsoft Windows 2000 SP3 base operating system. On the Ethernet side, we used a 12-port Cisco Catalyst 3550, and storage was provided by a Eurologic SANbloc 2 RAID enclosure with 14 73-GB, 10,000-rpm, 2-Gbps Fibre Channel hard disks. Finally, we applied the glue that makes all of this gear talk: a Cisco MDS 9216 Fibre Channel switch with an IP storage blade. For sanity's sake we also had a Nishan 3000 Series storage switch on site--the Cisco hardware was pretty fresh.

Our testing was performed with Iometer version 2003.02.15 (see iometer.sourceforge.net for more details). Tests had five-second ramp-ups and two-minute stable test runs. As a baseline, we tested the embedded Broadcom NIC without any performance enhancement.
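Iometer drives these access patterns directly against the iSCSI volumes. As a rough stand-in, the sketch below (our own illustration, using a plain local file rather than a real LUN, so it exercises the operating system's cache more than any wire) issues 2-KB random I/Os at a 67/33 read/write mix and reports IOps, just to show the shape of such a test.

```python
import os
import random
import time

# Minimal Iometer-style exerciser: 2-KB random I/Os, 67% reads / 33% writes,
# against a plain scratch file standing in for an iSCSI LUN. Illustrative only;
# results reflect the OS cache, not a storage network.
PATH = "iometer_standin.bin"      # assumed scratch file, not a real LUN
FILE_SIZE = 64 * 1024 * 1024      # 64-MB working set
BLOCK = 2 * 1024                  # 2-KB transfer request size
RUN_SECONDS = 10                  # short stable run (the review used 2 minutes)

# Create the working set once.
with open(PATH, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

ops = 0
deadline = time.time() + RUN_SECONDS
with open(PATH, "r+b") as f:
    while time.time() < deadline:
        offset = random.randrange(0, FILE_SIZE - BLOCK)
        f.seek(offset)
        if random.random() < 0.67:          # 67 percent reads
            f.read(BLOCK)
        else:                               # 33 percent writes
            f.write(os.urandom(BLOCK))
        ops += 1

print(f"{ops / RUN_SECONDS:,.0f} IOps at {BLOCK}-byte transfers "
      f"({ops * BLOCK / RUN_SECONDS / 1_000_000:.1f} MB/s)")
```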

Tests:



Chart: Maximum Throughput

• Database: 2-KB random I/Os with a mix of 67 percent reads and 33 percent writes, which represents a typical database workload (see charts).

• Maximum Read Throughput: Transfer request size of 64 KB, read/write distribution of 100 percent read, and random/sequential distribution of 100 percent sequential (see results online).

• Maximum Write Throughput: Transfer request size of 64 KB, read/write distribution of 100 percent write, and random/sequential distribution of 100 percent sequential (see results online).



Chart: 2-KB Random Read/Write Database Test

• Maximum I/O Rate: Transfer request size of 512 bytes, read/write distribution of 100 percent read, and random/sequential distribution of 100 percent sequential (see chart above).

• NWC Generalized Custom Test: Transfer request sizes of 512 bytes with 33 percent access distribution, 2 KB with 34 percent access distribution, and 64 KB with 33 percent access distribution. On the 512-byte segment, the read/write distribution was 100 percent read and the random/sequential distribution was 100 percent sequential. On the 2-KB segment, the read/write distribution was 67 percent read, 33 percent write, and the random/sequential distribution was 100 percent random. On the 64-KB segment, the read/write distribution was 100 percent read and the random/sequential distribution was 100 percent sequential. This test represents what we feel will be a typical, if small, standard workload for a storage subsystem.

Multiple Target Woes

During our iSCSI tests we learned that there's often a problem reading from a single target. To make sure we have the terminology straight: a target is a device you can partition into multiple LUNs (logical unit numbers)--in other words, what happens when you partition your local hard disk into multiple drives. For example, we split our Eurologic SANbloc 2 into two targets, each running RAID 0. We then partitioned each target into two LUNs. In Windows, this looked like four drives--E, F, G and H. E and G were primary partitions; F and H were extended partitions on each target, respectively.
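As a quick illustration of that layout (the target names below are our own labels, not real iSCSI qualified names), here's how the two targets and four LUNs mapped to the drive letters Windows showed us.

```python
# Our test layout: the Eurologic enclosure split into two RAID 0 targets,
# each target partitioned into two LUNs, surfacing in Windows as four drives.
# Target and LUN identifiers below are illustrative.
layout = {
    "target-0": {"LUN 0": "E: (primary partition)",
                 "LUN 1": "F: (extended partition)"},
    "target-1": {"LUN 0": "G: (primary partition)",
                 "LUN 1": "H: (extended partition)"},
}

for target, luns in layout.items():
    for lun, drive in luns.items():
        print(f"{target} / {lun} -> {drive}")
```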

In contrast to every other storage protocol, read operations in iSCSI are more difficult to perform than writes. Incoming streams must be reordered and processed on the initiator side; for writes, that work is handled on the target end. Cards that perform TCP off-load tend to become bound by their internal CPUs, whereas the base NIC in the Dell 2650 had a pair of 2.6-GHz Xeon processors to lean on.
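Here's a simplified sketch of that receive-side work (the segment format is invented for illustration): data arriving out of order has to be buffered and stitched back together before the iSCSI layer can complete the read, and on a plain NIC all of that falls on the host CPU.

```python
import random

# Toy model of receive-side reassembly: segments carry an offset and a
# payload, may arrive out of order, and must be put back in sequence before
# the read can complete. The segment layout here is invented for illustration.
payload = b"0123456789" * 20
segment_size = 16
segments = [(off, payload[off:off + segment_size])
            for off in range(0, len(payload), segment_size)]

random.shuffle(segments)                 # arrival order is not send order

reassembly_buffer = {}
for offset, data in segments:            # per-segment work the host CPU
    reassembly_buffer[offset] = data     # (or the off-load engine) must do

reassembled = b"".join(reassembly_buffer[off]
                       for off in sorted(reassembly_buffer))
assert reassembled == payload
print(f"reassembled {len(segments)} segments into {len(reassembled)} bytes")
```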

The Alacritech and the Adaptec cards both performed well with multiple targets. The Alacritech uses a parallelized processor to handle TCP streams, meaning it is much more efficient handling multiple streams than a single stream. With only one target, you get only one TCP stream, and the onboard processor becomes a bottleneck--specifically, to the tune of 85 MB per second with iSCSI on the Alacritech card. On the Adaptec card, read performance was only 64 MB per second with a single target but rose to 87 MB per second with two targets. The Intel Pro/1000 T IP Storage Adapter performed the worst, getting only 55 MB per second with two targets and about the same with one.

On-board CPU performance is the root cause of the slower single-target performance of both the Alacritech and the Adaptec cards. At the time of this writing, we're unsure of the cause of the Pro/1000 T's performance problems; Intel is investigating the issue.
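To make the stream math concrete, here's a toy sketch (local scratch files stand in for iSCSI targets, and no performance claim is implied) of why two targets give a parallelized off-load engine two independent streams to work on, while a single target funnels everything through one.

```python
import concurrent.futures
import os

# Toy illustration: each iSCSI target contributes one TCP stream, so two
# targets let a parallelized engine service two streams at once. Local
# scratch files stand in for targets; this shows structure, not speed.
def make_target(name: str, megabytes: int = 8) -> str:
    with open(name, "wb") as f:
        f.write(os.urandom(megabytes * 1024 * 1024))
    return name

def read_stream(name: str, block: int = 64 * 1024) -> int:
    """Sequentially drain one 'target' and return the bytes read."""
    total = 0
    with open(name, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total

targets = [make_target("target0.bin"), make_target("target1.bin")]

# One target: a single stream does all the work.
single = read_stream(targets[0])

# Two targets: two streams can be serviced concurrently.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    dual = sum(pool.map(read_stream, targets))

print(f"single stream read {single:,} bytes; two streams read {dual:,} bytes")
```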

So, are multiple targets the norm? In iSCSI's early markets--small departmental and small-business SANs and distance applications--a single target is likely, but as iSCSI moves into larger installations, multiple targets will increasingly become the norm.

