Strategic Info Management: iSCSI Vs. Fibre Channel

Underdog iSCSI is looking to finally enter the SAN 'Heavyweight Division.' Can it stand up to Fibre Channel's stranglehold on the big-enterprise SAN market, or is iSCSI headed for an early knockout?

October 6, 2006


Ladies and gentlemen, we're in for a classic slugfest. Traditional SAN vendors--including EMC--are putting iSCSI interfaces on their midrange arrays, and iSCSI players are introducing higher-performance systems, like the EqualLogic PS3000, that use SAS (serial-attached SCSI) back ends. Will these and other innovations be enough to break Fibre Channel's stranglehold on the big-enterprise SAN market?

We don't expect iSCSI to K.O. Fibre Channel within the next few years, if for no other reason than that organizations with considerable investments in FC equipment and training aren't going to jump ship just because iSCSI now runs on 10 Gigabit Ethernet. Gartner likewise estimates that disk arrays with iSCSI native host interfaces will represent just 10 percent of SAN market revenue by 2009. But iSCSI does have momentum, and we do like an underdog. We decided to re-examine the conventional wisdom on iSCSI's drawbacks in light of recent advancements. How many are still based in fact, and how many are just FC vendors spreading FUD to protect their profit margins?


New Moves

Several leading iSCSI array vendors, including EqualLogic, Intransa and Lefthand Networks, have an innovative array-growth model that bears watching. This technology, which we'll call array grouping, improves performance as you add storage.

[Chart: Reader Poll Results]

On a conventional modular array, such as an EMC CLARiiON CX3 Model 40, a controller module--which can have a redundant pair of controllers--manages several shelves of disk drives. The controller has four front-end FC connections to hosts and four back-end FC connections to drive shelves. These connections, and the controller's cache memory, are shared by every host and drive shelf on the array. As you add more servers and drives to your SAN, the bandwidth and performance available to each server decreases.

In contrast, when you add capacity to an iSCSI system that supports array grouping, you can add another entire drive array, with additional controllers, cache and host connections. Multiple arrays are managed as a single group, and your LUNs can be spread across all available drives in the group, increasing performance in tandem with capacity. These iSCSI vendors do this by having the entire group respond to a single virtual IP address, taking advantage of the fact that iSCSI runs on top of IP. Because FC is a Layer 2 protocol, this technique isn't available for FC arrays.
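To make the scaling argument concrete, here's a minimal back-of-the-envelope sketch in Python. The port counts and line rates are illustrative assumptions, not any vendor's published specifications; the point is simply that a grouped design adds host-side bandwidth along with capacity, while a single shared controller does not.

```python
# Illustrative only: hypothetical port counts and line rates, not vendor specs.

def per_server_gbps_shared(controller_ports: int, gbps_per_port: float,
                           servers: int) -> float:
    """Conventional modular array: one controller's front-end ports are
    shared by every attached server, no matter how many drive shelves
    sit behind it."""
    return controller_ports * gbps_per_port / servers

def per_server_gbps_grouped(member_arrays: int, ports_per_array: int,
                            gbps_per_port: float, servers: int) -> float:
    """Array grouping: every array added to the group brings its own
    controllers, cache and host ports, so aggregate bandwidth grows
    along with capacity."""
    return member_arrays * ports_per_array * gbps_per_port / servers

servers = 40
# Conventional array: four 4-Gbps FC front-end ports, however many shelves you add.
print(per_server_gbps_shared(4, 4.0, servers))        # 0.4 Gbps per server
# Grouped iSCSI SAN at 4 member arrays, then grown to 8, each with four GbE ports.
print(per_server_gbps_grouped(4, 4, 1.0, servers))    # 0.4 Gbps per server
print(per_server_gbps_grouped(8, 4, 1.0, servers))    # 0.8 Gbps per server
```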

In addition, last year, buying an iSCSI array meant you had to live with SATA drives and accept their performance limitations for transactional apps. But now iSCSI/SAS arrays with 10-Gbps interfaces are on the horizon. On these puppies even high-performance applications can run over iSCSI. Of course, large enterprises move slowly and are loath to put their crown jewels on disk arrays from innovative start-ups. They'll dip their toes in the water using iSCSI-to-Fibre Channel bridges to put Windows servers on the SAN cheaply, but we see a bright near-term future for iSCSI in midsize and fast-moving organizations.

We're also watching EMC's progress with SRDF (Symmetrix Remote Data Facility) software for remote storage replication. We expect that by next year or 2008, companies will be able to link Fibre Channel and iSCSI SANs at remote locations.

FUD Busters

One of the funniest lines we've ever heard a salesman spew was how you don't want to use Ethernet for storage because collisions will cause data to be lost. Perhaps not surprisingly, the guy's company is out of business.

For those of us who were around for the Token Ring-Ethernet wars of the 1980s, the arguments between Fibre Channel and iSCSI advocates about the theoretical advantages of buffer credits as a flow-control mechanism sound awfully familiar. What those of us who are paying attention learned from history is that implementation is more important than theory. Token Ring was theoretically better than Ethernet. But Ethernet's more open nature led to innovative solutions and extensions, like switching, that made theoretical advantages moot.

[Chart: By The Numbers]

Like the switched phone network, Fibre Channel relies on intelligence in the network, or more accurately, in the switches, to handle services like naming and zoning. iSCSI, being built on top of the classic dumb network Ethernet, shifts those services to servers on the network.

Rather than setting up zones in the switching fabric to prevent servers from stepping on LUNs (logical unit numbers) that they shouldn't, iSCSI relies on arrays to have access control lists for their LUNs, limiting access by IP address or CHAP password. If you want to query for the available devices on an iSCSI SAN, you'll have to set up a server for iSNS (Internet Storage Name Service), whereas Fibre Channel switches provide this service natively.

FC vendors usually lead with a jab at iSCSI's reliability. Let's take a look at this and other assertions you may be exposed to when you go shopping for your next SAN.

»FUD: You need an expensive TOE (TCP/IP Offload Engine) card or HBA (host bus adapter) for good iSCSI performance or to boot from SAN.

Not so, for now anyway. Vendors from Adaptec to QLogic have long pitched specialized Ethernet cards that either off-load the overhead of computing TCP/IP headers and checksums through a TCP offload engine, or off-load the entire iSCSI stack in an iSCSI HBA that presents itself to the host like a SCSI or Fibre Channel HBA. For various reasons--including a desire on the part of some FC devotees to reduce iSCSI's cost advantage--vendors and pundits started asserting that you need one of these special cards to get reasonable iSCSI performance.

To bust this myth, we ran IOmeter on a Dell 1600SC server with a single 2.4-GHz Xeon processor accessing an EqualLogic PS-100E disk array. We ran read and write tests with transaction sizes from 512 bytes to 2 MB, using an Intel Pro 1000/MT gigabit Ethernet card and a QLogic QLA4010 iSCSI HBA.

As expected, CPU utilization was, on average, significantly lower when using the HBA, but even with the plain Ethernet card it never exceeded 18 percent. Performance was comparable overall; on our NWC custom test, which is our best approximation of a real-world server, the iSCSI HBA was 35 percent faster. Thing is, as we discuss later, most servers can't keep up anyway.

Until recently, the one clear advantage iSCSI HBAs had was the ability to boot from SAN. Software iSCSI initiators don't load until after the OS is loaded, creating a chicken-and-egg problem: You need the OS to load the iSCSI initiator and access the SAN, but you need to access the SAN to boot the OS.

Enter emBoot and its Netboot/i and Winboot/i products that let Windows servers load their OSs using PXE (Preboot Execution Environment) from an iSCSI disk array. Although they require a bit more work than an HBA--you must install the OS to a local drive, then use emBoot's tools to copy it to the SAN--the emBoot products (which differ primarily in that Winboot/i uses Microsoft's iSCSI initiator where Netboot/i has its own) do let you clone volumes for additional servers and use shared boot volumes for server farms or test environments.

Today, we recommend iSCSI HBAs or TOE cards only for the rare server that is running an application that is both CPU- and disk-intensive and needs every last little bit of performance. Typical Exchange, Web and file servers get along just fine with standard Ethernet cards.

We added the "for now" caveat because as iSCSI arrays get 10-Gbps iSCSI interfaces, and are fast enough to keep them fed, TCP off-load will make more sense.

»FUD: Using Ethernet for storage exposes your SAN to users and DoS attacks.

Our standard answer: "If an attacker has made it to your SAN--iSCSI or FC--you've got bigger problems."

Tell most administrators they can build a SAN from off-the-shelf Gigabit Ethernet components and they start picturing plugging servers and arrays into their existing switches. After a little thought they realize that leaving disk arrays exposed to end users, viruses, worms and DoS attacks is a bad idea, and set up an isolated iSCSI network.

Vendors tell us that 80 percent of iSCSI SANs use dedicated switches for iSCSI traffic, with no route from the SAN network to user segments. All but a few of the remaining 20 percent use dedicated VLANs on chassis switches. So, like on a Fibre Channel SAN, someone would have to have control of your servers to attack your SAN.

Also, unlike Fibre Channel, the iSCSI protocol supports CHAP authentication--including mutual, two-way authentication--and can be protected with IPsec encryption, in addition to LUN masking. We're waiting impatiently for the first IPsec-enabled high-performance iSCSI array.
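For the curious, CHAP's challenge-response exchange is simple enough to sketch. Here's a minimal Python illustration of the RFC 1994 computation that iSCSI reuses; the identifier, secret and challenge values are made up, and with mutual (two-way) CHAP the same check simply runs in both directions.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over the one-byte identifier,
    the shared secret and the challenge sent by the other side."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target challenges the initiator with a random value and an identifier...
challenge = os.urandom(16)
identifier = 0x01
secret = b"example-shared-secret"      # hypothetical secret known to both ends

# ...and the initiator proves it knows the secret without ever sending it.
response = chap_response(identifier, secret, challenge)

# The target computes the same digest and compares.
assert response == chap_response(identifier, secret, challenge)
```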

»FUD: 4-Gbps Fibre Channel will perform four times faster than iSCSI on Gigabit Ethernet.

[Chart: Planned Deployment]

This common misconception comes from a failure to understand the difference between speed and bandwidth. Physics dictates that data networks, like highways, have a speed limit. A higher-bandwidth network is less like the Autobahn and more like a highway with extra lanes: It can carry more traffic without backups, but each individual trip takes about the same amount of time.
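A quick back-of-the-envelope calculation makes the point. This sketch uses assumed, typical 2006-era numbers (a 4-KB I/O and a 5-ms disk service time), not our benchmark data:

```python
# How long a single 4-KB I/O spends on the wire at each link speed, versus a
# typical disk service time. All figures are assumptions for illustration,
# and protocol overhead is ignored.
IO_BYTES = 4 * 1024
DISK_SERVICE_MS = 5.0    # assumed average seek plus rotational latency

def wire_time_ms(gbps: float, payload_bytes: int = IO_BYTES) -> float:
    """Serialization delay for one I/O at a given link rate."""
    return payload_bytes * 8 / (gbps * 1e9) * 1000

for label, gbps in [("1-Gbps iSCSI", 1.0), ("4-Gbps FC", 4.0)]:
    wire = wire_time_ms(gbps)
    print(f"{label}: {wire:.3f} ms on the wire, {DISK_SERVICE_MS + wire:.3f} ms per I/O")

# 1-Gbps iSCSI: 0.033 ms on the wire, 5.033 ms per I/O
# 4-Gbps FC:    0.008 ms on the wire, 5.008 ms per I/O
# The fatter pipe shaves microseconds off an operation dominated by the disk.
```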

In the past we've done informal benchmarking with similar low-end arrays with iSCSI and Fibre Channel interfaces. The Fibre Channel devices generally have a 15 percent to 20 percent performance advantage.

For this article, our friends at Xiotech loaned us a Magnitude 3D array with 16 FC drives that had both 1-Gbps iSCSI and 4-Gbps FC interfaces. We were surprised when we ran our usual IOmeter benchmarks and the same array was more than five times faster connecting over Fibre Channel. Only when we were doing 2-MB sequential reads from the array's cache did the data rate exceed 1 Gbps. Because we've seen iSCSI performance on other arrays substantially faster than the Xiotech, this test tells us more about Xiotech's implementation than about iSCSI's capabilities.

The most important thing to know is that how fast your application is going to run, assuming disk I/O is the limiting factor, depends much more on the specifics of your disk array than on whether you use iSCSI or FC to connect your servers to it. The truth is, most servers aren't I/O bound. In fact, most of the servers in your data center are just sitting there most of the time waiting for your users to ask them to do something. Now that 10-Gbps Ethernet ports are becoming affordable, suddenly the FC guys are saying, "Well, most of the time our 4-Gbps pipe isn't full anyway, so there's no benefit from 10 Gbps."

»FUD: iSCSI infrastructures aren't really that much less expensive than their Fibre Channel equivalents.

We've heard estimates that the cost spread of similarly configured iSCSI and FC SANs is no more than 15 percent. We don't buy it.

Volume and competition in the Gigabit Ethernet switch market have driven the price of even top-end, dual-power-supply, non-blocking switches like Foundry's FWSX 448 down to around $100 per port. If you can give up a little switch robustness and use multipath I/O so your SAN can survive a switch failure, costs can be substantially less.

By comparison, there are only four FC switch vendors, and Brocade is in the process of absorbing competitor McData. Even a lower-end FC switch, such as the QLogic SANbox 5600, carries a street price of about $350 per port. Add in the cost of SFP optical transceivers, fiber-optic cabling and HBAs for your servers and we're talking about an $800 to $1,000 premium per server to run Fibre Channel.
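To put the per-server math in one place, here's a rough sketch. The switch-port prices are the ones cited above; the NIC, HBA, optics and cabling figures are our assumed 2006-era street prices, not quotes:

```python
# Approximate per-server connection cost: switch port + adapter + optics/cabling.
# Switch-port prices come from the article; the rest are rough assumptions.

def per_server_cost(switch_port: int, adapter: int, optics: int = 0, cable: int = 0) -> int:
    return switch_port + adapter + optics + cable

iscsi = per_server_cost(switch_port=100, adapter=50, cable=10)               # GbE NIC, copper
fc    = per_server_cost(switch_port=350, adapter=500, optics=150, cable=50)  # FC HBA, SFP, fiber

print(f"iSCSI per server: ${iscsi}")        # ~$160
print(f"FC per server:    ${fc}")           # ~$1,050
print(f"FC premium:       ${fc - iscsi}")   # ~$890, in line with the $800-to-$1,000 estimate
```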

If you're looking for the kind of big-honking-redundant-fabric, no-single-point-of-failure switch that the FC guys call a director, in deference to IBM's ESCON director of old, the ratio remains about the same.

But the real savings come when you want to extend your SAN over the WAN to support array-to-array replication for disaster recovery. Because iSCSI is a TCP/IP protocol, you can simply connect a router to your SAN switch and go. With FC you'll need special routers at each end, costing $15,000 or more, that encapsulate the FC protocol in TCP/IP. You'll also have to decide if you want to extend the logical SAN fabric between your sites or route between two independent fabrics.

»FUD: You need a $100,000 per year storage admin to run an FC SAN.

Many systems and network guys see FC as akin to black magic. A network designed by disk-drive engineers, FC doesn't follow the OSI model we all know and assigns new names--like World Wide Name, or WWN--to familiar concepts like MAC addresses. iSCSI brings not only the simplicity of Ethernet plumbing, but also storage devices that offer improved management interfaces and don't require 200 lines of CLI scripting to get something done.

For those brave souls who seek to learn how to manage an FC SAN, typical training curriculums from SNIA, Global Knowledge, Brocade and others are much too heavy on theory, and too light on practical applications. Who has time to sit through a three-day class on data encoding and packet formats?

Thankfully, FC vendors--driven in no small part by Microsoft's Simple SAN initiative--have lately made management more straightforward. For example, the Xiotech Magnitude 3D we used for benchmarking allowed us to do the whole FC setup from a single graphical console; creating LUNs and linking them to our servers took just a few minutes. A smart server administrator could be taught to use one of these simpler systems in a day or so. The Xiotech model will grow to four or five switches to support 100 or so servers.

Companies with $5 million invested in storage hardware don't care that they have to pay $100,000 for a storage admin.

»FUD: Fibre Channel products don't interoperate.

An old, and thankfully fading, truth. Hey, we'll acknowledge that FC vendors are making some strides.

Just two years ago many 1-Gbps FC products just wouldn't plug and play, period. Today you can choose array, HBA and switch vendors and expect that their gear will mostly work together. Some switch incompatibilities remain, and it's a rare SAN that has FC switches from more than one vendor working flawlessly with all advanced features enabled. But they are making progress--under pressure from iSCSI's interoperability. That makes enterprises the ultimate winner in this storage smackdown.

Howard Marks is founder and chief scientist at Networks Are Our Lives, a network design and consulting firm in Hoboken, N.J. Write to him at [email protected].

Basic Training

Some vendors, and even a few noted analysts, believe that even today's iSCSI implementations with iSNS and sophisticated RAID controllers are too complex. They insist that driving proprietary applications out of the drive array and just connecting disk drives to Ethernet can push storage costs down and empower user organizations to build innovative solutions.

Frankly, we don't see the need--there's value in keeping storage networks separate, for security and other purposes. But some new offerings are answering the call for storage simplicity anyway.

Zetera's SoIP (Storage over IP) protocol looks to eliminate complexity in the disk enclosure and use the IP network to manage some array functions. SoIP enclosures--currently available from NetGear and Bell Micro's Hammer Storage division--have a minimal Ethernet-to-disk interface without any complex RAID functionality. Each disk drive, and each RAID 0, 1 or 10 logical drive, in an SoIP enclosure gets an IP address from a DHCP server, and the server driver uses multicast UDP to communicate with the drives.

A server writing to an SoIP volume sends a multicast to the volume's IP address. The message includes the LBA (logical block address) range and is heard by all the enclosures that hold any part of the volume. Enclosures that have partitions including the blocks being accessed write the data to their drives and send acknowledgements.

SoIP can deliver incredible scalability because partitions can span many enclosures. RAID management is shared between the server's SoIP driver and the inherent behavior of multicast across the network. Because it relies on UDP multicasts, which aren't usually routed, it's a single-subnet proposition. Users needing wide-area replication will have to use host-based replication.
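To make the write path easier to picture, here's a conceptual sketch in Python. The multicast address, port and header layout are invented for illustration; Zetera's actual SoIP wire format is proprietary and not reproduced here.

```python
import socket
import struct

# Conceptual sketch only: this shows the shape of the idea, a block write
# addressed to a multicast group, carrying an LBA range, heard by every
# enclosure that holds part of the volume. Not the real SoIP format.

VOLUME_GROUP = ("239.1.2.3", 5000)   # hypothetical multicast address for one volume

def send_block_write(start_lba: int, data: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the subnet
    block_count = len(data) // 512
    header = struct.pack("!QI", start_lba, block_count)   # made-up header layout
    sock.sendto(header + data, VOLUME_GROUP)
    # Enclosures owning any blocks in [start_lba, start_lba + block_count)
    # would write their portion and acknowledge; the rest ignore the datagram.
    sock.close()

send_block_write(start_lba=2048, data=b"\x00" * 4096)
```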

Even more basic is AoE (ATA over Ethernet), which eliminates even the IP layer, sending ATA drive commands in Ethernet packets. AoE was developed by Coraid, which remains the primary manufacturer of AoE drive enclosures.

Coraid has released the protocol to the open-source community. A free initiator is included in most Linux distributions and downloadable for Windows servers. Coraid has also posted an open-source target for Linux on Sourceforge for the roll-your-own crowd; other vendors could extend this.

Coraid's initial implementation put each ATA drive on the network with its own Gigabit Ethernet port, relying on the host's logical volume manager for RAID functionality. Its newer products include RAID controllers, so a whole RAID set can share one MAC address. Like SoIP, AoE isn't routable, so it's strictly an in-the-data-center solution.
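As a rough illustration of how little is involved, the sketch below builds (without transmitting) a bare AoE frame. The 0x88A2 EtherType is AoE's registered value, but the header layout here is a simplified reading of the published spec, and the MAC, shelf and slot values are made up.

```python
import struct

# Conceptual sketch: assemble a minimal AoE frame to show that the protocol
# rides directly on Ethernet with no IP layer. Treat the exact field layout
# as illustrative rather than authoritative.

AOE_ETHERTYPE = 0x88A2          # AoE's registered EtherType
BROADCAST     = b"\xff" * 6     # AoE targets are addressed by shelf/slot, not IP

def aoe_ata_frame(src_mac: bytes, shelf: int, slot: int, tag: int) -> bytes:
    eth_header = BROADCAST + src_mac + struct.pack("!H", AOE_ETHERTYPE)
    ver_flags = 0x10            # version 1, no flags set
    # ver/flags, error, major (shelf), minor (slot), command=0 (ATA), tag
    aoe_header = struct.pack("!BBHBBI", ver_flags, 0, shelf, slot, 0, tag)
    return eth_header + aoe_header   # an ATA command block would follow here

frame = aoe_ata_frame(src_mac=b"\x02\x00\x00\x00\x00\x01", shelf=7, slot=0, tag=1)
print(len(frame), frame.hex())
```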

These products are low-cost, but there's nothing small about the technology. The Massachusetts Institute of Technology Media Lab, for example, is constructing a high-performance data storage array based on Zetera's Z-SAN technology to collect and analyze video and audio data to help researchers understand early childhood cognitive development.

Executive Summary

Ever since the IETF released the final standards in early 2003, iSCSI has been a classic disruptive technology. Don't let its still-modest market share fool you--as Clayton Christensen says in his classic business book, The Innovator's Dilemma (Collins, 2003), disruptive isn't about sales. It's about the path.

In "Storage Smackdown" we examine some of the FUD circulating about iSCSI. But here's the big picture: Old-guard tech vendors listen to customers who want bigger and faster. New tech is portrayed as too small and not powerful enough for the biggest shops--but it appeals to a whole new, and larger, customer group that can't afford old tech. These guys take the new tech and use it for new applications. Over time, the new tech gets better and starts being applicable to the old tech's applications, but remains less expensive.

Before iSCSI, most midsize enterprises with between 50 and 200 Windows servers, and maybe one or two Solaris or AIX servers to run ERP, missed out on the advantages of networked storage. Few could justify spending $2,000 to $3,000 per server to connect to a Fibre Channel SAN. Enter first-generation iSCSI disk arrays. Vendors used low-cost SATA drives to keep prices down, and suddenly midsize companies had a more level playing field. Even die-hard Fibre Channel shops benefit from iSCSI's downward pressure on pricing and standards adherence. It's all good.
