
LAN And SAN Unite

ETHERNET EVERYWHERE
At present, both the Xsigo and 3Leaf appliances require InfiniBand for their physical connections to servers. It's an obvious choice: Designed in part for virtual I/O, InfiniBand offers very low overhead and network latency of less than 100 nanoseconds, comparable to that of a PC's local memory bus. It also scales to 20 Gbps, well beyond current Fibre Channel speeds and comparable to the bandwidth of the AMD and Intel chip interconnects.

InfiniBand is also cheaper than alternatives: A 10-Gbps InfiniBand host channel adapter (HCA) costs around $700, compared with at least $1,000 for an Ethernet NIC and more than $2,000 for a Fibre Channel HBA. Switch port prices show similar variation, though some users may not initially need a switch on the server side of the box. While switches are necessary for 3Leaf's planned memory networks, Xsigo's I/O Director has add-on InfiniBand modules that let it connect directly to servers.

Despite InfiniBand's advantages, most in the industry see Ethernet as the long-term future for a single, converged transport. Both Xsigo and 3Leaf plan to support it eventually, as do many larger players. In February, Cisco launched the Nexus 7000, a giant data center switch aimed at consolidating multiple networks into one. Unlike the startups, Cisco isn't even bothering to support InfiniBand, though it says it may add InfiniBand modules if there's enough customer demand.

The most compelling argument for moving to Ethernet is that everyone has it anyway. The persistent trend in networking has been Ethernet's growing dominance over every other technology. Though some users are replacing physical cables with Wi-Fi, that's really an extension of Ethernet, not a replacement for it. What started out as a way of linking PCs together has become a universal connection to the Internet, increasingly used for voice as well as data, and even for WAN services in addition to the LAN.

Several standards and initiatives, collectively known as Data Center Ethernet, aim to improve Ethernet's latency, giving it characteristics similar to Fibre Channel and InfiniBand. There are also plans to increase Ethernet's speed beyond 10 Gbps, though its usual tenfold speed boost probably isn't realistic in the short term.

"It'll be double-digit years before we see 100-Gig Ethernet on the market," says Koby Segal, COO at InfiniBand vendor Voltaire. "It's not just switches, but the whole ecosystem of the cables, connectors, and backplanes."

What It All Means

SERVER VIRTUALIZATION means that storage networks will be more critical than ever: Virtual servers need virtual storage.

NETWORK CONSOLIDATION is the end; I/O virtualization is the means. Uniting SAN and LAN into a single fabric can pay big dividends.

MEMORY NETWORKS are the next step after storage networks, but few apps will really need them for the foreseeable future.

INFINIBAND is currently the only realistic transport for a converged network that unites memory, storage, and Internet traffic, but that will change within a year or two.

100-GBPS ETHERNET is the long-term future, but it could still be a decade or more away. Waiting for it means being left behind.

The Ethernet community doesn't really dispute that timeline. The IEEE's 802.3ba working group expects to have a standard for 100-Gbps Ethernet ready by 2010, but that doesn't mean products will support it at full rate. In fact, the group's charter calls for the standard to include two rates: 40 and 100 Gbps. Though vendors usually compete to exceed standards, technical limitations mean that might not be possible this time. "We believe 40 is the most likely," says Ravi Chalaka, VP of marketing at Neterion, a startup focused on high-performance Ethernet.

The technical challenges of reaching 100 Gbps are formidable. For example, the current spec calls for a bit error rate of no more than one in 10^12, which means getting about 1 bit wrong for every 125 GB transmitted. That's acceptable over a low-speed link, but at 100 Gbps it would mean making an error on average every 10 seconds--something that would cause serious problems at the application layer, leading to delays and congestion as dropped packets are retransmitted. Vendors hope to reach a rate of one in 10^15, which would put nearly three hours between errors on average. But like the 100-Gbps speed itself, this is an aspiration, not a guarantee.
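To make that arithmetic concrete, here is a minimal Python sketch (the function name and the 100-Gbps figure are simply illustrative assumptions drawn from the numbers above) that computes the average interval between single-bit errors for a given bit error rate and line speed:

    def seconds_between_errors(bit_error_rate: float, line_rate_bps: float) -> float:
        # At a BER of 1e-12, one error occurs per 1e12 bits on average;
        # divide by the line rate to get the time needed to send that many bits.
        bits_per_error = 1 / bit_error_rate
        return bits_per_error / line_rate_bps

    # The two error rates discussed above, both at 100 Gbps (1e11 bits per second).
    for ber in (1e-12, 1e-15):
        t = seconds_between_errors(ber, 100e9)
        print(f"BER {ber:.0e} at 100 Gbps: one error every {t:,.0f} seconds")

At 10^-12 that works out to one error every 10 seconds; at 10^-15, one every 10,000 seconds, or just under three hours, matching the figures above.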

Of course, the same challenges apply when trying to scale any networking technology, and it's no coincidence that 40 Gbps is also the next speed ramp for InfiniBand. The technologies are so similar that Mellanox sells a 10-Gbps InfiniBand HCA that can also act as an Ethernet NIC. At 40 Gbps, both can reuse much technology originally developed for OC-768 WAN links, which run at almost the same speed. There's nothing comparably popular at 100 Gbps.

Of the Data Center Ethernet initiatives, the most important for storage looks to be Fibre Channel over Ethernet. Expected to be ready by next year, FCoE is supported by most of the industry, including IBM, Cisco, QLogic, and Brocade, and many vendors are already demonstrating proprietary implementations. The intention is that Ethernet switches will be able to speak Fibre Channel, or FC, natively, further blurring the distinction between LAN and SAN.
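To see what "speaking FC natively" means at the frame level, here is a rough Python sketch of the encapsulation idea: an ordinary Ethernet header whose EtherType (0x8906, the value assigned to FCoE) marks the payload as a Fibre Channel frame. This is a simplified illustration rather than the full FCoE format (real FCoE also carries a version field, start-of-frame and end-of-frame delimiters, and padding), and the MAC addresses and placeholder FC frame below are made up:

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType value assigned to FCoE

    def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        # 14-byte Ethernet header (destination MAC, source MAC, EtherType),
        # followed by the already-built Fibre Channel frame as the payload.
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        return eth_header + fc_frame

    # Hypothetical MACs and a zero-filled stand-in for a real FC frame.
    frame = build_fcoe_frame(bytes.fromhex("0e0000000001"),
                             bytes.fromhex("0e0000000002"),
                             b"\x00" * 36)
    print(len(frame), "bytes; EtherType", hex(int.from_bytes(frame[12:14], "big")))

The point is simply that FC frames ride inside ordinary Ethernet frames, so a switch that understands the FCoE EtherType can carry storage and LAN traffic over the same ports.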