
LAN And SAN Unite

Convergence is about more than just voice and data. Storage and networking vendors promise that the next-generation network will unite local and storage area networks, virtualizing both over a single network at 20 Gbps, 40 Gbps, or even 100 Gbps. And it isn't just the networks that are coming together. The future SAN also involves convergence of storage with memory. The same principles that originally abstracted storage out of a local server can be applied to RAM, too, while storage targets shift from hard disks to flash memory.

Longtime SAN users might feel a sense of déjà vu. After all, SAN and LAN convergence was an early claim of iSCSI, which promised to make Fibre Channel unnecessary by routing storage traffic over IP networks. While iSCSI is a viable option, it's mostly used in low-end networks that need speeds of 1 Gbps or less. And even here, many users prefer to keep the SAN separate from the LAN, even if both are built from the same commodity Gigabit Ethernet switches.

The new push toward unified networks differs from iSCSI in both its ambition and the resources behind it. Vendors from across the networking and storage industries are collaborating on new standards that aim to unite multiple networks into one. Startups Xsigo and 3Leaf Systems already have shipped proprietary hardware and software aimed at virtual I/O that can converge SAN with LAN, though not yet memory.


This industry isn't talking consolidation just for consolidation's sake. The catalyst is server virtualization, which depends on either a SAN or network-attached storage to separate workloads from their data. Without networked storage, the flexibility inherent in virtualization is reduced, because data must be moved or replicated whenever a virtual machine is set up or torn down. Decoupling storage from processing makes it easier to move workloads between physical servers.
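To see why in the simplest terms, consider the toy Python sketch below. It's a conceptual model only; the class names and sizes are invented, not drawn from any product. A VM whose disk sits on shared storage can be handed to another host by updating a pointer, while one with a local disk drags its data along.

# Toy model: why shared storage makes it cheap to move a workload.
# All names and numbers here are illustrative, not from any vendor.

class VirtualMachine:
    def __init__(self, name, disk_location, disk_gb):
        self.name = name
        self.disk_location = disk_location  # "san" or the name of a host
        self.disk_gb = disk_gb

def gigabytes_to_copy(vm, destination_host):
    """How much data has to move along with the VM."""
    if vm.disk_location == "san":
        return 0          # every host can already reach the disk
    return vm.disk_gb     # local disk: the data travels with the VM

print(gigabytes_to_copy(VirtualMachine("web01", "san", 200), "hostB"))   # 0
print(gigabytes_to_copy(VirtualMachine("db01", "hostA", 200), "hostB"))  # 200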

The big disagreement is about what this single network will be. This time around, no one is talking about routing everything over IP; the low-latency and low-overhead requirements of data center networks dictate that the only realistic choice is to use a fully switched fabric, meaning the converged network will be confined to the data center. But will it run over Ethernet, Fibre Channel, or InfiniBand?

Virtualization also brings a greater urgency to network consolidation. A single nonvirtualized server usually has only two network connections: a network interface card for the LAN and a host bus adapter for the SAN. In a server running multiple VMs, each virtual machine needs its own NIC and HBA, which quickly becomes unmanageable unless the adapters are virtual. And if the networks need to be virtualized anyway, aggregating them over a single high-performance transport makes the data center more flexible.
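A minimal Python sketch of that multiplexing idea follows; the tag format and class names are invented for illustration, and real converged adapters do this in silicon. Each VM's virtual NIC and virtual HBA hands its frames to one shared uplink with a per-adapter tag, and the far end sorts them back out.

# Conceptual model of many virtual adapters sharing one physical link.
# The tagging scheme here is invented for illustration only.

from collections import defaultdict

class ConvergedUplink:
    """One physical cable carrying LAN and SAN traffic for all VMs."""
    def __init__(self):
        self.wire = []  # list of (adapter_id, traffic_type, payload)

    def send(self, adapter_id, traffic_type, payload):
        self.wire.append((adapter_id, traffic_type, payload))

    def demux(self):
        """What the far end sees, grouped back per virtual adapter."""
        per_adapter = defaultdict(list)
        for adapter_id, traffic_type, payload in self.wire:
            per_adapter[adapter_id].append((traffic_type, payload))
        return per_adapter

uplink = ConvergedUplink()

# Three VMs, each with a virtual NIC (LAN) and virtual HBA (SAN),
# all funneled over one uplink instead of six separate cables.
for vm in ("vm1", "vm2", "vm3"):
    uplink.send(f"{vm}-vnic", "ethernet", "ip packet")
    uplink.send(f"{vm}-vhba", "fibre-channel", "scsi command")

for adapter, frames in uplink.demux().items():
    print(adapter, frames)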

Storage is just the first step. The SAN has been so successful that companies from Cisco Systems to Intel-backed startup 3Leaf see remote memory as the next step, doing for RAM what the SAN did for disk drives. "Most servers have enough compute power, but they're constrained by the number of memory slots available," 3Leaf CEO B.V. Jagadeesh says.


The slots in a typical server can physically support about 32 GB of locally installed memory, whereas 64-bit CPUs and operating systems support 16 TB or more. In 3Leaf's envisaged architecture, a CPU in one server is connected to other servers that take on roles analogous to storage targets. The CPUs are linked through Advanced Micro Devices' Coherent HyperTransport and Intel's QuickPath--technologies developed for interconnecting CPUs within a PC, but which 3Leaf says it can extend to run over a network using a special chip that plugs into one of the processor sockets on each server. The same network carries one or more virtual SAN or LAN links, so that only one physical cable is needed for all three types of traffic.
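The actual mechanism is proprietary hardware, but the capacity arithmetic can be sketched in a few lines of Python. The numbers and server names below are illustrative only: allocations that exceed the roughly 32 GB of local slots spill over to donor servers on the fabric.

# Toy model of memory spilling over to remote servers, in the spirit of
# the architecture described above. Names and numbers are illustrative;
# the real products do this in hardware, transparently to the OS.

LOCAL_CAPACITY_GB = 32  # roughly what local slots support, per the article

class MemoryPool:
    def __init__(self, donors):
        self.local_used = 0
        self.donors = dict(donors)   # donor server -> free GB
        self.placements = []         # (allocation_gb, where)

    def allocate(self, gb):
        if self.local_used + gb <= LOCAL_CAPACITY_GB:
            self.local_used += gb
            self.placements.append((gb, "local"))
            return "local"
        for server, free in self.donors.items():
            if free >= gb:
                self.donors[server] -= gb
                self.placements.append((gb, server))
                return server
        raise MemoryError("no local or remote capacity left")

pool = MemoryPool({"donor1": 64, "donor2": 64})
print(pool.allocate(24))   # fits in local slots
print(pool.allocate(40))   # exceeds remaining local capacity -> donor1
print(pool.allocate(48))   # donor1 has only 24 GB left -> donor2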

How real is this? 3Leaf says its chip is in the test phase, with a version for AMD due to ship before the end of the year and one for Intel a year later. (As with multiple CPUs on the same server, it won't be possible to mix AMD and Intel processors together, as each uses a different proprietary bus.) 3Leaf already ships a box that can virtualize Fibre Channel SANs and Ethernet LANs, as does competitor Xsigo, which is backed by Juniper.

3Leaf calls its box an I/O Server, reflecting that it's built from standard PC components and could be licensed by server vendors; Xsigo calls its product an I/O Director to emphasize that it's a SAN appliance. But architecturally, both work in the same way. Virtual NICs and HBAs within each server (or each VM) tunnel Ethernet and Fibre Channel traffic to the box, where it's linked to a SAN and LAN via physical Ethernet and Fibre Channel. These aren't conventional virtual LANs, as the boxes don't terminate connections or have MAC addresses. As far as switches on the SAN and LAN are concerned, traffic goes straight to the virtual adapter.
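A rough Python picture of that data path, with invented names and frame formats, is below: each frame from a virtual NIC or HBA is wrapped with an adapter tag, carried to the box, then unwrapped and forwarded out the matching physical port with its source address untouched, which is why external switches still see the virtual adapter as the endpoint.

# Conceptual sketch of the I/O appliance data path described above.
# Names and frame formats are invented for illustration.

def encapsulate(adapter_id, network, frame):
    """What a virtual NIC/HBA driver sends over the converged link."""
    return {"adapter": adapter_id, "network": network, "frame": frame}

def forward(tunneled, uplinks):
    """What the appliance does: unwrap the frame and pass it through
    unchanged to the matching physical port. It never rewrites the
    source address, so external switches see the virtual adapter."""
    frame = tunneled["frame"]
    uplinks[tunneled["network"]].append(frame)
    return frame

uplinks = {"ethernet": [], "fibre_channel": []}

lan_frame = {"src_mac": "02:00:00:00:00:01", "payload": "http request"}
san_frame = {"src_wwpn": "20:00:00:00:c9:00:00:01", "payload": "scsi read"}

forward(encapsulate("vm1-vnic", "ethernet", lan_frame), uplinks)
forward(encapsulate("vm1-vhba", "fibre_channel", san_frame), uplinks)

print(uplinks["ethernet"])       # frame arrives with the VM's own MAC
print(uplinks["fibre_channel"])  # frame arrives with the VM's own WWPN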

Impact Assessment
