
Are SAN-Less Storage Architectures Faster?

As I discussed in a recent column, an ever-increasing number of solutions now provide a SAN-less infrastructure for virtual environments. Initially, these SAN-less designs were sold on simplicity (no SAN, fewer headaches), but now several vendors also claim a performance advantage by leveraging internal SSDs.

At first glance, the performance advantage might seem obvious, since the storage sits directly inside the hosts and I/O does not need to traverse a network. But that is not always the case -- it depends on how the SAN-less storage system is architected.

Many SAN-less designs stripe a file across all the hosts in the virtual cluster, much like a scale-out storage system stripes data across its nodes. Essentially, this is a virtualized scale-out storage system, with a VM on each host acting as a storage node. While this method does bring high availability to the design, it also introduces a network. If the SAN-less design stops here, it will have the same network-related performance limitations as a shared storage system.
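To make that concrete, here is a minimal Python sketch of the striping behavior; the host names and the 1 MB stripe unit are illustrative assumptions, not any vendor's design. Even when the VM runs on one of the hosts, most of its chunks land elsewhere and must be fetched over the network.

```python
# A minimal sketch of striping a VM's data across every host in the cluster.
# Host names and chunk size are illustrative assumptions.

CHUNK_SIZE = 1 * 1024 * 1024          # assumed 1 MB stripe unit
HOSTS = ["host-a", "host-b", "host-c", "host-d"]

def place_chunks(file_size, hosts=HOSTS, chunk_size=CHUNK_SIZE):
    """Map each chunk of a file to a host, round-robin across the cluster."""
    chunk_count = (file_size + chunk_size - 1) // chunk_size
    return {i: hosts[i % len(hosts)] for i in range(chunk_count)}

# A VM running on host-a still reaches most of its chunks over the network:
layout = place_chunks(file_size=10 * 1024 * 1024)
remote = sum(1 for host in layout.values() if host != "host-a")
print(f"{remote} of {len(layout)} chunks live on another host")
```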

[ For more on shared vs. local storage, see Is Shared Storage's Price Premium Worth It? ]

Many SAN-less systems take a different approach, keeping the data needed by a virtual machine (VM) completely intact on the host where that VM runs -- that way, the VM does not need to go across a network to fetch data. Many of these designs leverage internal PCIe-based flash storage or internal solid state drives (SSDs). If the VM can get all its data from an internal flash memory storage device, performance will be excellent.
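A rough sketch of that locality idea, using hypothetical names rather than any product's API: reads are served from the host's own flash whenever the block is resident there, and only fall back to the network when it is not.

```python
# A minimal sketch of the data-locality approach: if every block the VM
# needs is resident on the host's internal flash, reads never traverse
# the network. Class and names are illustrative assumptions.

class HostFlash:
    def __init__(self, resident_blocks):
        self.resident_blocks = set(resident_blocks)  # blocks on this host's PCIe flash/SSD

    def read(self, block):
        if block in self.resident_blocks:
            return "served from local flash"              # no network hop
        return "fetched from a peer host over the network"

flash = HostFlash(resident_blocks=range(4))
print(flash.read(2))   # local read
print(flash.read(9))   # this one would cross the network
```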

The challenge with this approach is making sure that VM data is protected against a failure inside the server, so that it is still available if the VM is migrated to another server. There are three basic techniques to ensure availability while taking advantage of local performance.

The first is a drop-through technique, in which the data is still stored on the local solid state storage device and is then written through to a shared storage device in real time. Basically, this is a split-write, or mirrored, approach. This method can add some latency on write traffic but should realize excellent performance on reads. It also, obviously, re-introduces a SAN and eliminates the advantage of being SAN-less.
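As a rough illustration of that split-write behavior (the device objects here are hypothetical stand-ins, not a real driver API), a write is acknowledged only after both the local SSD and the shared array have it, while reads come from the local copy alone:

```python
# A minimal sketch of the drop-through (split-write) technique: every write
# is mirrored synchronously to local flash and to shared storage, so write
# latency tracks the slower shared device, while reads stay local.

class Device:
    def __init__(self):
        self.blocks = {}
    def write(self, block, data):
        self.blocks[block] = data
    def read(self, block):
        return self.blocks[block]

class SplitWriteVolume:
    def __init__(self, local_ssd, shared_array):
        self.local_ssd = local_ssd          # fast internal flash, serves all reads
        self.shared_array = shared_array    # shared (SAN) copy for protection

    def write(self, block, data):
        # Not acknowledged until both copies are written.
        self.local_ssd.write(block, data)
        self.shared_array.write(block, data)

    def read(self, block):
        return self.local_ssd.read(block)   # reads see local flash latency only

vol = SplitWriteVolume(Device(), Device())
vol.write(0, b"data")
print(vol.read(0))
```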

More common is the second technique, discussed in Building The SAN-Less Data Center. This approach keeps all the data on the PCIe Flash Card/SSD in the host, as described above, but replicates a second copy to the other hosts in the striped fashion that scale-out storage systems use. This way, all the hosts have access to the secondary copy for migration in case of failure of the primary host, without needing a secondary SAN. Also, once the migration occurs, data can be re-copied into the new host so that access is again local.
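Here is a rough sketch of that second technique, again with assumed host names: the primary copy stays with the VM's host, a replica is striped across the peers, and after a migration the blocks are re-copied so access becomes local again.

```python
# A minimal sketch of keeping the primary copy local while striping a
# replica across the other hosts, then re-localizing after a migration.
# Host names and block counts are illustrative assumptions.

def place(primary_host, peer_hosts, blocks):
    """Primary copy on the VM's host; replica blocks striped across peers."""
    return {
        "primary": {b: primary_host for b in blocks},
        "replica": {b: peer_hosts[i % len(peer_hosts)] for i, b in enumerate(blocks)},
    }

def migrate(placement, new_host):
    """On failover, serve reads from the striped replica, then re-copy
    every block onto the new host so reads become local again."""
    placement["primary"] = {b: new_host for b in placement["replica"]}
    return placement

layout = place("host-a", ["host-b", "host-c", "host-d"], blocks=range(8))
layout = migrate(layout, new_host="host-b")
print(layout["primary"][0])   # 'host-b' -- data is local to the VM's new home
```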

Finally, there is the newest technique, discussed in The Benefits of a Flash Only, SAN-less Virtual Architecture.

Each of these options should deliver excellent performance, especially since all reads would come from locally attached storage. But each requires dual copies of data and, while not officially a SAN, some form of specialized network. The question remains: Is this performance better than a properly configured SAN, and if so, is it worth the limitations? I'll discuss that in my next column.
