Network Computing is part of the Informa Tech Division of Informa PLC


Saving The SAN

In my recent column The Upside-Down SAN, I talked about how some technologies such as server-side caching are affecting the role of the storage area network (SAN). These technologies want the SAN to be a capacity-focused storage tank and want performance-sensitive data moved into the server on a flash memory device. Although this strategy has a lot going for it, you don't always need to turn the SAN upside down to get performance.

With server-side caching techniques, all of the active data (the active read data, at least) is now stored on a flash device inside the server. This avoids network and storage system latency. To some extent it also simplifies shared storage design, because the shared storage no longer needs to be concerned with high-performance transfers.
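The idea behind server-side read caching can be sketched in a few lines. This is a minimal, illustrative model, not any vendor's implementation: the dictionary stands in for the server-side flash device, and the backend callable stands in for a read from the SAN volume.

```python
# Minimal sketch of a server-side read cache. The dict "flash" stands in
# for the server's flash device; "backend" stands in for a SAN read.

class ReadCache:
    def __init__(self, backend):
        self.backend = backend   # callable that reads a block from the SAN
        self.flash = {}          # stand-in for the server-side flash device

    def read(self, block_id):
        # Cache hit: served locally, no network or array latency
        if block_id in self.flash:
            return self.flash[block_id]
        # Cache miss: fetch from the SAN, then keep a local copy
        data = self.backend(block_id)
        self.flash[block_id] = data
        return data

san_volume = {1: b"alpha", 2: b"beta"}       # toy stand-in for the SAN
cache = ReadCache(lambda blk: san_volume[blk])
cache.read(1)                                 # first read goes to the SAN
hit = cache.read(1)                           # repeat read served from flash
```

Note that only reads are absorbed this way; writes (and cache coherency across servers) are exactly where the sharing problem discussed below comes in.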

The advantages of high-speed SAN

If you can fix the performance issues, then the SAN has distinct advantages over server-side caching. It is by its very nature shared, which is critical for many database environments and, of course, virtualized server environments. Although server-side caching technology might eventually be able to provide some form of sharing, it certainly is not there yet.


The other advantages of a SAN over server-side caching are the same as its advantages over any direct attached storage device. In general, SANs make data protection and data management easier and more efficient.

Fixing the SAN

The big advantage that server-side caching has over the SAN is performance. A SAN can bottleneck in a number of places, but in general performance problems occur in three areas: the network (including the storage system's bandwidth capability), the storage controller, and the storage device itself.

Most of the time we--myself included--get too focused on the device aspect of this. But SAN storage device performance can be addressed, to a degree, by using the same technology that the server-side caching vendors use: flash-based solid-state storage. Although not all flash devices are created equal, nor implemented the same way, almost any flash design will outperform an overworked hard disk.

The network is also relatively straightforward to fix from a performance perspective. Eight-gigabit (8Gb) Fibre Channel and 10 Gigabit Ethernet (10GbE) are becoming less expensive every day. The challenge storage networks do have is their complexity, but this is not a SAN disadvantage compared to server-side caching: even with the upside-down SAN, you are going to need a storage network of some kind. Storage networks are getting easier, but the complexity of storage networking is a reality that both designs have to deal with.

Vendors are making storage networks easier to manage and third-party tools can provide a broader view. As I noted in What Your SAN Fabric Manager Isn't Telling You, the tools not only can help you manage your current storage network, you might be able to optimize it so you don't even have to upgrade right away.

The big challenge in storage performance is how well the controller--the storage computer--is handling all of this network and device I/O. The controller's job is to get data to and from the network and storage devices as fast as possible. In an increasing number of instances, it is becoming the big bottleneck in storage performance. These controllers need to be designed to handle massively parallel I/O and designed with flash storage in mind.
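Why parallel dispatch matters to a controller can be shown with a toy model. In this sketch (timings are simulated, not real device latencies), each "device read" takes a fixed 10ms; a serial loop leaves every other device idle, while dispatching all requests at once finishes in roughly the time of a single access.

```python
# Toy model of serial vs. parallel I/O dispatch in a storage controller.
# Each device access is simulated with a 10ms sleep (an assumption for
# illustration, not a measured flash latency).
from concurrent.futures import ThreadPoolExecutor
import time

def device_read(device_id):
    time.sleep(0.01)                     # stand-in for one device access
    return f"data-from-{device_id}"

requests = list(range(8))                # 8 outstanding read requests

start = time.perf_counter()
serial = [device_read(d) for d in requests]        # one device at a time
serial_t = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:    # all devices in flight
    parallel = list(pool.map(device_read, requests))
parallel_t = time.perf_counter() - start

print(f"serial {serial_t:.3f}s vs parallel {parallel_t:.3f}s")
```

The serial pass takes roughly eight device-times; the parallel pass takes roughly one. Flash makes this gap matter: a disk-era controller built around serial queues simply can't feed flash fast enough.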

When designed correctly, the controller can remove itself as the last potential bottleneck. We have seen SAN-attached flash-based storage appliances that are optimized for high network throughput and low-latency flash memory turn in performance results that are only a couple of dozen microseconds slower than a directly attached PCIe card. Another key here: the data services that the controller provides must either be simplified or delivered more efficiently, so they don't become a burden on the controller.

As I discuss in one of my Chalk Talk videos, PCIe SSD or Appliance SSD, this is not an either-or situation. A mix of the technologies is probably more appropriate than going all in on one or the other. For the typical data center, it makes sense to use flash-only or flash-enhanced shared storage for the bulk of the workload. Then use PCIe SSD for a smaller set of workloads that have unique performance demands.
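The mixed deployment described above amounts to a simple placement policy: route the handful of latency-critical workloads to server-side PCIe SSD and everything else to the shared flash SAN. A sketch, with a hypothetical latency-budget cutoff (the 100-microsecond threshold is an assumption for illustration):

```python
# Sketch of a mixed-tier placement policy. The 100us cutoff and the
# workload list are illustrative assumptions, not recommendations.

def place_workload(name, latency_budget_us):
    # Sub-100us budgets (assumed cutoff) justify server-side PCIe SSD;
    # everything else goes to shared flash-enhanced SAN storage.
    if latency_budget_us < 100:
        return (name, "server-side PCIe SSD")
    return (name, "shared flash SAN")

workloads = [
    ("trading-db", 80),      # unique performance demands
    ("vm-farm", 500),        # bulk workload, fine on shared storage
    ("file-share", 2000),
]

placements = [place_workload(name, budget) for name, budget in workloads]
for name, tier in placements:
    print(f"{name:12s} -> {tier}")
```

In practice the decision also weighs sharing and data-protection needs, which pull toward the SAN side even for some fast workloads.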


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.
