In my last column I covered the viability of scale-up solutions, which essentially make the controller faster. At some point, however, that controller may reach its breaking point, and that might happen before you're ready to go through the storage refresh process. Using a controller offload technique can allow you to delay a storage refresh and derive more return on your original storage investment.
Controller offload techniques are technologies you can add to your storage infrastructure to reduce the load placed on a particular storage system. Basically, they help you solve the performance problem, or at least lessen its impact, without throwing out your current investment.
If you have one specific workload that is stealing storage I/O from other applications, it is an ideal candidate for isolation on a separate storage system. Solid state disk (SSD) has made this discussion more interesting. In the past, simply adding another storage system, or a faster one, rarely provided a large enough performance boost to justify the extra work of managing multiple storage systems. SSD makes the investment worthwhile.
[ For more on maximizing storage controller performance, see Are Storage Controllers Overworked? ]
This solution can be PCIe-based SSDs in the server or a shared SSD system on the SAN. As discussed in my recent video, the method you choose depends on your situation. As a general rule of thumb, if the data to be accelerated needs to be shared, or if there is a large number of workloads to be accelerated, an appliance-based approach may be more appropriate. PCIe is ideal for accelerating a small number of servers, or a compute-based scale-out architecture.
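The rule of thumb above can be sketched as a simple decision function. The function name, parameters, and the workload-count threshold are all illustrative assumptions, not a vendor formula:

```python
def recommend_ssd_placement(shared_data: bool, workload_count: int,
                            many_workloads_threshold: int = 10) -> str:
    """Hypothetical sketch of the column's rule of thumb:
    shared data or many workloads favor a shared SSD appliance on
    the SAN; a small set of servers favors server-side PCIe SSD."""
    if shared_data or workload_count >= many_workloads_threshold:
        return "appliance"
    return "pcie"

# A single unshared database server points to PCIe flash;
# a large virtualized farm sharing data points to an appliance.
print(recommend_ssd_placement(shared_data=False, workload_count=2))
print(recommend_ssd_placement(shared_data=True, workload_count=25))
```

In practice the threshold is a judgment call driven by budget and how much shared access the environment requires, not a fixed number.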
Permanent placement of data on an SSD is fine for specific workloads, but when you need a broader approach to boost the performance of a large number of servers and workloads, a caching solution may be easier to implement. As with the permanent placement described above, the best solution depends on your needs. As described in my article Requirements For Enterprise Server Side Caching, server-based SSD caching provides fast acceleration, eliminates the network bottleneck, and can substantially reduce traffic flow to the storage system.
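The way a cache offloads the controller can be illustrated with a minimal read-through cache sketch: hot blocks are served from a fast tier, and only misses reach the backing array. This is a generic LRU illustration under assumed names, not any vendor's caching implementation:

```python
from collections import OrderedDict

class ReadThroughCache:
    """Minimal sketch of transparent read caching: hits are served
    from a fast tier (standing in for server-side SSD); misses go to
    the backing store (standing in for the array behind the overworked
    controller) and the block is copied into the cache, evicting the
    least-recently-used block when full."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # stands in for the SAN array
        self.capacity = capacity       # cache size, in "blocks"
        self.cache = OrderedDict()     # stands in for the SSD tier
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # refresh LRU position
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1                      # traffic that reaches the array
        data = self.backing[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the coldest block
        return data

# Repeated reads of a hot block hit the array only once.
array = {i: f"block-{i}" for i in range(10)}
cache = ReadThroughCache(array)
for _ in range(5):
    cache.read(1)
print(cache.hits, cache.misses)  # 4 hits, 1 miss
```

The point of the sketch is the transparency: the application keeps issuing the same reads, while the load that actually reaches the storage controller drops to the miss traffic.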
Network caching, or systems that sit in front of the controller to provide broad-based acceleration, may be ideal for situations in which a range of workloads needs to be accelerated, such as server virtualization, or for environments where shared access to data needs to be managed. I discussed this in Cost Effectively Solving Oracle Performance Problems. The cache chosen has to match the environment: most appliance-based caches focus on a single protocol, and while there are several NFS-based solutions, only a couple of block-based Fibre Channel cache accelerators exist.
Thanks to SSD, the concept of deploying a separate storage area just for performance has appeal. The performance boost is worth the extra potential management burden. Caching, however, can provide much of the performance boost without the management burden since data is moved to SSD transparently. You do not need to replace your storage to address performance problems caused by an overworked controller.