Commentary by George Crump

Is Server-Based Storage Tiering Right For You?

Maybe we're making the whole job of improving network performance more complicated than it has to be.

Server-based tiering is going to be in the news a lot over the next few weeks, thanks to EMC's Thunder and Lightning announcements and, of course, all the other vendors' counters to them.

As I wrote last week, server-based caching and tiering--the process of moving data closer to the server--can help break network bottlenecks. But at some point you have to wonder: have we taken the whole concept of tiering too far? Are we making the placement of data too complex? Let's break it down.


What's the problem that server-based tiering is trying to solve?

Creating a tier of flash-based storage in the server is supposed to improve performance, but that is a broad swipe at what is typically a narrow, workload-specific problem. How does this solution compare with other performance improvement methods? The competing view is that solid-state storage systems on a high-speed storage network would offer much the same performance benefit. Another alternative: if you are going to go "local" with data, why not go local with all of your data and use one of the cloud storage designs?

Which performance problem does a server-based tier or cache of storage solve? The implication of moving data off of the array's mechanical or solid-state drives and placing it inside the server is that either the storage system or the network is the bottleneck, and that putting the most active data inside the server will improve performance.
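As a rough illustration of the idea, and not any particular vendor's implementation, a server-based flash tier or cache behaves much like a read-through cache: hot blocks are served from local flash, and misses fall through to the networked array. The short Python sketch below makes the concept concrete; the class name, block size, and capacity are invented purely for illustration.

from collections import OrderedDict

class ServerSideReadCache:
    # Illustrative read-through cache: hot blocks live on local flash,
    # misses fall through to the (slower) networked array.
    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read      # callable that reads a block over the SAN/NAS
        self.capacity = capacity_blocks       # size of the local flash tier, in blocks
        self.cache = OrderedDict()            # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            return self.cache[block_id]       # served locally: no trip across the network
        data = self.backend_read(block_id)    # miss: fetch from the array over the network
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:   # evict the coldest block
            self.cache.popitem(last=False)
        return data

# Example: wrap a simulated networked read with a 256-block local cache.
array = {i: b"x" * 4096 for i in range(10000)}
cache = ServerSideReadCache(lambda b: array[b], capacity_blocks=256)
print(len(cache.read(42)))  # first read goes to the "array"; repeat reads are local

The point of the analogy is that only reads of already-hot data avoid the network; cold reads, and all writes in a read-only cache, still cross it.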


The network performance improvement

Can the network cause a storage performance problem? Sure it can. But is a server-based storage tier the best way to solve it? As always, it depends. If you have an average network, 4Gb FC or 1Gb Ethernet, it is certainly possible, especially with server virtualization, to flood the storage network channel to the point that it becomes the bottleneck. Moving most of the active data, even if it is only reads, off of the network and into the server will certainly improve performance.
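To put rough numbers on that, here is a back-of-the-envelope Python calculation. The per-link throughput figures are nominal ballpark values, and the VM count, per-VM read rate, and 80% cache hit rate are assumptions chosen purely for illustration.

# Nominal per-direction throughput for common links (MB/s); real numbers vary with overhead and tuning.
links_mb_s = {"1GbE": 125, "4Gb FC": 400, "8Gb FC": 800, "10GbE": 1250, "16Gb FC": 1600}

vm_count = 20            # assumed: virtualized host running 20 VMs
read_mb_s_per_vm = 30    # assumed: average read demand per VM
cache_hit_rate = 0.8     # assumed: fraction of reads absorbed by a server-side flash tier

total_reads = vm_count * read_mb_s_per_vm        # 600 MB/s of aggregate read demand
remaining = total_reads * (1 - cache_hit_rate)   # read traffic still crossing the network

for name, bw in links_mb_s.items():
    print(f"{name:7s}: reads consume {total_reads / bw:4.0%} of the link without a cache, "
          f"{remaining / bw:4.0%} with an 80% hit rate")

On a 1Gb or 4Gb link the read traffic alone can saturate the channel, so offloading it to a server-side cache makes a dramatic difference; on 10GbE or 16Gb FC the same workload barely registers, which is exactly the trade-off discussed next.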

Is implementing a server-based tier worth the effort and expense if you are going to be upgrading to 8Gb FC, 10GbE, or 16Gb FC in the next year or so? Maybe not. As I noted in my article "Is PCIe SSD Always Faster?", when every server or host has access to that kind of bandwidth, and assuming that the OS or hypervisor knows how to use it efficiently, the performance delta between having the data local on a PCIe-based SSD in the server and on a shared SSD on the SAN is measured in microseconds.
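The microseconds claim can be sanity-checked with simple arithmetic. The figures below are illustrative assumptions, not benchmarks: roughly 50 microseconds for a flash read, a few tens of microseconds for a round trip on a fast, uncongested fabric, and several milliseconds for a mechanical disk read.

# Illustrative latency arithmetic; every figure here is an assumption, not a measurement.
flash_read_us = 50        # assumed service time of the flash device itself
fabric_rtt_us = 30        # assumed round trip on a fast, low-latency SAN
disk_read_us = 5000       # assumed service time of a mechanical disk read (seek + rotation)

local_pcie_ssd = flash_read_us                   # data on PCIe flash inside the server
shared_san_ssd = flash_read_us + fabric_rtt_us   # similar flash, reached over the SAN
shared_san_disk = disk_read_us + fabric_rtt_us   # mechanical storage over the SAN

print(f"local PCIe SSD : {local_pcie_ssd} us")
print(f"shared SAN SSD : {shared_san_ssd} us (delta vs. local: {shared_san_ssd - local_pcie_ssd} us)")
print(f"shared SAN disk: {shared_san_disk} us")

Against a mechanical back end, the local flash tier looks like a roughly 100x improvement; against shared solid-state storage on a fast network, the gap shrinks to the fabric round trip, which is the delta measured in microseconds.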

The advantage of a server-based storage tier is the ability to fix host-specific performance problems without having to upgrade the network or change out the storage system. For legacy environments that are having network performance problems, there is certainly value in the strategy. Another question is whether a single vendor is qualified to provide all of these tiers of storage and do it better than a vendor focused on just one specific type of storage.

If you are currently considering a refresh of your storage, your storage network, or both, you might want to see how that improves your performance before implementing a server-based tier or cache. You might also want to consider a solid-state storage appliance, or SSDs in your storage system, along with that network upgrade. If the key stumbling block to upgrading the storage network is price, there are many new options that can use inexpensive 10GbE for connectivity.


The storage system problem

Next time we'll look at storage system performance problems, which might actually be the main motivation for legacy storage manufacturers to bring server-based storage tiering solutions to market. Certainly most, if not all, storage systems can add solid-state drives to their chassis, but many don't get near the performance they should. Maybe server-based storage tiering is a way to hide that reality.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

Comments
Tom Isakovich, User Rank: Apprentice, 2/9/2012 4:59:34 AM
re: Is Server-Based Storage Tiering Right For You?
Hi George,

On this topic, a real competitor to server-side flash caching is not actually network storage (which serves a different purpose altogether), but rather plain old DRAM. The server-side caching provided by Lightning (VFCache) is read-only. As a result, persistence is not critical. Compared to a 300GB PCIe SLC flash card (formatted capacity closer to 260 GB) at $15/GB (about $4500), you can add 256GB of DRAM for about $3000. DRAM is not only less expensive, it is vastly faster, does not tie up a PCIe slot, does not require special blade form factors, does not require host-based software, and is something the server manufacturer can easily integrate without cracking open the box. If small in size and read-only, server-side flash cache is of rather limited utility in comparison, at least in the current first generation that suffers from limited VMware support, SLC's high cost, and a lack of write caching.

Any form of server-side caching, whether DRAM or PCIe flash, is of course complementary to high-performance network storage. In fact, the performance gains of server-side caching are incremental to the performance gains provided by flash-based network storage. In short, adding more memory-based technology instead of spindles, on both the server side and the network storage side, is always a good thing.
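(For readers following the arithmetic in the comment above, the quoted figures work out as below; this is simply the commenter's numbers restated, not additional data.)

# Cost comparison restated from the figures quoted in the comment above.
flash_raw_gb, flash_usable_gb, flash_price_per_gb = 300, 260, 15
dram_gb, dram_cost = 256, 3000

flash_cost = flash_raw_gb * flash_price_per_gb   # about $4500 for the PCIe SLC card
print(f"PCIe SLC flash: ${flash_cost} for ~{flash_usable_gb} GB usable "
      f"(${flash_cost / flash_usable_gb:.2f} per usable GB)")
print(f"DRAM          : ${dram_cost} for {dram_gb} GB "
      f"(${dram_cost / dram_gb:.2f} per GB)")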