As I stated in our article "Scaleable NFS Acceleration with Enterprise Solid State Cache," one of the more popular use cases for solid state storage (SSS), especially initially, is as a cache. This method is a particularly popular way to accelerate NAS-based systems. The challenge is that caches, by definition, are refreshed constantly, and that refresh activity means a lot of write traffic. The smaller the cache area and the more active the data set, the more frequent those updates will be.
A simple solution to the frequent cache update problem is to make the cache so large that it rarely needs to be updated. In other words, a very large portion of the active data set is stored in cache most, if not all, of the time. This not only minimizes the number of writes to the cache; it also means the caching engine can expend less processing power analyzing which data should be in cache and which should not.
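The relationship between cache size and cache-update writes can be illustrated with a toy LRU cache simulation. The block counts and access pattern below are hypothetical, chosen only to show the trend: as the cache approaches the size of the active data set, misses (and therefore cache-fill writes) drop sharply.

```python
import random
from collections import OrderedDict

def simulate_cache(cache_blocks, accesses, working_set):
    """Count cache-fill writes for an LRU cache of a given size."""
    random.seed(1)          # same access sequence for every run, for a fair comparison
    cache = OrderedDict()   # block id -> None, ordered least- to most-recently used
    fills = 0               # every miss writes one block into the cache
    for _ in range(accesses):
        block = random.randint(0, working_set - 1)
        if block in cache:
            cache.move_to_end(block)        # hit: no cache write needed
        else:
            fills += 1                      # miss: block is written into cache
            cache[block] = None
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict the least recently used block
    return fills

# Hypothetical workload: 10,000-block active data set, 100,000 accesses
for size in (1_000, 5_000, 10_000):
    print(f"cache of {size:>6} blocks -> {simulate_cache(size, 100_000, 10_000)} cache writes")
```

Once the cache can hold the entire 10,000-block working set, each block is written into cache at most once, no matter how many times it is read afterward.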
Cache Writes To RAM
Another caching technique that should help is developing a reliable write cache area based on DRAM, which does not suffer the write performance and wear penalties that flash memory does. DRAM is perfect for writes, since in most environments writes are small and transient. A file or block of data is usually very active right after it is created, but the amount of live, active data is typically tiny compared with the older, less active data on a storage system. The DRAM area will need to be smaller because of cost, and it will need to be protected by batteries or capacitors in case of power loss. Those technologies have matured greatly over the past few years and can power the DRAM area long enough for it to flush its contents to flash or mechanical storage.
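A minimal sketch of the idea, with an illustrative class and API (not any vendor's actual design): writes land in a DRAM-resident dirty map, repeated writes to the same hot block coalesce in memory, and only a flush pushes data down to the slower flash or disk tier. The flush step is exactly what battery or capacitor protection must keep alive through a power loss.

```python
class WriteBackCache:
    """Illustrative DRAM write cache: absorb writes in memory,
    flush them to slower backing storage in batches."""

    def __init__(self, backing, flush_threshold=4):
        self.backing = backing            # dict standing in for the flash/disk tier
        self.dirty = {}                   # DRAM-resident dirty blocks
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.dirty[block] = data          # rewrites of a hot block coalesce here
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def read(self, block):
        # serve hot data from DRAM first, fall back to the backing tier
        return self.dirty.get(block, self.backing.get(block))

    def flush(self):
        # on power loss, batteries/capacitors keep DRAM alive
        # long enough to perform exactly this step
        self.backing.update(self.dirty)
        self.dirty.clear()

backing = {}
cache = WriteBackCache(backing)
for i in range(10):
    cache.write("block-0", i)   # ten application writes to one hot block
cache.flush()                   # ...become a single write to the backing tier
```

Ten rewrites of "block-0" reach the flash/disk tier as one write carrying only the final value, which is the write-reduction benefit the article describes.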
In addition to large caching and write caching, storage suppliers should be taking a very hard look at using storage optimization technologies like compression and deduplication in conjunction with flash-based storage. For the solid-state purist, this may sound like sacrilege because of the latency introduced as the data is optimized. Certainly some performance loss will occur, but in almost every case that storage would still perform significantly better than a mechanical hard drive alternative. Most vendors look at storage optimization as a means to reduce SSS cost, and it certainly will, since more data can now be stored in less space. The other important impact of storage optimization is that less data is written to the same space. In the flash world, that means an important reduction in writes.
For this to work, though, the storage optimization has to be inline, meaning it has to be done before the data is written to the flash storage area. Post-process optimization would actually worsen the write problem and potentially shorten the life of the flash area, since data would be written to flash, analyzed, and then rewritten in its optimized state.
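The inline approach can be sketched in a few lines. In this illustrative example (the class and its API are hypothetical), each block is fingerprinted before it touches the "flash" store; a duplicate block triggers only a metadata update, never a second physical write.

```python
import hashlib

class InlineDedupStore:
    """Illustrative inline deduplication: fingerprint data *before*
    it reaches flash, so duplicate blocks are never written twice."""

    def __init__(self):
        self.blocks = {}       # fingerprint -> data (stands in for flash)
        self.index = {}        # logical address -> fingerprint (metadata)
        self.flash_writes = 0  # physical writes actually issued to flash

    def write(self, address, data):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = data        # only unique data reaches flash
            self.flash_writes += 1
        self.index[address] = fp          # duplicates cost a metadata update only

    def read(self, address):
        return self.blocks[self.index[address]]

store = InlineDedupStore()
for addr in range(100):
    store.write(addr, b"same payload")    # 100 logical writes...
print(store.flash_writes)                 # ...one physical flash write
```

A post-process design would instead write all 100 blocks to flash first and reclaim the duplicates later, which is precisely the extra wear the paragraph above warns against.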
The technology that suppliers use has a role to play in making sure you get maximum life from your SSS investment. Larger and more intelligent caches, DRAM write caching, and optimization of the flash storage area are good examples. There are two other, more radical-seeming techniques that virtually eliminate the concern: pure solid-state storage built entirely on DRAM or entirely on flash. Those approaches deserve an entry of their own, and we will discuss them in our next entry.