Storage managers are becoming increasingly confident that solid state storage (SSS) is reliable and, when applied correctly, will provide a performance boost to their environments. The challenge is how to apply solid state technology properly so that applications can benefit without a rearchitecting of the current application and storage infrastructure.
The historical use case for SSS, and still the most common, is to speed up the performance of database applications. In most cases, specific elements within the application are moved to the solid state disk tier: indexes, redo or undo logs, or increasingly the database itself. In these environments, there is a known problem, and using solid state disk usually leads to greater revenue generation potential. The required changes and potential rearchitecting of the environment were worth the potential return on investment (ROI).
There are applications and environments that can be enhanced by SSS but that may not have the clear ROI that revenue-generating databases do. In these environments, SSS has to be deployed more broadly so its cost can be spread across a greater number of applications. This is where SSS implementation and cost justification become more complicated, but there are several SSD first steps that may be ideal for the storage manager.
The first and potentially simplest step is using SSS as a cache. Caching has been around forever and its algorithms are well understood. Solid state enhances it by enabling significantly larger cache sizes than the RAM-based caches of the past. In this case, SSS is leveraged either within a single server or via an appliance to accelerate a broad range of applications. As we discussed in our recent article "How Data Centers Can Benefit From SSD Today", these cache systems come in several forms and are becoming more advanced and intelligent. Some can now accelerate writes as well as reads, and others have very sophisticated analysis tools that improve cache accuracy. Another strategy, as we discuss in "Scalable NFS Acceleration With Enterprise SSD Cache", is to use caching systems so dense and so scalable that the entire database environment can be cached. There is no need to worry about a cache miss if the whole environment is in cache.
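To make the "well understood algorithms" point concrete, here is a minimal sketch of a least-recently-used (LRU) read cache, the kind of policy many SSD caching layers build on. The class and method names are illustrative only, not any vendor's API; the dictionary stands in for the SSD tier and `backing_read` for the slower disk array.

```python
from collections import OrderedDict

class LRUReadCache:
    """Minimal least-recently-used read cache sketch.

    The OrderedDict stands in for the fast SSD tier;
    'backing_read' stands in for the slow tier (disk array).
    """

    def __init__(self, capacity, backing_read):
        self.capacity = capacity          # number of blocks the SSD tier can hold
        self.backing_read = backing_read  # callable: block_id -> data
        self.cache = OrderedDict()

    def read(self, block_id):
        if block_id in self.cache:
            # Cache hit: served from SSD; mark the block most recently used.
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: fetch from the slow tier and promote it to the SSD tier.
        data = self.backing_read(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            # Evict the least-recently-used block to make room.
            self.cache.popitem(last=False)
        return data
```

The analysis tools the article mentions essentially refine this promotion/eviction decision, so that the blocks kept on the SSD are the ones most likely to be read again.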
There are two key benefits to using a caching technology as your first step into SSS. First, no or limited changes have to be made to the infrastructure. Simply place these devices in the server or in the storage infrastructure, and as soon as the data is analyzed, performance will begin to accelerate. Write cache systems show that performance boost almost instantly, but special attention needs to be paid to the availability of those systems.
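The availability concern with write caching comes down to where an acknowledged write lives. A write-through cache updates the backing store before acknowledging, so nothing is lost if the cache device fails; a write-back cache acknowledges from the SSD and flushes later, which is faster but is exactly why those systems need mirrored or otherwise protected cache hardware. A rough sketch of the two behaviors (class and attribute names are illustrative, with a plain dict standing in for the array):

```python
class WriteCache:
    """Sketch contrasting write-through and write-back caching.

    Write-through: every write also lands on the backing store
    before it is acknowledged -- safe, but slower.
    Write-back: writes are acknowledged from the cache and flushed
    later -- fast, but dirty data is at risk if the cache device
    fails before a flush.
    """

    def __init__(self, backing_store, write_back=False):
        self.backing_store = backing_store  # dict standing in for the disk array
        self.write_back = write_back
        self.cache = {}
        self.dirty = set()  # blocks not yet flushed (write-back mode only)

    def write(self, block_id, data):
        self.cache[block_id] = data
        if self.write_back:
            self.dirty.add(block_id)  # acknowledge now, flush later
        else:
            self.backing_store[block_id] = data  # persist before acknowledging

    def flush(self):
        # Push any dirty blocks down to the backing store.
        for block_id in self.dirty:
            self.backing_store[block_id] = self.cache[block_id]
        self.dirty.clear()
```

In write-back mode, the window between `write` and `flush` is where the availability engineering (mirroring, battery or flash protection) earns its keep.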
The second benefit is that the solid state investment, especially when deployed in the storage infrastructure, is now available across a wide range of servers and applications. Caches are ideal for situations where different applications could use a performance boost at different times. For example, the cache could mitigate a desktop virtualization boot storm in the morning and then service key database applications during the rest of the business day.
Caching isn't the only solution for widespread SSS deployment, but it is one of the ideal first steps, and for some enterprises it may be all they need. There are, however, other potential first steps to consider, and in our next entry we will look at some of them.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.