A classic case for implementing SSS is a performance bottleneck in the storage I/O path that causes an application to wait on storage. As we discussed in our recent webinar “Understanding Solid State Storage Performance,” this can be caused either by a single performance-demanding database application or by multiple servers all requesting data from the same data set at nearly the same time. Our focus here is the I/O workload: the goal is to know your I/O workload, identify which files are being accessed the most, and move those to an SSS system.
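The idea of ranking files by access volume can be sketched in a few lines. This is a minimal illustration, not a real profiling tool: the `io_trace` records and file paths are hypothetical stand-ins for data you would gather with an actual I/O tracing or monitoring utility in your environment.

```python
from collections import Counter

# Hypothetical I/O trace: (file_path, bytes_transferred) records.
# In practice these would come from an I/O monitoring tool, not a
# hard-coded list; the sample data below is purely illustrative.
io_trace = [
    ("/db/orders.ibd", 4096), ("/db/orders.ibd", 8192),
    ("/db/customers.ibd", 4096), ("/logs/app.log", 512),
    ("/db/orders.ibd", 4096), ("/db/customers.ibd", 4096),
]

def hottest_files(trace, top_n=3):
    """Rank files by total bytes accessed; the busiest files are the
    leading candidates to move onto SSS."""
    totals = Counter()
    for path, nbytes in trace:
        totals[path] += nbytes
    return totals.most_common(top_n)

print(hottest_files(io_trace))
```

Running this against a real trace would surface the small set of hot files that deliver most of the benefit when moved to solid state, without relocating the entire data set.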
With the decline in the cost of SSS and its variety of formats, there is a temptation to simply throw technology at the problem instead of understanding it. The advent of flash-based storage, caching, and other technologies has moved the SSS discussion beyond the traditional database workload and into the mainstream of the data center, where it is used to solve performance issues that may not directly affect revenue generation. Stand-alone devices have benefited from more affordable, higher-capacity flash-based technologies. Applications whose performance-critical components were once difficult to isolate for placement on high-speed storage can now have their entire data set loaded onto SSS, eliminating the need for application redesign and tempting users to skip understanding their I/O needs. This can lead to the selection of the wrong SSS device: either spending too much budget on a device you can’t fully exploit or, worse, spending too little and not seeing a performance gain.
One of the key components of I/O workload assessment is understanding your write needs in these environments. While flash-based SSS performs best in read-heavy workloads, it still provides very good write performance, certainly better than mechanical hard drives. Additionally, several companies have gone a long way toward addressing write performance and wear issues through sophisticated controller design. Despite this, if write performance is critical or writes make up a high percentage of the workload profile, we suggest looking at technologies that implement DRAM as part of the solution. There are server-based solutions that leverage DRAM and flash, as we discussed in our article “What is Server Based Solid State Caching,” and there are appliance solutions, as we discuss in “The Advantages of DRAM SSD.”
If your workload is more read-oriented (a typical benchmark is 80% reads), then a flash-based SSS may be more than adequate. If you are looking at a flash-based SSS for a 50/50 read/write workload, then the quality and intelligence of the flash controller become increasingly critical and some integration of RAM becomes desirable.
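The decision logic above can be expressed as a simple classifier. The thresholds (80% and 50% reads) come from the guidance in this article; the function name and recommendation strings are illustrative, not a formal sizing tool.

```python
def classify_workload(reads, writes):
    """Classify a workload by read percentage, using the rough
    thresholds discussed above (illustrative, not prescriptive)."""
    total = reads + writes
    if total == 0:
        return "no I/O observed"
    read_pct = 100.0 * reads / total
    if read_pct >= 80:
        return "read-heavy: flash-based SSS likely adequate"
    elif read_pct >= 50:
        return "mixed: flash controller quality and RAM integration matter"
    else:
        return "write-heavy: consider DRAM-based solutions"

# Example: 80 reads vs. 20 writes falls in the read-heavy band.
print(classify_workload(80, 20))
```

Feeding this with real read/write counters from your monitoring data gives a first-pass answer to the flash-versus-DRAM question before any purchasing decision.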
The cost decline and technology improvements of SSS have made it tempting to skip understanding your I/O workload. You could implement many SSS solutions and get a performance boost without any investigation of your environment. Understanding the I/O workload, however, allows you to identify the right solutions for your data center. The result should be a better return on the SSS investment and better user satisfaction.