SSD's New Role: Operational Efficiency

Thanks to the decreased cost of Solid State Disk (SSD), its role within the enterprise continues to evolve. As we discussed in our last entry (http://www.networkcomputing.com/backup-recovery/ssds-new-role---power-efficiency.php), this is leading SSD out of the realm of being used solely to solve the performance demands of fringe use cases and more into the mainstream. Beyond the power utilization improvements that SSD can bring, there are also operational improvements to consider.

George Crump

April 27, 2010

2 Min Read

Using SSD as part of an automated tiering system, as a stand-alone SSD appliance, or now with feature-rich storage systems that are purely SSD can make performance tuning a much simpler task. I am always amazed at the level of detail we get into with clients as we go through the art of wringing maximum performance from an array based on mechanical drives. There are RAID levels to consider, the number of drives to factor in, drive types and capacities, how many and which shelves to involve, and how many snapshots to maintain. This process would almost be acceptable if things never changed, but no application stays the same. The reality is that once you have figured out the perfect balance of all of these factors, things do indeed change. New applications affect old ones, or older applications add more users and need greater performance. The process starts all over again.

These are all "end of the line" performance issues. With SSD, many of these problems are removed. For most environments it is simpler and faster to implement SSD than to spend the time going through countless configuration options. It is true that the front end of the I/O path, the server and infrastructure, still needs to be contended with, but at least with SSD the back end can be put to rest. I believe we are quickly approaching a point where, if there is any performance concern at all, it is simpler and more cost effective to leverage SSD than to design and re-design the perfect mechanical drive configuration. This still leaves a lot of life left for mechanical drives, as most applications will get all the performance they need from a basic RAID allocation. The moment you need to go beyond that is the moment you should consider SSD.

Each of the SSD implementation methods has its pros and cons. There are different integration challenges with each method to make sure that you get solid performance and acceptable data protection, but those steps are pretty well documented now. Once that work is done, in most cases there is no longer the constant revisiting of performance tuning; for the most part, it is finished. Finally, with SSD in place, as the rest of the infrastructure catches up, the end of the I/O path is ready and able to deliver data.
