The first cost-saving area to consider is the potential to eliminate dozens, if not hundreds, of standard disk drives. Once upon a time, the default method of improving storage performance was to add disk drives to the volume the application was accessing. As we discussed in our white paper, Visual SSD Readiness Guide, as long as the workload keeps enough I/O outstanding (queue depth), adding more disk drives will usually improve performance.
The problem is that in an application environment with high parallelism and lots of transactions, that queue depth can be quite high, and exhausting it might take more hard disk drives than you can possibly fit into your budget or your data center. The compromise has been to add enough hard drives to meet today's performance demand and worry about tomorrow's later.
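The drive-count math behind that problem can be sketched with Little's Law (concurrency = throughput x latency). The figures below are illustrative assumptions, not vendor specs: roughly 8 ms per random hard-drive I/O, with each drive effectively servicing one outstanding request at a time.

```python
import math

def drives_to_absorb_iops(target_iops, hdd_latency_s=0.008):
    """Estimate how many hard drives a random-I/O workload needs.

    By Little's Law, a drive with ~8 ms random-I/O latency sustains
    about 1 / 0.008 = 125 IOPS. (Both numbers are illustrative.)
    """
    per_drive_iops = 1.0 / hdd_latency_s  # ~125 IOPS per drive
    return math.ceil(target_iops / per_drive_iops)

# A transaction-heavy application demanding 50,000 IOPS:
print(drives_to_absorb_iops(50_000))  # 400 drives
```

Four hundred spindles to satisfy one workload is exactly the budget and floor-space problem described above.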
Solid-state storage can change this scenario. A single solid-state appliance can generate more input/output operations per second (IOPS) in one chassis than dozens or hundreds of hard drives, and it can resolve a storage performance problem permanently. As we discussed in our recent webinar with InformationWeek, we now recommend that any time you're adding hard drives for performance reasons, you strongly consider SSDs as the more viable alternative.
The potential cost savings of SSD in this scenario come from replacing hundreds of disk drives with an SSD appliance or a set of SSDs. There is even potential for energy savings. Although a typical SSD, measured per flash module, draws roughly the same wattage as a hard drive, delivering better IOPS performance with significantly fewer devices should lead to overall energy savings as well.
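To make that comparison concrete, here is a back-of-the-envelope sketch. The per-device IOPS and wattage figures are assumptions chosen for illustration (about 150 random IOPS and 10 W for a hard drive, and a similar 10 W for a flash device that delivers tens of thousands of IOPS), not measurements of any specific product.

```python
import math

def devices_and_watts(target_iops, iops_per_device, watts_per_device):
    """Devices needed to hit an IOPS target, plus their total power draw."""
    devices = math.ceil(target_iops / iops_per_device)
    return devices, devices * watts_per_device

target = 100_000  # required IOPS (illustrative)

hdd = devices_and_watts(target, iops_per_device=150, watts_per_device=10)
ssd = devices_and_watts(target, iops_per_device=25_000, watts_per_device=10)

print(hdd)  # (667, 6670) -- hundreds of drives, kilowatts of power
print(ssd)  # (4, 40)     -- a handful of flash devices
```

Even though each device draws about the same power, needing two orders of magnitude fewer devices is where the savings come from.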
If you have a performance-demanding application and the server hosting it shows relatively low CPU utilization (say, less than 40%), you more than likely have a storage I/O problem. This is one of the simplest ways to tell whether solid-state storage will help your performance problems. What is essentially happening is the CPU is "waiting" on the storage, and that waiting shows up as low CPU utilization.
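That check can be automated by sampling CPU time counters and flagging hosts that are mostly idle or waiting on I/O. A minimal sketch, assuming Linux-style `/proc/stat` counter ordering (user, nice, system, idle, iowait); the 40% threshold follows the rule of thumb above:

```python
def cpu_times(stat_line):
    """Parse a /proc/stat 'cpu' line into (busy %, iowait %)."""
    # Format: "cpu  user nice system idle iowait [irq softirq ...]"
    fields = [float(x) for x in stat_line.split()[1:]]
    idle, iowait = fields[3], fields[4]
    total = sum(fields)
    busy_pct = 100.0 * (total - idle - iowait) / total
    iowait_pct = 100.0 * iowait / total
    return busy_pct, iowait_pct

def likely_storage_bound(busy_pct, iowait_pct, busy_threshold=40.0):
    """Heuristic: a busy application on a mostly idle CPU, with time
    spent in iowait, is probably waiting on storage."""
    return busy_pct < busy_threshold and iowait_pct > 0.0

# Sample counters: CPU busy only 15% of the time, 35% stuck in iowait.
busy, wait = cpu_times("cpu 100 0 50 500 350 0 0")
print(busy, wait)                      # 15.0 35.0
print(likely_storage_bound(busy, wait))  # True
```

In practice you would sample the counters twice and diff them, since `/proc/stat` values are cumulative since boot; the heuristic is the same.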
A side benefit to fixing the storage I/O bottleneck with solid-state storage is that you can now make the CPU work harder. If you had to spread an application's load across multiple servers just to get acceptable performance, a move to solid-state storage will improve storage I/O performance and let you consolidate down to fewer servers, saving a significant amount of IT budget. Fewer servers lead to simpler infrastructures, which should reduce storage administration time as well.
In What is Storage Class Memory, I wrote about the concept of building a second tier of "RAM" from flash-based storage. Although flash is considered more expensive than hard disk storage, it will always have a price advantage over DRAM. In this implementation, flash is placed inside the server and used either as a place to page out memory in a virtual memory configuration or as a disk cache that reduces the storage I/O traveling back and forth across the storage network.
The result should be the ability to buy significantly less DRAM than in the past, as well as to eliminate, or at least delay, upgrades to the storage network infrastructure that would otherwise be needed to improve overall storage I/O performance. Both of these factors again lead to significant IT capital expense savings.
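The paging/caching idea can be sketched as a two-tier store: a small DRAM-sized LRU in front of a larger flash-resident tier. This is a simplified model under stated assumptions; the "flash" tier here is just a plain dict, standing in for what would really be a flash device or file, and `TwoTierStore` is a hypothetical name, not any product's API.

```python
from collections import OrderedDict

class TwoTierStore:
    """Small 'DRAM' LRU cache backed by a larger 'flash' tier.

    Evictions page cold entries down to flash instead of discarding
    them, so a DRAM miss can often still be served locally without
    touching the (much slower) storage network.
    """
    def __init__(self, dram_capacity):
        self.dram = OrderedDict()   # hot tier: limited, expensive
        self.flash = {}             # warm tier: larger, cheaper
        self.capacity = dram_capacity

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)                 # mark most recent
        if len(self.dram) > self.capacity:
            cold_key, cold_val = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_val        # page out to flash

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:                      # promote from flash
            self.put(key, self.flash.pop(key))
            return self.dram[key]
        return None                                # would hit the network

store = TwoTierStore(dram_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
print(sorted(store.flash))  # ['a'] -- oldest entry paged down to flash
print(store.get("a"))       # 'A'  -- served from flash, not the network
```

The cost argument falls out of the design: the hot tier stays small (less DRAM to buy), and requests absorbed by the flash tier never reach the storage network (fewer network upgrades).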
Although priced at a premium when looked at on a dollar-per-gigabyte basis, SSDs can be a significant cost saver when looked at on an IOPS-per-gigabyte basis. And this isn't just "marketing math" to justify more-expensive technology. As we've seen, there are legitimate areas where solid-state storage can dramatically decrease the cost of doing business.