
Calculating a Use Case for SSDs

If your queue depth is consistently greater than 1, you always have at least one outstanding command waiting on the storage to respond. To improve performance, you must either improve the response time per drive or, because drives can process commands in parallel, increase the number of drives. The catch with adding drives is that you get no performance benefit once there is less than one outstanding disk operation per drive. For example, if you had a 5+1 drive RAID group (six drives) and a sustained queue depth of 10, adding more drives to the array should increase performance. If that same group had a sustained queue depth of 4, however, adding drives would pay few, if any, performance dividends.
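The rule of thumb above can be sketched in a few lines of Python. The function name and threshold are illustrative, not from any storage tool; it simply checks whether the sustained queue depth exceeds one outstanding command per drive:

```python
def adding_drives_helps(queue_depth, drive_count):
    """Return True if there is more than one outstanding command
    per drive, i.e. extra spindles could absorb some of the load.

    Hypothetical helper illustrating the article's rule of thumb."""
    return queue_depth / drive_count > 1.0

# The article's 5+1 RAID group has six drives:
print(adding_drives_helps(10, 6))  # queue depth 10 -> True, more drives help
print(adding_drives_helps(4, 6))   # queue depth 4 -> False, little benefit
```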

Studying queue depth yields two outcomes. First, you can see whether adding drives is still an option for improving performance, or whether reducing per-drive response time is the only way forward because your queue depth is already lower than the number of drives in the array. Second, you can determine whether matching the queue depth would require adding so many drives that the investment becomes extremely large.
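That second question is easy to put numbers on. The sketch below is a simplified sizing exercise, assuming the goal is roughly one outstanding command per drive; the function name and the drive price are made-up illustrations:

```python
import math

def spindles_to_match(queue_depth, current_drives, price_per_drive):
    """How many drives must be added so the sustained queue depth
    spreads to roughly one outstanding command per drive, and the
    rough cost of doing so. Illustrative only; price is an assumption."""
    extra = max(0, math.ceil(queue_depth) - current_drives)
    return extra, extra * price_per_drive

# A sustained queue depth of 48 on a six-drive group at $600/drive:
extra, cost = spindles_to_match(48, 6, 600)
print(extra, cost)  # 42 additional drives, $25,200
```

A result like that, for what may be a modest dataset, is exactly the signal that a small number of SSDs could be the cheaper path.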

In both of these cases, SSDs may be the ideal, and often the less expensive, solution. If you determine that a relatively small dataset would require a large number of drives to support the application's I/O requirements, solid-state disks can be far more economical. The other option for improving mechanical drive performance is to short stroke the drives: format only the outer edge of the platters, which is also the fastest section of the drive. Doing so improves response time, but it requires additional drives and means buying a very fast, expensive drive and formatting only a third of it.
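The economics of short stroking are worth making explicit. The calculation below is a sketch with assumed prices and capacities, not vendor figures; it shows how formatting only a third of a drive triples the effective cost per usable gigabyte:

```python
def cost_per_usable_gb(price, raw_gb, usable_fraction=1.0):
    """Effective $/GB when only part of the drive is formatted
    (short stroking). Price and capacity here are illustrative."""
    return price / (raw_gb * usable_fraction)

# A hypothetical 15K RPM, 600 GB drive at $600:
print(cost_per_usable_gb(600, 600))        # full format: 1.0 $/GB
print(cost_per_usable_gb(600, 600, 1/3))   # short stroked to a third: 3.0 $/GB
```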

If you can't bleed off the queued commands in parallel, SSDs can lower the queue depth another way: their raw speed advantage lets them simply complete commands faster, lowering response time and draining the queue. Again, iostat and PerfMon are valuable tools for revealing your current response time. To improve the response time of mechanical drives, you typically need to buy faster-RPM drives, short stroke them, or increase the size of the storage cache. The challenge is that we are stuck with 15,000 RPM drives for the near future. Where 5 to 10 milliseconds is considered a very good response time for a mechanical drive, solid-state disks are dramatically faster: today's flash-based systems can deliver 0.2 millisecond response times, and DRAM-based systems easily deliver 0.015 millisecond response times.
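Those response times translate directly into a per-device throughput ceiling. The sketch below applies the simple relationship IOPS = concurrency x 1000 / response time in ms to the figures quoted above; the function is illustrative, not from iostat or PerfMon:

```python
def max_iops_per_device(response_time_ms, queue_depth=1):
    """Ceiling on commands completed per second for a given
    per-command response time at the given concurrency."""
    return queue_depth * 1000.0 / response_time_ms

print(max_iops_per_device(5))      # ~200 IOPS for a 5 ms mechanical drive
print(max_iops_per_device(0.2))    # ~5,000 IOPS for a 0.2 ms flash system
print(max_iops_per_device(0.015))  # ~66,700 IOPS for a 0.015 ms DRAM system
```

The same math explains why a handful of SSDs can replace a large stack of short-stroked spindles.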

The final option for improving the response time of mechanical drives is to increase the size of the cache. The problem is that cache capacity is limited on most systems and cache memory is often prohibitively expensive. While workloads like sequential logs and most write tasks tend to be cache friendly, the storage system cache still carries the overhead of the storage software that manages snapshots, replication, and other common features, so it typically delivers a response time of about 0.5 milliseconds. DRAM-based solid-state disks, built from more commodity memory, can offer higher capacities and significantly lower latency for these operations.