While there are plenty of options that can solve a performance problem, and vendors are quick to release their respective 2 million IOPS benchmarks, every solution to a performance problem has a performance cost. Evaluating those costs and determining the best route for your data center is a critical step in determining the overall cost of performance. In this entry we will look at the cost of generating performance from a hard disk drive storage system.
Performance of an HDD-based array can improve as more drives are added to the system, assuming there is sufficient queue depth to keep each drive busy as it is added. Queue depth is essentially the number of near-simultaneous outstanding storage requests generated by the environment. This can be a large group of users all accessing the same database at the same time, or a high number of applications all accessing the same storage array at the same time.
Part one of the problem is generating enough queue depth. Single-threaded applications issue one request at a time, so adding more hard drives typically won't improve their performance; they are at the mercy of the rotational speed of the drive. Part two of the problem is being able to afford enough drives to bring the queue depth down to an acceptable level.
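A toy model makes the queue depth point concrete. The per-drive IOPS figure below is an assumption for illustration (roughly what a 15K HDD delivers on random I/O), not a vendor spec:

```python
# Toy model (assumed numbers, illustration only): each 15K HDD can
# service roughly 175 random IOPS. The array only reaches its aggregate
# rate if the workload keeps every drive busy, i.e. queue depth is at
# least as large as the number of drives.
HDD_IOPS = 175  # assumed random IOPS per 15K drive

def array_iops(drives, queue_depth):
    # Only as many drives as there are outstanding requests do useful work.
    busy_drives = min(drives, queue_depth)
    return busy_drives * HDD_IOPS

print(array_iops(32, 64))  # deep queue: all 32 drives busy
print(array_iops(32, 1))   # single-threaded: one drive at a time
```

With a deep queue, 32 drives deliver 32 times the throughput of one; with a queue depth of one, the other 31 drives sit idle, which is why adding spindles does nothing for a single-threaded application.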
If a single-threaded application has a performance problem, the only solution is to reduce latency. With HDD technology, that means moving to a faster-rotating (higher-RPM) hard disk, and the faster the drive rotates, the more expensive it is. The big problem with rotational speed is that drives today top out at 15,000 RPM. In other words, you can only go so fast.
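The RPM ceiling can be sketched with the standard back-of-envelope formula: on average the head waits half a revolution for the target sector, so average rotational latency is half the revolution time:

```python
# Average rotational latency: on average the head waits half a
# revolution for the target sector to come around under it.
def avg_rotational_latency_ms(rpm):
    ms_per_revolution = 60_000 / rpm  # 60,000 ms per minute
    return ms_per_revolution / 2

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```

Even at 15,000 RPM the drive contributes about 2 ms of rotational latency on every random access (before seek time), and there is no faster spindle to buy.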
Many performance problems can be alleviated by adding more drives to the array, because the application is multi-threaded or there is a lot of simultaneous access from multiple applications. The goal is to lower queue depth by adding drives. The true cost here is the cost of actually buying those drives. For some environments, making a significant impact on queue depth is going to require the purchase of many dozens, and potentially hundreds, of hard disk drives. There is also a hidden cost: this is not an efficient use of drive capacity, since most of the drives will likely never be fully used. As a result, resolving a high-queue-depth performance problem with HDD technology typically means hundreds of gigabytes, if not terabytes, of wasted disk capacity.
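A back-of-envelope sizing exercise shows where the waste comes from. All figures here (per-drive IOPS, per-drive capacity, the workload's target IOPS and working-set size) are assumptions chosen for illustration:

```python
import math

# Back-of-envelope sizing (all figures assumed for illustration):
# how many 15K HDDs it takes to absorb a target IOPS load, and how much
# capacity that purchase brings versus what the application needs.
HDD_IOPS = 175         # assumed random IOPS per 15K drive
HDD_CAPACITY_GB = 600  # assumed capacity per drive

def drives_for_iops(target_iops):
    return math.ceil(target_iops / HDD_IOPS)

target_iops = 20_000   # assumed aggregate workload
needed_gb = 4_000      # assumed working-set size

drives = drives_for_iops(target_iops)
purchased_gb = drives * HDD_CAPACITY_GB
print(drives)                    # drives bought purely for performance
print(purchased_gb - needed_gb)  # stranded capacity in GB
```

Under these assumptions a 20,000 IOPS workload with a 4 TB working set forces the purchase of 115 drives and roughly 69 TB of raw capacity, stranding about 65 TB bought only for the spindles.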
There are other costs associated with solving performance problems with hard disks, especially in the high-queue-depth scenario. First, all of these drives need to be powered and cooled, so electrical costs are also part of the true cost of solving a performance problem with hard disks.
Second, there is the cost of the physical floor space required to house all these disk drives. That is not a problem until you actually run out of data center floor space, which unfortunately is becoming a common occurrence. In some cases, the true cost of solving performance problems with hard drives comes when adding one more drive means building a brand-new data center, which of course costs millions of dollars.
As a result, the true cost of solving performance problems with HDD technology can be so high that customers are increasingly looking to solid state storage to solve them. Solid state storage offers higher IOPS per gigabyte of capacity and consumes less power and floor space. Also, since it is not rotational media, there is no rotational latency, so even a single-threaded application that does not generate high queue depths will be greatly helped by solid state storage.
Solid state storage, though, is not without its own true-cost problems, something we will discuss in an upcoming entry.