Storage system capacity is no longer a top concern among the IT professionals I speak with. It's been replaced with "How do I maintain top performance for a given application?" This is especially true in the virtual environment, where storage I/O is shared. Without a guarantee of a specific performance level, IT organizations remain reluctant to virtualize mission-critical applications.
Storage Quality of Service (QoS) is similar to Network QoS: It ensures that a particular application or workload always gets a certain performance level. For storage systems, this level is typically expressed as IOPS. An increasing number of storage systems now claim to offer some form of QoS.
Storage QoS typically sets the maximum number of IOPS that a particular application may use. To ensure that each application gets its performance level, the total IOPS capability of the storage system is tallied and then allocated application by application. Once the system's total IOPS have been assigned, you must either upgrade that system or purchase another one.
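The bookkeeping described above can be sketched as a simple allocation pool. This is an illustrative model only -- the class and application names are hypothetical, and real arrays do this internally:

```python
class IopsPool:
    """Tracks how much of a storage system's total IOPS budget is assigned."""

    def __init__(self, total_iops):
        self.total_iops = total_iops
        self.allocations = {}  # application name -> reserved IOPS

    def available(self):
        """IOPS still unassigned on this system."""
        return self.total_iops - sum(self.allocations.values())

    def allocate(self, app, iops):
        """Reserve IOPS for an application; refuse once the budget is spent."""
        if iops > self.available():
            raise ValueError(
                f"only {self.available()} IOPS left; upgrade or add a system"
            )
        self.allocations[app] = self.allocations.get(app, 0) + iops
```

Once `available()` hits zero, the only options are exactly the ones the article names: upgrade the system or buy another one.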
Some systems allow you to over-allocate IOPS, but this can be risky. It is similar to thin provisioning capacity, where you allocate more capacity than the system physically has, on the assumption that you will never fill all volumes at the same time. In theory, the same idea holds for performance: Not all of your applications will demand peak load at the same time.
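Over-allocation can be modeled as a pool whose commitments may exceed the physical budget by a chosen ratio -- the same gamble thin provisioning makes with capacity. The ratio and names below are assumptions for illustration, not any vendor's defaults:

```python
class OversubscribedPool:
    """IOPS pool that permits commitments beyond what the hardware delivers."""

    def __init__(self, physical_iops, ratio=1.5):
        self.physical_iops = physical_iops
        self.ratio = ratio      # how far commitments may exceed physical IOPS
        self.committed = 0

    def allocate(self, iops):
        """Accept a commitment up to physical_iops * ratio, then refuse."""
        if self.committed + iops > self.physical_iops * self.ratio:
            raise ValueError("over-allocation limit reached")
        self.committed += iops

    def at_risk(self, concurrent_demand):
        """True when simultaneous demand would exceed the hardware's ability --
        the moment the thin-provisioning bet is lost."""
        return concurrent_demand > self.physical_iops
```

The risk the article describes is exactly the `at_risk()` case: every commitment was honored on paper, but if enough applications peak together, someone's guarantee is broken.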
Throttling vs. QoS
In my opinion, true QoS provides the ability to set both a lower and an upper storage performance bound for each application. Throttling, by contrast, sets only a minimum threshold: The application is guaranteed at least that level of storage performance, and if the system can deliver more, it will. For a midsized datacenter, the minimum guarantee is probably sufficient.
For the enterprise and cloud provider, however, the minimum is not enough. The problem with lower-limit QoS is that applications get more storage performance than they need until the system is under load. While this may be fine in some environments, enterprises and cloud providers want applications to get the exact experience their customers are paying for -- no more, no less. That means being able to set both a minimum and maximum threshold.
Flash vs. hybrid
The final challenge of delivering storage QoS is how to configure the system itself. Most storage systems that deliver QoS capability leverage flash. If the system is all flash, performance is constant and allocating it for specific workloads is relatively straightforward.
Hybrid systems, on the other hand, are more challenging. To keep costs down, they leverage both disk and flash -- two tiers of storage with very different storage performance profiles. Maintaining a QoS guarantee requires careful management of the flash tier to ensure that applications that need a high level of IOPS performance are either always on flash or are quickly moved to it as performance begins to peak.
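The placement logic that hybrid QoS software must perform can be sketched as a simple decision: pin a workload to flash when its guarantee exceeds what the disk tier can sustain, and promote it early as its observed load approaches that limit. The disk-tier figure and thresholds below are assumptions for illustration:

```python
DISK_TIER_IOPS = 2_000  # assumed sustainable IOPS of the disk tier

def place_workload(qos_iops, observed_iops):
    """Decide tier placement for one workload in a hybrid array."""
    if qos_iops > DISK_TIER_IOPS:
        # The guarantee can never be met from disk: always on flash.
        return "flash"
    if observed_iops > 0.8 * DISK_TIER_IOPS:
        # Nearing the disk tier's peak: move to flash before the
        # guarantee is at risk, not after.
        return "promote-to-flash"
    return "disk"
```

The hard part in a real system is not this decision but making the promotion fast enough that the guarantee holds during the move -- which is where the storage software developer earns those pointed questions.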
QoS on a hybrid system is possible, but it requires more work on the part of the storage software developer -- and some pointed questions from IT professionals.
Storage QoS makes it safe to virtualize mission-critical applications by assuring their performance, and it should be considered a key requirement of any new storage system. Storage QoS may be the key element in pushing datacenters to 100% virtualization.