More on Performance Metrics: The Relationship Between IOPS and Latency

The best predictors of storage-related application performance have always been latency and IOPS. Hybrid SSD and hard disk systems are changing how those numbers should be viewed.

Howard Marks

August 9, 2012


In my last post, "SSDs and Understanding Storage Performance Metrics," I explained how storage users pay too much attention to throughput. The performance of mission-critical applications in the data center depends more on the storage system's ability to deliver a large number of IOPS while satisfying each I/O request with minimal latency.

Unfortunately, getting useful IOPS and latency figures from vendors, or even some product reviews, isn't easy. A big part of the problem is there's no such thing as a standard I/O operation. When we talk about IOPS, we have to be really clear about both the size and type of the I/O operation in question. You'll commonly see vendors claim their systems can deliver 100,000 or 1 million IOPS while not saying whether those are 512-byte sequential reads, 4K random writes or some other mythical data I/O chosen simply to make their storage systems look good.
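
To see why the operation size matters, consider what the same headline IOPS number implies about throughput at different I/O sizes. Here's a minimal sketch (my own arithmetic, not any vendor's figures):

    # Illustrative only: the same IOPS claim implies very different
    # amounts of data moved depending on the I/O size being counted.
    def throughput_mb_s(iops, io_size_bytes):
        """Convert an IOPS figure at a given I/O size to MB/s."""
        return iops * io_size_bytes / 1_000_000

    for label, size in [("512-byte sequential read", 512),
                        ("4K random write", 4096),
                        ("64K sequential read", 65536)]:
        print(f"100,000 IOPS of {label}s = "
              f"{throughput_mb_s(100_000, size):,.1f} MB/s")

The same "100,000 IOPS" claim can describe anywhere from roughly 51 MB/s to more than 6 GB/s of actual data movement, which is why the operation size has to be part of any IOPS conversation.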

As if that weren't bad enough, you'll also see vendors juking the stats by running simple benchmarks like Iometer against very small logical disks. By doing so, all or most of the I/Os are actually being performed to and from the controller's cache rather than from the storage disks themselves.
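
A quick sanity check for this trick is to compare the benchmark's working set to the controller's cache. The numbers below are invented for illustration, not measurements of any real array:

    # Rough rule of thumb (illustrative numbers): if the test LUN fits in
    # the controller's cache, the benchmark measures cache, not disks.
    def cache_hit_fraction(working_set_gb, cache_gb):
        """Crude upper bound on the share of random I/Os served from cache."""
        return min(1.0, cache_gb / working_set_gb)

    print(cache_hit_fraction(working_set_gb=8, cache_gb=16))    # 1.0: pure cache test
    print(cache_hit_fraction(working_set_gb=500, cache_gb=16))  # ~0.03: disks do the work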

More Than IOPS

Even if a vendor's miraculous new storage system could deliver 1,000,000 4K random IOPS with a 60/40 read/write mix, I'd want to know how much latency there was for each of those million IOPS. In a recent blog post explaining IOPS and latency, Dimitris Krekoukias examined the performance of Oracle on a system that was delivering 15,000 IOPS with an average latency of 25ms. The database engine on that system reported a high level of I/O wait time, even though it was grinding through 15,000 IOPS.
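
Little's Law ties those two numbers together: the number of I/Os an application must keep in flight equals IOPS times latency. A back-of-the-envelope sketch (my addition, not part of Krekoukias's analysis) shows why 25ms hurts even at 15,000 IOPS:

    # Little's Law: concurrency = arrival rate * time in system.
    def outstanding_ios(iops, latency_s):
        """Concurrent I/Os needed to sustain a given IOPS rate at a given latency."""
        return iops * latency_s

    print(outstanding_ios(15_000, 0.025))  # 375 I/Os in flight at 25 ms
    print(outstanding_ios(15_000, 0.001))  # just 15 in flight at 1 ms

To grind through 15,000 IOPS at 25ms, the database has to keep 375 requests in flight, and the threads parked behind those requests show up as exactly the I/O wait time the database engine reported.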

One commenter at Krekoukias's Recovery Monkey blog recounted how the credit card processor he worked for determined that its fraud prevention application ran fast enough to avoid significantly slowing the charge authorization process only if the storage array supporting that application delivered latency under 4ms.

The key to delivering high application performance is a combination of high IOPS and low latency. As an application or benchmark stresses a storage system, it may continue to deliver high IOPS but at higher levels of latency, and that may seriously affect real-world performance.
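
Basic queueing behavior explains why. As a toy illustration--this is a textbook M/M/1 model, not a characterization of any real array--latency climbs steeply as the offered load approaches the system's maximum IOPS:

    # Toy M/M/1 queueing model (illustrative, not a real array's curve):
    # average latency = service time / (1 - utilization).
    def mm1_latency_ms(offered_iops, max_iops, service_ms=0.5):
        """Average latency as offered load approaches saturation."""
        utilization = offered_iops / max_iops
        if utilization >= 1.0:
            raise ValueError("offered load exceeds capacity")
        return service_ms / (1.0 - utilization)

    for load in (10_000, 50_000, 90_000, 99_000):
        print(f"{load:>6} IOPS -> {mm1_latency_ms(load, 100_000):5.1f} ms")

The hypothetical system still delivers 99,000 IOPS near saturation, but each of those I/Os now takes 100 times its unloaded service time.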

This relationship between IOPS and latency is one very good reason to pay more attention to published results from benchmarks like JetStress, TPC-C and SPC-1 rather than simple performance tests like Iometer. The rules for reporting results from these benchmarks require vendors to disclose not only the raw performance that their devices achieved, but also the transactional latency. JetStress will fail a system under test if latency exceeds 20ms, while SPC requires reporting latency at several load levels and TPC results include average, 90th percentile and maximum latency.
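
Producing those statistics from raw per-I/O samples is simple enough that there's little excuse for omitting them. A minimal sketch, using a simple nearest-rank approximation for the 90th percentile:

    import statistics

    def latency_report(samples_ms):
        """Average, 90th-percentile and maximum latency from per-I/O samples."""
        ordered = sorted(samples_ms)
        p90 = ordered[int(0.9 * len(ordered)) - 1]  # nearest-rank approximation
        return {
            "average_ms": statistics.mean(ordered),
            "p90_ms": p90,
            "max_ms": ordered[-1],
        }

    # Nine quick I/Os and one slow one: the tail tells the real story.
    print(latency_report([2, 2, 3, 3, 3, 4, 4, 5, 6, 40]))

Here the average and 90th percentile stay in single digits while only the maximum exposes the 40ms straggler.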

SSD Caches Change the Game

While average latency is most directly related to typical application performance, as we move to hybrid storage systems that mix flash and spinning disks, the 90th-percentile latency reading will become more significant. Your application's apparent performance may be driven as much by the latency of the 10% of transactions whose data isn't in flash as by the much lower latency of the 90% that are.
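
A simple weighted-average model (my illustration, with invented latencies) shows why: flash hits dominate the average while disk misses set the tail:

    # Illustrative blended-latency model for a hybrid array (made-up numbers).
    def avg_latency_ms(hit_rate, flash_ms, disk_ms):
        """Blended average latency for a flash-plus-disk hybrid system."""
        return hit_rate * flash_ms + (1 - hit_rate) * disk_ms

    # 90% of I/Os served from flash at 0.5 ms, 10% from disk at 10 ms:
    print(avg_latency_ms(0.90, 0.5, 10.0))  # 1.45 ms average

The average looks comfortable at 1.45ms, yet every transaction that falls into the 10% disk tier waits 10ms--a gap the 90th-percentile figure surfaces and the average hides.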

That 90th-percentile figure may also expose inconsistent latency. This is a problem with some flash-based systems in write-intensive applications, as SSDs have to perform housekeeping to free up a fresh page to write to. Well-designed systems have sufficient RAM cache and overprovisioned flash to keep latency relatively constant.

Once you bundle a group of disk drives or SSDs into a system, many factors--from controller CPU and cache size to the way the system organizes its data--can affect both IOPS and latency. Your best bet is to pay attention to both stats.

In our next installment, we'll look at how RAID affects storage system performance and how to configure a set of disks for a given application's performance needs.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
