Network Computing is part of the Informa Tech Division of Informa PLC


Why Flash Storage Benchmark Testing Is Not Hype

I've been briefed on or directly involved with a few lab tests in the past few months that demonstrated the performance of flash storage systems. Although the performance achieved is generally well beyond what most organizations need today, there is real value in running these tests and publishing the results. Interestingly, not everyone agrees.

The most common complaint is that no one needs this kind of performance. That is not exactly accurate. Most might not need it, but some do -- right now. High-frequency trading (HFT) and high-performance computing (HPC) are two excellent examples.

Also, as virtual server and virtual desktop environments grow denser, packing more virtual machines onto each host, we end up with fewer hosts that each demand far more storage I/O, so performance requirements will keep climbing. Finally, it is important to note that most of the multimillion-IOPS results come from sequential read workloads, not random read/write workloads. On random tests, performance often drops to the 500,000-IOPS range, a level that some heavily virtualized environments are already starting to require.


The adoption of virtualized servers and desktops as well as clustered servers is a significant change in the way we measure or should measure IOPS. No longer are we looking to meet the demand of a single application with a dedicated storage device. We are now looking to meet the demand of dozens of hosts, all driving traffic to a single or clustered set of storage devices. The combined IOPS of the data center is now a critical factor.
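The combined-IOPS arithmetic is simple enough to sketch. The figures and the peak-overlap factor below are hypothetical planning assumptions for illustration, not measurements from the tests discussed in this article:

```python
# Sketch: estimating the aggregate IOPS a shared storage system must absorb
# when dozens of virtualized hosts drive traffic to it at once.
# All numbers here are illustrative assumptions, not benchmark results.

def combined_iops(hosts: int, vms_per_host: int, iops_per_vm: int,
                  peak_overlap: float = 0.7) -> int:
    """Estimate combined IOPS demand on a shared array.

    peak_overlap hedges the fact that not every VM peaks at the same
    moment; 0.7 is an assumed planning factor, not an industry standard.
    """
    return int(hosts * vms_per_host * iops_per_vm * peak_overlap)

# Example: 30 hosts, 40 VMs per host, 50 IOPS per VM at peak.
print(combined_iops(30, 40, 50))  # 42000
```

Even with these modest per-VM numbers, one consolidated environment lands in the tens of thousands of IOPS, which is why the data center's combined demand, not any single application's, is the figure that matters.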

We also now have storage systems capable of supporting a mixture of each of these workloads: virtualized servers and desktops as well as clustered applications and scale-up applications. In the past we had to allocate separate storage to each, so large, combined IOPS numbers were not needed. Now we can support mixed workloads on a single system, which simplifies storage management but requires storage performance.

There is also the value of what we learn about storage system and infrastructure design from these tests. For example, in a recent test we learned the value of having multiple PCIe hubs to transfer data to the storage infrastructure. We also learned the advantage of using Gen 5 Fibre Channel (16 Gbps FC) instead of 8 Gbps FC. These lessons apply in any performance-constrained situation and make the case for investing in advanced servers and networks now instead of later.

Finally, there is also the reality that eventually most data centers will need this performance. A few years ago, tests that delivered 100,000 IOPS were ridiculed as unrealistic; now 100,000 IOPS is a common data center requirement.

You probably don't need 1 million IOPS today, but you may well soon. The good news is that the work has already been done; you can apply those lessons to today's infrastructure and know the technology will be ready for you in the future.