The End Of The High-End Storage Array Era
With the rise of all-flash arrays and object storage, big and fast no longer go together for enterprise storage.
February 4, 2016
I came to the realization recently that one of the big changes in the storage business over the past few years has been the bifurcation of storage system size and speed. Where high-end arrays were once the answer for both capacity and speed, they’ve now been topped by all-flash arrays for performance and by object storage for capacity.
Today we buy high-end storage systems like EMC VMAX, Hitachi Data Systems Virtual Storage Platform, and HPE 3PAR because they provide the highest levels of reliability, availability and serviceability (RAS) for our most critical applications. Back in the age of spinning disks, these steadfast champions of enterprise storage were not just the safest place to store your data, but held the capacity and performance crowns as well.
The high-end monolithic arrays could put up record-setting capacity and performance numbers because in the disk age, both capacity and performance came from supporting as many disks as possible. Each drive slot today can deliver 4 to 10 TB of capacity and 80 IOPS from a 7200 RPM drive, or 146 to 600 GB and 200 IOPS from a 15K RPM disk. By wide striping over a thousand or more spindles, high-end arrays could deliver petabytes of capacity or hundreds of thousands of IOPS without new-fangled technology like flash memory.
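The wide-striping arithmetic is simple enough to sketch. This is a back-of-the-envelope illustration using the per-drive figures above; the drive counts and the `striped_pool` helper are assumptions for illustration, not any vendor's sizing tool:

```python
# Back-of-the-envelope math for a wide-striped disk-era pool.
# Per-drive figures come from the paragraph above: ~80 IOPS for a
# 7200 RPM nearline drive, ~200 IOPS for a 15K RPM drive.

def striped_pool(drives: int, tb_per_drive: float, iops_per_drive: int):
    """Aggregate capacity (TB) and random IOPS of a wide-striped pool."""
    return drives * tb_per_drive, drives * iops_per_drive

# A thousand 10 TB nearline spindles: petabyte-scale capacity.
cap_tb, iops = striped_pool(1000, 10, 80)
print(f"nearline pool: {cap_tb / 1000:.0f} PB, {iops:,} IOPS")   # 10 PB, 80,000 IOPS

# A thousand 600 GB 15K spindles: the performance-oriented layout.
cap_tb, iops = striped_pool(1000, 0.6, 200)
print(f"15K pool: {cap_tb:.0f} TB, {iops:,} IOPS")               # 600 TB, 200,000 IOPS
```

Either way you slice it, the only route to big numbers was more slots, which is exactly what the monolithic frame provided.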
Now that end users have gotten over irrational fears that SSDs would explode when they exceeded their allocated write endurance, or that a hash collision would cause all their data to turn into a brown goo resembling week-old Wheatena, a handful of SSDs can replace the 500 spindles it used to take to provide 10,000 IOPS. Many organizations that bought high-end arrays in the past for their performance have discovered that smaller all-flash arrays, especially those architected for flash from the ground up, can provide million-IOPS performance without the astronomical cost of providing six nines of reliability.
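The spindle-to-SSD swap above can be sketched with the same kind of napkin math. The per-device IOPS figures here are assumptions for illustration (the 500-spindle figure implies roughly 20 effective IOPS per spindle once RAID penalties and headroom are counted; 25,000 IOPS is a deliberately conservative number for one enterprise SSD), not benchmarks of any product:

```python
import math

# Devices needed to hit a random-IOPS target, disk vs. flash.
# Per-device figures are illustrative assumptions, not vendor specs.
required_iops = 10_000
hdd_effective_iops = 20      # assumed: ~20 IOPS/spindle after RAID write
                             # penalty and queueing headroom
ssd_iops = 25_000            # assumed: conservative for one enterprise SSD

hdds_needed = math.ceil(required_iops / hdd_effective_iops)
ssds_needed = math.ceil(required_iops / ssd_iops)
print(f"spindles: {hdds_needed}, SSDs: {ssds_needed}")  # spindles: 500, SSDs: 1
```

One SSD covers the raw IOPS; the "handful" in practice comes from adding drives for redundancy and capacity, which is still two orders of magnitude fewer devices to power, cool, and replace.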
As the system administrators who consume storage shift from the managers of a few IBM mainframes and RISC systems to folks who manage x86 servers like cattle, I’m hearing that more organizations want to distribute their data across multiple storage systems rather than putting all their eggs in the one basket that is a huge, monolithic array. Now, a VMAX or a VSP is more like a three-ply, Kevlar-reinforced US Navy SEAL battle basket than the wicker number Little Red Riding Hood used to ferry goodies to grandma’s house, but one really good basket is a tough sell to the folks building distributed compute systems for reliability.
Just as users seeking performance now have all-flash arrays as an alternative to high-end storage arrays, scale-out storage systems provide greater capacity and simpler management over the long run than a great honking array. Today's scale-out systems address a broad spectrum of performance; they range from AFAs like those from SolidFire or EMC XtremIO to Isilon’s scale-out NAS and object storage systems.
All in all, it’s no surprise that high-end storage system sales are sagging. Flash and scale-out systems are taking over those applications where scale or speed, as opposed to RAS, was the primary reason the customer bought a high-end array.
Disclosure: Dell, EMC, SolidFire and HP are or have been clients of DeepStorage, LLC, and I’ve attended several HDS analyst events at HDS's expense.