All-Flash Vs. Hybrid Storage Arrays: Which Is Better?

Second-generation all-flash arrays have overcome previous limitations and are surpassing hybrid arrays as an enterprise storage option.

Jim O'Reilly

March 17, 2015


Everyone today agrees that flash or SSDs are the primary storage tier of the future. Performance is of course the differentiator, but there are still debates about how to speed things up. Is it better to add an all-flash array (AFA) to an existing SAN, or just upgrade the existing arrays by adding SSDs?

This was an easier decision when AFAs were expensive and had just a few terabytes of capacity. They were economically infeasible as primary storage and found their first market as caching devices in front of the SAN. At the same time, apps still weren’t optimized for high-speed, low-latency IO.

The result was a new class of hybrid arrays, with a few SSDs and a lot of hard disk drives. Data in these appliances can be segregated by setting up LUNs and then using auto-tiering software to migrate data back and forth. Cost and the capabilities of the array limited the SSD storage to perhaps four or eight drives, typically in a RAID 1 configuration.
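For illustration only, here is a minimal Python sketch of the auto-tiering idea: extents that see heavy I/O are promoted to the SSD tier, and cold extents are demoted back to disk. The thresholds, tier structures, and function names are invented for this sketch and do not correspond to any vendor's actual tiering engine.

```python
from collections import Counter

# Invented thresholds: reads per sampling window that trigger migration.
PROMOTE_THRESHOLD = 100
DEMOTE_THRESHOLD = 10

access_counts = Counter()          # extent_id -> I/O count this window
ssd_tier, hdd_tier = set(), set()  # extents currently on each tier

def record_io(extent_id: str) -> None:
    if extent_id not in ssd_tier and extent_id not in hdd_tier:
        hdd_tier.add(extent_id)    # new data lands on the HDD tier first
    access_counts[extent_id] += 1

def rebalance() -> None:
    """Run at the end of each sampling window."""
    for extent, count in access_counts.items():
        if count >= PROMOTE_THRESHOLD and extent in hdd_tier:
            hdd_tier.discard(extent)
            ssd_tier.add(extent)   # migrate hot extent to flash
        elif count <= DEMOTE_THRESHOLD and extent in ssd_tier:
            ssd_tier.discard(extent)
            hdd_tier.add(extent)   # migrate cold extent back to disk
    access_counts.clear()

# Example: an extent hammered with reads gets promoted at rebalance time.
for _ in range(150):
    record_io("lun0/extent42")
rebalance()
print("lun0/extent42 on SSD tier:", "lun0/extent42" in ssd_tier)  # True
```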

In many ways, hybrid arrays have the characteristics of bolting a V12 engine into a low-end sedan. Enterprise SAS SSDs can deliver 500K or more IOPS. Most RAID controllers struggle above 1 million IOPS, and older arrays can't sustain anywhere close to that number. Another issue is the response latency through traditional RAID controllers and Fibre Channel networks. That latency was good enough for a bunch of hard drives (which manage around 150 IOPS each), but it is inadequate for SSD speeds.
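A quick back-of-envelope calculation, using only the figures quoted above, shows how lopsided the mismatch is:

```python
# Figures from the text: per-device IOPS and the controller ceiling.
SSD_IOPS = 500_000        # per enterprise SAS SSD
HDD_IOPS = 150            # per hard drive
CONTROLLER_LIMIT = 1_000_000

for n_ssds in (1, 2, 4, 8):
    demand = n_ssds * SSD_IOPS
    print(f"{n_ssds} SSD(s): {demand:,} IOPS "
          f"= {demand / CONTROLLER_LIMIT:.0%} of the controller ceiling")

# How many HDDs would it take to generate the load of just two SSDs?
print(f"{2 * SSD_IOPS // HDD_IOPS:,} HDDs equal two SSDs")  # 6,666 drives
```

Two SSDs already saturate the controller; a workload that size would need thousands of hard drives, which is why the hybrid array's RAID hardware, not the flash, becomes the limit.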

Array vendors had to decide whether to speed up the controllers to cope with SSDs or to look at alternatives. Second-generation all-flash arrays made the decision essentially moot. These units took advantage of faster, denser flash devices and got much bigger and faster.

At the same time, features such as auto-tiering connected these units more effectively to the traditional hard-drive arrays, greatly reducing the drudgery associated with storage management. With all-flash arrays adding a compression option for the hard-drive arrays, there are savings to offset the acquisition cost of the AFA, mainly in not having to expand HDD capacity to cope with storage growth for at least a couple of years. Taken together, these factors make the case for the all-flash array very strong against the hybrid approach.
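As a rough illustration of that compression offset (the capacity, growth rate, and 3:1 ratio below are assumptions for the sketch, not figures from the article), compressing the existing HDD tier can defer an expansion purchase for several years:

```python
# Illustrative arithmetic only -- all input figures here are assumptions.
hdd_capacity_tb = 500      # usable HDD capacity already on the floor
used_tb = 400              # data currently held on the HDD tier
compression_ratio = 3.0    # assumed ratio once the AFA compresses Tier 2
annual_growth = 0.30       # assumed yearly data growth

effective_used = used_tb / compression_ratio   # footprint after compression
years = 0
while effective_used * (1 + annual_growth) <= hdd_capacity_tb:
    effective_used *= 1 + annual_growth
    years += 1
print(f"Existing HDD shelves absorb ~{years} more years of growth")
```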

With the playing field tilting so strongly toward the all-flash array, one would expect the vendor side to provide all-flash appliances. It seems the industry was somewhat in denial: among the traditional large storage vendors, only EMC and IBM have a true AFA. Dell, HP, and HDS offer souped-up hybrid arrays with some optimization for flash. Typically, these are marketed as all-flash arrays since they can be loaded up with just SSDs, but this is more spin than reality.

NetApp has both a hybrid filer, built from the existing FAS designs, and a true all-flash array, called FlashRay. We can expect the other hybrid vendors to join the mainstream and either design or buy AFA products in the next year.


However, the lack of commitment to the AFA approach has cost the established vendors ground and allowed a group of startups, such as Violin Memory and Pure Storage, to gain traction in the market. Some of these vendors are now on third-generation products with yet more horsepower and lower prices. They are expanding features, such as Windows Storage Server integration, and adding much faster interfaces, including RDMA Ethernet and InfiniBand ports. With cheaper 3D NAND flash devices moving into production, capacities will expand by large factors over the next 18 months.

At the same time, the AFA vendors recognize that the old SCSI operating-system stack and traditional interfaces are both bottlenecks in future designs. They are working with SNIA to define a standard for NVMe over Fabrics, to allow that low-overhead SSD protocol to move into a LAN model.

With AFAs rapidly overtaking the traditional or hybrid array, the basic structure of future storage systems is changing. We are moving to a new tiering model with AFA as the primary Tier 1 storage for all “hot” data and with boxes of inexpensive consumer-class HDDs as compressed, archived secondary storage in Tier 2. Even this configuration is challenged, since the flash vendors now predict crossover between flash and HDD in capacity in 2016, and in price per terabyte in 2017. This would make all-flash data centers the norm moving forward.

One anomaly in all of this is object storage. Most of the recent appliances in this segment are built as hybrids, typically with two or four SSDs and eight or ten bulk HDDs. More than anything, this reflects the way object storage software, especially Ceph, has evolved, and the fact that object storage was viewed as secondary storage for many years. With big data driving growth in the object storage market at a very high rate, we should expect to see all-flash arrays as object stores by 2016, though there may be competition from vSANs.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
