Are SSD-Based Storage Arrays Doomed?

Our blogger examines SSD-based storage arrays: Are latency and cost drawbacks? What about reliability?

Howard Marks

May 18, 2012

4 Min Read

My friend and fellow storage blogger Robin Harris wrote a post back in March asking, "Are SSD-based arrays a bad idea?" He argued that packaging flash in little metal boxes with SAS or SATA interfaces was limiting the potential of an all-flash storage system. Since reading his post, and several of the responses, I've been thinking about the question.

Robin's post lays out several arguments for a from-scratch array design like Violin Memory's, where the array vendor buys raw flash chips and builds its own modules of some sort. In this post, I'm going to examine each of Robin's points. At some point in the future, I'll bring up the advantages of using more conventional SSDs in an all-solid-state array.

Robin's first argument is about latency: he says that SAS/SATA stacks, having been designed for disks with millisecond latencies, aren't optimized for the 50-microsecond latency of flash and add latency of their own. While that's probably true of the volume managers and file systems in most operating systems, the 6-Gbps SAS/SATA chips used on server motherboards and RAID controllers were designed with SSDs in mind and introduce minimal latency. After all, SSD-based array vendors like Nimbus Data and Whiptail can deliver latency of less than 250 microseconds from SAS SSDs.
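
To put those numbers side by side, here's a back-of-the-envelope latency budget built only from the figures cited above; lumping everything outside the flash into a single overhead bucket is my own simplification, not a measurement.

```python
# Back-of-the-envelope read-latency budget for an all-flash array, in microseconds.
# The 50 us flash read and the ~250 us array response time are the figures cited
# in the post; attributing the entire remainder to controller, interface and stack
# is a simplification for illustration only.
flash_read_us = 50        # raw NAND read latency cited in the post
array_response_us = 250   # end-to-end latency vendors like Nimbus Data and Whiptail deliver

overhead_us = array_response_us - flash_read_us
print(f"Flash itself:                 {flash_read_us} us")
print(f"Controller, interface, stack: {overhead_us} us")
print(f"Latency spent outside flash:  {overhead_us / array_response_us:.0%}")
```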

The second performance argument is that the 6-Gbps disk interface doesn't provide enough bandwidth between the flash and processor. While this can be true for systems that use SAS expanders to multiplex the traffic for multiple SSDs through a single channel, the difference between a 6-Gbps SAS channel and an 8-Gbps, 16-lane PCIe 2.0 slot is 33%, not an order of magnitude.
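
The arithmetic behind that claim is simple enough to show; this sketch just takes the two line rates quoted above at face value and ignores encoding overhead and lane aggregation.

```python
# The "33%, not an order of magnitude" arithmetic, using only the two
# per-channel line rates the post cites.
sas_gbps = 6.0    # 6-Gbps SAS/SATA channel
pcie_gbps = 8.0   # the 8-Gbps figure cited for the PCIe slot

difference = (pcie_gbps - sas_gbps) / sas_gbps
print(f"PCIe advantage over a single SAS channel: {difference:.0%}")
```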

When it comes to cost, Robin argues that the DRAM on a DIMM accounts for well over 90% of the cost of the DIMM, while the flash in an SSD accounts for only 50% to 65% of the cost of that SSD. My problem with this argument is that DIMMs are the very definition of a commodity. A DIMM is just 36 RAM chips and a very simple controller. Other than minor differences in CAS latency, one controller is the same as another, and DIMMs from Vendor A are pretty much the same as DIMMs from Vendor B.
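
A quick back-of-the-envelope comparison shows why that packaging overhead matters. The media-cost fractions below are the ones Robin cites; the dollar prices are arbitrary round numbers chosen only to make the overhead visible.

```python
# Illustration of Robin's cost argument. Media-cost fractions are from his post;
# the device prices are made-up round numbers used purely for illustration.
def non_media_cost(device_price, media_fraction):
    """Portion of the price that buys everything besides the raw memory chips."""
    return device_price * (1 - media_fraction)

dimm_price, dimm_media_fraction = 100, 0.90   # DRAM is well over 90% of a DIMM's cost
ssd_price, ssd_media_fraction = 1000, 0.50    # flash can be as little as 50% of an SSD's cost

print(f"${dimm_price} DIMM: ~${non_media_cost(dimm_price, dimm_media_fraction):.0f} goes to something other than DRAM")
print(f"${ssd_price} SSD:  ~${non_media_cost(ssd_price, ssd_media_fraction):.0f} goes to something other than flash")
```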

SSDs are, by comparison, much more complex. An SSD maker, even one building SSDs from the industry parts bin, needs to choose a controller, select firmware options, add a RAM cache, over-provision the flash to improve performance and device life, and, for an enterprise SSD, add an ultracapacitor to power the flushing of the RAM cache to flash in the event of a power outage.
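
Over-provisioning, for readers unfamiliar with it, just means the drive exposes less logical capacity than the raw flash it contains, reserving the rest as spare area. The 128 GB raw / 100 GB usable split below is a typical enterprise-class example, not a figure from the post.

```python
# Over-provisioning in a nutshell: the SSD exposes less capacity to the host
# than the raw flash on board, keeping the remainder as spare area for garbage
# collection and wear leveling. Example figures are typical, not from the post.
raw_flash_gb = 128
usable_gb = 100

spare_gb = raw_flash_gb - usable_gb
print(f"Spare area: {spare_gb} GB "
      f"({spare_gb / usable_gb:.0%} over-provisioning relative to usable capacity)")
```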

While it may be possible to save money building your own modules--and Nimbus Data's Tom Isakovich argues the company saves money by building its own SSDs--if those modules won't have a SAS or SATA interface, the array vendor will have to invest a lot of R&D dollars in designing its own controllers.

Robin's reliability argument is based on Google's experience with over 100,000 disk drives that showed that only about half of disk failures were in the mechanical marvel we call the head disk assembly. If, he argues, the SAS or SATA interface electronics cause half of all disk drive failures, why are we building SSDs with these failure-prone components? However, I'm pretty sure most of the electronics failures on hard disk drives aren't the Marvell, LSI or PMC-Sierra SAS or SATA chip, but the head preamplifiers, read channels and other, more analog components on the hard-disk PC board.

First, as you can see in the photo of the circuit card from a Western Digital VelociRaptor below, there's just one SATA chip, but the analog and control sides of the disk drive include many more components and, therefore, more things to break. Then there's my long experience that analog electronics are just a little bit twitchier than classic digital devices like SATA interface chips.

(Photo: Western Digital VelociRaptor circuit card)

Do SSDs have a future in the all-solid-state array market? They sure do. Could a vendor looking for performance at all costs eke a few more IOPS and a few fewer microseconds of latency out of custom modules? Probably, but I wonder if it will turn out to be worth it.

Disclaimer: We at DeepStorage believe solutions speak for themselves. We're currently researching several projects related to flash memory.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
