Are SSD-Based Storage Arrays Doomed?

My friend and fellow storage blogger Robin Harris wrote a post back in March asking, "Are SSD-based arrays a bad idea?" He argued that packaging flash in little metal boxes with SAS or SATA interfaces was limiting the potential of an all-flash storage system. Since reading his post, and several of the responses, I've been thinking about the question.

Robin's post lays out several arguments for a from-scratch array design like Violin Memory's, where the array vendor buys raw flash chips and builds its own modules of some sort. In this post, I'm going to examine each of Robin's points; in a future post, I'll look at the advantages of using more conventional SSDs in an all-solid-state array.

Robin's first argument is about latency: He says that SAS/SATA stacks, designed for disks with millisecond latency, aren't optimized for the 50-microsecond latency of flash and add latency of their own. While that's probably true of the volume managers and file systems in most operating systems, the 6-Gbps SAS/SATA chips used on server motherboards and RAID controllers were designed with SSDs in mind and introduce minimal latency themselves. After all, SSD-based array vendors like Nimbus Data and Whiptail can deliver latency of less than 250 microseconds from SAS SSDs.
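To make that concrete, here's a toy read-latency budget in Python for an array built from SAS SSDs. Every stage figure except the 50-microsecond flash read is an assumption I've invented for illustration; the point is that a well-designed interface chip can be a rounding error in the total:

```python
# Toy read-latency budget for an all-flash array built from SAS SSDs.
# Every figure except the NAND read is an illustrative assumption.

budget_us = {
    "NAND read":                      50,   # flash latency cited above
    "SSD controller / FTL":           25,   # assumed
    "SATA/SAS link and HBA silicon":  10,   # assumed: modern 6-Gbps chips are fast
    "array controller software":     115,   # assumed: RAID, cache lookup, etc.
    "host driver and transport":      50,   # assumed
}

total = sum(budget_us.values())
print(f"total read latency: {total} us")   # ~250 us, in line with the arrays cited
for stage, us in budget_us.items():
    print(f"  {stage:32s} {us:4d} us  ({us / total:.0%})")
```

Under these made-up numbers, the SAS/SATA silicon is about 4% of the round trip; the array software dwarfs it.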

The second performance argument is that the 6-Gbps disk interface doesn't provide enough bandwidth between the flash and the processor. That can be true for systems that use SAS expanders to multiplex the traffic of multiple SSDs through a single channel, but the difference between a 6-Gbps SAS channel and an 8-Gbps PCIe 2.0 connection is 33%, not an order of magnitude.
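A quick sketch of the arithmetic shows where the gap does and doesn't appear. The line rates are the raw figures from the paragraph above; the 24-drive expander topology is an assumed example, not any real product:

```python
# Where the bandwidth gap does and doesn't appear.
# Line rates are the raw figures cited above; the 24-drive
# expander topology is an assumed example.

SAS_CHANNEL_GBPS = 6.0
PCIE_GBPS = 8.0

# The per-channel gap: 33%, not 10x.
gap = (PCIE_GBPS - SAS_CHANNEL_GBPS) / SAS_CHANNEL_GBPS
print(f"per-channel gap: {gap:.0%}")

# The real bottleneck: multiplexing many SSDs through one channel.
drives = 24
print(f"per-SSD share, {drives} drives behind one expander link: "
      f"{SAS_CHANNEL_GBPS / drives:.2f} Gbps")
print(f"per-SSD, dedicated channel: {SAS_CHANNEL_GBPS:.1f} Gbps")
```

Give each SSD its own 6-Gbps channel and the order-of-magnitude argument evaporates; funnel two dozen drives through one expander link and it comes right back.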

When it comes to cost, Robin argues that the DRAM on a DIMM accounts for well over 90% of the DIMM's cost, while the flash in an SSD accounts for only 50% to 65% of the SSD's cost. My problem with this argument is that DIMMs are the very definition of a commodity. A DIMM is just 36 RAM chips and a very simple controller. Other than minor differences in CAS latency, one controller is much like another, and DIMMs from vendor A are pretty much the same as DIMMs from vendor B.

SSDs are, by comparison, much more complex. An SSD maker, even one building SSDs from the industry parts bin, needs to choose a controller, select firmware options, add a RAM cache, over-provision the flash to improve performance and device life, and, for an enterprise SSD, add an ultracapacitor to power the flushing of the RAM cache to flash in the event of a power outage.
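To see how those parts add up, here's a hypothetical bill of materials in Python. Every price is invented for illustration, and the over-provisioning capacities are an assumed example, but it shows how flash can land in the 50%-to-65% range while the rest of the cost buys real functionality:

```python
# A hypothetical enterprise-SSD bill of materials, to make the 50%-65%
# flash-cost fraction concrete. Every price below is an assumption
# invented for illustration, not real component pricing.

bom_usd = {
    "NAND flash (incl. over-provisioned dies)": 220,
    "controller":                                 60,
    "DRAM cache":                                 30,
    "ultracapacitor + power-loss circuitry":      40,
    "PCB, firmware, test and margin":             50,
}
total = sum(bom_usd.values())
flash_share = bom_usd["NAND flash (incl. over-provisioned dies)"] / total
print(f"total: ${total}, flash share: {flash_share:.0%}")   # 55%

# Over-provisioning: raw flash beyond the advertised capacity, spent on
# performance and endurance. A common definition: OP = (raw - usable) / usable.
raw_gb, usable_gb = 256, 200    # assumed enterprise-style provisioning
print(f"over-provisioning: {(raw_gb - usable_gb) / usable_gb:.0%}")  # 28%
```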

While it may be possible to save money building your own flash modules--and Nimbus Data's Tom Isakovich argues the company saves money by building its own SSDs--if those modules don't have a SAS or SATA interface, the array vendor will have to invest considerable R&D dollars in designing its own controllers.

Robin's reliability argument is based on Google's experience with more than 100,000 disk drives, which showed that only about half of disk failures occurred in the mechanical marvel we call the head-disk assembly. If, he argues, the SAS or SATA interface electronics cause half of all disk drive failures, why are we building SSDs with these failure-prone components? However, I'm pretty sure most of the electronics failures on hard disk drives aren't in the Marvell, LSI or PMC-Sierra SAS or SATA chip but in the head preamplifiers, read channels and other, more analog components on the hard-disk PC board.
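Here's a back-of-the-envelope version of that disagreement in Python. The annual failure rate and the interface-chip share are numbers I've assumed for illustration; only the roughly 50/50 split comes from the argument above:

```python
# What the roughly 50/50 failure split implies for SSDs, under simple
# assumptions. The AFR and interface-chip share are illustrative
# numbers; only the 50/50 split comes from the argument above.

hdd_afr = 0.04               # assumed 4% annual failure rate for HDDs
electronics_share = 0.50     # ~half of failures outside the head-disk assembly

# Robin's reading: keep the SATA interface, inherit ~half the failures.
afr_all_pcb = hdd_afr * electronics_share

# The counter-argument: preamps, read channels and other analog parts
# cause most PCB failures, and SSDs don't carry them. Assume the
# interface chip itself is only a small slice of electronics failures.
interface_chip_share = 0.10
afr_interface_only = hdd_afr * electronics_share * interface_chip_share

print(f"AFR if every PCB failure carries over to SSDs: {afr_all_pcb:.1%}")
print(f"AFR if only the interface chip carries over:   {afr_interface_only:.2%}")
```

If the interface chip is really only a small slice of the electronics failures, carrying it over to an SSD costs very little reliability.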

First, as you can see in the photo of the circuit card for a Western Digital VelociRaptor below, there's just one SATA chip, while the analog and control sides of the drive include many more components and, therefore, more things to break. Then there's my long experience that analog electronics are just a little twitchier than classic digital devices like SATA interface chips.

[Photo: Western Digital VelociRaptor circuit board]

Do SSDs have a future in the all-solid-state array market? They sure do. Could a vendor chasing performance at all costs eke out a few more IOPS and shave a few more microseconds of latency with custom modules? Probably, but I wonder whether it will turn out to be worth it.

Disclaimer: We at DeepStorage believe solutions speak for themselves. We're currently researching several projects related to flash memory.