Solid State Drives Spark New Wave of Storage System Startups


Howard Marks

February 27, 2012


It was just last year that many of us in the storage chattering class were talking about how the market for mainline storage systems had consolidated after 3Par and Compellent were swallowed up by HP and Dell, respectively, leaving the market with just one or two smaller vendors to keep the big boys honest. That discussion failed to recognize that solid state storage had made the transition to the mainstream, and that there was a new wave of startups building storage systems optimized for flash.

The new lineup of storage system vendors includes startups Violin, Whiptail, Pure Storage, Kaminario and SolidFire with all-solid-state systems, along with Nimble Storage, Tintri, NexGen and Tegile with designed-from-scratch hybrids. Add in re-startup Starboard Storage (formerly Reldata) and the slightly more established Nimbus Data and GreenBytes, and the market for innovative flash-based storage systems starts to look pretty exciting.

While just about every midrange storage system on the market today has some flash storage option--and some, like EMC's VNX and HP's LeftHand, have all-solid-state models--the differences between spinning disks and flash memory are so significant that maximizing the impact of flash requires redesigning the storage system from top to bottom. The conventional argument for new flash architectures is that conventional RAID controllers don't have the compute and I/O horsepower to handle more than a handful of SSDs. After all, a controller designed to handle the 200,000 IOPS that 1,000 spinning disks can deliver will be vastly overloaded when facing 20 SSDs that can each deliver 100,000 IOPS.
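To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python. The per-device figures are the rough numbers cited above, not benchmarks of any particular drive:

HDD_IOPS = 200          # rough random IOPS for one spinning disk
SSD_IOPS = 100_000      # rough random IOPS for one enterprise SSD

hdd_aggregate = 1_000 * HDD_IOPS    # 1,000 disks -> 200,000 IOPS
ssd_aggregate = 20 * SSD_IOPS       # 20 SSDs     -> 2,000,000 IOPS

print(f"1,000 HDDs: {hdd_aggregate:,} IOPS")
print(f"20 SSDs:    {ssd_aggregate:,} IOPS")
print(f"Overload:   {ssd_aggregate // hdd_aggregate}x the controller's design point")

Twenty SSDs present 10 times the I/O load the controller was sized for, which is the heart of the "not enough horsepower" argument.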

I think there's more, and less, to it than that. After all, many of these systems, from the all-solid-state startups and the 800-pound gorillas of the storage business alike, use what are essentially Xeon servers as their RAID controllers. Nehalem- and Westmere-based systems should be able to handle the 200,000 to 500,000 IOPS a midrange system needs to deliver. I think the big difference between an established array with flash caching and/or tiering and the best of the newcomers is more about data layout than about controller CPU cycles.

It's all about the difference between flash and spinning disks. Disk drives provide symmetrical performance for read and write I/O, but asymmetrical performance for sequential vs. random access. Flash is symmetrical for random vs. sequential access, but asymmetrical for reads vs. writes. Add in the fact that disk drive failures are almost entirely random occurrences, while flash memory has limited write endurance, and the best way to deal with each storage device is very different.
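The symmetry argument is easier to see with rough numbers attached. The service times below are order-of-magnitude illustrations assumed for this sketch, not measurements of any particular device:

# Order-of-magnitude service times in microseconds; illustrative assumptions only.
latency_us = {
    ("disk",  "sequential", "read"):    100,
    ("disk",  "sequential", "write"):   100,   # read ~= write: symmetrical
    ("disk",  "random",     "read"):  8_000,   # seek and rotation dominate...
    ("disk",  "random",     "write"): 8_000,   # ...reads and writes equally
    ("flash", "sequential", "read"):    100,
    ("flash", "sequential", "write"): 1_000,   # program time >> read time
    ("flash", "random",     "read"):    100,   # random costs the same as sequential
    ("flash", "random",     "write"): 1_000,
}

for (device, pattern, op), us in latency_us.items():
    print(f"{device:5}  {pattern:10}  {op:5}  ~{us:>5} us")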

To maximize performance, conventional disk arrays do everything they can to avoid moving the heads on a disk drive. Moving the heads takes 2 to 15 ms, a really long time if you're a computer, and on shared storage, moving the heads to fetch data for one application means they may have to move right back when the application that last read data needs more or wants to write an update. When dealing with flash, the goal is not to minimize random I/O but to minimize writes, especially writes smaller than a full flash block, because flash can't be overwritten in place: updating data already on the device means erasing, and then rewriting, an entire flash block.
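A small worked example shows how expensive a sub-block write can be. The block and write sizes here are assumptions for illustration, not any specific device's geometry:

ERASE_BLOCK = 256 * 1024    # assumed flash erase-block size: 256 KB
WRITE_SIZE  = 4 * 1024      # a typical small random write:     4 KB

# An in-place 4 KB update forces the device to preserve the block's
# surviving data, erase the whole block, then rewrite everything:
# one full block of program/erase work for 4 KB of new data.
write_amplification = ERASE_BLOCK // WRITE_SIZE
print(f"Worst-case write amplification: {write_amplification}x")   # 64x

Every one of those amplified writes also burns program/erase cycles, which is exactly the endurance budget flash has the least of.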

So conventional arrays allocate valuable RAM cache to read-ahead buffers and use other tricks to minimize head motion. Many of the new generation instead use log structures or non-volatile write-coalescing buffers, and organize the data on SSDs so data written at the same time lands in the same flash block, rather than storing data the way it's presented to the hosts.
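Here's a minimal sketch of that write-coalescing idea, assuming a hypothetical CoalescingBuffer class and a stand-in append_to_log() function; a real array would also keep a persistent map from host address to log position, which is omitted here:

BLOCK_SIZE = 256 * 1024     # assumed flash block size

def append_to_log(segment):
    # Stand-in for one full-block write to the flash log.
    pass

class CoalescingBuffer:
    def __init__(self):
        self.pending = []         # (host_offset, data) pairs in arrival order
        self.pending_bytes = 0

    def write(self, offset, data):
        # Small host writes accumulate in a non-volatile buffer instead
        # of going straight to flash as sub-block writes.
        self.pending.append((offset, data))
        self.pending_bytes += len(data)
        if self.pending_bytes >= BLOCK_SIZE:
            self.flush()

    def flush(self):
        # Everything written around the same time goes to flash as one
        # full-block segment, in log order rather than host-address order.
        segment = b"".join(data for _, data in self.pending)
        append_to_log(segment)
        self.pending.clear()
        self.pending_bytes = 0

The payoff is that the SSD only ever sees full-block writes, which sidesteps the erase-before-write penalty and spreads wear evenly across the log.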

New technologies require new designs. Vendors without a huge installed base that wants more of the same are in a better position to provide them.

DeepStorage has worked for Dell, HP, EMC and Starboard Storage. We hope to add the rest of the companies mentioned in this post to our client roster soon.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
