Twenty years ago academics proposed that a redundant array of the inexpensive drives used in PCs could replace the high-performance drives that made mainframe DASD farms look like laundromats. The battle between STEC's Zeus IOPS, dominant in enterprise storage, and Pillar's and Dell/Equallogic's whole shelves of lower-cost Intel and Samsung SSDs reminds me of the RAID vs. SLED battle of the past.

Howard Marks

August 27, 2009


Over the past 20 years RAID technology has become so ingrained in how we build storage for anything more important than the family photos that most storage practitioners have forgotten it was once a revolutionary idea. In 1988, when Patterson, Gibson and Katz published their seminal paper, which both coined the term RAID and defined levels 1 through 5, mainframes and minicomputers used 14" drives that resembled your basic Maytag washer. Patterson et al. dubbed these SLEDs (Single Large Expensive Drives). Meanwhile PCs had 3.5" disks that were pitiful by comparison. The idea that a bunch of pitiful drives could out-perform and out-last the expensive enterprise disk was revolutionary at the time, but it led to the ultimate demise of SLEDs.

In truth the SLED makers were running out of options to make their drives ever bigger, stronger and faster. The IBM 3380 used as an example in Patterson's paper had 4 independent head positioners and could deliver 200 IOPS, but that complexity drove the price up to $15/MB and power consumption to over 6 kW for a single 7.5GB drive. While the 14" diameter of the platters made room for 4 head combs, it also made spinning the disk faster impractical. This technology had reached its zenith.
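Those figures make the SLED economics easy to check. A quick back-of-the-envelope sketch using only the numbers cited above (the arithmetic, not the article, is mine):

```python
# IBM 3380 figures as cited in the text; the arithmetic is illustrative.
capacity_gb = 7.5
price_per_mb = 15.0   # dollars
iops = 200
power_kw = 6.0

total_price = capacity_gb * 1000 * price_per_mb   # ~$112,500 per drive
print(f"Drive price:   ${total_price:,.0f}")
print(f"Cost per IOPS: ${total_price / iops:,.2f}")
print(f"Watts per GB:  {power_kw * 1000 / capacity_gb:,.0f}")
```

Over $560 per IOPS and 800 watts per gigabyte: numbers no amount of engineering on a 14" platter was going to rescue.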

Today's enterprise SSD market reminds me of the RAID vs. SLED days. Most vendors, from EMC and IBM to HP and Compellent, have added STEC's Zeus IOPS SSDs (the SLED equivalent) to their Fibre Channel arrays, and it's easy to see why. Not only do Zeus IOPS deliver a whopping 45,000 read and 16,000 write IOPS, they also come with a Fibre Channel interface, so vendors can just plug them into the JBOD slots where spinning FC disks went and re-tune the firmware on their controllers to accommodate the new devices.

The only problem with STEC's flagship drives is the price. Street price for a 146GB unit is around $16,000, or about $110/GB and $1 per write IOPS. Since enterprise users will almost always use mirrored pairs, that makes the cost of entry $32K. Compared to 20-200 short-stroked 15K RPM drives that's cheap, but it does require segregating 146GB of hot data into its own LUN.
Playing the RAID role, Pillar and Equallogic are using SATA-interface SLC devices from Intel and Samsung that deliver less than a quarter of the STECs' write I/O but also cost less than 1/20th as much. By using a whole shelf of 50 or 64GB devices, they can give their users twice the space and IOPS of a pair of STECs for what should be about the same price, once the slot costs are factored in.
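The trade-off is easy to rough out. In the sketch below the STEC figures come from the paragraphs above, while the per-drive SATA SLC price, write IOPS and shelf size are assumptions made up for illustration (roughly 1/20th the cost and about a quarter of the write speed, per the ratios cited); slot and shelf costs are left out.

```python
# STEC figures from the article; SATA SLC figures are assumed for illustration.
stec = {"capacity_gb": 146, "price": 16_000, "write_iops": 16_000}
sata = {"capacity_gb": 64, "price": 800, "write_iops": 4_000}

# Mirrored pair of STECs: one drive's capacity and write IOPS, twice the price.
pair_price = 2 * stec["price"]
pair_gb, pair_iops = stec["capacity_gb"], stec["write_iops"]

# Mirrored (RAID-10) shelf of 16 SATA drives: half the raw capacity is usable,
# and writes spread across the 8 mirror pairs.
n = 16
shelf_price = n * sata["price"]           # slot/shelf costs not included
shelf_gb = n * sata["capacity_gb"] / 2
shelf_iops = (n // 2) * sata["write_iops"]

print(f"STEC pair:  ${pair_price:,}  {pair_gb} GB  {pair_iops:,} write IOPS")
print(f"SATA shelf: ${shelf_price:,}  {shelf_gb:.0f} GB  {shelf_iops:,} write IOPS")
```

Even with generous assumptions for the shelf hardware around the drives, the cheap-drives-in-quantity side of the ledger looks familiar to anyone who remembers 1988.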

This approach makes sense for these vendors, just as adding STEC made sense for EMC. Both Equallogic and Pillar were figuring out how to best add SSDs to array architectures that used point-to-point SATA RAID controllers in each shelf. The enterprise guys had systems that supported SATA but were performance-optimized for FC drives on FC loops. Since the Intel and Samsung drives have a 2.5" form factor, I'm wondering how long it's going to be before someone bundles a Nehalem motherboard, an LSI Logic SAS/SATA RAID controller and 36 Intel X25-Es with FalconStor NSS to try to set the land-speed record for 3U storage systems. Maybe I can make a video...

With STEC playing SLED and Pillar and Dell/Equallogic playing RAID, things could get interesting. Think anyone else will consider an SSD RAID model? I'm sure STEC as a single source is worrying some vendor somewhere.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at:
