The Perfect SAN Array

Howard Marks

October 1, 2009

Back when I wrote for PC Magazine, we did an issue once a year on the Perfect PC. It assembled the best features and performance parts from the various systems we'd tested over the past year, and with the help of the art department, we'd combine them into our idea of the perfect PC. Today I'd like to do the same for a SAN array, discussing the specs of the perfect midrange array.

My dream array would have no battery. Like Adaptec's server RAID cards, it would use an ultracapacitor and flash instead. When the power fails, it dumps its write cache to MLC flash, which shouldn't take more than a minute or so.
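Just to sanity-check that "minute or so," here's a rough sketch of the destage math. The cache size, write rate and class names below are my own assumptions, not anyone's actual firmware:

```python
# Hypothetical destage-on-power-loss sketch; cache size and flash write rate
# are assumptions, not any vendor's specs.

class WriteCache:
    def __init__(self, dirty_mb=2048, flash_write_mb_s=80):
        self.dirty_mb = dirty_mb                  # DRAM write cache, worst case fully dirty
        self.flash_write_mb_s = flash_write_mb_s  # sustained sequential MLC write rate

    def on_power_loss(self):
        # The ultracapacitor only has to hold the controller up long enough
        # to stream the dirty cache out to flash sequentially.
        seconds = self.dirty_mb / self.flash_write_mb_s
        print(f"destaging {self.dirty_mb} MB to flash takes ~{seconds:.0f} seconds")

WriteCache().on_power_loss()   # ~26 seconds for 2GB at 80MB/s
```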

I've always found cache battery maintenance annoying. Having to bring down a server or storage array to change the battery is like taking your car to the shop because the spare tire is flat: no fun, no progress, but reduced productivity and, for most of us in the mid-market, a lost evening or weekend day. I sure hope more vendors jump on the ultracapacitor-and-flash bandwagon, so that in a few years we can look at cache batteries with the disdain we now have for removable disk packs.

The dream array would use SAS exclusively, as both the drive interface and the JBOD interconnect. Now that Seagate and the other drive vendors are shipping both 15K RPM performance-oriented and 5,400-7,200 RPM capacity-oriented drives with dual-port SAS interfaces, we can standardize on SAS. As a JBOD interconnect, a single SAS cable carries three times the bandwidth of an 8Gbps FC loop at a significantly lower cost. Even with the additional bandwidth, a SAS expander-based drive chassis has to be less expensive to produce than an FC SBOD, since the expander chips are off-the-shelf silicon.
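That three-to-one claim is easy to check on the back of an envelope, assuming a standard 4-lane (x4) 6Gbps SAS wide-port cable against an 8Gbps FC loop, with 8b/10b encoding on both links:

```python
# Rough bandwidth comparison; lane counts and encoding overhead are the only inputs.

sas_lanes = 4                     # a standard x4 wide-port SAS cable
sas_gbps_per_lane = 6.0           # 6Gbps SAS
fc_gbaud = 8.5                    # 8Gb FC line rate
encoding = 8.0 / 10.0             # 8b/10b overhead applies to both links

sas_usable = sas_lanes * sas_gbps_per_lane * encoding   # ~19.2 Gbps of payload
fc_usable = fc_gbaud * encoding                         # ~6.8 Gbps of payload

print(f"SAS x4: {sas_usable:.1f} Gbps, 8Gb FC: {fc_usable:.1f} Gbps, "
      f"ratio ~{sas_usable / fc_usable:.1f}x")          # call it three times
```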

The drives would include on-drive encryption, with the array controller handling all the keys, so I can send failed drives back to the factory for credit without worrying about my data.

Rather than building RAID sets up from physical drives and then slicing them back into LUNs, my dream array -- like Compellent Storage Centers, HP EVAs and 3PAR InServs, among others -- would build LUNs from blocks spread across a larger pool of available drives. Of course, this would enable thin provisioning, data protection across drives of dissimilar sizes, faster rebuilds of failed drives, and spare capacity distributed across the array rather than dedicated spare drives. A long warranty, a la Xiotech Emprise, would be nice, but I'm not sure I want to give up hot-swappable drives just yet.
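For readers who haven't run into block-level virtualization, here's a toy sketch of the idea, with made-up names and with redundancy left out, showing why thin provisioning and mixed drive sizes fall out of the design almost for free:

```python
import random

EXTENT_MB = 256

class Pool:
    """A pool of drives of dissimilar sizes, carved into fixed-size extents."""
    def __init__(self, drive_sizes_gb):
        self.free = {d: list(range(size * 1024 // EXTENT_MB))
                     for d, size in enumerate(drive_sizes_gb)}

    def allocate_extent(self):
        # Spread allocations across every drive that still has free space.
        drive = random.choice([d for d, extents in self.free.items() if extents])
        return drive, self.free[drive].pop()

class ThinLun:
    """A LUN that maps virtual extents to pool extents on first write only."""
    def __init__(self, pool, size_gb):
        self.pool, self.size_gb = pool, size_gb
        self.map = {}   # virtual extent number -> (drive, physical extent)

    def write(self, offset_mb, data):
        vext = offset_mb // EXTENT_MB
        if vext not in self.map:              # thin provisioning: allocate lazily
            self.map[vext] = self.pool.allocate_extent()
        # ...forward the data to the mapped drive/extent (redundancy omitted)...

pool = Pool([450, 450, 1000, 2000])           # mixed drive sizes, one shared pool
lun = ThinLun(pool, size_gb=500)              # presents 500GB, consumes almost nothing
lun.write(0, b"first write")
print(lun.map)                                # exactly one extent actually allocated
```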

It would also use RAM and flash intelligently to accelerate access to my hot data. In addition to flash LUNs and automated tiering that moves hot data to SSDs on a block-by-block basis, my dream array would, like Sun's Readzilla, have a huge (200GB to 500GB) MLC flash read cache, so that my file system metadata and the frequently accessed code in my VM images stay cached.
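A read cache that big is really just a key-value store keyed by block address. A minimal sketch follows; the LRU eviction is purely for illustration, and a real controller would be cleverer about what it keeps:

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy flash read cache: block address -> data, LRU eviction."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()

    def read(self, lba, read_from_disk):
        if lba in self.blocks:                  # metadata and VM boot blocks hit here
            self.blocks.move_to_end(lba)
            return self.blocks[lba]
        data = read_from_disk(lba)              # miss: go to spinning disk
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the coldest block
        return data

# A 300GB MLC cache holds roughly 78 million 4KB blocks:
print(300 * 1024**3 // 4096)
```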

Servers would access the array via 8Gbps FC and 10Gbps Ethernet with CEE/DCB support. The Ethernet ports would support both iSCSI and FCoE. The iSCSI implementation would let me choose whether to present each LUN as an individual target, EqualLogic style, or the array as a single target with multiple LUNs the way FC arrays do. While I'm dreaming, I might even want an FCoE name server so servers could connect directly via FCoE without an FCoE switch.
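The difference between the two iSCSI presentation styles is easiest to see written out. A sketch with invented volume names and IQNs:

```python
volumes = ["vmfs01", "sql-data", "exchange-logs"]

# EqualLogic style: every volume gets its own target, always at LUN 0
per_volume_targets = {
    f"iqn.2009-10.com.example.array:{vol}": {0: vol} for vol in volumes
}

# FC-array style: one target per controller port, volumes as LUNs 0..n
single_target = {
    "iqn.2009-10.com.example.array:ctrl-a": dict(enumerate(volumes))
}

print(per_volume_targets)
print(single_target)
```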

Those Ethernet ports would also support both synchronous and asynchronous replication. That should include the ability for a pair of replicating arrays to both present the same LUN identity (target and LUN, IQN, etc.) to servers, the way Compellent's Live Volume or HDS' HAM do, so I can VMotion a server from one data center to the other and have it always access its local array.
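Here's a hypothetical sketch of the host side of that federation: both arrays present the same target identity, and the multipath policy simply prefers whichever array sits in the host's own data center. All names and IQNs are invented:

```python
# Both replicating arrays expose the "same" LUN; the host just picks local paths.
paths = [
    {"target": "iqn.2009-10.com.example:vol7", "array": "array-east", "site": "east"},
    {"target": "iqn.2009-10.com.example:vol7", "array": "array-west", "site": "west"},
]

def preferred_paths(host_site):
    local = [p for p in paths if p["site"] == host_site]
    return local or paths        # fall back to the remote copy if the local array dies

# After a VMotion from east to west, the same guest keeps the same target
# identity but its I/O now lands on the west-side array:
print(preferred_paths("west"))
```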

While I'm dreaming, let's include an iSCSI initiator that can connect to any other iSCSI array and do asynchronous replication. Array vendors keep telling me they can replicate better and faster if they control both ends of the circuit, but I can't understand why someone doesn't offer a least-common-denominator replication protocol as open source and shame their competitors into supporting it.

That's all I have room for today. What features would your dream array have? Which would you like to see become standard on next year's models?

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M. and concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
