Rethinking The Midrange Array

More than a cosmetic makeover: new midrange array architectures are challenging the dominance of the old two-controller model with scale-out systems, hybrid systems that move RAID into the drive shelves, and shared cache systems.

Howard Marks

May 7, 2012

3 Min Read

New midrange array architectures challenge the dominance of the old model of two controllers with coherent non-volatile RAM caches. In addition to scale-out systems, vendors have recently started offering hybrid systems that move RAID to the drive shelves and, most interestingly, shared cache systems.

Ever since Data General set the mold for the midrange array with the first Clariion in the 1990s, mainstream midrange arrays have followed a similar design: a pair of controllers sharing control of a group of dumb drive shelves and some amount of battery backed up NVRAM cache.

Regardless of whether the two controllers run in an active-active or active-standby configuration, the write cache has to be kept coherent across both controllers. If it weren't, a controller could acknowledge a disk write and then fail before flushing it to disk; that data would be lost, and when the other controller took over, the application's data would be corrupt.

This requirement to maintain cache coherency means array vendors have to engineer a high-bandwidth, low-latency link between the controllers. This limits the size of the write cache and consumes controller CPU cycles that could be used to implement more advanced features such as replication or data reduction. The complexity of maintaining a shared cache is also why we haven't seen systems with more than two controllers, like HP's 3PAR and EMC's VMAX, in the midrange.
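The write path described above can be sketched in a few lines. This is a hypothetical illustration of the design pattern, not any vendor's actual firmware; the `Controller` class and its methods are invented for clarity. The key point is that the host sees an acknowledgment only after the write has been mirrored to the partner's NVRAM, which is exactly the inter-controller round trip that consumes link bandwidth and CPU cycles.

```python
# Sketch of the classic dual-controller write path: acknowledge a host
# write only after the cached copy is mirrored to the partner's NVRAM.
# All names here are illustrative, not a real array's API.

class Controller:
    def __init__(self, name):
        self.name = name
        self.nvram = {}   # block address -> data; battery-backed in a real array
        self.peer = None  # partner controller reached over the mirroring link

    def write(self, block, data):
        """Cache a host write, mirror it to the peer, then acknowledge."""
        self.nvram[block] = data
        # Mirror over the inter-controller link BEFORE acking the host.
        # This round trip is the coherency overhead the article describes.
        self.peer.nvram[block] = data
        return "ack"

    def take_over(self):
        """On partner failure, the mirrored cache makes failover safe."""
        return dict(self.nvram)

a, b = Controller("A"), Controller("B")
a.peer, b.peer = b, a

a.write(100, b"payload")
# If A now fails, B still holds block 100: no acknowledged write is lost.
assert b.take_over()[100] == b"payload"
```

If the mirroring step were skipped, the `assert` would fail after a failover, which is the corruption scenario that forces vendors to build the fast inter-controller link in the first place.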

Rather than putting a few gigabytes of NVRAM cache in each controller, some of the latest systems I've seen are using solid-state drives (SSDs) in the drive shelves as the top-level cache. These are either high-performance SLC flash SSDs, like STEC's ZeusIOPS, or ultra-high-speed RAM SSDs with internal ultracapacitors that allow them to dump their contents to internal flash in the event of a power failure.

Since the cache is now in the shared drive shelf--or, even better, mirrored across SSDs in multiple drive shelves--it's available to all the controllers in the array without the overhead required to maintain coherency. In a dual-active controller system where each controller "owns" some volumes, the controllers just need to exchange messages to adjust the allocation of cache space between them, rather than the data itself.

When I first saw systems with this architecture, I was concerned that putting the cache at the end of a SAS connection would create a bottleneck and limit cache performance. The truth is, the kind of transactional applications that actually see a significant benefit from caching don't send enough data to saturate the 6 Gbps of a SAS channel. And the performance hit of a few microseconds of additional latency doesn't seem to matter much, as the shared cache systems I've seen are still blazingly fast.

An application like SQL Server or Exchange typically reads and writes 8K pages to and from its storage. It would take more than 90,000 8K IOPS to saturate a single 6-Gbps SAS channel, and a vendor looking to provide even greater performance from an all-solid-state array could simply use more NVRAM SSDs on additional channels.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
