
Rethinking The Midrange Array

New midrange array architectures are challenging the dominance of the old model of two controllers with coherent non-volatile RAM caches. In addition to scale-out systems, vendors have recently started offering hybrid systems that move RAID to the drive shelves and, most interestingly, shared-cache systems.

Ever since Data General set the mold for the midrange array with the first CLARiiON in the 1990s, mainstream midrange arrays have followed a similar design: a pair of controllers sharing control of a group of dumb drive shelves and some amount of battery-backed NVRAM cache.

Regardless of whether the two controllers run in an active-active or active-standby configuration, the write cache has to be kept coherent across both. If it weren't, a controller could acknowledge a disk write and then fail before flushing it to disk; the data would be lost, and when the other controller took over, the application's data would be corrupt.
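To make that failure mode concrete, here's a minimal sketch of the write path in Python. The names and structure are mine, not any vendor's firmware: the point is simply that the host's write is acknowledged only after it lands in both controllers' NVRAM, so the survivor can replay anything the failed controller hadn't flushed.

```python
# A minimal sketch, not any vendor's firmware: a dual-controller write
# path that acknowledges the host only after the dirty block is mirrored
# into the partner's NVRAM, so a surviving controller can replay it.

class Controller:
    def __init__(self, name):
        self.name = name
        self.nvram = {}                      # LBA -> dirty data awaiting flush

def handle_write(owner, partner, lba, data):
    owner.nvram[lba] = data                  # stage in the owning controller
    partner.nvram[lba] = data                # mirror to keep caches coherent
    return "ACK"                             # only now is it safe to acknowledge

def take_over(survivor, disk):
    # On partner failure, the survivor flushes every mirrored dirty block.
    # Skip the mirror step above and these writes would simply be lost.
    for lba, data in survivor.nvram.items():
        disk[lba] = data
    survivor.nvram.clear()

a, b, disk = Controller("A"), Controller("B"), {}
handle_write(a, b, lba=42, data=b"page")     # host write lands in both caches
take_over(b, disk)                           # A fails; B still has the data
assert disk[42] == b"page"
```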

This requirement to maintain cache coherency means array vendors have to engineer a high-bandwidth, low-latency link between the controllers. That link limits the size of the write cache and consumes controller CPU cycles that could otherwise go to more advanced features such as replication or data reduction. The complexity of maintaining a shared cache is also why we haven't seen systems with more than two controllers, like HP's 3PAR and EMC's VMAX, in the midrange.

Rather than putting a few gigabytes of NVRAM cache in each controller, some of the latest systems I've seen use solid-state drives (SSDs) in the drive shelves as the top-level cache. These are either high-performance SLC flash SSDs, like STEC's ZeusIOPS, or ultra-high-speed RAM SSDs with internal ultracapacitors that let them dump their contents to internal flash in the event of a power failure.

Since the cache is now in the shared drive shelf--or, even better, mirrored across SSDs in multiple drive shelves--it's available to all the controllers in the array without the overhead required to maintain coherency. In a dual-active controller system where each controller "owns" some volumes, the controllers just need to exchange messages to adjust the allocation of cache space between them, rather than the data itself.
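A rough illustration of how lightweight that message exchange is, with hypothetical numbers throughout: assume a shared cache of 1,024 slots and two controllers reporting their current load.

```python
# A rough illustration (numbers are hypothetical): with the write cache on
# shared SSDs, the controllers exchange a tiny allocation message instead
# of mirroring the cached data itself.

CACHE_SLOTS = 1024                           # slots in the shared SSD cache

def rebalance(iops_a, iops_b):
    """Split the shared cache in proportion to each controller's load."""
    share_a = round(CACHE_SLOTS * iops_a / (iops_a + iops_b))
    return {"A": share_a, "B": CACHE_SLOTS - share_a}

# The entire inter-controller "message" is this small dict -- a few dozen
# bytes -- versus continuously mirroring megabytes of write data per second.
print(rebalance(iops_a=3000, iops_b=1000))   # {'A': 768, 'B': 256}
```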

When I first saw systems with this architecture, I was concerned that putting the cache at the end of a SAS connection would create a bottleneck and limit cache performance. The truth is, the kind of transactional applications that actually see a significant benefit from caching don't send enough data to saturate the 6 Gbps of a SAS channel. And the performance hit of a few microseconds of additional latency doesn't seem to matter much; the shared-cache systems I've seen are still blazingly fast.

An application like SQL Server or Exchange typically reads and writes 8K pages to and from its storage. It would take more than 90,000 8K IOPS to saturate a single 6-Gbps SAS channel, and a vendor looking to provide even greater performance from an all-solid-state array could simply use more NVRAM SSDs on additional channels.
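For the skeptical, here's the back-of-the-envelope math behind that figure. The 90,000-plus number falls out of the raw line rate; SAS's 8b/10b encoding trims the usable payload to about 600 MB/s, which still comes to roughly 73,000 8K IOPS per channel.

```python
# Back-of-the-envelope check of the saturation figure for one SAS channel.

line_rate = 6e9                              # 6-Gbps SAS channel, raw bits/s
page = 8 * 1024 * 8                          # one 8K page in bits

print(line_rate / page)                      # ~91,553 IOPS at raw line rate
print(0.8 * line_rate / page)                # ~73,242 IOPS after 8b/10b encoding
```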