QLogic's Mount Rainier Brings Flash Cache to HBAs

What can you expect from QLogic's Mount Rainier Project when products eventually appear? For starters, three versions. Learn what else is in store.

Howard Marks

September 10, 2012

4 Min Read

QLogic recently announced its Mount Rainier Project, which promises to deliver server-side flash caching--not through software and an SSD, as many others do, but built into a new line of Fibre Channel HBAs. While products from this project won't appear on the market for several months, many of the features of Mount Rainier are a big step forward from most of today's server-side caching products.

I've been following developments in the server-side caching market closely over the past few months while developing a report, so I was expecting QLogic and/or Emulex to introduce caching Fibre Channel HBAs. Doing the caching in hardware would not only shift the compute and memory overhead of caching from the server's main processor to the HBA, but it would also eliminate most driver and software compatibility problems. As long as the caching HBA uses the same driver as its cacheless counterpart, it would inherit that deep OS and hypervisor support.

It would be relatively simple, I thought, to implement a write-through cache in the FC ASIC and use something like a Viking SATADIMM on the HBA to provide 64 or 128GB of cache. System administrators could then specify which LUNs they want to cache through the HBA management software.
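To make that idea concrete, here's a minimal, purely illustrative sketch of write-through caching logic in Python. The class names, the LUN/block interface, and the ArrayBackend stand-in are my own assumptions for illustration; they don't describe QLogic's (or anyone's) actual implementation.

```python
class ArrayBackend:
    """Stand-in for the back-end disk array (purely illustrative)."""
    def __init__(self):
        self.blocks = {}

    def read(self, lun, block):
        return self.blocks.get((lun, block))

    def write(self, lun, block, data):
        self.blocks[(lun, block)] = data


class WriteThroughCache:
    """Write-through cache for a set of administrator-selected LUNs."""
    def __init__(self, backend, cached_luns):
        self.backend = backend          # the back-end array
        self.cached_luns = cached_luns  # LUNs the admin chose to cache
        self.cache = {}                 # (lun, block) -> data held in flash

    def read(self, lun, block):
        key = (lun, block)
        if lun in self.cached_luns and key in self.cache:
            return self.cache[key]      # cache hit served from flash
        data = self.backend.read(lun, block)
        if lun in self.cached_luns:
            self.cache[key] = data      # populate the cache on a miss
        return data

    def write(self, lun, block, data):
        # Write-through: the write goes to the array before it's acknowledged,
        # so the cache never holds data the array doesn't.
        self.backend.write(lun, block, data)
        if lun in self.cached_luns:
            self.cache[(lun, block)] = data
```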

QLogic, however, decided that rather than taking a small evolutionary step by adding a little read cache to an HBA, it would go big. The company announced it will make three versions of Mount Rainier, all based on its 8-Gbps Fibre Channel HBAs. The first model is a new HBA teamed with a custom PCIe SSD, which QLogic will OEM from an unspecified vendor; the HBA and SSD are joined by a PCIe x4 cable connection. While this configuration takes up two valuable PCIe slots, the cabled HBA could be interesting when combined with 2.5-inch PCIe SSDs, from the ones Micron supplies to Dell to future models built to the SCSI Express and NVM Express standards.

A second version of the HBA has a SAS connector for use with a standard SSD, and the third is a single-slot integrated HBA and SSD. The integrated version draws 50 watts from the PCIe bus, significantly more than standard slots provide, and is clearly a design for a specific server OEM that QLogic wouldn't name.

If QLogic were just announcing caching HBAs, it would have just another interesting tool for addressing storage performance problems. Luckily, it's going a couple of steps beyond any of the caching solutions currently on the market by mirroring the cache across a pair of HBAs. Cache mirroring allows QLogic to implement a write-back cache that not only accelerates reads, as a write-through cache does, but also caches disk writes.
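Here's an equally hypothetical sketch of the write-back idea: the write is acknowledged once it lands in local flash and has been mirrored to the partner HBA, and dirty blocks are destaged to the array later. The PartnerMirror and WriteBackCache names and interfaces are assumptions for illustration, not QLogic's design; ArrayBackend is as in the earlier sketch.

```python
class PartnerMirror:
    """Stand-in for the partner HBA's cache, reached over the fabric (illustrative)."""
    def __init__(self):
        self.copy = {}

    def store(self, lun, block, data):
        self.copy[(lun, block)] = data


class WriteBackCache:
    """Write-back cache that mirrors dirty data to a partner before acknowledging."""
    def __init__(self, backend, mirror):
        self.backend = backend  # the back-end array, as in the earlier sketch
        self.mirror = mirror    # the partner HBA's cache
        self.dirty = {}         # (lun, block) -> data written but not yet on the array

    def write(self, lun, block, data):
        self.dirty[(lun, block)] = data      # land the write in local flash
        self.mirror.store(lun, block, data)  # mirror it before acknowledging
        return "ack"                         # the host sees the write complete here

    def destage(self):
        # Background task: push dirty blocks out to the array, then drop them.
        for key, data in list(self.dirty.items()):
            lun, block = key
            self.backend.write(lun, block, data)
            del self.dirty[key]
```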

The problem with server-side write caching is what happens when a server fails. Some of the data will be trapped in the failed server's cache, so if you just bring the workloads that were running on that server up on another server, they'll be looking at old, inconsistent data. To properly recover, you'd have to pull the SSD from the failed server, put it in a new server with the caching software, and run a cache flush process before letting applications access the data.

Mount Rainier HBAs act simultaneously as Fibre Channel initiators (to access data on the array) and as targets (so other cards can access the cache for mirroring). If a virtual server that was being cached on server A is vMotioned to server B, the new server's HBA can access the cached data on server A until it populates its local cache. If a server fails, the Mount Rainier card on its mirror partner can flush the data to the back-end storage.
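Continuing the same hypothetical sketch, the failover case amounts to the surviving partner flushing the failed server's mirrored dirty blocks to the array before its workloads are restarted elsewhere; the function below assumes the PartnerMirror and ArrayBackend stand-ins from the previous sketches.

```python
def flush_failed_servers_cache(partner_mirror, backend):
    """When server A fails, its partner writes A's mirrored dirty blocks to the
    array so workloads restarted on another server see consistent data
    (illustrative only)."""
    for (lun, block), data in list(partner_mirror.copy.items()):
        backend.write(lun, block, data)
    partner_mirror.copy.clear()  # the array is now consistent; safe to restart workloads
```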

QLogic clearly used the code word "project" to signal that these are technology, not product, announcements, so it will be several months before you can buy caching HBAs. Even with the delay, I think QLogic is advancing the state of server-side caching technology. I'm looking even further forward to a 10-Gbps Ethernet/FCoE CNA version connected to NVMe SSDs that will give me all the I/O I need in a single slot.

Disclaimer: QLogic has ordered a copy of my server-side caching report. Dell, Emulex and EMC are or have been clients of DeepStorage LLC, and Micron has provided SSDs for use in the lab.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
