
QLogic's Mount Rainier Brings Flash Cache to HBAs

QLogic recently announced its Mount Rainier Project, which promises to deliver server-side flash caching--not through software and an SSD, as many others do, but built into a new line of Fibre Channel HBAs. While products from this project won't appear on the market for several months, many of the features of Mount Rainier are a big step forward from most of today's server-side caching products.

I've been following developments in the server-side caching market closely over the past few months while researching a report, so I was expecting QLogic and/or Emulex to introduce caching Fibre Channel HBAs. Doing the caching in hardware would not only shift the compute and memory overhead of caching from the server's main processor to the HBA, but also eliminate most driver and software compatibility problems. As long as a caching HBA uses the same driver as its cacheless counterpart, it inherits that driver's deep OS and hypervisor support.

It would be relatively simple, I thought, to implement a write-through cache in the FC ASIC and use something like a Viking SATADIMM on the HBA to provide 64 or 128GB of cache. System administrators could then specify which LUNs they want to cache through the HBA management software.
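To make that concrete, here's a minimal sketch of write-through caching logic, written in Python purely for readability. The real logic would live in the HBA's ASIC and firmware; every name here (flash, array, cached_luns) is my illustration, not QLogic's design.

    # Minimal write-through cache sketch. "flash" stands in for the
    # on-HBA SSD and "array" for the back-end Fibre Channel storage;
    # both are hypothetical stand-ins, not QLogic interfaces.
    class WriteThroughCache:
        def __init__(self, array, cached_luns):
            self.flash = {}                 # (lun, lba) -> block
            self.array = array              # back-end storage
            self.cached_luns = cached_luns  # LUNs the admin chose to cache

        def read(self, lun, lba):
            if lun in self.cached_luns and (lun, lba) in self.flash:
                return self.flash[(lun, lba)]      # cache hit, served from flash
            block = self.array.read(lun, lba)      # cache miss
            if lun in self.cached_luns:
                self.flash[(lun, lba)] = block     # populate cache on miss
            return block

        def write(self, lun, lba, block):
            # Write-through: don't acknowledge until the array has the
            # data, so a server crash can never strand dirty blocks.
            self.array.write(lun, lba, block)
            if lun in self.cached_luns:
                self.flash[(lun, lba)] = block     # keep the cache coherent

Note that in this model a write is never acknowledged faster than the array can accept it; that limitation is exactly what QLogic's mirrored write-back design, described below, gets around.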

QLogic, however, decided that rather than making a small evolutionary step by adding a little read cache to an HBA, it would go big. The company announced it would make three versions of Mount Rainier, all based on its 8-Gbps Fibre Channel HBAs. The first model is a new HBA teamed with a custom PCIe SSD, which QLogic will OEM from an unspecified vendor; the HBA and SSD are joined by a PCIe x4 cable connection. While this configuration takes up two valuable PCIe slots, an HBA with a PCIe x4 cable connection could be interesting when paired with 2.5-inch PCIe SSDs, from the drives Micron supplies to Dell today to future SCSI Express and NVM Express standard devices.

A second version of the HBA has a SAS connector for use with a standard SSD, and the third is a single-slot integrated HBA and SSD. The integrated version draws 50 watts from the PCIe bus, which is significantly more than standard slots provide; it's clearly a design for a specific server OEM that QLogic wouldn't name.

If QLogic were simply announcing caching HBAs, it would have just another interesting tool for addressing storage performance problems. Fortunately, it's going a couple of steps beyond any caching solution currently on the market by mirroring the cache across a pair of HBAs. Cache mirroring allows QLogic to implement a write-back cache that not only accelerates reads, as a write-through cache does, but also caches disk writes.
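The difference is easiest to see in code. In this hedged sketch, again with hypothetical names rather than anything QLogic has published, a write is acknowledged once the block sits in two HBAs' caches instead of waiting for the array:

    # Write-back sketch: acknowledge once the block is in the local
    # cache AND mirrored to the partner HBA, then destage to the
    # array in the background. All names are hypothetical.
    class WriteBackCache:
        def __init__(self, partner, array):
            self.flash = {}         # (lun, lba) -> block
            self.partner = partner  # paired HBA holding the mirror copy
            self.array = array
            self.dirty = set()      # blocks not yet flushed to the array

        def write(self, lun, lba, block):
            self.flash[(lun, lba)] = block
            self.partner.mirror(lun, lba, block)  # second copy survives a
            self.dirty.add((lun, lba))            # single server failure
            # ...acknowledge the write to the OS here, at flash speed,
            # before the (slower) array write happens.

        def flush(self):
            # Background destage of dirty blocks to back-end storage.
            for lun, lba in sorted(self.dirty):
                self.array.write(lun, lba, self.flash[(lun, lba)])
            self.dirty.clear()

The mirror is the whole point: without that second copy, acknowledging writes from a single server's cache would be a data-loss hazard.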

The problem with server-side write caching is what happens when a server fails. Some of the data will be trapped in the failed server's cache, so if you simply bring the workloads that were running on that server up on another server, they'll be looking at old, inconsistent data. To recover properly, you'd have to pull the SSD from the failed server, install it in a new server running the caching software, and run a cache flush before letting applications access the data.

Mount Rainier HBAs act simultaneously as Fibre Channel initiators, to access data on the array, and as targets, so other cards can access their cache for mirroring. If a virtual server being cached on server A is vMotioned to server B, the new server's HBA can access the cache data on server A until it populates its local cache. If a server fails, the Mount Rainier card in its mirror partner can flush the data to the back-end storage.
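Both behaviors fall out of that initiator-plus-target design. A rough sketch of the two paths, with hypothetical names as before:

    # After a vMotion, read through the old server's cache (reachable
    # because its HBA is also an FC target) while warming the local one.
    def read_after_vmotion(cache, remote_cache, lun, lba):
        block = cache.flash.get((lun, lba))
        if block is None:
            block = remote_cache.get((lun, lba))   # warm hit on server A
        if block is None:
            block = cache.array.read(lun, lba)     # fall back to the array
        cache.flash[(lun, lba)] = block            # populate the local cache
        return block

    # If a server dies, its mirror partner destages the stranded writes
    # so workloads can restart anywhere against consistent data.
    def flush_failed_partner(mirror_copy, array):
        for (lun, lba), block in mirror_copy.items():
            array.write(lun, lba, block)
        mirror_copy.clear()

Either way, no one has to pull an SSD out of a dead server before the data on the array is consistent again.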

QLogic clearly used the code word "project" to describe these as technology, not product, announcements, so it will be several months before you can buy caching HBAs. Even with the delay, I think QLogic is advancing the state of server-side caching technology. I'm looking even further forward to a 10Gbps Ethernet/FCoE CNA version connected to NVMe SSDs that would give me all the I/O I need in a single slot.

Disclaimer: QLogic has ordered a copy of my server-side caching report. Dell, Emulex and EMC are or have been clients of DeepStorage LLC, and Micron has provided SSDs for use in the lab.