QLogic Launches FabricCache HBA to Accelerate Writes

Last month QLogic announced the first fruit of its Mount Rainier Project, the QLE10000 FabricCache adapter. By building flash memory and caching logic into a Fibre Channel HBA, FabricCache and future Mount Rainier adapters can provide flash caching without the overhead of software drivers in the host operating system or hypervisor.

Even better, FabricCache goes a step further than most server-side caching software. Those products generally implement a write-through or write-around cache that accelerates only read requests: they forward writes directly to back-end storage and don't acknowledge a write until the data has been safely written to the back end.

The problem, of course, is that write-through caches do little to speed writes. You could implement a write-back cache in a server, but that puts your data at risk: if the server crashes before the data can be written to the back end, the only copy exists in the cache and is lost.
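
To make the distinction concrete, here is a minimal sketch in Python of the two write policies. The names (FakeArray, WriteThroughCache, WriteBackCache) are mine, invented purely for illustration; they don't correspond to anything QLogic or the caching-software vendors actually ship.

    # Hypothetical sketch: none of these names come from QLogic; they just
    # illustrate the two policies. FakeArray stands in for a slow back-end array.

    class FakeArray:
        def __init__(self):
            self.blocks = {}

        def write(self, lba, data):
            # Imagine milliseconds of spinning-disk latency here.
            self.blocks[lba] = data

    class WriteThroughCache:
        def __init__(self, backend):
            self.flash = {}                # server-side flash cache
            self.backend = backend

        def write(self, lba, data):
            self.flash[lba] = data         # helps later reads
            self.backend.write(lba, data)  # the write still waits on the slow array
            return "ack"                   # acknowledged only after the array has it

    class WriteBackCache:
        def __init__(self, backend):
            self.flash = {}
            self.dirty = set()             # blocks not yet written to the array
            self.backend = backend

        def write(self, lba, data):
            self.flash[lba] = data         # acknowledged as soon as flash has it
            self.dirty.add(lba)            # flushed to the array later...
            return "ack"                   # ...so a crash now loses every dirty block

The write-through version never acknowledges faster than the array behind it; the write-back version acknowledges immediately but holds the only copy of dirty blocks in a single server's flash.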

FabricCache is a two-slot PCIe product with an 8Gbps Fibre Channel HBA in one slot and a 200GB or 400GB SSD in the other. The two cards are linked by a ribbon cable that carries the data, while the SSD uses its PCIe slot only for power. Each card acts as both a typical Fibre Channel initiator and a target, allowing other Mount Rainier-based HBAs to access its SSD. When caching writes, a FabricCache adapter writes the data to its local SSD and to the SSD attached to another FabricCache adapter. Because the data is cached on multiple servers, if one server crashes, the other server(s) holding the data can flush it to the back-end storage.
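
That mirrored write path boils down to something like the sketch below, which reuses the FakeArray stand-in from the previous block. CacheNode, mirror() and flush() are again names I've made up; the real adapters do this in firmware over Fibre Channel, but the logic is the same: acknowledge a write only once two independent caches hold the data, and let either node flush dirty blocks to the array.

    # Hypothetical sketch of a FabricCache-style mirrored write-back cache.
    # A write is acknowledged only after the local SSD and a peer's SSD both
    # hold the data; either node can flush dirty blocks to the back-end array.

    class CacheNode:
        def __init__(self, name, backend):
            self.name = name
            self.ssd = {}              # this adapter's flash
            self.dirty = set()
            self.backend = backend
            self.peer = None           # another FabricCache-style node

        def write(self, lba, data):
            self.ssd[lba] = data           # first copy on the local SSD
            self.peer.mirror(lba, data)    # second copy on the peer's SSD
            self.dirty.add(lba)
            return "ack"                   # safe: two servers now hold the data

        def mirror(self, lba, data):
            self.ssd[lba] = data           # the peer acts as a target for the copy
            self.dirty.add(lba)

        def flush(self):
            # Background flush; also what a surviving peer does after a crash.
            for lba in list(self.dirty):
                self.backend.write(lba, self.ssd[lba])
                self.dirty.discard(lba)

    # Wiring two nodes to a shared FakeArray from the sketch above:
    # array = FakeArray(); a, b = CacheNode("a", array), CacheNode("b", array)
    # a.peer, b.peer = b, a
    # a.write(42, b"data")   # if node a dies now, b.flush() still lands the data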

QLogic has talked about several other form factors for the SSD-HBA connection in future products. I'm pushing the company to support accessing the cache pool from a FabricCache HBA that doesn't have SSDs of its own. While that wouldn't provide the very low latency of a local SSD, it would provide performance similar to an all-SSD array accessed via Fibre Channel. Customers could decide that they need only 800 or 1,000GB of cache across a vSphere cluster of 12 or 16 servers.

Other vendors, such as PernixData and Virident, have announced write-back caching in software, but those solutions require RDMA network support. While that may be fine for new data centers, where an InfiniBand or 10Gbps network with the proper RDMA support can be installed at the same time, I can't see RDMA-based solutions as Band-Aids for performance problems in existing data centers.

FabricCache may be the easiest way to accelerate write-intensive applications for many users. I was talking to a user whose application presented an 80% write workload to an older array. His options included spending thousands of dollars to add SSDs to an array that had just a year or two of life left (and mediocre SSD support), using software for a write-through cache, or adding FabricCache adapters. I recommended FabricCache.

QLogic, like the rest of the Fibre Channel networking industry, relies heavily on OEM sales through server and storage suppliers. While FabricCache may eventually be available from your favorite storage array vendor, that may take a while because QLogic is still working out OEM qualifications and pricing. In the meantime, the company is shipping the QLE10000 through its VAR channel and letting VARs set their own prices, which have been hovering around $7,500 for the 200GB version and $11,500 for the 400GB model.

An earlier version of this blog post implied that FabricCache would not be available until the OEM qualification processes were complete.

Disclosure: QLogic has purchased a copy of my upcoming server-side caching report, as have several other players in the market.