QLogic Launches FabricCache HBA to Accelerate Writes

QLogic's FabricCache, a Fibre Channel HBA that combines flash memory and caching, may be the easiest way to accelerate write-intensive applications.

Howard Marks

April 30, 2013


Last month QLogic announced the first fruit of its Mount Rainier Project, the QLE10000 FabricCache adapter. By building flash memory and caching into a Fibre Channel HBA, FabricCache and future Mount Rainier adapters can provide flash caching without the overhead of software drivers in the host operating system or hypervisor.

Even better, FabricCache goes one step further than most server-side caching software approaches. These approaches generally implement a write-through or write-around cache that accelerates read requests. They forward write requests directly to back-end storage and don't acknowledge the write request until the data is safely written to the back end.

The problem, of course, is that write-through caches do little to speed writes. You could implement a write-back cache in a server, but that puts your data at risk: if the server crashes before cached data reaches the back end, the only copy of that data exists in the cache and is lost.
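To make the distinction concrete, here's a minimal sketch of the two policies in Python. The class names and the backend object are hypothetical illustrations of the caching logic, not QLogic's implementation:

```python
# Hypothetical sketch of the two caching policies; not QLogic's firmware.

class WriteThroughCache:
    """Reads are accelerated; writes still wait on the back end."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def write(self, block, data):
        self.cache[block] = data          # populate cache for later reads
        self.backend.write(block, data)   # write isn't acknowledged until
                                          # the back-end write completes

class WriteBackCache:
    """Writes are acknowledged from cache and flushed later."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}
        self.dirty = set()

    def write(self, block, data):
        self.cache[block] = data          # acknowledge immediately...
        self.dirty.add(block)             # ...and remember to flush

    def flush(self):
        for block in list(self.dirty):
            self.backend.write(block, self.cache[block])
            self.dirty.discard(block)     # data is lost forever if the
                                          # server crashes before this runs
```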

FabricCache is a two-slot PCIe product with an 8Gbps Fibre Channel HBA in one slot and a 200GB or 400GB SSD in the other. The two cards are linked by a ribbon cable that carries the data, while the SSD uses its PCIe slot only for power. Each card acts as both a typical Fibre Channel initiator and a target, allowing other Mount Rainier-based HBAs to access its SSD. When caching writes, a FabricCache adapter writes the data both to its local SSD and to the SSD attached to another FabricCache adapter. Because the data is in cache on multiple servers, if one server crashes, the other server(s) holding the data can flush it to the back-end storage.
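Here's a rough sketch of that mirrored write-back scheme, again in Python with hypothetical local_ssd, peer and backend objects; the real adapter does all of this in firmware, so this only illustrates the data flow:

```python
# A minimal sketch of mirrored write-back caching, as FabricCache's design
# is described above. All object names here are hypothetical.

class MirroredWriteBackCache:
    def __init__(self, local_ssd, peer, backend):
        self.local_ssd = local_ssd  # SSD ribbon-cabled to this HBA
        self.peer = peer            # another FabricCache HBA, reached as an FC target
        self.backend = backend      # the back-end storage array
        self.dirty = set()

    def write(self, block, data):
        self.local_ssd.write(block, data)  # copy 1: local SSD
        self.peer.mirror(block, data)      # copy 2: peer's SSD, over the fabric
        self.dirty.add(block)
        # Only now is the write acknowledged to the host. With two cached
        # copies on separate servers, a single crash can't lose the data.

    def flush(self):
        # If this server crashes, the surviving peer runs the same flush
        # from its mirrored copy.
        for block in list(self.dirty):
            self.backend.write(block, self.local_ssd.read(block))
            self.dirty.discard(block)
```

The second copy is what makes write-back safe here: an acknowledged write survives any single server failure because the peer can complete the flush.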

QLogic has talked about several other form factors for the SSD-HBA connection in future products. I'm pushing the company to support accessing the cache pool from a FabricCache HBA that doesn't have an SSD of its own. While that wouldn't provide the very low latency of a local SSD, it would provide performance similar to an all-SSD array accessed via Fibre Channel. Customers could decide that they need only 800GB or 1,000GB of cache in a vSphere cluster of 12 or 16 servers.

Other vendors, such as PernixData and Virident, have announced write-back caching in software, but those solutions require RDMA network support. While they may be great for new data centers, where an InfiniBand or 10Gbps Ethernet network with the proper RDMA support can be installed at the same time, I can't see RDMA-based solutions as Band-Aids for performance problems in existing data centers.


FabricCache may be the easiest way to accelerate write-intensive applications for many users. I was talking to a user whose application presented an 80% write workload to an older array. His options included spending thousands to add SSDs to an array with just a year or two of life left (and mediocre SSD support), using software for a write-through cache, or adding FabricCache adapters. I recommended FabricCache.

QLogic, like the rest of the Fibre Channel network industry, relies heavily on OEM sales through server and storage suppliers. While FabricCache may eventually be available from your favorite storage array vendor, that may take a while because QLogic is still working out OEM qualifications and pricing. In the meantime, QLogic is shipping the QLE10000 through its VAR channel and letting VARs set their own prices, which have been hovering around $7,500 for the 200GB version and $11,500 for the 400GB model.

Correction: An earlier version of this blog post implied that FabricCache would not be available until the OEM qualification processes were complete.

Disclosure: QLogic has purchased a copy of my upcoming server-side caching report, as have several of the other players in the market.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
