Network Computing is part of the Informa Tech Division of Informa PLC


CacheIQ Rises From The Ashes Of StorSpeed

While I've gotten used to the vulture-capital-driven
creative destruction of companies in the tech business, even I sometimes get whiplash
at how fast the VCs will eat their young. Cloud storage gateway vendor Cirtas is the latest example, but NAS caching
startup StorSpeed set a new land speed record, coming out of stealth in October
2009 and blowing up just five months later. Now StorSpeed founder Greg Dahl and a new
executive team have re-entered the market as CacheIQ.

The founders of CacheIQ--including NetQoS founder Joel
Trammell and sales wiz Keith Carpenter, in addition to StorSpeed survivor Dahl--have
taken StorSpeed's basic concept of a network protocol-oriented read cache for
NFS traffic and adapted it to run on commodity hardware. This avoids the costs and
long production/test cycles of the custom hardware StorSpeed was using.

CacheIQ's RapidCache joins products from Avere,
Alacritech and Violin Memory in providing an external cache for NFS traffic from clients such as VMware hosts and Oracle databases. These
other caching systems mount the back-end storage system via NFS and present a
new mount point to the servers and/or users that are accessing the data. While this allows the vendors to add
FAN (file area network)-style NAS virtualization, like Avere's new global
namespace, it also somewhat complicates the configuration.
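To make the proxy-style approach concrete, here's a minimal sketch of an appliance that mounts the back end and presents its own namespace to clients--all class names and paths are hypothetical, and a Python dict stands in for the back-end filer:

```python
# Sketch of the proxy-style (mount-and-re-export) caching approach:
# the appliance terminates client NFS sessions and serves reads from
# its own cache, falling through to the back-end filer on a miss.
# All names and paths here are illustrative, not any vendor's API.

class ProxyCache:
    """Presents a new mount point; reads fall through to the back end."""

    def __init__(self, backend):
        self.backend = backend      # dict standing in for the back-end filer
        self.cache = {}             # hot data kept on the appliance

    def read(self, path):
        # Clients ask the appliance, not the filer -- hence the new
        # mount point that clients must be reconfigured to use.
        if path not in self.cache:
            self.cache[path] = self.backend[path]   # cache miss: fetch once
        return self.cache[path]

# Clients mount the appliance's export instead of the filer's:
backend = {"/vol/db/datafile": b"block-0"}
appliance = ProxyCache(backend)
print(appliance.read("/vol/db/datafile"))  # first read populates the cache
```

The configuration cost the article mentions follows directly from this shape: every client has to be repointed at the appliance's mount point rather than the filer's.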

Instead of terminating the NFS connections, CacheIQ inserts
its Flow Director, a 24-port 10Gbps Ethernet switch, in the data path between
the user systems and the back-end storage. The Flow Director "sniffs" the passing data, forwarding
non-NFS traffic straight through and redirecting
NFS traffic to the cluster of data servers that store the cache. A typical RapidCache installation has a pair
of redundant Flow Directors and up to eight data servers. Each data server is a
six-core Westmere-based server with 144GBytes of RAM and up to 3.2TBytes of flash SSDs,
creating a two-tiered (RAM and flash) caching architecture.
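The in-band steering logic can be sketched roughly as follows. The NFS port (2049) is the standard one; the hash-based server selection and all names are my own illustration, not CacheIQ's published design:

```python
# Hedged sketch of the in-band Flow Director idea: the switch sits in the
# data path, passes non-NFS traffic straight through to the back end, and
# steers NFS traffic (port 2049, the standard NFS port) to one of up to
# eight cache data servers. Server selection by client hash is an
# assumption for illustration only.

NFS_PORT = 2049

def steer(dst_port, client_ip, data_servers):
    """Return where a flow goes: the back end or a cache data server."""
    if dst_port != NFS_PORT or not data_servers:
        return "backend"            # non-NFS (or no cache available): pass through
    # Keep a given client pinned to one data server for cache locality.
    return data_servers[hash(client_ip) % len(data_servers)]

servers = [f"data-server-{i}" for i in range(8)]
print(steer(443, "10.0.0.5", servers))    # non-NFS traffic passes through
print(steer(2049, "10.0.0.5", servers))   # NFS goes to one of the data servers
print(steer(2049, "10.0.0.5", []))        # no data servers: fall back to back end
```

The last case mirrors the failover behavior described below: with no data servers available, everything simply flows through to the back-end storage.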

Should the data servers fail, the system acts as a simple
switch and forwards all traffic to the back-end storage. In fact, you can even have some
access go through the RapidCache while other access, like backup traffic, bypasses
the cache. The system checks the back end
for updates, as client-side caching does, so users accessing data from the
cache don't get stale data. And since it doesn't cache writes, there are no
questions about whether the data on the back-end storage is up to date.

I'm intrigued by the whole external caching market and am
glad to see a new player with a different approach. I wish CacheIQ good luck.