EMC's Lightning Strikes
February 6, 2012
The storage cognoscenti have been all atwitter this morning as EMC announces the details of Project Lightning, the flash-based server cache solution it previewed last May at EMC World. The first version of the renamed VFCache is now available, and it's clearly a version 1.0 product. Hopefully, EMC will get some of the road map items, as well as the just-announced Thunder, out the door soon.
Leaving aside the irony that competitor Hitachi Data Systems used Thunder and Lightning as code names for the disk arrays that were a thorn in the side of EMC's Symmetrix sales, EMC seems to be building a comprehensive portfolio of flash-based technologies. Its arrays can use flash as dedicated volumes, as a sub-LUN tier with FAST, and as a cache with FAST Cache. VFCache extends caching, at least read caching, to the server, and Thunder promises all-flash arrays delivering millions of input/output operations per second (IOPS).
VFCache is an off-the-shelf PCIe flash card from Micron (the P320) or LSI (not, as rumored, Intel), with EMC's special-sauce caching software. The software, which is available for Red Hat Linux and Windows, with support for VMware and Hyper-V, installs as a filter driver in the server's, or virtual server's, operating system and uses the flash it's been allocated as a write-through cache. Since all writes are forwarded to the back-end storage immediately, snapshots and other functions that access data directly on the back-end storage will work properly.
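EMC's driver code isn't public, so here's a minimal Python sketch of the general write-through caching idea the filter driver implements; the class, method names and the stand-in backend object are hypothetical, not EMC's API.

```python
# Conceptual sketch of a write-through block cache, loosely illustrating the
# idea behind a server-side caching filter driver. Not EMC's code; the names
# are hypothetical and "backend" stands in for the back-end array LUN.

class WriteThroughCache:
    def __init__(self, backend, capacity_blocks):
        self.backend = backend            # back-end array holds the authoritative copy
        self.capacity = capacity_blocks   # flash capacity allocated to this cache
        self.cache = {}                   # block number -> data held in local flash

    def write(self, block, data):
        # Write-through: the write goes to the array immediately, so array-side
        # snapshots and replicas always see current data. The cached copy is
        # updated too, but the write completes no faster than the array allows.
        self.backend.write(block, data)
        self._store(block, data)

    def read(self, block):
        # Reads are where the benefit is: a hit is served from local flash
        # instead of a round trip to the array.
        if block in self.cache:
            return self.cache[block]
        data = self.backend.read(block)
        self._store(block, data)
        return data

    def _store(self, block, data):
        # Naive eviction when the allocated flash is full; a real cache would
        # use LRU or similar.
        if block not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))
        self.cache[block] = data
```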
However, because it's a write-through cache, VFCache will speed only reads. Since many mainstream applications like database servers do two to five times as many reads as writes, just caching reads can still result in a significant performance boost--as much as 80% for Oracle in some EMC benchmarks. In the VMware environment, system administrators can slice and dice the 300 Gbytes of SLC flash on the card and allocate it to VMs to use as cache. Since the cache is local to the host, when admins want to vMotion a virtual machine to another host, they have to manually disable caching before moving the workload.
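To see why accelerating only reads still pays off, here's a back-of-envelope calculation; the 4:1 read/write mix, latencies and hit rate below are illustrative assumptions, not EMC's benchmark figures.

```python
# Rough illustration of why a read-only cache helps a read-heavy workload.
# All numbers are assumptions for illustration, not EMC benchmark results.

reads_per_write = 4          # assumed 4:1 read/write mix
array_latency_ms = 5.0       # assumed latency to the back-end array
flash_latency_ms = 0.1       # assumed latency of a local flash cache hit
hit_rate = 0.8               # assumed fraction of reads served from cache

# Without the cache, every I/O goes to the array.
before = array_latency_ms

# With the cache, writes still go to the array (write-through) and only the
# cached fraction of reads is accelerated.
read_fraction = reads_per_write / (reads_per_write + 1)
write_fraction = 1 - read_fraction
after = (write_fraction * array_latency_ms +
         read_fraction * (hit_rate * flash_latency_ms +
                          (1 - hit_rate) * array_latency_ms))

print(f"average I/O latency drops from {before:.2f} ms to {after:.2f} ms "
      f"({before / after:.1f}x improvement)")
```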
EMC promises "a rich roadmap of VFCache technologies," including deduping and compressing data in cache to increase its effective size; MLC-based PCIe cards; mezzanine cards for blade servers; SSD formats; and, of course, the array integration that made Lightning more interesting than stand-alone caching solutions like those from FlashSoft, Fusion-io's ioTurbine or Nevex. With array integration, a host could tell the back-end array not to use its valuable flash to cache data that's already in the local server cache, or specifically to cache data in array flash as a VM is about to be vMotioned to another host. EMC also promises to address clusters and vMotion by making the cache in multiple servers coherent, although only time will tell whether that's worth the CPU and network overhead.
This announcement highlights one of the limitations of Cisco's UCS blade servers that's always bothered me. The standard UCS blades have only one mezzanine slot per blade and no LAN on motherboard, so the mezzanine slot has to be used for network and storage I/O, which means converged networking is required rather than just a good idea, as it is on other blade systems. More significantly, it means UCS blades, other than the double-wide blades that take two of the eight slots in a UCS chassis, have no available I/O expansion. I would guess that one of the primary drivers for making the VFCache software work with SAS/SATA SSDs is that they can be used in UCS blades, though the disk interface bottleneck means SSD-based VFCache will never be as quick as PCIe flash.
As the EMC guys said, after Lightning comes Thunder, and so EMC also previewed a new all-flash storage system as Project Thunder. The system holds several PCIe flash cards and connects to multiple servers. To keep latency to a minimum, the server-to-storage connection is via RDMA (Remote Direct Memory Access) over InfiniBand or 40 Gbps Ethernet. One Thunder system can deliver millions of IOPS to the kinds of customers now considering solutions from the likes of SolidFire, Nimbus and Pure Storage.
EMC would not comment on price, but said that it would be competitive with solutions from Fusion-io.
Disclaimer: EMC is not currently a client of DeepStorage, although it has been in the past and may be in the future, if it doesn't mind me picking on Lightning.