While using PCIe flash as a cache is cool and all, EMC isn't the first to think of it. Marvell's Dragonfly and Fusion-io's own directCache both use flash as a cache. The key difference is that EMC designed Project Lightning to act as a cache for shared storage in the dynamic environment of vSphere with vMotion, Dynamic Resource Scheduling (DRS), High Availability (HA) and Fault Tolerance (FT). Given that EMC owns 80% of VMware, it's a sure thing that Project Lightning will appear on the VMware hardware compatibility list soon after it hits the streets, making my friend Stephen Foskett, and the rest of the world's Steely Eyed Storage Guys who treat vendor HCLs as holy writ, happy.
To achieve the seemingly contradictory goals of local cache and virtual server mobility, EMC has made the Project Lightning cache write-through, so the back-end storage system always has all the data. It's write-through in the strict sense: the Project Lightning system won't acknowledge that data's been written until it makes it to the back-end storage. The net result is that read IOs are offloaded from the back-end storage system, leaving more of that system's available performance for write traffic, so in most cases a write-through cache speeds writes, too.
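The write-through behavior described above can be sketched in a few lines of Python. This is a toy illustration, not EMC's actual implementation; the class and method names are mine, and the dict stands in for the PCIe flash card:

```python
class Backend:
    """Toy stand-in for the shared back-end storage array."""
    def __init__(self):
        self.store = {}
        self.reads = 0  # count back-end read IOs to show the offload

    def read(self, block):
        self.reads += 1
        return self.store.get(block)

    def write(self, block, data):
        self.store[block] = data


class WriteThroughCache:
    """Sketch of write-through semantics: reads are served from local
    flash when possible; writes aren't acknowledged until the back-end
    array has the data."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}  # stands in for the server's PCIe flash

    def read(self, block):
        if block in self.cache:          # cache hit: no back-end read IO
            return self.cache[block]
        data = self.backend.read(block)  # miss: fetch and populate
        self.cache[block] = data
        return data

    def write(self, block, data):
        # Write-through: the back-end write completes first; only then
        # is the IO acknowledged and the local cache copy updated.
        self.backend.write(block, data)
        self.cache[block] = data
```

Because the back end always holds the authoritative copy, a VM can vMotion to another host and lose its local cache without losing any data, which is the whole trick.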
Now, a PCIe write-through cache is all well and good, and one on the VMware HCL is even better, but it's still just a PCIe flash card and a little software. Surely in six months or so, Fusion IO, Intel, LSI and the rest of the PCIe flash card vendors will catch up and start cutting into EMC's usually substantial margins.
By that time, EMC should be showing the first versions of the Project Lightning array integration it kept hinting at when I spoke to the company at EMC World. With array integration, the Lightning software on the server can coordinate with the array to ensure that the same data isn't cached in both places, and that data cached in the server isn't also promoted to array flash, maximizing the effectiveness of the flash available.
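The exclusive-caching idea behind that array integration can be sketched simply: the array skips promoting a block to its own flash tier when the server-side cache reports it already holds that block. Again, a hypothetical sketch under my own naming, not EMC's design:

```python
class ArrayFlashTier:
    """Toy array-side flash tier that honors hints from a server-side
    cache, so hot data isn't held in flash in both places."""
    def __init__(self, server_cached_blocks):
        self.server_cached = server_cached_blocks  # hint set from the server
        self.flash = set()

    def maybe_promote(self, block):
        # Only promote hot blocks the server isn't already caching.
        if block not in self.server_cached:
            self.flash.add(block)
        return block in self.flash
```

With flash still expensive per gigabyte, avoiding duplicate copies effectively enlarges the combined cache for free.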