EMC Struck By Lightning


Howard Marks

May 18, 2011

3 Min Read

As I discussed in "The Elephant, The Blind Men and Fusion IO," the jury may be out on Fusion IO as a company, but it's indisputable that putting flash right on the server's PCIe bus keeps latency to a minimum and therefore maximizes application performance. Until now, the problem has been that PCIe flash is usually used to emulate wicked-fast direct-attached storage (DAS), and that means you have to choose between speed and the flexibility that VMware's vMotion provides. In addition to acting as DAS, EMC's new Project Lightning will also act as a cache for shared storage, enabling vMotion.

While using PCIe flash as a cache is cool and all, EMC isn't the first to think of it. Marvell's Dragonfly and Fusion IO's own directCache use flash as cache. The key difference is that EMC designed Project Lightning to act as a cache for shared storage in the dynamic environment of vSphere with vMotion, Distributed Resource Scheduler (DRS), High Availability (HA) and Fault Tolerance (FT). Given that EMC owns 80% of VMware, it's a sure thing that Project Lightning will appear on the VMware hardware compatibility list soon after it hits the streets, making my friend Stephen Foskett, and the rest of the world's Steely Eyed Storage Guys who treat vendor HCLs as holy writ, happy.

To achieve the seemingly contradictory goals of a local cache and virtual server mobility, EMC has made the Project Lightning cache write-through, so the back-end storage system always has all the data. It's write-through in the strict sense: the Project Lightning system won't acknowledge that data's been written until it makes it to the back-end storage. The net result offloads read IOs from the back-end storage system, which leaves more of that system's available performance for write traffic, so in most cases a write-through cache speeds writes, too.
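The write-through behavior described above can be sketched in a few lines. This is a minimal illustration of the semantics, not EMC's actual implementation; the class and block names are hypothetical, and plain dicts stand in for the server's flash and the shared array:

```python
class WriteThroughCache:
    """Illustrative write-through cache sketch (hypothetical, not EMC's code)."""

    def __init__(self, backend):
        self.backend = backend  # dict standing in for the shared back-end array
        self.cache = {}         # dict standing in for server-side PCIe flash

    def write(self, block, data):
        # Write-through: the write lands on the back-end array first, and
        # is only acknowledged afterward, so the array always holds the
        # authoritative copy and a VM can safely vMotion to another host.
        self.backend[block] = data
        self.cache[block] = data
        return True  # ack only after the back-end commit

    def read(self, block):
        # Reads are served from local flash on a hit, offloading the array.
        if block in self.cache:
            return self.cache[block]
        data = self.backend[block]
        self.cache[block] = data  # populate the cache on a miss
        return data
```

The design point is that the cache holds no dirty data: because every write is already on the array when it's acknowledged, the cache on any host can be discarded at any time without data loss, which is exactly what mobility features like vMotion and HA require.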

Now, a PCIe write-through cache is all well and good, and one on the VMware HCL is even better, but it's still just a PCIe flash card and a little software. Surely in six months or so, Fusion IO, Intel, LSI and the rest of the PCIe flash card vendors will catch up and start cutting into EMC's usually substantial margins.  

By that time, EMC should be showing the first versions of the Project Lightning array integration it kept hinting at when I spoke to the company at EMC World. With array integration, the Lightning software on the server can work with the array to ensure that the same data isn't cached in both places, and that data already cached in the server isn't also promoted to array flash, maximizing the effectiveness of the flash available.

Of course, when a VM is migrated from one host to another, a simple write-through cache in the new host will have to be populated from the back-end storage, resulting in a temporary performance hit. Eventually, EMC may even hook into the vCenter server to accelerate the population of the cache on the new host.
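The cold-cache effect after a migration is easy to demonstrate. In this hypothetical sketch (my names, not EMC's), each host has its own cache; a read misses to the back-end array and populates the local cache on the way back. The freshly migrated host starts cold and must re-miss everything the old host had warmed:

```python
# Sketch of the cold-cache effect after a vMotion-style move (illustrative only).

def read(cache, backend, block, stats):
    """Read a block through a per-host cache, counting hits and misses."""
    if block in cache:
        stats["hits"] += 1
        return cache[block]
    stats["misses"] += 1           # a miss goes all the way to the array
    cache[block] = backend[block]  # populate the cache on the way back
    return cache[block]

backend = {n: f"data{n}" for n in range(4)}  # stand-in for the shared array

# Warm the cache on the original host: first pass misses, second pass hits.
host_a, stats_a = {}, {"hits": 0, "misses": 0}
for _ in range(2):
    for n in backend:
        read(host_a, backend, n, stats_a)

# After migration, the new host's cache starts empty: every read misses again.
host_b, stats_b = {}, {"hits": 0, "misses": 0}
for n in backend:
    read(host_b, backend, n, stats_b)

print(stats_a)  # {'hits': 4, 'misses': 4}
print(stats_b)  # {'hits': 0, 'misses': 4}
```

That burst of misses on the new host is the temporary performance hit described above, and it's why a vCenter hook that pre-populates the destination cache would be valuable.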

Part of me wonders if the next step is for Emulex and/or QLogic to add a flash write-through cache to their HBAs and kill two birds with one stone. Then we'd really have to evaluate how much software integration with the back-end storage is actually worth.

Disclosure: Emulex and EMC are or have been clients of DeepStorage.net.

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
