Howard Marks

A Bad Combo: PernixData's FVP and VSAN

Using FVP from startup PernixData together with VMware's VSAN is an all-around bad idea. Combining two flash management layers would be inefficient and reduce performance.

With all the buzz at VMworld about PernixData making its first big conference appearance, and the announcement that VMware's VSAN would actually become available early next year, enough folks were talking and writing about using PernixData's FVP and VSAN together that the idea triggered a discussion among some of the storage and virtualization cognoscenti on Twitter.

Most of us dismissed the idea out of hand. Sure, PernixData's FVP and VSAN are both cool, but not all great tastes taste great together. Chocolate and peanut butter, great combo; chocolate and sushi, not so much.

The obvious problem, as described by Chris Wahl on Twitter, is that both VSAN and FVP provide flash acceleration and the combination would be flash on flash. To implement both FVP and VSAN, we’d need two SSDs in each server, one for cache and one for VSAN. However, much of the data in FVP’s cache would be an additional copy of data that VSAN already had on its SSD.

The combined system would waste CPU cycles moving data up and down between the SSDs. Plus, performance would actually be worse than if all the SSD space were managed by one of the two platforms, which would be able to hold just one copy of any data block and therefore fit more data for a higher hit ratio.
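A rough back-of-the-envelope model shows why the duplication hurts the hit ratio. The SSD sizes, working-set size, and uniform-popularity assumption below are all mine, invented for illustration; the point is just that duplicated blocks shrink the amount of distinct data the flash can hold:

```python
# Rough model of why split flash lowers the hit ratio.
# Assumptions (ours, not from either vendor): each server has two
# 400 GB SSDs, block popularity is uniform over the working set, and
# every block cached by FVP is a duplicate of one on the VSAN SSD.

SSD_GB = 400          # capacity of each SSD
WORKING_SET_GB = 700  # hot data the server actually touches

# One platform managing both SSDs holds one copy of each block.
unified_unique = 2 * SSD_GB  # 800 GB of distinct data
unified_hit = min(1.0, unified_unique / WORKING_SET_GB)

# FVP on VSAN: the FVP SSD mostly re-caches blocks already on the
# VSAN SSD, so distinct cached data tops out near one SSD's worth.
split_unique = SSD_GB
split_hit = min(1.0, split_unique / WORKING_SET_GB)

print(f"unified flash hit ratio: {unified_hit:.0%}")  # 100%
print(f"split flash hit ratio: {split_hit:.0%}")      # 57%
```

Under these made-up numbers, one platform managing all the flash covers the whole working set while the stacked design misses more than 40% of the time.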

So at first glance, FVP and VSAN together seemed like a pretty bad idea. Users would have to buy both products and more flash than if they went with either product alone.

However, as I thought more about it, I saw it wasn't just a bad idea; it was a really bad idea. Not only would using two flash management layers in the same system be inefficient in its use of flash, it would also generate huge amounts of network traffic and shorten the life of the SSDs it ran on.

FVP is a write-back cache that uses synchronous replication to protect data against server failures. VSAN uses its SSD as a write buffer and a tier of storage, also replicating data to a second server in the cluster to protect against server failures. Let’s take a look at the data flow in a system that uses both technologies to see just how bad an idea this combination would be.


When an application writes data, it will be cached by FVP in the local SSD and replicated to another server. At some later time, FVP will flush its cache, writing to the storage provided by VSAN. VSAN will store the data on the local SSD and once again replicate the data to another server in the cluster.
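That write path can be traced with a simple counter. This is a sketch under my own simplified assumptions (FVP synchronously replicating each cached write to one peer's flash, VSAN keeping one replica on another server's SSD), not vendor code:

```python
# Count flash writes and network hops for one application write
# when FVP is stacked on top of VSAN.
# Simplified assumptions: FVP replicates each cached write to one
# peer server's flash, and VSAN keeps one replica on a second server.

def stacked_write():
    ssd_writes = 0
    network_copies = 0

    # 1. FVP caches the write on the local SSD...
    ssd_writes += 1
    # ...and synchronously replicates it to a peer server's flash.
    network_copies += 1
    ssd_writes += 1

    # 2. Later, FVP flushes the write to VSAN, which buffers it
    #    on the local SSD...
    ssd_writes += 1
    # ...and replicates it to another server for protection.
    network_copies += 1
    ssd_writes += 1

    return ssd_writes, network_copies

print(stacked_write())  # (4, 2) vs. (2, 1) for either product alone
```

Per application write, the stack does four flash writes and two trips across the network where either product alone would do two and one.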

Read I/Os are less problematic. If the data is in FVP's cache, FVP will satisfy the request, hiding the I/O from VSAN entirely. Moderately warm data blocks may be migrated up and down between the FVP cache, the VSAN SSD, and spinning disks less accurately than they would be on a system that took a holistic view of the available flash, but the hot and cold data will end up where they belong.
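The read path the stacked design implies is a simple tiered lookup: check FVP's cache first, then the VSAN SSD, then the spinning disks. A minimal sketch, with tier contents invented purely for illustration:

```python
# Sketch of the tiered read lookup implied by stacking FVP on VSAN.
# Tiers are modeled as dicts mapping block name -> data; the
# contents below are invented for illustration.

def read_block(block, fvp_cache, vsan_ssd, vsan_disk):
    """Return (data, tier) from the fastest tier holding the block."""
    for name, tier in (("fvp-cache", fvp_cache),
                       ("vsan-ssd", vsan_ssd),
                       ("vsan-disk", vsan_disk)):
        if block in tier:
            # A hit in FVP's cache hides the I/O from VSAN entirely.
            return tier[block], name
    raise KeyError(block)

fvp = {"hot": "A"}
ssd = {"hot": "A", "warm": "B"}  # note the duplicate copy of "hot"
disk = {"hot": "A", "warm": "B", "cold": "C"}

print(read_block("warm", fvp, ssd, disk))  # ('B', 'vsan-ssd')
```

Note the duplicate copy of the hot block on both flash tiers: the read path works, but every FVP cache hit represents flash capacity VSAN is also spending on the same data.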

However, by combining these two technologies, we’ve doubled the amount of data written to the SSD, reducing its life, and doubled the amount of replication traffic across the network as VSAN protects data that was already pretty safe in the FVP cache.

Either VSAN or PernixData’s FVP alone will let you use flash in your servers to speed up your applications. Combining them will not only cost more and require more flash but might also slow things down and unnecessarily tie up your network.

Find out the pros and cons of flash storage in Howard Marks' session SSDs In the Data Center at Interop New York this October.

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage ...