
VMware's VSAN Benchmarks: Under The Hood

In addition, VMware set failures to tolerate to zero, so there was no data protection and, therefore, no replication latency in this test.

When I pointed out on Twitter that no one should run VSAN with failures to tolerate (FTT) set to zero, the response was that some applications, like SQL Server with Always On, do their own data protection. That is, of course, true, but those applications are designed to run directly on DAS, and if you're not getting shared, protected data, why spend $5,000 per server for VSAN? That money would buy a much bigger SSD you could simply use directly.

The good news
While I don’t think this benchmark has any resemblance to a real-world application load, we did learn a couple of positive things about VSAN from it.

First, the fact VMware could reach 2 million IOPS is impressive on its face, even if it took tweaking the configuration to absurd levels. It’s more significant as an indication of how efficient VSAN’s cache is at handling read I/Os. Intel DC S3700 SSDs are rated at 75,000 read IOPS, so the pool of 32 SSDs could theoretically deliver 2.4 million IOPS. That VMware managed to layer VSAN on top and still deliver 2 million is a good sign.
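
For the curious, here's that arithmetic as a quick sketch; it uses only the figures quoted above.

```python
# Back-of-the-envelope check on how much of the raw SSD read performance
# VSAN preserved in the 100 percent read benchmark.
RATED_READ_IOPS_PER_SSD = 75_000   # Intel DC S3700 spec-sheet read rating
SSD_COUNT = 32                     # one caching SSD per host
MEASURED_IOPS = 2_000_000          # VMware's published result

theoretical_pool_iops = RATED_READ_IOPS_PER_SSD * SSD_COUNT   # 2,400,000
efficiency = MEASURED_IOPS / theoretical_pool_iops

print(f"Theoretical pool: {theoretical_pool_iops:,} IOPS")
print(f"VSAN delivered {efficiency:.0%} of the raw SSD read capability")  # ~83%
```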

The second is how linearly VSAN performance scaled from 253K IOPS with four nodes to 2 million with 32. Of course, most of the I/O was local from a VM to its host's SSD, but any time a scale-out storage system can be over 90 percent linear in its scaling, I'm impressed.
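
To put a number on "over 90 percent linear," here's the same kind of quick calculation using the published figures.

```python
# Compare per-node IOPS at 4 nodes vs. 32 nodes to gauge scaling efficiency.
iops_4_nodes = 253_000
iops_32_nodes = 2_000_000

per_node_at_4 = iops_4_nodes / 4      # ~63,250 IOPS per node
per_node_at_32 = iops_32_nodes / 32   # 62,500 IOPS per node

scaling_efficiency = per_node_at_32 / per_node_at_4
print(f"{scaling_efficiency:.1%} of the 4-node per-node rate retained at 32 nodes")
# -> roughly 98.8 percent, comfortably over 90 percent linear
```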

The read-write benchmark
VMware also released the data for a more realistic read-write benchmark. This test used a 70 percent read / 30 percent write workload of 4KB I/Os. While 70/30 is a little read-intensive for me -- most OLTP apps are closer to 60/40, and VDI workloads are more than 50 percent write -- I'm not going to quibble about it. Also more realistic was the failures to tolerate setting, which this time was 1.

As previously mentioned, I think FTT=1 is too little protection for production apps, although it would be fine for dev and test. Using FTT=2 would increase the amount of replication performed for each write, which should reduce the total IOPS somewhat.
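
To show why, here's a simplified write-amplification model. It's my own sketch, not VMware's math: it assumes FTT=n simply means n+1 mirrored copies of each write and ignores caching, coalescing, and metadata overhead.

```python
# With FTT=n, each frontend write becomes n+1 backend writes (mirrored copies);
# reads are served from a single copy. Assumes a 70/30 read/write mix.
READ_FRACTION, WRITE_FRACTION = 0.7, 0.3

def backend_ios_per_frontend_io(ftt: int) -> float:
    replicas = ftt + 1
    return READ_FRACTION * 1 + WRITE_FRACTION * replicas

for ftt in (0, 1, 2):
    print(f"FTT={ftt}: {backend_ios_per_frontend_io(ftt):.1f} backend I/Os per frontend I/O")
# FTT=0: 1.0, FTT=1: 1.3, FTT=2: 1.6 -- so moving from FTT=1 to FTT=2 would
# cost roughly 1 - 1.3/1.6, or about 19 percent of the achievable frontend
# IOPS, if the backend devices are the bottleneck.
```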

Again, VMware used a single workload with a very small dataset relative to the amount of flash in the system, let alone the total amount of storage. In this case, each host ran one VM that accessed 32 GB of data. Running against less than 10 percent of the flash meant not only that all the I/Os went to SSD but also that the SSDs always had plenty of free space to use for newly written data.
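
To put that in perspective, here's a rough sizing sketch. The per-host SSD capacity is an assumption I'm making for illustration; the working-set figures come from the benchmark description.

```python
HOSTS = 32
WORKING_SET_PER_HOST_GB = 32        # one VM per host touching 32 GB
ASSUMED_SSD_GB_PER_HOST = 400       # assumption: a 400 GB-class caching SSD

total_working_set_gb = HOSTS * WORKING_SET_PER_HOST_GB   # 1,024 GB
total_flash_gb = HOSTS * ASSUMED_SSD_GB_PER_HOST         # 12,800 GB

print(f"Working set is {total_working_set_gb / total_flash_gb:.0%} of the flash tier")
# -> about 8 percent, consistent with "less than 10 percent of the flash"
```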

Again, the system was quite linear in performance, delivering 80,000 IOPS with four nodes and 640,000 with 32 nodes, or 20,000 IOPS per node. Average latency was around 3 ms -- better than any spinning disk could deliver, but a bit higher than I'd like when all I/Os come from SSD.
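
A quick Little's Law sanity check on those numbers (my arithmetic, not VMware's) shows how much concurrency it takes to hit that rate at that latency.

```python
# Little's Law: I/Os in flight = throughput x latency.
iops_per_node = 20_000
avg_latency_s = 0.003               # ~3 ms average latency

outstanding_ios = iops_per_node * avg_latency_s
print(f"~{outstanding_ios:.0f} I/Os in flight per node to sustain that rate")  # ~60
```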

Twenty thousand IOPS per node is respectable, but considering that each VM was tuned with multiple paravirtualized SCSI adapters and that, once again, most of the I/O went to the local SSD, it's not that impressive.

The benchmark I'd like to see
As a rule of thumb, I like to see hybrid storage systems deliver 90 percent or more of the system’s random I/Os from flash. For most applications that do random I/Os, that means the system has to have about 10 percent of its usable storage as flash.
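
Applied as a quick sizing sketch (the capacity figure is just an example input, not anything from VMware's tests):

```python
# The 10 percent rule of thumb: flash sized at ~10% of usable capacity so that
# roughly 90% of random I/Os can be served from it.
def flash_needed_gb(usable_capacity_gb: float, flash_fraction: float = 0.10) -> float:
    return usable_capacity_gb * flash_fraction

usable_gb = 20_000                  # example: 20 TB of usable capacity
print(f"{flash_needed_gb(usable_gb):,.0f} GB of flash")   # 2,000 GB
```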

Even with IOmeter’s limitations as a benchmark -- primarily that it doesn’t ordinarily create hotspots but spreads data across the entire dataset under test -- VMware could have come up with a more realistic benchmark to run.

I would like to see, say, 20 VMs per host accessing a total of 80 percent of the usable capacity of the system. The VMs would use virtual disks of varying sizes, with some VMs accessing large virtual disks but throttled in the number of I/Os they generate via IOmeter's outstanding-I/O and burst/delay settings.
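
Here's a rough sketch of what I mean. None of these numbers come from VMware's tests; the capacity, disk-size tiers, and throttle values are placeholders I picked to illustrate the shape of the workload.

```python
import random

HOSTS = 32
VMS_PER_HOST = 20
USABLE_CAPACITY_GB = 100_000        # example usable capacity, not a real figure

random.seed(1)
# A mix of small, medium, and large virtual disks (example tiers).
vm_disks_gb = [random.choice((25, 100, 250)) for _ in range(HOSTS * VMS_PER_HOST)]

def throttle(disk_gb: int) -> dict:
    # Larger disks generate fewer I/Os: fewer outstanding I/Os plus a
    # burst/delay cap, mimicking IOmeter's outstanding-I/O and burst/delay knobs.
    if disk_gb >= 250:
        return {"outstanding_ios": 2, "burst_ios": 8, "delay_ms": 20}
    return {"outstanding_ios": 8, "burst_ios": 0, "delay_ms": 0}

workload = [{"disk_gb": gb, **throttle(gb)} for gb in vm_disks_gb]
coverage = sum(vm_disks_gb) / USABLE_CAPACITY_GB
print(f"{len(workload)} VMs covering {sum(vm_disks_gb):,} GB "
      f"({coverage:.0%} of usable capacity)")
# -> roughly 80 percent of usable capacity under test instead of a tiny slice
```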

A simpler test would be to use a 60/40 read/write mix over a dataset 120 percent to 200 percent the size of the flash in the system. Even that would come closer to showing the performance VSAN can deliver in the real world than testing against 10 percent of the flash.
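
Sizing that dataset is simple arithmetic; the flash figure below reuses my earlier 400 GB-per-host assumption.

```python
assumed_flash_tb = 32 * 0.4         # 32 hosts x an assumed 400 GB SSD each
low_tb, high_tb = 1.2 * assumed_flash_tb, 2.0 * assumed_flash_tb
print(f"Dataset should span roughly {low_tb:.1f} to {high_tb:.1f} TB")  # ~15.4 to 25.6 TB
```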