Storage | Commentary
Howard Marks
VMware's VSAN Benchmarks: Under The Hood

VMware touted flashy numbers in recently published performance benchmarks, but a closer examination of its VSAN testing shows why customers shouldn't expect the same results with their real-world applications.

When VMware announced the general availability of VSAN in March, it reported some frankly astounding performance numbers, claiming 2 million IOPS in a 100 percent read benchmark. Wade Holmes, senior technical marketing architect at VMware, revealed the details of VMware's testing in a blog post. Since, as I wrote recently, details matter when reading benchmarks, I figured I’d dissect Wade’s post and show why customers shouldn’t expect their VSAN clusters to deliver 2 million IOPS with their applications.

The hardware
In truth, VMware’s hardware configuration was in line with what I expect many VSAN customers to use. The company used 32 Dell R720 2U servers with the following configuration:

  • 2x Intel Xeon CPU E5-2650 v2 processors, each with 8 cores @ 2.6 GHz
  • 128 GB DDR3 memory: 8x 16 GB DIMMs @ 1,833 MHz
  • Intel X520 10 Gbit/s NICs
  • LSI 9207-8i SAS HBA
  • 1x 400GB Intel S3700 SSD
  • 7x 1.1TB 10K RPM SAS hard drives

I tried to price this configuration on the Dell site, but Dell doesn’t offer the LSI card, 1,833 MHz RAM, or the Intel DC S3700 SSD. Using 1,600 MHz RAM and 1.2 TB 10K SAS drives, the server priced out at $11,315 or so. Add in the HBA, an Intel SSD that’s currently $900 at NewEgg, and some sort of boot device, and we’re in the $12,500 range.

When I commented on the testing on Twitter, some VMware fanboys responded that they could have built an even faster configuration with a PCIe SSD and 15K RPM drives. As I'll explain later, the 15K drives probably wouldn’t affect the results, but substituting a 785 GB Fusion-io ioDrive2 for the Intel SSD could have boosted performance. Of course, it would have also boosted the price by about $8,000. Frankly, if I were buying my kit from Dell, I would have chosen its NVMe SSD for another $1,000.

Incremental cost in this config
Before we move on to the performance numbers, let's look at how much using these servers for storage adds to their cost.

Since VSAN will use host CPU and memory -- less than 10 percent, VMware says -- this cluster will run about the same number of VMs as a cluster of 30 servers accessing external storage. Assuming that external storage is connected via 10 Gbit/s Ethernet, iSCSI, or NFS, those 30 servers would cost around $208,000. The VSAN cluster, by comparison, would cost $559,680, making the effective cost of VSAN around $350,000. I thought about including one of the RAM DIMMs in the incremental cost of VSAN, but decided that since these servers use eight DIMMs, one per memory channel, pulling one out would be a foolish configuration.
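To make the arithmetic explicit, here's a quick back-of-the-envelope sketch in Python. All dollar figures are the article's estimates (the per-server figure for the external-storage cluster is simply $208,000 divided by 30); none are vendor quotes.

```python
# The article's incremental-cost arithmetic; all dollar figures are the
# author's estimates, not vendor quotes.
vsan_hosts = 32
vsan_cluster_cost = 559_680        # 32 fully loaded servers plus VSAN licensing

ext_hosts = 30                     # equivalent compute using external storage
ext_cluster_cost = 208_000         # ~$6,933/server without the HBA, SSD, and data disks

incremental_cost = vsan_cluster_cost - ext_cluster_cost
print(f"Effective cost of VSAN: ${incremental_cost:,}")   # $351,680, i.e. "around $350,000"
```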

That $350,000 will give you 246 TB of raw capacity. Since VSAN uses n-way replication for data protection, using two total copies of the data -- which VMware calls failures to tolerate=1 -- would yield 123 TB of usable space. As a paranoid storage guy, I don’t trust two-way replication as my only data protection and would use failures to tolerate=2, or three total copies, for 82 TB of space. You’ll also have 12.8 TB of SSD to use as a read cache and write buffer (essentially a write-back cache).

So the VSAN configuration comes in at about $4/GB, which actually isn’t that bad.
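The capacity and cost-per-gigabyte figures follow directly from the hardware list; here's the same arithmetic as a sketch, using the article's numbers (32 hosts, seven 1.1 TB drives and one 400 GB SSD each, and the ~$350,000 incremental cost):

```python
# Capacity arithmetic behind the article's numbers (hardware figures from the text).
hosts = 32
hdds_per_host = 7
hdd_tb = 1.1          # 1.1 TB 10K RPM SAS drives
ssd_gb = 400          # one Intel DC S3700 per host

raw_tb = hosts * hdds_per_host * hdd_tb          # 246.4 TB raw
usable_ftt1_tb = raw_tb / 2                      # failures to tolerate=1 -> 2 copies: ~123 TB
usable_ftt2_tb = raw_tb / 3                      # failures to tolerate=2 -> 3 copies: ~82 TB
cache_tb = hosts * ssd_gb / 1000                 # 12.8 TB of SSD cache

incremental_cost = 350_000                       # effective cost of VSAN from the article
cost_per_gb = incremental_cost / (usable_ftt2_tb * 1000)
print(f"${cost_per_gb:.2f}/GB")                  # ~$4.26/GB, i.e. "about $4/GB"
```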

The 100 percent read benchmark
As most of you are probably aware, 100 percent read benchmarks are how storage vendors make flash-based systems look their best. While there are some real-world applications that are 90 percent or more reads, they are usually things like video streaming or websites, where the ability to stream large files is more important than random 4K IOPS. 

For its 100 percent read benchmark, VMware set up one VM per host with each VM’s copy of IOmeter doing 4 KB I/Os to eight separate virtual disks of 8 GB each. Sure, those I/Os were 80 percent random, but with a test dataset of 64 GB on a host with a 400 GB SSD, all the I/Os will be to the SSD, and since SSDs can perform random I/O as fast as they can sequential, that’s not as impressive as it seems.
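The working-set math above is worth spelling out, since it's the crux of the argument (figures from the article: one VM per host, eight 8 GB virtual disks, one 400 GB SSD per host):

```python
# Working-set math for VMware's 100 percent read test (figures from the text).
hosts = 32
vdisks_per_vm = 8
vdisk_gb = 8
ssd_gb = 400

per_host_working_set_gb = vdisks_per_vm * vdisk_gb              # 64 GB per host
total_working_set_tb = hosts * per_host_working_set_gb / 1000   # ~2 TB cluster-wide

# The dataset occupies only a small fraction of each host's SSD, so every
# I/O -- random or not -- is served from flash, never from the hard disks.
ssd_fraction = per_host_working_set_gb / ssd_gb                 # 0.16
print(total_working_set_tb, ssd_fraction)                       # 2.048 0.16
```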

Why you should ignore 2 million IOPS
Sure, VMware coaxed 2 million IOPS out of VSAN, but it used just 2 TB of the system’s 259 TB of storage to do it. No one would build a system this big for a workload that small, and by keeping the dataset so small relative to the SSDs, VMware made sure the disks weren’t going to be used at all...

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., that concentrates on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage ...
Comments
MarciaNWC, 4/23/2014 | 11:08:24 AM
Details matter
Howard's analysis shows how details really matter. It's so easy for vendors to tout great results, but you have to take a close look at what produced those results. 
Susan Fogarty, 4/23/2014 | 11:48:50 AM
Re: Details matter
Marcia, you're right -- customers need to read between the lines when evaluating any type of vendor claims. But I'm not sure that anyone really takes these benchmarks seriously, anyway. Readers, what do you think?
Charlie Babcock, 4/23/2014 | 4:57:45 PM
Benchmarks will be benchmarks, no matter how faint the resemblance...
I like Howard Marks' sense of realism as he inspects a vendor's benchmark. "VMware could have come up with a more realistic benchmark to run" goes without saying. So could Oracle and a dozen other vendors when it comes to benchmarking. In my experience, the benchmark is optimized to make the vendor look good, no matter how faint the resemblance to the customer's planned use of the product.
Andrew Binstock, 4/23/2014 | 5:10:30 PM
VMware Benchmarks
For many years, VMware has promulgated its own benchmarks. At times, these have been used by the larger industry, especially in the early days of benchmarking server loads. I remember tracking them when I was benchmarking servers at InfoWorld. Although we never ran VMware's benchmarks (we tended to prefer those from SPEC, which were consortium-created), we certainly heard from server vendors about their discomfort when being rated by VMware. It was difficult to know whether this was just standard vendor griping or whether the benchmarks did indeed disfavor certain designs, but my sense was that it was the latter. I don't think this was intentionally done by VMware. Rather it illustrates the difficulty of creating useful benchmarks that can be run on multiple architectures and deliver results that can be compared fairly.
Howard Marks, 4/24/2014 | 3:12:37 PM
Re: Benchmarks will be benchmarks, no matter how faint the resemblance...
The sad truth is that for vendors, benchmark results are frequently about marketing goals, not educating the market. Announcing that you've hit the 2 or 20 mega-IOPS level before any of your competitors will get you some space on websites and blogs. Even better, it gives your sales force, and fanbois, something to brag, tweet, and contact customers about.

I just wish they would also publish some benchmarks that are closer to the way customers might actually use their products.

 -Howard
Lorna Garey, 4/24/2014 | 4:08:25 PM
Re: Benchmarks will be benchmarks, no matter how faint the resemblance...
It seems like it matters less that the benchmark be completely real-world than that all similar products operate with comparable benchmarks, so that IT can compare apples to apples. Could a company get hold of (or replicate) these benchmarks to run them on their own setups with all short-listed products?
Gallifreyan, 5/6/2014 | 1:44:14 PM
Re: Benchmarks will be benchmarks, no matter how faint the resemblance...
I agree that vendor benchmarks are going to be slanted toward the vendor. That's where independent testers come in of course. 

Maybe it's time for a venue that encourages real-life benchmarks from real-life end users. We used to have that at the bogomips level for Linux, but that was even more useless than typical vendor benchmarks. :)