Optimizing iSCSI & VMware

Any serious VMware admin should read 'A Multivendor Post to help our mutual iSCSI customers using VMware' for tips on iSCSI networks hosting VMware ESX hosts

Howard Marks

February 11, 2009


1:15 PM -- EMC blogger Chad Sakac, who blogs as the virtualgeek, assembled a multi-vendor dream team of VMware's Andy Banta, Dell/EqualLogic's Eric Schott, NetApp's Vaughn Stewart, EMC's David Black, and HP/LeftHand's Adam Carter. They, along with assorted unsung heroes at their companies, put together a great 4,000-word blog post complete with, as Arlo Guthrie would say, "27 color glossy pictures with circles and arrows" -- or, in plain English, digital versions of the kind of diagrams you and I draw on cocktail napkins to explain things.

I've boiled down the highlights, and my own experience, here. But any serious VMware admin should read the full post, including the comments. It is called "A 'Multivendor Post' to help our mutual iSCSI customers using VMware."

For those looking for the Reader's Digest condensed version, here are a few rules of thumb for iSCSI networks hosting VMware ESX hosts.

1) Set up a dedicated path for storage traffic. The last thing you want is for regular user traffic of any kind to interfere with storage traffic. Share a 1 Gbit/s Ethernet port between iSCSI -- or NFS, for that matter -- traffic and user access to the server, and you may end up oversubscribing the port, losing packets, and taking, at best, a big performance hit as TCP times out and retries -- or, at worst, losing data. (A rough sketch of the oversubscription math follows below.)

I don't mean just a VLAN, but separate host server ports, switches, and storage array ports. While you're at it, your switches should have non-blocking internal fabrics and support flow control and rapid spanning tree. Even more than the per-session flow control and other lossless features enhanced data center Ethernet promises, I'd just like to see an end to spanning tree so Layer 2 switches can use the bandwidth of all their inter-switch links. You should be able to find these features on most enterprise-class switches; after all, even D-Link's 24-port, $3,000 DGS-3426P fits the bill.

Those of you who have already upgraded to 10 Gbit/s Ethernet can skip this recommendation, but remember to turn on flow control, enable RSTP/PortFast on ISLs, and disable spanning tree altogether -- including filtering spanning tree PDUs -- on server- and array-facing ports.
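To put a rough number on that oversubscription warning, here's a minimal Python sketch. The traffic figures are invented for illustration -- they aren't measurements or anything from the multivendor post -- but the arithmetic shows how quickly a shared 1 Gbit/s port runs out of headroom.

# Back-of-the-envelope oversubscription check for a shared 1 Gbit/s port.
# The offered-load figures below are illustrative assumptions, not measurements.

LINK_CAPACITY_MBPS = 1000          # one 1 Gbit/s Ethernet port

offered_load_mbps = {
    "iSCSI (VMFS datastore I/O)": 700,   # assumed storage load
    "user/VM network traffic":    450,   # assumed front-end load
}

total = sum(offered_load_mbps.values())
print(f"Offered load: {total} Mbit/s on a {LINK_CAPACITY_MBPS} Mbit/s port")

if total > LINK_CAPACITY_MBPS:
    excess = total - LINK_CAPACITY_MBPS
    print(f"Oversubscribed by {excess} Mbit/s "
          f"({excess / total:.0%} of offered traffic gets queued or dropped)")
else:
    print("Port still has headroom -- for now.")

Once the offered load exceeds the wire, something has to queue or drop, and with iSCSI that something usually shows up as datastore latency.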

2) Because VMware ESX 3.5 doesn't support multiple connections per session or multiple sessions per target, it will bottleneck at roughly 160 MB/s when accessing a single target, regardless of how much bandwidth you throw at it. Also, VMware's iSCSI multipathing only provides failover for any given session, not load balancing, so multiple Ethernet connections to the same target -- even if the links are aggregated -- won't add performance.

The answer is to use more targets. With EqualLogic's and LeftHand's implementations of iSCSI, each LUN is a target, so creating multiple LUNs -- say, one for every one to four VMs -- will balance the connections across multiple links. With Clariions and NetApp filers, each physical interface is a target with multiple LUNs behind it, so you can load balance manually. Other systems may be different; check with your vendor on how to create multiple targets.
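For a rough sense of why more targets helps, here's a small sketch. The link speed and NIC names are my own illustrative assumptions, not anything prescribed by VMware or the array vendors; the point is simply that each target gets one session, each session rides one link, so aggregate throughput only grows as targets are spread across links.

# Sketch: why more iSCSI targets = more usable links on ESX 3.5.
# Each target gets exactly one session, and each session is pinned to a
# single NIC, so per-target throughput is capped at one link's worth.
# Link speed and NIC names below are illustrative assumptions.

LINK_MBPS = 1000                     # one GbE uplink
nics = ["vmnic2", "vmnic3"]          # hypothetical iSCSI uplinks

def aggregate_throughput(num_targets: int) -> int:
    """Best-case aggregate Mbit/s if sessions are spread across the NICs."""
    links_in_use = min(num_targets, len(nics))
    return links_in_use * LINK_MBPS

for targets in (1, 2, 4):
    print(f"{targets} target(s) -> up to {aggregate_throughput(targets)} Mbit/s "
          f"across {min(targets, len(nics))} of {len(nics)} links")

The flip side is that any single busy LUN still can't exceed one link's worth of throughput, which is why the per-LUN-target arrays push you toward more, smaller LUNs.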

ESX 4 should support multiple sessions per target, mostly solving this problem.

3) Jumbo frames, if they're supported and enabled end to end, are a good thing -- but they have only a marginal effect, up to 5 percent, on performance and CPU utilization (see the back-of-the-envelope sketch after item 4). They're most helpful on large block transfers like backups and don't help database apps much at all.

4) iSCSI HBAs similarly don't provide the huge performance or CPU utilization boost that vendors claim and old-line FC storage guys believed they would. They are, however, the only way to boot the server from the SAN, so if you'd rather pay $1,000 for two HBA ports than buy two 73 GB drives for the server, go ahead. Since VMware itself makes host servers somewhat stateless, I wouldn't worry about boot from SAN.
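Back to jumbo frames: the overhead math is easy to sanity-check. This quick sketch counts only Ethernet framing and TCP/IP headers -- it ignores iSCSI PDU headers, TCP options, and CPU effects -- so treat it as approximate, but it shows why the gain tops out at a few percent.

# Rough framing-overhead math behind the "jumbo frames buy you a few percent"
# point. Counts Ethernet preamble, SFD, header, FCS, and inter-frame gap plus
# TCP/IP headers; ignores iSCSI PDU headers and TCP options, so it's approximate.

ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, header, FCS, inter-frame gap
IP_TCP_HEADERS = 20 + 20

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS       # bytes of data per frame
    wire_bytes = mtu + ETH_OVERHEAD      # bytes actually on the wire
    return payload / wire_bytes

std = payload_efficiency(1500)
jumbo = payload_efficiency(9000)
print(f"Standard 1500-byte MTU: {std:.1%} of wire bandwidth carries data")
print(f"Jumbo 9000-byte MTU:    {jumbo:.1%} of wire bandwidth carries data")
print(f"Difference: about {100 * (jumbo - std):.1f} percentage points")

Roughly four percentage points of extra wire efficiency is in the same ballpark as the "up to 5 percent" figure above: real, but hardly transformative.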

Note that HBAs aren't very popular; most iSCSI-attached VMware servers (70 percent or more) use software initiators.

5) For high performance, use software initiators in the guest OS through dedicated Ethernet ports and virtual switches. This lets you use Microsoft's excellent iSCSI initiator and multipath I/O (MPIO), which will load balance and use multiple connections/sessions to run faster than a shared connection.
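The difference between failover-only multipathing and real load balancing is easy to picture. This toy sketch uses arbitrary path names and I/O counts of my own invention; it's not how either initiator is actually implemented, just an illustration of how the two policies spread a burst of I/Os.

# Toy comparison: failover-only multipathing (one active path) versus
# round-robin load balancing across two iSCSI sessions. Path names and
# I/O counts are arbitrary illustrative assumptions.

from itertools import cycle

paths = ["path-A", "path-B"]              # two sessions to the same target
ios = [f"io-{n}" for n in range(8)]       # a burst of outstanding I/Os

# Failover-only: every I/O uses the first healthy path; the second path idles.
failover = {p: [] for p in paths}
for io in ios:
    failover[paths[0]].append(io)

# Round-robin: I/Os are spread across all healthy paths.
round_robin = {p: [] for p in paths}
for io, p in zip(ios, cycle(paths)):
    round_robin[p].append(io)

print("failover   :", {p: len(q) for p, q in failover.items()})   # 8 / 0
print("round-robin:", {p: len(q) for p, q in round_robin.items()})  # 4 / 4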

This also lets you manage your iSCSI storage the same way on your physical and virtual servers, including Microsoft clusters. It lets you use array-based snapshots as well, which, through VSS or scripts, can be more than crash consistent.

On the down side, you lose ESX snapshots and the things that rely on them, like SRM and maybe VCB, for these VMs. If we're talking about database servers, including Exchange, where the backup process does log maintenance or database validation that VCB can't do, you'll want to back up through a host agent anyway.

-- Howard Marks is chief scientist at Networks Are Our Lives Inc., a Hoboken, N.J.-based consultancy where he's been beating storage network systems into submission and writing about it in computer magazines since 1987. He currently writes for InformationWeek, which is published by the same company as Byte and Switch.

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
