1:15 PM -- EMC blogger Chad Sakac, who blogs as the virtualgeek, put together a multi-vendor dream team of VMware's Andy Banta, Dell/EqualLogic's Eric Schott, NetApp's Vaughn Stewart, EMC's David Black, and HP/LeftHand's Adam Carter. They, along with assorted unsung heroes at their companies, produced a great 4,000-word blog post complete with, as Arlo Guthrie would say, "27 color glossy pictures with circles and arrows" -- or, in plain English, digital versions of the kind of diagrams you and I draw on cocktail napkins to explain things.
I've boiled down the highlights, and my own experience, here. But any serious VMware admin should read the full post, including the comments. It is called "A 'Multivendor Post' to help our mutual iSCSI customers using VMware."
For those looking for the Reader's Digest condensed version, here are a few rules of thumb for iSCSI networks hosting VMware ESX hosts.
1) Set up a dedicated path for storage traffic. The last thing you want is for regular user traffic of any kind to interfere with storage traffic. Share a 1 Gbit/s Ethernet port between iSCSI (or NFS, for that matter) traffic and user access to the server, and you may end up oversubscribing the port and losing packets -- at best a big performance hit as TCP times out and retransmits, at worst data loss.
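The oversubscription math is cocktail-napkin simple; here is a minimal sketch with purely illustrative numbers (the 0.6 and 0.5 Gbit/s loads are assumptions, not figures from the post):

```python
# Rough oversubscription check for a shared 1 Gbit/s Ethernet port.
# Traffic figures below are illustrative assumptions only.

LINK_CAPACITY_GBPS = 1.0

def oversubscribed(flows_gbps):
    """Return True if the combined offered load exceeds link capacity --
    the point at which the switch must start queuing or dropping frames."""
    return sum(flows_gbps) > LINK_CAPACITY_GBPS

# iSCSI storage traffic alone fits comfortably on the port...
print(oversubscribed([0.6]))        # False
# ...but add a burst of user traffic on the same port and it no longer does.
print(oversubscribed([0.6, 0.5]))   # True
```

The point of the sketch: neither flow alone is a problem; it is the combination on one port that pushes past capacity, which is exactly why a dedicated storage path avoids the issue.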
I don't mean just a VLAN, but separate host server ports, switches, and storage array ports. While you're at it, your switches should have non-blocking internal fabrics, and support flow control and rapid spanning tree. Even more than the per-session flow control and other lossless features enhanced data center Ethernet promises, I'd just like to see an end to spanning tree so Layer 2 switches can use the bandwidth of all their inter-switch links. You should be able to find these features on most enterprise-class switches. After all, even D-Link's 24-port, $3,000 DGS-3426P fits the bill.
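On classic ESX (the 3.x/4.x service-console versions current as I write this), carving out that dedicated path looks roughly like the following configuration sketch. The vSwitch name, NIC, port group label, and addresses are all placeholders you'd adapt to your own host:

```shell
# Sketch: dedicated vSwitch and VMkernel port for iSCSI on classic ESX.
# All names and addresses below are placeholders, not recommendations.

# Create a vSwitch that carries only storage traffic
esxcfg-vswitch -a vSwitch1

# Uplink a dedicated physical NIC -- its own port, cabled to the storage switch
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a port group for the iSCSI VMkernel interface
esxcfg-vswitch -A iSCSI vSwitch1

# Create the VMkernel NIC on the isolated storage subnet
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 iSCSI
```

Note that the physical separation, not the commands, is the point: vmnic2 here plugs into switches and array ports that user traffic never touches.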