Those of you who have already upgraded to 10 Gbit/s Ethernet can skip this recommendation, but remember to turn on flow control and RSTP/Portfast on ISLs, and to disable spanning tree altogether -- including filtering spanning tree PDUs -- on server- and array-facing ports.
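A Cisco IOS-style sketch of that port setup (interface names are placeholders, and exact commands vary by switch vendor and model -- check your own switch's documentation):

```
! Global: run rapid spanning tree so ISLs reconverge quickly
spanning-tree mode rapid-pvst
!
! Server- or array-facing edge port: flow control on, no STP participation
interface GigabitEthernet0/10
 flowcontrol receive on
 spanning-tree portfast
 spanning-tree bpdufilter enable
```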
2) Because VMware ESX 3.5 supports neither multiple connections per target session nor multiple sessions per target, it will bottleneck at 160 Mbit/s when accessing a single target, regardless of how much bandwidth you throw at it. VMware's iSCSI multipathing also provides only failover for any given session, not load balancing, so multiple Ethernet connections to the same target -- even if the links are aggregated -- won't add performance.
The answer is to use more targets. In EqualLogic's and LeftHand's iSCSI implementations, each LUN is a target, so creating multiple LUNs -- say, one for every one to four VMs -- will balance the connections across multiple links. On Clariion arrays and NetApp filers, each physical interface is a target with multiple LUNs behind it, so you can load balance manually. Other systems may differ; check with your vendor on how to create multiple targets.
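A toy model of why more targets mean more usable links. The link speed, NIC count, and round-robin session placement here are illustrative assumptions, not VMware internals -- the one real constraint being modeled is that each iSCSI session rides a single link:

```python
# Sketch: each iSCSI target gets its own session, and each session is
# pinned to one uplink. With one target, one link carries everything;
# with several targets, sessions can spread across the NIC team.

LINK_MBPS = 1000   # one GigE uplink (assumption)
NICS = 4           # uplinks in the team (assumption)

def aggregate_bandwidth(num_targets: int) -> int:
    """Usable bandwidth if sessions land on uplinks round-robin:
    busy links * per-link speed, capped by the number of NICs."""
    busy_links = min(num_targets, NICS)
    return busy_links * LINK_MBPS

print(aggregate_bandwidth(1))  # one LUN/target  -> 1000 Mbit/s
print(aggregate_bandwidth(4))  # four targets    -> 4000 Mbit/s
print(aggregate_bandwidth(8))  # can't exceed the team's four links
```

The same logic explains the Clariion/NetApp case: with one target per physical interface, pointing different LUNs at different interfaces spreads sessions the same way.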
ESX 4 should support multiple sessions per target, mostly solving the problem.
3) Jumbo frames, if they're supported and enabled end to end, are a good thing -- but they have only a marginal effect on performance and CPU utilization, up to 5 percent. They're most helpful on large block transfers like backups and don't help database apps much at all.

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking.
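The "up to 5 percent" ceiling on jumbo-frame gains falls out of simple framing-overhead arithmetic. A quick sketch, assuming standard Ethernet, IP, and TCP header sizes with no TCP options:

```python
# Wire efficiency of TCP payload per Ethernet frame at a given MTU.
# Ethernet overhead: 14-byte header + 4-byte FCS + 20 bytes of
# preamble and inter-frame gap = 38 bytes per frame on the wire.
ETH_OVERHEAD = 14 + 4 + 20
IP_HDR, TCP_HDR = 20, 20

def wire_efficiency(mtu: int) -> float:
    payload = mtu - IP_HDR - TCP_HDR        # TCP payload per frame
    return payload / (mtu + ETH_OVERHEAD)   # payload / total bytes on wire

std = wire_efficiency(1500)    # ~0.949 at the standard MTU
jumbo = wire_efficiency(9000)  # ~0.991 with 9000-byte jumbo frames
print(f"throughput gain: {jumbo / std - 1:.1%}")  # ~4.4%
```

That ~4.4 percent best case is consistent with the "up to 5 percent" figure above; the rest of any jumbo-frame benefit comes from fewer frames per second for the CPU and NIC to process.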