You need multiple switches for truly critical applications, and you must connect each server and disk array to at least two switches. While you're at it, make sure your interswitch links have enough bandwidth to carry all your server-to-array traffic if a disk array-to-switch link fails.
iSCSI vendors and various storage pundits make a big deal over the need for jumbo frames in an iSCSI SAN. Back in the dark ages, when Ethernet was a half-duplex shared-media network, the maximum frame size of 1,500 bytes ensured no one station could monopolize the network. Most host operating systems, however, read and write disk data in clusters of 4 KB (the NTFS default) or larger, so with standard 1,500-byte frames most iSCSI data transfers require multiple frames. Multiple frames mean TCP/IP stack overhead in the CPU: the data is divided into multiple packets, a checksum is calculated for each, and the packets must be reassembled at the far end. Small packets also soak up network bandwidth, because more time is spent on interframe gaps, frame headers and checksums relative to real data.
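To see why frame size matters, here's a rough back-of-the-envelope sketch comparing how many frames a single 4 KB disk cluster needs at standard and jumbo MTUs, and what fraction of the wire carries real data. The per-frame overhead figures (Ethernet header/FCS, preamble, interframe gap, plus an assumed ~78 bytes of IP/TCP/iSCSI headers per packet) are illustrative approximations, not measurements from any particular SAN.

```python
import math

# Per-frame Ethernet costs: 18-byte header+FCS, 8-byte preamble,
# 12-byte interframe gap (approximate, for illustration).
ETH_OVERHEAD = 18 + 8 + 12
# Assumed IP + TCP + iSCSI header bytes per packet (illustrative).
PROTO_OVERHEAD = 78

def frames_and_efficiency(payload_bytes, mtu):
    """Frames needed for a payload, and share of wire bytes that are data."""
    data_per_frame = mtu - PROTO_OVERHEAD
    frames = math.ceil(payload_bytes / data_per_frame)
    wire_bytes = payload_bytes + frames * (PROTO_OVERHEAD + ETH_OVERHEAD)
    return frames, payload_bytes / wire_bytes

for mtu in (1500, 9000):
    frames, eff = frames_and_efficiency(4096, mtu)
    print(f"MTU {mtu}: {frames} frame(s), {eff:.1%} of wire bytes are data")
```

Under these assumptions a 4 KB cluster takes three standard frames but only one 9,000-byte jumbo frame, which is where both the bandwidth and the per-packet CPU savings come from.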
The good news is that most enterprise Gigabit Ethernet equipment supports jumbo frames to some extent. We've found that enabling jumbo frames can speed up iSCSI performance by about 5 percent and reduce server CPU utilization by 2 to 3 percent with standard or smarter NICs. Because TOE (TCP off-load engine) cards and HBAs (host bus adapters) already off-load TCP processing, jumbo frames yield no additional CPU savings when used with them, though they should still speed up performance.