
Harnessing vSphere Performance Benefits For NUMA: Part 2

In my last post I talked a great deal about native NUMA support in vSphere on NUMA-enabled Opteron and Nehalem processor platforms. NUMA is a strong technology in its own right, but it really starts to shine when teamed with other supporting technologies. In this post, I'm going to cover the details of integrating next-generation networking and interrupt technologies to improve storage and networking performance. I'll focus on MSI-X; future posts will cover VMDq, RSS and multiple queues in more detail, as all of these technologies work together to distribute load across multi-core CPUs and dramatically improve system performance.

MSI-X (Message Signaled Interrupts eXtended) is a newer technology that replaces standard MSI as an interrupt mechanism. At a basic level, MSI allows PCI-based devices to request CPU attention by signaling an interrupt. MSI supports up to 32 of these interrupt vectors, more than sufficient for the original single-core systems. With the advent of the current age of multi-core processing, however, more vectors have become necessary so that requests for CPU time can be directed to multiple cores in a load-balanced fashion (MSI-X supports 2,048). This change allows interrupts to be managed more smoothly by the system and handled simultaneously, improving latency and lowering CPU utilization.
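
To make the vector-spreading idea concrete, here's a minimal Linux-side sketch (not vSphere-specific) that parses /proc/interrupts to count how many MSI/MSI-X vectors each device has registered and how their interrupts are distributed across CPUs. Column layout varies by kernel, so treat it as a rough diagnostic rather than a finished tool.

```python
#!/usr/bin/env python3
"""Rough sketch: summarize how MSI/MSI-X interrupt vectors are spread
across CPUs on a Linux host, using /proc/interrupts."""

from collections import defaultdict

def msi_vector_summary(path="/proc/interrupts"):
    with open(path) as f:
        cpus = f.readline().split()          # header row: CPU0 CPU1 ...
        per_device = defaultdict(int)        # device/queue name -> vector count
        per_cpu = defaultdict(int)           # CPU -> total MSI/MSI-X interrupts
        for line in f:
            fields = line.split()
            if not fields or not fields[0].rstrip(":").isdigit():
                continue                     # skip non-numeric lines (NMI, LOC, ...)
            counts = fields[1:1 + len(cpus)]
            desc = " ".join(fields[1 + len(cpus):])
            if "MSI" not in desc:            # keep only MSI/MSI-X vectors
                continue
            per_device[fields[-1]] += 1      # last token is usually the driver/queue name
            for cpu, count in zip(cpus, counts):
                per_cpu[cpu] += int(count)
    return per_device, per_cpu

if __name__ == "__main__":
    devices, cpus = msi_vector_summary()
    for name, vectors in sorted(devices.items(), key=lambda kv: -kv[1]):
        print(f"{name:24s} {vectors} vector(s)")
    for cpu, total in sorted(cpus.items()):
        print(f"{cpu:6s} {total} MSI/MSI-X interrupts")
```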

In NUMA systems, MSI-X is used to direct the interrupt back to the NUMA node and processor that initiated the I/O. This translates into a direct performance advantage, as it lowers interrupt latency and takes advantage of cache locality. MSI-X is advantageous in any situation where a card-based device requires CPU time to process I/O, which means that MSI-X support in cards and peripherals makes a huge difference in both networking and storage performance for vSphere hosts, where the typical bottleneck is the host's connection to the outside world.
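
As a rough illustration of the locality point, the sketch below looks up which NUMA node a NIC is attached to and which CPUs each of its MSI-X vectors is currently allowed to interrupt. The interface name eth0 is a placeholder, and the sysfs/procfs paths assume a reasonably recent Linux host or guest.

```python
#!/usr/bin/env python3
"""Illustrative sketch: show which NUMA node a NIC sits on and where its
MSI/MSI-X vectors are currently allowed to fire. 'eth0' is a placeholder."""

import os

IFACE = "eth0"  # placeholder interface name
dev = f"/sys/class/net/{IFACE}/device"

with open(os.path.join(dev, "numa_node")) as f:
    print(f"{IFACE} is attached to NUMA node {f.read().strip()}")

msi_dir = os.path.join(dev, "msi_irqs")      # one entry per MSI/MSI-X vector
if os.path.isdir(msi_dir):
    for irq in sorted(os.listdir(msi_dir), key=int):
        with open(f"/proc/irq/{irq}/smp_affinity_list") as f:
            print(f"IRQ {irq}: allowed CPUs {f.read().strip()}")
        # To pin a vector to CPUs on the device's NUMA node, you would
        # write a CPU list to /proc/irq/<irq>/smp_affinity_list (root only).
```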

The latest generation of HBAs is particularly strong in this respect, as this benchmark by ThirdIO shows. To take advantage of MSI-X, you must be running Windows Server 2008/Vista/Windows 7 or Linux kernel 2.6.24 or later, and the card in question must also support MSI-X. In Fibre Channel HBAs, both Emulex and QLogic are producing MSI-X-compatible cards. In networking, Intel's 82575 dual-port gigabit adapter and 82598 dual-port 10 Gigabit Ethernet adapter both support MSI-X, as well as VMDq, RSS and multiple transmit queues, making them an easy choice.
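
If you want to confirm that a particular card actually advertises MSI-X, one quick-and-dirty approach on Linux is to parse lspci -vv output for the MSI-X capability, as in the sketch below. It assumes the pciutils package is installed and usually needs root to show capability details, and the exact output format can vary between lspci versions.

```python
#!/usr/bin/env python3
"""Quick sketch: list PCI devices that advertise an MSI-X capability,
by parsing `lspci -vv` output."""

import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True,
                     check=True).stdout

device = None
for line in out.splitlines():
    if line and not line.startswith((" ", "\t")):
        device = line.strip()                 # e.g. "41:00.0 Ethernet controller: ..."
    elif device and "MSI-X" in line:
        m = re.search(r"Count=(\d+)", line)   # vector table size, if reported
        vectors = m.group(1) if m else "?"
        print(f"{device}\n    {line.strip()} (vector table size: {vectors})")
        device = None                         # report each device once
```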

MSI-X plays a large role in improving vSphere networking performance, but with other technologies in play it's difficult to quantify exactly how much of the improvement it accounts for. In any event, make sure that your virtual machines are using VMXNET3 (VMXNET version 3) virtual network adapters, which provide support for MSI-X and other new performance enhancements.
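
If you manage more than a handful of VMs, a short script can report which virtual NICs are actually VMXNET3. The sketch below uses the pyVmomi library against vCenter; the hostname and credentials are placeholders, certificate validation is disabled for brevity, and it's offered as a starting point rather than a polished tool.

```python
#!/usr/bin/env python3
"""Sketch: report the virtual NIC type of every VM so you can spot guests
that are not yet on VMXNET3. Hostname and credentials are placeholders."""

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab use; validate certs in production
si = SmartConnect(host="vcenter.example.com",    # placeholder vCenter/ESXi host
                  user="administrator", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:                    # skip VMs with no readable config
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                nic = ("VMXNET3" if isinstance(dev, vim.vm.device.VirtualVmxnet3)
                       else type(dev).__name__)
                print(f"{vm.name}: {dev.deviceInfo.label} -> {nic}")
    view.Destroy()
finally:
    Disconnect(si)
```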
