Jasmine McTigue
Commentary

Harnessing vSphere Performance Benefits For NUMA: Part 2

In my last post I talked a great deal about native NUMA support in vSphere on NUMA-enabled Opteron and Nehalem processor platforms. NUMA is a strong technology in and of itself, but it really starts to shine when teamed with other supporting technologies. In this post, I'm going to cover the details of integrating next-generation networking and interrupt technologies to improve storage and networking performance. I'll focus on MSI-X; future posts will cover VMDq, RSS and multiple queues in more detail, as all of these technologies work together to distribute load across multi-core CPUs and dramatically improve system performance.

MSI-X (Message Signaled Interrupts eXtended) is a newer technology that replaces standard MSI for interrupt delivery. At a basic level, MSI allows a PCI-based device to claim CPU time by raising an interrupt. MSI supports up to 32 such interrupt vectors per device, more than sufficient for the original single-core systems. With the advent of multi-core processing, however, more vectors have become necessary so that requests for CPU time can be redirected to multiple cores in a load-balanced fashion (MSI-X supports 2,048). This change allows interrupts to be managed more smoothly by the system and handled in parallel, improving latency and lowering CPU utilization.
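
If you want to see how those extra vectors actually get used, the per-CPU interrupt counters on a Linux host tell the story. Here is a minimal Python sketch (my own illustration, not from this article) that parses /proc/interrupts and prints how a device's MSI/MSI-X vectors are spread across CPUs; the "eth0" filter is an assumption, so substitute whatever device name appears in your own output.

    # Sketch: summarize how MSI/MSI-X vectors are distributed across CPUs by
    # parsing /proc/interrupts on a Linux host.
    DEVICE = "eth0"  # assumption -- use the device name from your own system

    with open("/proc/interrupts") as f:
        lines = f.read().splitlines()

    cpus = lines[0].split()                  # header row: CPU0 CPU1 ...
    for line in lines[1:]:
        if "MSI" not in line or DEVICE not in line:
            continue
        fields = line.split()
        irq = fields[0].rstrip(":")
        counts = fields[1:1 + len(cpus)]     # one counter column per CPU
        summary = ", ".join(f"{c}={n}" for c, n in zip(cpus, counts))
        print(f"IRQ {irq}: {summary}")

On a multiqueue adapter you would expect to see one vector per queue, each accumulating counts on a different core rather than all of them piling onto CPU0.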

In NUMA systems, MSI-X is used to direct the interrupt back to the NUMA node and processor that initiated the I/O. This translates into a direct performance advantage, since it lowers interrupt latency and takes advantage of cache locality. MSI-X helps in any situation where a card-based device needs CPU time to process I/O, which means MSI-X support in cards and peripherals makes a huge difference in both networking and storage performance for vSphere hosts, where the typical bottleneck is the host's connection to the outside world.
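
One way to check whether that locality is actually being preserved on a Linux host is to compare the NUMA node a NIC is attached to with the CPU affinity of its interrupt vectors. The sketch below is an assumption-laden illustration rather than anything from the article: it relies on the standard sysfs paths for PCI devices (numa_node, msi_irqs) found on reasonably modern kernels, and "eth0" is a placeholder interface name.

    # Sketch: compare a NIC's NUMA node with the CPU affinity of its MSI-X vectors.
    import glob, os

    IFACE = "eth0"  # placeholder interface name

    numa_node = open(f"/sys/class/net/{IFACE}/device/numa_node").read().strip()
    print(f"{IFACE} is attached to NUMA node {numa_node}")

    # Each MSI/MSI-X vector assigned to the device appears under msi_irqs/.
    for irq_path in sorted(glob.glob(f"/sys/class/net/{IFACE}/device/msi_irqs/*")):
        irq = os.path.basename(irq_path)
        affinity = open(f"/proc/irq/{irq}/smp_affinity_list").read().strip()
        print(f"IRQ {irq} is allowed on CPUs: {affinity}")

Ideally, the allowed CPU list for each vector falls on cores belonging to the same node the device reports.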

The latest generation of HBAs is particularly strong in this respect, as this benchmark by ThirdIO shows. To take advantage of MSI-X, you must be running Windows Server 2008/Vista/Windows 7 or Linux kernel 2.6.24 or later, and the card in question must also support MSI-X. In Fibre Channel HBAs, both Emulex and QLogic are producing MSI-X-compatible cards. In networking, Intel's 82575 dual-port Gigabit adapter and 82598 dual-port 10 Gigabit Ethernet adapter both support MSI-X, as well as VMDq, RSS and multiple transmission queues, making them an easy choice.
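
On the Linux side, both requirements are easy to confirm from the command line. Here's a small Python sketch (again my own illustration, not from the article) that checks the running kernel against the 2.6.24 minimum and counts PCI devices advertising an MSI-X capability via the standard lspci tool.

    # Sketch: quick MSI-X readiness checks on a Linux host.
    import platform, subprocess

    release = platform.release()                      # e.g. "2.6.32-754.el6.x86_64"
    parts = (release.split("-")[0].split(".") + ["0", "0"])[:3]
    kernel = tuple(int(p) for p in parts)
    ok = kernel >= (2, 6, 24)
    print(f"Kernel {release}: {'meets' if ok else 'below'} the 2.6.24 minimum")

    lspci = subprocess.run(["lspci", "-v"], capture_output=True, text=True).stdout
    msix_lines = [l for l in lspci.splitlines() if "MSI-X" in l]
    print(f"{len(msix_lines)} PCI capability lines advertise MSI-X")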

MSI-X plays a large role in improving vSphere networking performance, but with so many other technologies in play it's difficult to quantify exactly how much of the improvement it accounts for. In any event, make sure your virtual machines are using VMXNET3 (VMXNET version 3) virtual network adapters, which provide support for MSI-X and the other new performance enhancements.
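
To audit an environment for adapters that aren't VMXNET3, something along the lines of the following pyVmomi sketch works; the hostname and credentials are placeholders and the certificate check is disabled purely for lab convenience, so treat this as an assumption-heavy illustration rather than a recommended script.

    # Sketch: flag virtual NICs that are not VMXNET3 across all VMs in a vCenter.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                    ok = isinstance(dev, vim.vm.device.VirtualVmxnet3)
                    print(f"{vm.name}: {dev.deviceInfo.label} -> "
                          f"{'VMXNET3' if ok else 'NOT VMXNET3'}")
    finally:
        Disconnect(si)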

Jasmine McTigue is principal and lead analyst of McTigue Analytics and an InformationWeek and Network Computing contributor, specializing in emergent technology, automation/orchestration, virtualization of the entire stack, and the conglomerate we call cloud.