Jasmine McTigue

Network Computing Blogger



Harnessing vSphere Performance Benefits For NUMA: Part 2

In my last post, I talked a great deal about native NUMA support in vSphere on enabled Opteron and Nehalem processor platforms. NUMA is a strong technology in its own right, but it really starts to shine when teamed with other supporting technologies. In this post, I'm going to cover the details of integrating next-generation networking and interrupt technologies to improve storage and networking performance. I'll focus on MSI-X; future posts will cover VMDq, RSS and multiple queues in more detail, as all of these technologies work together to distribute load across multi-core CPUs and dramatically improve system performance.

MSI-X (Message Signaled Interrupts eXtended) is the successor to standard MSI as an interrupt mechanism. At a basic level, MSI allows a PCI-based device to claim CPU time by raising an interrupt. MSI supports up to 32 such interrupt vectors, which was more than sufficient for the original single-core systems. With the advent of multi-core processing, however, more vectors became necessary so that requests for CPU time can be redirected to multiple cores in a load-balanced fashion; MSI-X supports 2,048. This change allows the system to manage interrupts more smoothly and handle them simultaneously, improving latency and lowering CPU utilization.
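You can see these vector counts for yourself on a Linux host by inspecting each PCI device's MSI-X capability. Here's a minimal Python sketch (my own illustration, not from the original post) that parses `lspci -vv` output; it assumes the pciutils package is installed, and you'll typically need root to see the capability details:

```python
# Sketch: list PCI devices that advertise MSI-X, with their vector
# counts, by parsing `lspci -vv` (requires pciutils; run as root to
# see full capability details). Illustrative only.
import re
import subprocess

def msix_devices():
    """Yield (device header, vector count, enabled?) parsed from lspci."""
    out = subprocess.run(["lspci", "-vv"], capture_output=True,
                         text=True, check=True).stdout
    device = None
    for line in out.splitlines():
        if line and not line[0].isspace():      # new device header line
            device = line.strip()
        m = re.search(r"MSI-X: Enable([+-]) Count=(\d+)", line)
        if m and device:
            yield device, int(m.group(2)), m.group(1) == "+"

if __name__ == "__main__":
    for dev, count, enabled in msix_devices():
        state = "enabled" if enabled else "present but disabled"
        print(f"{count:5d} vectors ({state}): {dev}")
```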

In NUMA systems, MSI-X is used to direct an interrupt request back to the NUMA node and processor that initiated the I/O. This translates into a direct performance advantage, as it lowers interrupt latency and takes advantage of cache locality. MSI-X helps in any situation where a card-based device requires CPU time to process I/O, which means MSI-X support in cards and peripherals makes a huge difference in both networking and storage performance for vSphere hosts, where the typical bottleneck is the host's connection to the outside world.
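One way to see the NUMA angle concretely on Linux is interrupt affinity: each MSI-X vector gets its own IRQ, which can be pinned to the cores of a particular node. The sketch below is a hypothetical illustration, not anything from the post; the IRQ number and CPU list are placeholders you would take from /proc/interrupts and /sys/devices/system/node/node0/cpulist, root is required, and irqbalance may overwrite manual settings:

```python
# Sketch: pin an MSI-X interrupt vector to the CPUs of one NUMA node
# by writing a hex CPU mask to /proc/irq/<irq>/smp_affinity (Linux,
# needs root). IRQ 98 and cores 0-3 below are hypothetical examples.
def cpu_mask(cpus):
    """Build the hex affinity mask for a list of CPU ids."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

def pin_irq(irq, cpus):
    """Restrict the given IRQ to the given CPUs."""
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(cpu_mask(cpus))

# Example: pin IRQ 98 (say, one NIC queue vector) to node 0's cores 0-3.
pin_irq(98, [0, 1, 2, 3])
```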

The latest generation of HBAs is particularly strong in this respect, as a benchmark by ThirdIO shows. To take advantage of MSI-X, you must be running Windows Server 2008, Vista, Windows 7, or Linux kernel 2.6.24 or later, and the card in question must also support MSI-X. In Fibre Channel HBAs, both Emulex and QLogic produce MSI-X-compatible cards. In networking, Intel's 82575 dual-port Gigabit adapter and 82598 dual-port 10 Gigabit Ethernet adapter both support MSI-X, as well as VMDq, RSS and multiple transmit queues, making them an easy choice.
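If you want to sanity-check the kernel side of that prerequisite on a Linux guest, a quick version comparison does the job. This is a simple illustrative sketch, assuming the standard kernel release string format:

```python
# Sketch: confirm the running Linux kernel meets the 2.6.24 minimum
# cited above for MSI-X support. Illustrative version check only.
import platform

def kernel_at_least(minimum=(2, 6, 24)):
    release = platform.release()          # e.g. "2.6.32-358.el6.x86_64"
    parts = release.split("-")[0].split(".")
    version = tuple(int(p) for p in parts[:3])
    return version >= minimum

print("MSI-X capable kernel:", kernel_at_least())
```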

MSI-X plays a large role in improving vSphere networking performance, but with so many other technologies in play, it's difficult to quantify exactly how much of the improvement it accounts for. In any event, make sure your virtual machines are using VMXNET3 virtual network interfaces, which provide support for MSI-X and other recent performance enhancements.
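To audit which of your VMs are actually using VMXNET3, one option is a short script against the vSphere API. The sketch below uses the pyVmomi SDK (pip install pyvmomi); the vCenter hostname and credentials are placeholders, and the unverified-SSL context is for lab use only:

```python
# Sketch: report each VM's vNIC types via pyVmomi, flagging VMXNET3.
# Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:               # skip templates/inaccessible VMs
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                kind = ("VMXNET3"
                        if isinstance(dev, vim.vm.device.VirtualVmxnet3)
                        else type(dev).__name__)
                print(f"{vm.name}: {dev.deviceInfo.label} -> {kind}")
    view.DestroyView()
finally:
    Disconnect(si)
```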

