Intel's VMDq On ESX = Broad Market Goodness

I missed Intel's 10GbE & IOV news. Read on for why you shouldn't make the same mistake.

Joe Hernick

March 6, 2008


Intel has leveraged its I/O acceleration technology in VMs since 2006; the VMworld demonstration of Virtual Machine Device Queues (VMDq) gives its 10-Gb cards better alignment with virtualized switching inside the ESX host. Aligning queues yields lower latency and better real-world performance. Intel also gets bragging rights for the first 10-Gigabit Ethernet (10GbE) iSCSI support on ESX.

I met with Shefali Chini and Steven Schultz from Intel's LAN Access Division to discuss VMDq and all things 10GbE. The Intel team is working with VM vendors to improve performance by dedicating virtual I/O pathways based on VM guest requirements.

The Intel cards work much of this magic in hardware, whereas most competitors rely on software for packet prioritization and queuing. Offloading packet sorting to the NIC yields a throughput boost over host-based VMM queuing, and ESX host CPUs get a break as well. While VMware's NetQueue takes full advantage of Intel's VMDq, support on Virtual Iron and Citrix/Xen hosts is still pending. Both can run Intel's 1-Gb and 10-Gb NICs just fine, but they won't get the performance boost until early next year; Intel is working with the Xen community to integrate VMDq functionality with a 1Q '09 target date. (Yes, it is working with MS on Hyper-V, too...)
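For anyone who hasn't dug into the queue-per-VM idea, here's a rough Python sketch of the concept -- my own illustration, not Intel's driver code, and every name in it (Frame, VmdqNic) is made up. The point is simply that the NIC sorts arriving frames into per-guest receive queues by destination MAC, so the vSwitch doesn't have to do that sort in software for every packet.

```python
# Illustrative sketch only -- not Intel driver code. It models the idea behind
# VMDq: the NIC sorts incoming frames into per-VM receive queues by destination
# MAC, so the hypervisor no longer demultiplexes every packet in software.
# All names here (Frame, VmdqNic) are hypothetical.
from collections import defaultdict, deque
from dataclasses import dataclass

@dataclass
class Frame:
    dst_mac: str   # MAC address of the target VM's virtual NIC
    payload: bytes

class VmdqNic:
    """Models a NIC with one hardware receive queue per VM (VMDq-style)."""

    def __init__(self):
        self.queues = defaultdict(deque)   # dst MAC -> per-VM receive queue
        self.default_queue = deque()       # frames for unknown MACs

    def register_vm(self, mac: str) -> None:
        # The hypervisor tells the NIC which MAC belongs to which guest,
        # so the sort can happen in hardware as frames arrive.
        self.queues[mac]  # touch the defaultdict to create the queue

    def receive(self, frame: Frame) -> None:
        # Hardware-side sort: pick the queue by destination MAC.
        if frame.dst_mac in self.queues:
            self.queues[frame.dst_mac].append(frame)
        else:
            self.default_queue.append(frame)

# Without VMDq, every frame lands in one shared queue and the VMM burns host
# CPU cycles doing this same sort in software, packet by packet.
nic = VmdqNic()
nic.register_vm("00:50:56:aa:bb:01")
nic.receive(Frame("00:50:56:aa:bb:01", b"traffic for guest 1"))
print(len(nic.queues["00:50:56:aa:bb:01"]))  # 1
```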

Performance numbers promise good things; Intel's benchmarks on an 8-way host running 8 VMs demoed a throughput increase from 4 Gbps to 9.2 Gbps with VMDq enabled. Numbers go up to 9.5 Gbps by tweaking packet size. While I always take vendor benchmarks with a grain of salt, I/O-congested ESX servers will likely see a dramatic real-world boost.
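If you want to sanity-check those figures, the back-of-the-envelope math is simple; the throughput numbers below are Intel's quoted benchmarks, and the few lines of Python are my own.

```python
# Quick check on Intel's quoted benchmark figures (the Gbps values come from
# the article; the arithmetic is just illustrative).
baseline_gbps = 4.0    # 8 VMs on an 8-way host, VMDq disabled
vmdq_gbps = 9.2        # same setup, VMDq enabled
tuned_gbps = 9.5       # with packet-size tweaks
link_gbps = 10.0       # 10GbE line rate

print(f"Speedup with VMDq: {vmdq_gbps / baseline_gbps:.1f}x")   # ~2.3x
print(f"Link utilization:  {vmdq_gbps / link_gbps:.0%} -> {tuned_gbps / link_gbps:.0%}")
```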

Why all the fuss around IOV at VMworld? The answer is simple: we're all pushing against the bottleneck of 1-Gb NICs for multiguest boxes. Necessity has spawned a variety of solutions, but most are either Band-Aids or greatly increase management complexity: dedicated bridged cards per VM, parallel storage architectures using dedicated NICs and/or HBAs, performance management tools, etc. While companies like Xsigo are suggesting we leapfrog conventional solutions and jump to a full IOV model, Intel sees 10GbE as viable for the long term in conventional and virtualized servers. I like where Xsigo is going -- unfortunately, not one of my servers has InfiniBand slots. Intel is betting most shops will stick with good-ol' Ethernet (incrementally faster, of course) as the platform of choice for data and storage networking. Fibre Channel is here to stay? FCoE yields great numbers on 10GbE. iSCSI? 'Nuff said. Dozens of VMs on one host? VMDq makes sense to me.
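To put that 1-Gb bottleneck in perspective, here's a crude bit of Python of my own that splits a pipe evenly across guests. Real traffic is burstier than this, so treat the results as ceilings rather than predictions.

```python
# Rough illustration of the 1Gb bottleneck: divide the pipe evenly across
# guests. Real workloads are bursty, so these are best-case per-VM ceilings.
def per_vm_mbps(link_gbps: float, vm_count: int) -> float:
    return link_gbps * 1000 / vm_count

for vms in (4, 8, 16):
    print(f"{vms:>2} VMs:  1GbE ~{per_vm_mbps(1, vms):6.0f} Mbps/VM   "
          f"10GbE ~{per_vm_mbps(10, vms):6.0f} Mbps/VM")
```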

Intel and Neterion are on the forefront of mainstream I/O optimization for VMs. Every HW vendor, every virt platform provider, heck, every chipmaker in this market, is working to make its stuff work better in increasingly congested virtualized environments. As a guy who bought one of the first Bay Networks fast-E switches back in the mid-'90s, I know there will be the occasional trip or hiccup as vendors rush to market. VMware's TAP program and the open Xen community should help mitigate any growing pains around IOV.

Every box in our test lab is running 1GbE NICs at the moment ... we haven't had the pressure to push our testing to the 10Gbps range. Looks like I'll have a busy summer.
