The Inhibitors To I/O Virtualization

George Crump

August 12, 2010

In my entry "I/O Virtualization: When, Not If," I discussed some of the merits of I/O virtualization (IOV) and the value of offloading I/O responsibilities from the hypervisor with SR-IOV (single-root IOV). While SR-IOV and IOV may seem like great strategies, there are some inhibitors to overcome. The first is OS and hypervisor support; the second is dealing with the disruption to the network and storage infrastructure when the technology is first implemented.

When it comes to SR-IOV-enabled cards, the primary question is when operating systems and hypervisors will support the technology; that support has to be in place before SR-IOV can deliver everything it offers. As we discussed, IOV gateways (IOGs) largely solve this problem by presenting SR-IOV cards as "regular" cards to the servers that attach to them. Vendors may forgo SR-IOV altogether and develop their own multi-host cards for their gateways so they don't have to wait for OS or hypervisor support, or for SR-IOV itself. The trade-off is that they must develop the card themselves, and potentially an OS driver, or require an IOG.
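
Checking for that OS-side support is straightforward on a recent Linux host. The sketch below, a minimal example assuming a kernel and drivers that expose SR-IOV through sysfs (the sriov_totalvfs attribute), lists which PCI devices advertise virtual functions; the paths are standard sysfs, but whether they appear depends on your kernel and driver versions.

```python
#!/usr/bin/env python3
"""Probe PCI devices for SR-IOV support via Linux sysfs.

A minimal sketch, assuming a Linux kernel recent enough to expose
the sriov_totalvfs / sriov_numvfs attributes; availability varies
by kernel, driver, and card.
"""
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def sriov_capable_devices():
    """Yield (pci_address, max_vfs) for every device whose driver
    advertises SR-IOV support."""
    for dev in PCI_DEVICES.iterdir():
        total = dev / "sriov_totalvfs"
        if total.exists():
            yield dev.name, int(total.read_text().strip())

if __name__ == "__main__":
    for addr, max_vfs in sriov_capable_devices():
        print(f"{addr}: SR-IOV capable, up to {max_vfs} virtual functions")
        # Enabling VFs (requires root and driver support) looks like:
        #   echo 4 > /sys/bus/pci/devices/<addr>/sriov_numvfs
```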

If the IOG is the way to go, then the bigger challenge is implementing the IOG itself. As we discussed in our article "What Is I/O Virtualization," this is infrastructure: the gateway device has to sit between the storage, the networks and the servers that connect to them. Partly, this is just a reality of infrastructure, where changes are slow to take place. Still, steps can be taken to minimize the downtime associated with implementing the IOG by leveraging the high availability already in the environment: the changeover to the IOG can be made one link at a time.
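
To make that rolling changeover concrete, here is a small sketch of the sequence. It is purely illustrative: the Link class and migrate_link steps are hypothetical stand-ins for whatever multipath failover and re-cabling procedure an environment actually uses, not any vendor's tooling.

```python
"""Sketch of a one-link-at-a-time changeover to an IOV gateway (IOG).

Purely illustrative: Link and migrate_link are hypothetical stand-ins
for an environment's real multipath failover and re-cabling steps.
"""
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    online: bool = True
    via_gateway: bool = False

def migrate_link(active: Link, partner: Link) -> None:
    """Move one link behind the IOG while its partner carries the
    traffic, mirroring a standard HA maintenance window."""
    assert partner.online, "never take the last remaining path down"
    active.online = False        # 1. fail traffic over to the partner
    active.via_gateway = True    # 2. re-cable the offline link via the IOG
    active.online = True         # 3. bring it back into the multipath set

if __name__ == "__main__":
    a, b = Link("hba0"), Link("hba1")
    migrate_link(a, b)   # hba0 now runs through the gateway
    migrate_link(b, a)   # then hba1, with hba0 carrying the load
    print(a)
    print(b)
```

The point of the sequence is that the environment never loses its last path, so the gateway can be introduced without a full outage.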

The other inhibitor to IOV goes beyond the speed at which infrastructure changes, though. Some of the initial forays into the IOV market compounded the problem by introducing both a new type of card installed in the server and a new connection methodology from the server to the gateway, typically PCIe or InfiniBand. While the I/O performance of these solutions was excellent, they did require installing a new class of connection in the environment. For some servers the advanced I/O performance of these technologies justified that change, but for many others it did not. What was needed was a less disruptive way to bring IOV to servers with more modest I/O requirements.

The next step in IOV is to leverage a connection technology that is already widespread in the data center, and Ethernet is the most likely candidate. While today that would be limited to a 10GbE connection speed, over the next few years that speed will increase significantly. The advantage of leveraging Ethernet is that the infrastructure is already in place, and the move to 10GbE is already happening in many servers. As administrators install 10GbE cards, why not pick ones that can also support IOV? This allows maximum flexibility in dealing with the I/O demands that server virtualization places on the infrastructure, as well as in choosing future storage and network protocols. Moving to virtualized I/O can be somewhat disruptive; choosing the right time to make that move makes it less so. The right time may very well be as you upgrade the environment to 10GbE.
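
If you are evaluating 10GbE cards anyway, it costs little to check whether a candidate already advertises SR-IOV. The sketch below, assuming a Linux host with the standard pciutils package installed, shells out to lspci and flags Ethernet controllers whose PCI capability list includes SR-IOV; the string matching is a heuristic, and a full capability dump may require root.

```python
"""Flag Ethernet controllers that advertise SR-IOV, via lspci.

A minimal sketch assuming pciutils is installed; the string match
against lspci -vv output is a heuristic, and some systems require
root to dump the full capability list.
"""
import subprocess

def ethernet_controllers():
    """Yield (pci_address, description) for Ethernet-class devices."""
    out = subprocess.run(["lspci", "-D"], capture_output=True,
                         text=True, check=True)
    for line in out.stdout.splitlines():
        if "Ethernet controller" in line:
            addr, _, desc = line.partition(" ")
            yield addr, desc

def has_sriov(addr: str) -> bool:
    """Check one device's capability list for the SR-IOV capability."""
    out = subprocess.run(["lspci", "-vv", "-s", addr],
                         capture_output=True, text=True, check=True)
    return "Single Root I/O Virtualization" in out.stdout

if __name__ == "__main__":
    for addr, desc in ethernet_controllers():
        tag = "SR-IOV" if has_sriov(addr) else "no SR-IOV"
        print(f"{addr}  [{tag}]  {desc}")
```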
