George Crump
Commentary
I/O Virtualization: When, Not If

I/O Virtualization (IOV) is an I/O card-sharing technology that lets multiple servers share multiple cards across a single, high-speed cable segment. The general purpose of IOV is to make it easier to share bandwidth among servers in a rack. The cards to be shared are placed in a gateway, and the servers connect to that gateway. Cards are typically shareable on a per-port basis. For example, a quad-port Ethernet card could be assigned to four different servers. The ports or cards can be quickly assigned and re-assigned to the connecting servers, providing hot-swap-like functionality to PCIe. IOV is still in its infancy, but it is destined to become a standard component of data center architecture.

There will be cost savings associated with IOV. For example, a single pair of cards in a server can perform multiple I/O functions and can replace several single-function cards per server. It also eliminates the need for a redundant card of each type in each server. Instead, a single card can act as a "hot spare" held within the I/O gateway and be assigned to the hosts if another card inside the gateway fails. Things get interesting when a single port on a card can be shared across multiple hosts. Accomplishing this may mean adopting Single Root I/O Virtualization (SR-IOV), a specification from the PCI-SIG, the industry group that manages the PCI standard. Vendors may also create their own cards that can be shared. The result is that a 10Gb Ethernet card or 16Gb Fibre Channel card will be shareable across multiple servers, with each getting chunks of that bandwidth. The challenge for SR-IOV is the time it will take to come to market. Vendors that develop their own cards may have these capabilities sooner.
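The gateway model described above can be sketched in a few lines of code: shared cards live in the gateway, ports are assigned to servers individually, and a hot spare takes over a failed card's assignments. All of the names here (IOGateway, the card and server labels) are illustrative assumptions, not any vendor's actual API:

```python
# Toy model of an IOV gateway: multi-port cards shared on a per-port basis,
# with a hot-spare card held in the gateway. Illustrative only -- this is
# not a real IOV or SR-IOV programming interface.

class Card:
    def __init__(self, name, ports):
        self.name = name
        self.ports = {p: None for p in range(ports)}  # port number -> server

class IOGateway:
    def __init__(self):
        self.cards = []
        self.spares = []

    def add_card(self, card, spare=False):
        (self.spares if spare else self.cards).append(card)

    def assign_port(self, card_name, port, server):
        for card in self.cards:
            if card.name == card_name:
                card.ports[port] = server
                return True
        return False

    def fail_card(self, card_name):
        """Promote a hot spare in place of a failed card, carrying its
        port-to-server assignments over. Returns the spare's name."""
        for i, card in enumerate(self.cards):
            if card.name == card_name and self.spares:
                spare = self.spares.pop(0)
                spare.ports = dict(card.ports)
                self.cards[i] = spare
                return spare.name
        return None

# A quad-port Ethernet card shared by four servers, plus one spare in the gateway.
gw = IOGateway()
gw.add_card(Card("eth-quad-0", 4))
gw.add_card(Card("eth-quad-1", 4), spare=True)
for port, server in enumerate(["srv-a", "srv-b", "srv-c", "srv-d"]):
    gw.assign_port("eth-quad-0", port, server)

replacement = gw.fail_card("eth-quad-0")  # spare inherits all four assignments
```

The point of the sketch is that the servers' view (which port they own) survives a card failure without anyone opening a server chassis.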

I believe IOV becomes a "when not if" technology because of the value it brings to the virtualized environment. Today, an I/O card in a virtual host is managed by the hypervisor. Unless you limit your VMs per physical host to the number of physical NIC ports available on the host, the hypervisor has to create virtual NICs. As a result, the hypervisor has to be involved in each I/O to determine which VM the I/O is intended for. This consumes host resources and limits the potential of getting full bandwidth from the NIC. With IOV, the inspection process can be offloaded from the hypervisor. This allows the cards to be virtualized and present themselves to the hypervisor as individual NICs that can be hard-assigned to specific VMs, minimizing hypervisor interaction and maximizing resources.
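The difference between the two I/O paths can be shown in toy form: in the hypervisor-mediated path, every frame costs an inspection to find its VM; in the IOV path, the card's virtual NIC is hard-assigned to one VM, so no per-frame lookup is needed. The class names (SoftSwitch, VirtualFunction) are illustrative, not any hypervisor's real API:

```python
# Toy contrast of hypervisor-mediated I/O vs. an IOV-style hard assignment.
# Illustrative only -- not a real hypervisor or NIC interface.

class SoftSwitch:
    """Hypervisor path: every frame is inspected to find its target VM."""
    def __init__(self):
        self.mac_to_vm = {}
        self.inspections = 0  # host CPU work, paid on every single I/O

    def attach(self, mac, vm):
        self.mac_to_vm[mac] = vm

    def deliver(self, frame):
        self.inspections += 1
        return self.mac_to_vm.get(frame["dst"])

class VirtualFunction:
    """IOV path: the card presents a virtual NIC hard-assigned to one VM,
    so frames land in that VM's queue with no per-frame hypervisor work."""
    def __init__(self, vm):
        self.vm = vm

    def deliver(self, frame):
        return self.vm  # no inspection step

sw = SoftSwitch()
sw.attach("aa:01", "vm1")
sw.attach("aa:02", "vm2")
for dst in ["aa:01", "aa:02", "aa:01"]:
    sw.deliver({"dst": dst})  # three frames, three inspections

vf = VirtualFunction("vm1")
vf.deliver({"dst": "aa:01"})  # delivered with zero inspections
```

The inspection counter is the resource drain the article describes; hard assignment makes it disappear from the host's bill.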

IOV will also give the IT administrator the ability to add I/O resources to servers, and remove them, as needed, a capability we call Infrastructure Bursting. With per-host virtual machine densities reaching the twenties and thirties, planning the I/O needs for those systems becomes more challenging. Predicting peak load times may be impossible. IOV lets you dynamically add I/O resources to physical hosts and VMs within them when peak times occur, and then re-assign them elsewhere when the need passes, basically bursting the infrastructure for a short time. All it takes is a spare card in the gateway to be assigned when I/O becomes an issue in a particular server.
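Infrastructure bursting reduces to a simple assign-and-release cycle against the gateway's spare pool, which a short sketch makes concrete. The host and card names here are invented for illustration:

```python
# Toy sketch of "infrastructure bursting": a spare card held in the gateway
# is assigned to a host for the duration of an I/O peak, then returned.
# Illustrative only -- names and structures are assumptions.

spares = ["hba-spare-0"]                           # unassigned cards in the gateway
host_cards = {"esx-01": ["eth-0"], "esx-02": ["eth-1"]}

def burst(host):
    """Assign a spare gateway card to a host hitting an I/O peak."""
    if spares:
        card = spares.pop(0)
        host_cards[host].append(card)
        return card
    return None  # pool exhausted; no extra bandwidth available

def release(host, card):
    """Return the card to the spare pool once the peak passes."""
    host_cards[host].remove(card)
    spares.append(card)

card = burst("esx-01")      # peak begins: esx-01 temporarily gains a card
# ... peak traffic is served with the extra bandwidth ...
release("esx-01", card)     # peak ends: the card goes back to the pool
```

Because the cycle is just a pool operation, the same spare can serve whichever host peaks next, which is the economic argument for holding spares in the gateway rather than in every server.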

The ability to offload the virtualization task from the host and to dynamically add temporary bandwidth to a host makes IOV a compelling technology, and something that will likely become prevalent in larger data centers. The inhibitors to IOV are its connection styles and the disruption that, like any infrastructure change, it may cause. We'll cover those inhibitors in a future entry. For more on IOV, see this entry.

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, ...