Virtualization Primer: Paravirtualization
PV, as it's known, increases both virtual machine and hypervisor performance.
August 18, 2011
If you think paravirtualization is when you have your avatar jump out of an airplane using your Wii, you're behind the times. With virtual hardware version 8 in vSphere 5, VMware introduced new paravirtualized guest adapters, and refined existing ones, for a variety of interface types.
Why is that important? First-generation hypervisors tricked guest operating systems into believing they were running on real hardware rather than virtual hardware, a ploy that modern hypervisors retain. This deception was necessary to ensure that early hypervisors would be compatible with operating systems that predated virtualization, and it was largely responsible for virtualization's broad compatibility with a wide variety of applications, even very early in the game.
Eventually, though, virtualization developers seized on the idea that a guest operating system that knew it was running on a hypervisor could communicate with that hypervisor directly, boosting both virtual machine and hypervisor performance. Thus was born paravirtualization, or PV.
PV requires a guest operating system that is both aware of the underlying hypervisor and capable of making direct calls to it. Vendors have given their PV implementations different names, but the basic concept is the same across all major platforms, including Microsoft's Hyper-V, VMware's vSphere, and Xen.
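To make the concept concrete, here's a minimal C sketch, loosely modeled on the pv_ops dispatch-table idea in the Linux kernel; every name in it is invented for illustration. A PV-aware guest kernel routes privileged operations through a table of function pointers, and whichever hypervisor it detects at boot installs its own implementations:

    /* Sketch of a paravirtualization dispatch table, loosely modeled on
     * the Linux kernel's pv_ops concept. All names are hypothetical;
     * real implementations are far more involved. */
    #include <stdio.h>
    #include <stdint.h>

    /* Privileged operations a PV-aware guest kernel routes through a table. */
    struct pv_ops {
        void (*flush_tlb)(void);           /* may become a hypercall */
        void (*write_cr3)(uint64_t val);   /* page-table base switch */
    };

    /* Backend a Xen-like hypervisor's PV layer might register. */
    static void xen_flush_tlb(void)       { puts("hypercall: flush TLB"); }
    static void xen_write_cr3(uint64_t v) { printf("hypercall: cr3=%llx\n",
                                                   (unsigned long long)v); }

    /* Native backend used when no hypervisor is detected. */
    static void native_flush_tlb(void)       { puts("native: mov cr3 reload"); }
    static void native_write_cr3(uint64_t v) { printf("native: cr3=%llx\n",
                                                      (unsigned long long)v); }

    static struct pv_ops ops = { native_flush_tlb, native_write_cr3 };

    int main(void) {
        /* At boot, the guest kernel detects its hypervisor and swaps in
         * that hypervisor's implementations; after that, ordinary kernel
         * code calls through the table without caring which is active. */
        ops = (struct pv_ops){ xen_flush_tlb, xen_write_cr3 };
        ops.flush_tlb();
        ops.write_cr3(0x1000);
        return 0;
    }

The point is the indirection: the same kernel can run on bare metal or on any hypervisor that fills in the table.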
The real magic of PV kicked in, however, when processor vendors Advanced Micro Devices and Intel began shipping CPUs with hardware assistance for virtualization. Prior to PV and hardware-assisted virtualization, a guest virtual machine that needed outside input/output generally made a call to an emulated virtual device.
Generally, these I/O-sharing features are the most difficult part of a hypervisor to implement. The VT-x (Intel) and AMD-V (AMD) instruction set extensions facilitate direct use of devices by hypervisor-managed VMs. Though the two implementations are quite different, modern hypervisors manage them the same way: via an abstraction layer.
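As a small illustration of what such an abstraction layer looks like, the C sketch below probes the CPU for either vendor's published feature bit (VMX for Intel VT-x, SVM for AMD-V) and reports a single common answer. The enum and function names are invented for this example, and a real hypervisor checks considerably more than one bit:

    /* Detect hardware virtualization assist behind one common interface,
     * in the spirit of a hypervisor's abstraction layer. x86-only; uses
     * the <cpuid.h> helper shipped with GCC and Clang. */
    #include <cpuid.h>
    #include <stdio.h>

    typedef enum { HVASSIST_NONE, HVASSIST_VTX, HVASSIST_AMDV } hv_assist;

    static hv_assist detect_hv_assist(void) {
        unsigned int eax, ebx, ecx, edx;

        /* Intel VT-x advertises itself as the VMX bit: CPUID leaf 1, ECX bit 5. */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            return HVASSIST_VTX;

        /* AMD-V advertises itself as the SVM bit: CPUID leaf 0x80000001, ECX bit 2. */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            return HVASSIST_AMDV;

        return HVASSIST_NONE;
    }

    int main(void) {
        static const char *names[] = { "none", "Intel VT-x", "AMD-V" };
        printf("hardware virtualization assist: %s\n", names[detect_hv_assist()]);
        return 0;
    }

Callers see one answer, not two vendor-specific code paths; hypervisors apply the same trick on a much larger scale.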
Strictly speaking, DMA and interrupt remapping come from the companion I/O virtualization technologies, Intel VT-d and AMD-Vi; remapping gives guests direct access to PCI devices and is sometimes called PCI pass-through. Furthermore, the PCI-SIG's Single Root I/O Virtualization (SR-IOV) specification defines how a single physical PCI Express interface can deliver multiple virtual devices.
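On Linux, recent kernels expose SR-IOV through plain sysfs attributes, and the following C sketch shows the general shape of enabling virtual functions that way. This is a sketch under assumptions: the PCI address is a placeholder, the sriov_totalvfs and sriov_numvfs attributes exist only on kernels new enough to provide them, and the program must run as root on SR-IOV-capable hardware:

    /* Sketch: enable SR-IOV virtual functions on Linux via sysfs.
     * The PCI address below is a placeholder; requires root and a
     * kernel/device combination that actually supports SR-IOV. */
    #include <stdio.h>

    #define PF_DEV "/sys/bus/pci/devices/0000:01:00.0"  /* hypothetical PF */

    int main(void) {
        int total = 0;

        /* Ask how many virtual functions the physical function offers. */
        FILE *f = fopen(PF_DEV "/sriov_totalvfs", "r");
        if (!f) { perror("open sriov_totalvfs"); return 1; }
        if (fscanf(f, "%d", &total) != 1) total = -1;
        fclose(f);
        printf("device reports up to %d virtual functions\n", total);

        /* Carve out four VFs; each appears as its own PCI device that
         * can be passed through to a guest. */
        f = fopen(PF_DEV "/sriov_numvfs", "w");
        if (!f) { perror("open sriov_numvfs"); return 1; }
        fprintf(f, "4\n");
        fclose(f);
        puts("requested 4 virtual functions");
        return 0;
    }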
Basically, the magic of PV happens in two places. A modified operating system running on the guest makes direct calls to hardware using a special PV API. Then the hardware-assisted virtualization processor facilitates these calls and makes it easy for the hypervisor to map them to physical resources without diminishing IT's control over the machine.
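The guest side of that first step is the hypercall. The stub below follows the hypercall convention KVM documents for x86 guests (number in RAX, arguments in RBX and RCX, the VMCALL instruction to trap into the hypervisor), but it is only a sketch: the hypercall number is a placeholder, VMCALL faults if executed outside a guest, and AMD hardware uses VMMCALL instead.

    /* Guest-side hypercall stub following the KVM x86 convention:
     * hypercall number in RAX, arguments in RBX/RCX, result in RAX.
     * Only meaningful inside a guest kernel; on bare metal or in user
     * space, VMCALL raises a fault. The number below is a placeholder. */

    #define HC_EXAMPLE 0UL  /* hypothetical hypercall number */

    static inline long hypercall2(unsigned long nr,
                                  unsigned long arg0, unsigned long arg1)
    {
        long ret;
        /* VMCALL forces a VM exit; the hypervisor reads RAX to choose a
         * handler, services the request, and writes the result to RAX. */
        __asm__ volatile("vmcall"
                         : "=a"(ret)
                         : "a"(nr), "b"(arg0), "c"(arg1)
                         : "memory");
        return ret;
    }

    /* Inside a guest kernel, a PV driver might invoke it like this:
     *     long rc = hypercall2(HC_EXAMPLE, guest_phys_addr, length);
     */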
At present, all major virtualization vendors support PV, and both major processor vendors support hardware-assisted virtualization. While the two processor implementations differ significantly, they are functionally similar enough to be managed by a common abstraction layer. And that's good for IT.