Implementing Virtualization

This major advance eliminates ring-transition issues for guest OSes, reduces virtualization overhead, and supports virtualization of any OS without kernel or runtime modifications. Intel and AMD have chosen slightly different ways of accomplishing this minor miracle, but the good news is that, even though the two technologies aren't completely interchangeable, all virtualization vendors have committed to supporting both.

Intel was first out of the gate with its VT-x, which creates ring -1 and offers a new set of instructions to set up, manage and exit VMs and handle memory management. VT-x shares a number of similarities with AMD's AMD-V (formerly Pacifica) chip-assist technology. In processors with chip-assist, the hypervisor resides in ring -1 and creates a VM control structure to support each new VM. This provides a mechanism to launch, resume and exit VMs as needed and acts as the framework for context-switching between the VMM and spawned VMs.
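
To make that lifecycle concrete, here's a minimal user-space sketch in C that models the launch/resume/exit flow driven by the per-VM control structure. The struct and functions are illustrative stand-ins; a real hypervisor keeps this state in the hardware control structure and enters guests with the actual VMLAUNCH and VMRESUME instructions from ring -1.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative stand-in for the per-VM control structure the hypervisor
 * allocates in ring -1; real VT-x keeps this state in hardware. */
struct vm_control {
    int           vm_id;
    unsigned long guest_rip;   /* saved guest instruction pointer */
    bool          launched;    /* VMLAUNCH once, VMRESUME afterward */
};

/* Model of the entry path: the first entry uses VMLAUNCH, and every
 * subsequent entry uses VMRESUME against the same control structure. */
static void vm_enter(struct vm_control *vm)
{
    if (!vm->launched) {
        printf("VM %d: VMLAUNCH at rip=%#lx\n", vm->vm_id, vm->guest_rip);
        vm->launched = true;
    } else {
        printf("VM %d: VMRESUME at rip=%#lx\n", vm->vm_id, vm->guest_rip);
    }
}

/* Model of a VM exit: the processor saves guest state into the control
 * structure and returns control to the VMM in ring -1. */
static void vm_exit(struct vm_control *vm, unsigned long next_rip)
{
    vm->guest_rip = next_rip;
    printf("VM %d: VMEXIT, state saved\n", vm->vm_id);
}

int main(void)
{
    struct vm_control vm = { .vm_id = 1, .guest_rip = 0x1000 };

    vm_enter(&vm);          /* first entry: VMLAUNCH */
    vm_exit(&vm, 0x1042);   /* guest traps back to the VMM */
    vm_enter(&vm);          /* re-entry: VMRESUME */
    return 0;
}
```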

Many virtual machines and their OS stacks can reside in ring 0 without contention; control of these VMs (called VMXes by Intel and SVMs, or secure VMs, by AMD) is managed similarly on either chip. More importantly, allowing guest OSes to reside in ring 0 eliminates the challenges presented by ring transitions. Because a number of instructions are location-sensitive and designed to transition only between rings 0 and 3, a key process may fail unpredictably, or not fail when it should, if the VOS sits anywhere other than ring 0. Now, with VMs located safely in ring 0 where nature intended, the software mechanisms once required to intercept and correct the misbehavior of a VOS running in the wrong ring are no longer needed. When errors occur in guest VMs, the processor can switch control to the protected VMM, which can either resolve the problem and return control to the VM or terminate it without affecting the other VMs running on the system.
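
That fault isolation amounts to a dispatch loop in the monitor. The sketch below models it in plain C; the exit reasons and handler here are invented for illustration, since real processors report numeric exit codes to the VMM.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative exit reasons; real hardware reports numeric exit codes. */
enum exit_reason { EXIT_SENSITIVE_INSN, EXIT_FATAL_FAULT };

struct vm { int id; bool alive; };

/* Model of the VMM's dispatch: on each exit the protected monitor either
 * emulates the faulting operation and resumes the guest, or tears the
 * guest down while the other VMs keep running. */
static void handle_exit(struct vm *v, enum exit_reason why)
{
    switch (why) {
    case EXIT_SENSITIVE_INSN:
        printf("VM %d: emulated sensitive instruction, resuming\n", v->id);
        break;
    case EXIT_FATAL_FAULT:
        printf("VM %d: unrecoverable fault, terminating\n", v->id);
        v->alive = false;
        break;
    }
}

int main(void)
{
    struct vm vms[2] = { { 1, true }, { 2, true } };

    handle_exit(&vms[0], EXIT_SENSITIVE_INSN); /* VM 1 resumes */
    handle_exit(&vms[1], EXIT_FATAL_FAULT);    /* VM 2 dies alone */
    printf("VM 1 alive: %d, VM 2 alive: %d\n", vms[0].alive, vms[1].alive);
    return 0;
}
```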

But here's where Intel and AMD diverge: Because Intel processors use an external memory controller, the new VT-x processor modifications don't offer virtual-memory-management capabilities. This means that software will still be needed to handle address translation between physical and virtual memory resources. It's not an optimal solution, but it is functional.
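
Conceptually, that software layer performs a second translation step the Intel hardware doesn't: guest-virtual to guest-physical through the guest's own page tables, then guest-physical to machine memory through the VMM's map. A toy version in C, with made-up page numbers:

```c
#include <stdio.h>

/* Toy page maps: index = page number, value = mapped page number. These
 * stand in for the guest's page tables and the VMM's map of guest-physical
 * to machine memory; the numbers are purely illustrative. */
static const unsigned guest_pt[4] = { 7, 3, 0, 5 };  /* guest virt -> guest phys */
static const unsigned host_map[8] = { 12, 9, 4, 20, 6, 17, 2, 11 }; /* guest phys -> machine */

/* Without hardware MMU virtualization, the VMM performs the second
 * translation in software and must keep its tables in sync with the guest. */
static unsigned translate(unsigned guest_virt_page)
{
    unsigned guest_phys = guest_pt[guest_virt_page];
    return host_map[guest_phys];   /* the software step VT-x still requires */
}

int main(void)
{
    printf("guest page 1 -> machine page %u\n", translate(1));
    return 0;
}
```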

In contrast, AMD's processors have the memory controller right on the chip, so AMD-V virtualization technology introduces unique new instructions that enable memory modes and features exclusive to its design. Most of these instructions deal with the MMU (memory management unit), which translates virtual addresses to physical memory. Under virtualization, the MMU has a lot to keep track of: it must map multiple OSes, each running multiple applications, onto physical memory addresses. AMD-V offers advanced memory features, including Tagged Translation Look-Aside Buffers, which boost performance by tagging each entry in this cache of recently used address translations with the VM that owns it, so the buffer needn't be flushed on every VM switch. There is also a Paged Real Mode that supports programs requiring real-mode addressing in a paged virtual environment.
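
Here's a toy C model of the tagged-TLB idea. The tables and field names are invented, but the lookup rule is the point: an entry hits only if both the address and the current VM's tag match, so one VM's cached translations survive a switch to another VM.

```c
#include <stdio.h>

/* Toy tagged TLB: each entry carries the owning VM's tag, so a world
 * switch between VMs does not force a full flush. Field names are
 * illustrative; real hardware uses an ASID-style tag. */
struct tlb_entry { int vm_tag; unsigned virt; unsigned phys; };

static const struct tlb_entry tlb[] = {
    { 1, 0x1000, 0x9000 },   /* belongs to VM 1 */
    { 2, 0x1000, 0x4000 },   /* same virtual address, different VM */
};

/* A hit requires both the address and the current VM's tag to match. */
static int lookup(int vm_tag, unsigned virt, unsigned *phys)
{
    for (unsigned i = 0; i < sizeof tlb / sizeof tlb[0]; i++) {
        if (tlb[i].vm_tag == vm_tag && tlb[i].virt == virt) {
            *phys = tlb[i].phys;
            return 1;
        }
    }
    return 0;  /* miss: fall back to a page-table walk */
}

int main(void)
{
    unsigned phys;
    if (lookup(2, 0x1000, &phys))
        printf("VM 2: 0x1000 -> %#x\n", phys);  /* hits VM 2's entry only */
    return 0;
}
```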

Perhaps the most interesting feature is AMD's support for NPTs (nested page tables). As opposed to Intel's software approach, NPT support gives each VM much greater control over its internal memory management by providing a private, virtual CR3 register (the x86 page-table base pointer) for each VM. Though nested paging increases the number of memory lookups per translation, because each step of the guest's page-table walk must itself be translated through the nested tables, it eliminates the software layer needed with VT-x. This promises substantially higher VM memory performance by keeping memory management where it belongs: in hardware. The speed boost of NPT will be most evident in memory-intensive applications, especially in environments with many VMs in residence.
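
A rough sketch of that two-dimensional walk, with invented tables: each guest translation is itself translated through the nested tables, which is why lookups multiply even though the whole path stays in hardware.

```c
#include <stdio.h>

/* Toy nested paging: the guest walks its own table (rooted at a virtual
 * CR3 private to the VM), then the hardware walks the nested table that
 * maps guest-physical to machine pages. Table contents are illustrative. */
static const unsigned guest_table[4]  = { 2, 0, 3, 1 };  /* guest CR3 walk */
static const unsigned nested_table[4] = { 8, 5, 6, 9 };  /* nested walk */

/* Each lookup costs two walks instead of one, but both stay in hardware,
 * so no VMM software layer sits on the translation path. */
static unsigned npt_translate(unsigned guest_virt_page)
{
    unsigned guest_phys = guest_table[guest_virt_page];  /* dimension 1 */
    return nested_table[guest_phys];                     /* dimension 2 */
}

int main(void)
{
    printf("guest page 2 -> machine page %u\n", npt_translate(2));
    return 0;
}
```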

CPU and private VMM memory management issues are only part of the virtualization puzzle; the next huge challenge for hardware manufacturers will be to improve the interaction and security of memory with shared I/O devices. This is a two-part problem, and perhaps the toughest challenge falls on I/O hardware developers, who need to create devices capable of sharing access across multiple VMs. The problem today is that modern storage, network and graphics devices offer only a single interface to the OS. As a result, software must handle IRQ, memory and timer functions for systems with multiple VMs until I/O hardware can present multiple, functional interfaces.
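
Until then, the multiplexing looks roughly like the C sketch below: the VMM fields the single physical interrupt, works out which VM the event belongs to, and injects a virtual IRQ into that guest. The routing rule and names here are hypothetical.

```c
#include <stdio.h>

/* Until devices expose one interface per VM, the VMM multiplexes a single
 * hardware interface in software: it fields the real interrupt, decides
 * which VM the event belongs to, and injects a virtual IRQ. The ownership
 * rule and names here are illustrative. */
struct packet { int dest_vm; const char *data; };

static void inject_virtual_irq(int vm_id, const char *data)
{
    printf("VM %d: virtual IRQ, payload \"%s\"\n", vm_id, data);
}

/* One physical NIC interrupt arrives; software routes it to the right VM. */
static void nic_interrupt(const struct packet *p)
{
    inject_virtual_irq(p->dest_vm, p->data);
}

int main(void)
{
    struct packet a = { 1, "to VM 1" }, b = { 2, "to VM 2" };
    nic_interrupt(&a);
    nic_interrupt(&b);
    return 0;
}
```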