Chip Changes Propel Virtualization

Vendors have been touting the massive potential for datacenter virtualization for some time. Now, new x86 processors from Intel and AMD are paving the way.

July 13, 2006


Think you can't teach an old dog new tricks? Consider x86 processors. Though the core instruction set hasn't changed in nearly 20 years, AMD and Intel continue to add major improvements, including 64-bit extensions for increased memory addressing, additional instructions for handling graphics and enhancements in floating-point math. And, perhaps most important for companies pondering server virtualization, the latest x86 advances from AMD and Intel introduce chip-level virtualization-assist technologies that could finally make virtualization live up to the hype.

The biggest challenge thus far has been making x86 virtualization happen at all. Architecturally, the platform was never designed to support multiple OSs concurrently, meaning virtualization vendors were forced to overcome both hardware and software limitations to allocate and manage processor, memory and I/O resources. VMware has traditionally dominated this arena, not only because it was first, but because it was able to overcome these hardware issues while providing a workable management environment for handling the problems inherent in large-scale virtualization.

Now that virtualization features in next-generation AMD and Intel processors are paving the way for efficient, hypervisor-based virtualization of x86 systems, the emphasis can shift to making the process more reliable. Though the hypervisor technologies from VMware, Microsoft and the open-source Xen project take different approaches, choosing among them may be less crucial than addressing the management challenges presented by large-scale virtualization. Eventually, the real market winners will be vendors that offer the best capabilities for translating our physical environments into more productive virtual environments. But first, we'll need a little help from our friends, the processor vendors.

Old Problem, New Solutions

Of the two major server virtualization options available today, hypervisor-based server virtualization presents more problems than OS partitioning. With OS partitioning, the host OS provides access to all hardware resources, eliminating many of the issues inherent in hypervisors but limiting users to the host OS. Hypervisor-based virtualization offers the flexibility of bare-metal support for multiple OSs but creates numerous technical challenges--related to the allocation of CPU, memory and I/O resources--that have required a lot of software to fix. Fortunately, AMD and Intel have come up with new hardware remedies for these sticky problems.

In a normal x86 operating environment, the OS runs at protected ring 0. In virtualization without processor-assist, ring 0 is instead needed to run the VMM (virtual machine monitor) or hypervisor to manage hardware resources for VMs and their VOSs (virtual OSs). The challenge, then, for CPU virtualization was finding a way to make the OS function properly in a location other than ring 0. To solve this problem, chip-assisted virtualization allows for a new, super-privileged and protected ring -1 for the VMM. This new location lets VOSs peacefully coexist in ring 0--with communications redirected to ring -1--without knowing they share physical resources with other OSs on the same system.


This major advance eliminates ring transition issues for OSs, reduces virtualization overhead, and supports virtualization of any OS without the need for kernel or run-time modifications. AMD and Intel have chosen slightly different ways of going about this, but the good news is that, even though they're not completely interchangeable, virtualization vendors have committed to working with both technologies.

Intel was first out of the gate with its VT-x, which creates ring -1 and offers a new set of instructions to set up, manage and exit VMs, as well as handle memory management. VT-x shares a number of similarities with AMD's AMD-V (formerly Pacifica) chip-assist technology. In processors with chip-assist, the hypervisor resides in ring -1 and creates a VM control structure to support each new VM. This provides a mechanism to launch, resume and exit VMs as needed and acts as the framework for context-switching between the VMM and spawned VMs.

Many virtual machines and their OS stacks can reside in ring 0 without contention; control of these VMs--called VMXs by Intel and SVMs (secure VMs) by AMD--is handled similarly on either chip. More important, allowing guest OSs to reside in ring 0 eliminates the challenges of ring transitions. Because a number of instructions are location-sensitive and designed to transition only between rings 0 and 3, if the VOS is located somewhere other than ring 0, a key process may fail unpredictably, or not fail when it should. Now, with VMs safely in ring 0, the software mechanisms required to intercept and correct problems caused by a VOS running in the wrong ring are no longer of concern. When errors occur in guest VMs, the processor has the ability to switch control to the protected VMM, which can resolve the problem and return control to the VM or terminate it without affecting other VMs on the system.
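
To make that launch/resume/exit cycle concrete, here is a minimal sketch in C. It simulates the control flow only; real VT-x or AMD-V code would use privileged instructions (such as Intel's VMLAUNCH and VMRESUME) and a hardware-defined control structure, and the names and exit reasons below are purely illustrative.

#include <stdio.h>
#include <stdbool.h>

typedef enum { EXIT_PRIVILEGED_OP, EXIT_FATAL_ERROR } exit_reason;

typedef struct {
    int  id;
    int  steps_run;   /* stand-in for guest execution progress */
    bool alive;
} vm_control_block;   /* hypothetical analog of a hardware VM control structure */

/* Pretend to run the guest in ring 0 until something forces an exit. */
static exit_reason run_guest(vm_control_block *vm) {
    vm->steps_run++;
    if (vm->steps_run >= 3)
        return EXIT_FATAL_ERROR;   /* the guest misbehaves on its third slice */
    return EXIT_PRIVILEGED_OP;     /* a routine, recoverable intercept */
}

int main(void) {
    vm_control_block vm = { .id = 1, .steps_run = 0, .alive = true };

    while (vm.alive) {             /* the VMM's "ring -1" control loop */
        switch (run_guest(&vm)) {  /* launch or resume the VM */
        case EXIT_PRIVILEGED_OP:   /* emulate the operation, then resume */
            printf("VM %d: intercepted privileged op, resuming\n", vm.id);
            break;
        case EXIT_FATAL_ERROR:     /* terminate this VM; others are unaffected */
            printf("VM %d: unrecoverable fault, terminating\n", vm.id);
            vm.alive = false;
            break;
        }
    }
    return 0;
}

The essential point is that the guest never leaves ring 0; only the exit event transfers control down to the protected monitor.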

But here's where AMD and Intel diverge: Because Intel processors use an external memory controller, the new VT-x processor modifications don't offer virtual-memory-management capabilities. This means that software will still be needed to handle address translation between physical and virtual memory resources. It's not an optimal solution, but it is functional.

In contrast, AMD's processors have the memory controller right on the chip, so AMD-V virtualization technology introduces unique new instructions that enable memory modes and features exclusive to its design. Most of these instructions deal with handling of the MMU (memory management unit), which provides memory allocation. Under virtualization, the MMU has a lot to keep track of when tasked with mapping multiple OSs, each running multiple applications, to physical memory addresses. AMD-V offers advanced memory features, including tagged TLBs (translation look-aside buffers), which speed up this cache of recently accessed memory pages by tagging entries with the VM that owns them. There is also a Paged Real Mode that supports programs requiring real-mode addressing in a paged virtual environment.

Perhaps the most interesting feature is AMD's support for NPTs (nested page tables). As opposed to Intel's software approach, NPT support gives each VM much greater control over its internal memory management by providing hardware-independent, virtual CR3 memory registers for each VM. Though NPTs increase the number of memory lookups required to resolve an address--each guest translation must also pass through the VMM-maintained nested table--they eliminate the software layer needed with VT-x. This promises substantially higher VM memory performance by keeping memory management where it belongs: in hardware. The speed boost will be most evident in memory-intensive applications, especially in environments with many VMs in residence.
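
The two-stage lookup is easy to model. In the toy C sketch below, single-level arrays stand in for the multi-level page tables real hardware walks, and the page numbers are invented; the point is that every guest translation takes two lookups--guest-virtual to guest-physical, then guest-physical to host-physical--yet neither requires an exit into hypervisor software.

#include <stdio.h>

#define PAGES 4

/* Guest page table: guest virtual page -> guest physical page (VM-managed). */
static const int guest_pt[PAGES]  = { 2, 0, 3, 1 };

/* Nested page table: guest physical page -> host physical page (VMM-managed). */
static const int nested_pt[PAGES] = { 7, 5, 6, 4 };

/* With NPTs the processor walks both tables itself; no exit to VMM software. */
static int translate(int guest_virtual_page) {
    int guest_physical = guest_pt[guest_virtual_page];   /* first lookup  */
    return nested_pt[guest_physical];                    /* second lookup */
}

int main(void) {
    for (int vp = 0; vp < PAGES; vp++)
        printf("guest virtual page %d -> host physical page %d\n",
               vp, translate(vp));
    return 0;
}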

AMD's virtualization strategy shows greater performance potential than Intel's, but AMD-V won't be available until the first generation of Rev. F Opterons ships later this year. Intel's VT-x technology is available today on the newest generation of Xeon processors, but server vendors are just starting to release BIOS revisions that will support these new features. All future hypervisor-based virtualization from Microsoft and vendors that use Xen will be dependent on this chip-assist technology, but VMware's full hardware virtualization approach will continue to support legacy systems without chip-assist.

Unplugging the I/O Bottleneck

CPU and private VMM memory management issues are only a part of the virtualization puzzle; the next huge challenge for hardware manufacturers will be to improve the interaction and security of memory with shared I/O devices. This is itself a two-part problem, and perhaps the tougher task falls to I/O hardware developers, who need to create devices capable of sharing access across multiple VMs--today's storage, network and graphics devices are capable of offering only a single interface to the OS. This means that software must be used to handle IRQ, memory and timer functions for systems with multiple VMs until I/O hardware is capable of supporting multiple, functional interfaces.

From a processor standpoint, the challenge was to provide a processor-level framework for sharing devices, now and into the future, and AMD and Intel have created very similar specifications to address this. Both specs were released in the spring, and virtualization vendors have already signed on to support both.

We expect that AMD will be first to market with its IOMMU (I/O memory management unit) technology, which offers additional instructions to support hardware virtualization. These new features--designed to improve DMA (direct memory access) mapping and access for hardware devices--replace the current mechanism for graphics addressing and support direct control of devices by a VM while enabling direct access to user-space I/O within a VM. Intel's VT-d (Virtualization Technology for Directed I/O) standard is also focused on the problems of direct device access and memory protection. Like IOMMU, VT-d provides a framework for direct communication between VMs and I/O devices.
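
The core IOMMU idea--translating and policing device DMA the way an MMU translates and polices CPU memory access--can be sketched in a few lines of C. The table contents, device and permission bits below are invented for illustration; real IOMMU programming is done by the VMM through hardware-defined data structures.

#include <stdio.h>
#include <stdbool.h>

#define DMA_PAGES 4

typedef struct {
    int  host_page;   /* where this device DMA address really lands */
    bool writable;    /* protection bit checked on every transfer   */
} iommu_entry;

/* One remapping table per device, programmed by the VMM for the owning VM. */
static const iommu_entry nic_table[DMA_PAGES] = {
    { 10, true }, { 11, true }, { 12, false }, { 13, false },
};

/* The IOMMU translates and polices each device write before it touches RAM. */
static bool dma_write(int device_page, int *host_page_out) {
    if (device_page < 0 || device_page >= DMA_PAGES)
        return false;                     /* address outside the mapping */
    if (!nic_table[device_page].writable)
        return false;                     /* write blocked by protection */
    *host_page_out = nic_table[device_page].host_page;
    return true;
}

int main(void) {
    int hp = -1;
    if (dma_write(1, &hp))
        printf("DMA to device page 1 allowed, lands on host page %d\n", hp);
    if (!dma_write(2, &hp))
        printf("DMA to device page 2 blocked by the IOMMU\n");
    return 0;
}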

These features will have only a nominal impact on virtualization at first, because the other half of the solution for I/O virtualization is still not here--at present, there are no I/O devices capable of managing shared VM access to hardware resources. In fact, there isn't even a standard in place for device sharing across PCI, and it will likely be two to four years before we see common, device-based hardware solutions for I/O virtualization. Until then, virtualization vendors will need to provide an abstraction layer that supports shared access to device drivers for storage, network and other interrupt-driven devices.

Everyone's a Winner

Of the three main hypervisor technologies, newer offerings from Xen and Microsoft will get the biggest leg up from chip-assisted virtualization. These new processor-level capabilities eliminate many of the stumbling blocks VMware spent years resolving with impressively clever software work-arounds. As hardware virtualization improves, it's becoming easier to take creative approaches to these problems, and the focus is moving away from the hypervisor itself toward performance and management concerns.

We evaluated three of the newest full server virtualization offerings--environments capable of concurrently supporting Windows Server 2003 and Linux, and focused on providing and managing a unified server resource pool across multiple physical machines. For the open-source Xen hypervisor, we picked Virtual Iron as a representative because of its full-service approach to Xen virtualization.

VMware

The current market leader in virtualization is VMware's enterprise product, ESX Server. This VM model employs a "service console" loaded on each physical machine to administer and manage the actions of the hypervisor, as well as provide support for a management agent. VMware uses a binary translation methodology to provide a common hardware platform, which means that software is placed between physical and virtual devices to manage resources and to "trap and translate" OS error conditions that would normally cause the VM to crash. This approach solved the ring translation problems of legacy x86 hardware management and supported the use of any x86-compatible OS without modification, but at a price: The downside of all that flexibility is that software emulation of hardware services incurs a performance hit; how big a hit can vary dramatically. Estimates range from 10 percent to 30 percent depending on the application and to whom you're speaking. Fortunately, VMware has also benefited from chip-assist technology. ESX Server 3.0 has undergone a number of enhancements to improve performance and take advantage of new processor features.

ESX Server currently dominates the market for enterprise-class, multi-OS server virtualization and is backed by a mature portfolio of enterprise-class management tools offering centralized administration, live virtual server migration, automated resource scheduling, distributed file services, consolidated backups and advanced protection for high-availability environments. VMware virtual services also are designed to integrate well in data centers already using high-end management systems, such as IBM's Tivoli and Hewlett-Packard's OpenView.
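
To illustrate what "trap and translate" means in practice, here is a deliberately tiny C interpreter over an invented three-opcode instruction set. VMware's real translator rewrites x86 machine code on the fly; this sketch only shows the principle of letting safe instructions run untouched while substituting monitor calls for sensitive ones.

#include <stdio.h>

enum { OP_ADD, OP_SENSITIVE, OP_HALT };   /* invented three-opcode guest ISA */

/* The monitor performs a safe equivalent of the privileged action. */
static void emulate_sensitive(void) {
    printf("monitor: emulated a sensitive instruction\n");
}

/* Run a guest code block: safe ops pass through, sensitive ops are rewritten
 * into monitor calls--the essence of trap-and-translate. */
static void translate_and_run(const int *code) {
    for (int i = 0; code[i] != OP_HALT; i++) {
        switch (code[i]) {
        case OP_ADD:
            printf("guest: ADD ran directly\n");
            break;
        case OP_SENSITIVE:
            emulate_sensitive();
            break;
        }
    }
}

int main(void) {
    const int guest_code[] = { OP_ADD, OP_SENSITIVE, OP_ADD, OP_HALT };
    translate_and_run(guest_code);
    return 0;
}

The performance hit cited above comes from exactly this kind of interposition: every sensitive operation pays for a detour through monitor software rather than running natively.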

Xen by Virtual Iron

The open-source Xen hypervisor started out as an interesting project in the University of Cambridge Computer Laboratory, and in only three years industry buzz has grown from a ripple to a tsunami (see the history). The first versions of Xen were targeted at the Linux community and were based on a para-virtualization model that required the Linux kernel to be specifically--and painstakingly--modified to run on the Xen hypervisor. This was a one-way conversion--modified kernels could no longer run on conventional hardware without the Xen hypervisor. Para-virtualization also made it impossible to run Windows on early versions of Xen because Microsoft wasn't about to let Windows be modified.

In December 2005 the Xen development team released Xen 3.0, the first version of its freeware hypervisor that supported chip-assist technology and could host any unmodified OS with the help of either VT-x or AMD-V. The impact was huge: It eliminated the need for para-virtualization and allowed Windows to run in a Xen environment side by side with Linux and Solaris. But what's been missing from Xen is the vast array of enterprise-class support tools offered by VMware.

When you get right down to it, Xen is only a hypervisor, so we looked for an experienced virtualization company that has chosen to adopt and build on Xen technology. Virtual Iron Software has been around since 2003 and originally offered Linux virtualization using its own VFe hypervisor technology. With the introduction of Xen 3.0, Virtual Iron scrapped VFe, embraced open source and focused development on what it considered most important: management of the virtual environment.

Rather than loading its management-system software on each physical server, as VMware does, Virtual Iron places a small, bare-metal version of the Xen hypervisor on each system and incorporates a separate, dedicated server to provide resource management services to all systems in the hardware pool. The Virtual Iron management server automatically builds an inventory of all physical devices in the attached servers and enables creation, resource allocation and deployment of virtual servers anywhere within the virtual infrastructure. With features like policy-driven workload management, dynamic capacity provisioning, extremely fast VM migration, and advanced tools for reporting and analysis, Virtual Iron is looking to offer the enterprise-class management tools that Xen-based systems formerly lacked.

Microsoft

The 2004 release of Virtual Server 2005 marked Microsoft's entry into server virtualization, but the product received mixed reviews, in part because it lacked good management tools and services. Now back in public beta as Virtual Server 2005 R2, the product has been seriously revamped; it's obvious from the long list of improvements that Microsoft heeded the cries of IT pros. VSR2 will require installation of a stripped-down, core version of Microsoft Server to manage a virtualization stack and provide device support for guest OSs. The core OS lets VSR2 use all hardware devices normally supported by Windows Server and creates a uniform hardware platform for spawned VMs.

This latest version of VSR2 was designed to take full advantage of VT-x and AMD-V instructions. Also new in this release is the System Center Virtual Machine Manager, a standalone application chock full of centralized management tools offering simplified system migration, intelligent provisioning, scriptable automation and rapid system-recovery capabilities. The emphasis of VSR2 is Windows-specific virtualization, but it looks like Microsoft has learned to work and play well with others--at least in this area--and has begun to provide VM add-ins and technical support for Linux guest OSs. VSR2 also supports VMs running older versions of the Windows Server platform, such as Server 2000 and NT 4.0, a real boon for those chained to legacy applications.

Looking Ahead

From a practical standpoint, all these vendors are moving in the right direction, and we think the proper emphasis is being placed on the problems associated with converting physical environments, allocating resources dynamically, migrating VMs seamlessly and protecting VMs from one another. But virtualization also raises some application-level issues that don't come up in conventional server environments. For example:

» Virtual SMP: Virtual Iron's Xen 3.0-based offering leads the way with the ability to support VMs with up to 32-way SMP, followed by Microsoft VSR2 with eight-way and VMware ESX Server with four-way VSMP capabilities. Of course, you're limited by the number of physical processors available on the host system itself, so 32-way SMP may seem like overkill. But with quad-core processors projected for the near future, it's not so far-fetched to anticipate and prepare for the need.

» Extended 64-bit Support: Although few applications take full advantage of the extended memory addressing of x86-64 systems, virtualization is one way of making the best use of the new generation of high-powered servers. Both Virtual Iron and Microsoft offer full support for concurrent use of 32- and 64-bit OSs and applications, but VMware is still working on improving support for similarly mixed environments.

» Resource Pooling: Often used synonymously with clustering, these offerings are designed to assimilate growing pools of physical systems and provide support for seamless, even automated, migration of VMs when required. This is different from clustering, where dozens of processors in multiple physical systems can be assigned to a single task. Resource pooling, instead, presents dozens of processors as an assignable pool, but applications are not allowed to span multiple physical machines. VMware and Virtual Iron offer the most mature support for resource pooling for high-availability applications, but Microsoft also offers HA capabilities with the addition of Microsoft Cluster Services.

It's worth mentioning that in all cases, seamless HA/VM migration is dependent on the availability of a shared storage environment between source and target systems--not a requirement for typical virtualization applications. In a perfectly virtualized world, processing, memory, storage and networking would exist as a large, unified resource that could be doled out as needed. Unfortunately, this utility data-center nirvana will be some time in coming.

But We Want It Now ...

If you're faced with a compelling need to virtualize the servers in your data center today, VMware still has the strongest combination of hardware support, OS flexibility and enterprise-class management capabilities. But if you don't have to make a move immediately, don't. The virtualization market is in a state of flux because of these new chip-assist technologies, and the opportunities they offer for alternative virtualization options will only improve your choices.

We believe AMD-V will offer a better approach than VT-x to processor virtualization, in part because of the on-chip memory controller and nested-page-table support coming in AMD's Opteron processors.

Likewise, we think the streamlined approach offered by the Xen hypervisor may prove to offer better performance than the heavier software solutions from either VMware or Microsoft, but it's really too early to tell. Virtual Server 2005 R2 is now in late beta and should be available shortly after the release of Longhorn in early 2007. The public beta of Xen-based Virtual Iron 3.0 should be available around the time you read this and is expected to reach general availability by year's end.

Clearly, VMware will face stiff market challenges, in part because of the leg up that both Xen and Microsoft will receive from VT-x and AMD-V. Though the company denies it was a response to Xen, VMware released several free virtualization products in 2005: VMware Server, VMware Player and VMTN Virtual Appliance. More recently, it introduced VMware Infrastructure 3, a suite of value-conscious virtualization products that combine ESX Server with its most popular management features. Available in Starter, Standard and Enterprise versions, these services are being economically packaged for the first time, perhaps in an effort to change the perception that VMware is the most expensive route to server virtualization.

Whatever the reason, VMware still offers the most experience of any company in virtualization today, and this will be a tough hand to beat when all the players step up to the table next year. Virtualization management and VM performance will become the major differentiators, and it's becoming clear that a head-to-head, real-world comparison will be in order once all the pieces are in place. We can't wait.

See the Light: Sun is often overlooked as a source for virtualization, but since the release of its Solaris 10 for the x86 platform, the company has been quietly offering free virtualization in the form of Solaris Containers. Like partitions, Solaris Containers are multiple virtual instances of the Solaris host OS that require extremely low overhead. Solaris Containers support a number of impressive features, including Solaris Zones for advanced resource management, and Solaris Fault Manager, which offers predictive self-healing and automated migration of at-risk Containers.

FYI: Approximately 20 percent to 25 percent of midsize businesses are using virtualization today, and by 2007, 40 percent will reduce their server population by at least 20 percent through virtualization, according to Gartner.

FYI: In a recent Forrester survey, larger firms were more likely to be aware of and implementing server virtualization. One-third of Global 2000 firms--those with 20,000 or more employees--were already using virtualization, and 13 percent planned to pilot within 12 months.

Steven Hill is a Network Computing technology editor. He previously held positions as a department manager, imaging systems administrator and independent consultant. Write to him at [email protected].

Alternatives to Full-Server Virtualization

A number of companies have opted to use OS partitioning for virtualization. Products like SWsoft Virtuozzo and PolyServe Database Utility are targeted at high-performance virtualized applications, such as databases, because of the decreased overhead afforded by hosted virtualization. They offer the same management and provisioning capabilities as full-server virtualization and provide a protected clone of the host OS environment with unique DLLs, registry entries, I/O resources and memory maps for applications to use.

For ultra-standardized environments and some highly transactional applications, OS partitioning offers an extremely efficient method of sandboxing multiple iterations of the host OS. Partitioning can offer better memory utilization and result in much smaller virtual images, but all spawned versions of the host OS will be identical to the original in every respect. For many companies, part of the goal is the creation of a single, unified resource pool that allows the use of many OSs interchangeably. Still, partitioning may be a viable alternative to full-server virtualization for smaller and less complex environments.

Virtualization Tips

If you're considering virtualization now, heed these hints:

» Know your application environment: CPU utilization is only part of the story. Factor in disk access, memory and network activity as well. A server running at 15 percent CPU utilization could well be regularly hitting the network at 60 percent or 70 percent, and even though you'll have access to many virtual NICs, a physical NIC will still have the same limitations.

» Don't skimp on horsepower: Choosing top-end processors for virtualization hosting almost doubles the number of VMs a given server could support, and could also result in a 53 percent to 77 percent better TCO (total cost of ownership) over a three-year period, according to an April study by the Edison Group (a back-of-the-envelope illustration of the per-VM math follows this list).

» Cover your tail: In the traditional, physical data center there's a 1:1 relationship between systems and applications, so if a server fails, you lose only one app. If you lose a server hosting a dozen VMs it will impact a substantially larger group of users, so it pays to make sure your resource pool has sufficient failover capabilities to protect mission-critical applications.

» Don't forget software licensing: When you run seven virtual instances of Server 2003 on VSR2, how many licenses will you need? Or, when you increase your Oracle database from four to five virtual processors, what will that do to your license agreement? The combination of multicore processors and virtualization is making a mess of licensing, something to keep in mind as you entertain dreams of hardware cost savings.

» Think big, start small: Virtualization can be easy to do on an ad hoc basis, and many companies have chosen to start by virtualizing older servers as they age out. But don't forget that the real key to effective virtualization is management, so make sure your evaluation plan leaves room for growth.
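
As promised above, the per-VM arithmetic behind the horsepower tip is simple: if a costlier server hosts proportionally more VMs, the three-year cost per VM falls. The C snippet below works one example through; every dollar figure and VM count is a hypothetical assumption, not a number from the Edison Group study.

#include <stdio.h>

int main(void) {
    /* All figures below are illustrative assumptions, not Edison Group data. */
    double low_end_cost  = 8000.0;    /* 3-year cost of a modest server  */
    double high_end_cost = 12000.0;   /* 3-year cost of a top-end server */
    double low_end_vms   = 8.0;       /* VMs the modest server supports  */
    double high_end_vms  = 15.0;      /* VMs the top-end server supports */

    double per_vm_low  = low_end_cost  / low_end_vms;   /* $1,000 per VM */
    double per_vm_high = high_end_cost / high_end_vms;  /* $800 per VM   */

    printf("low-end:  $%.0f per VM over three years\n", per_vm_low);
    printf("high-end: $%.0f per VM over three years\n", per_vm_high);
    printf("per-VM savings: %.0f%%\n",
           100.0 * (1.0 - per_vm_high / per_vm_low));
    return 0;
}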

Executive Summary

Next year is shaping up to be a really big year for virtualization: New x86 processors from Intel and AMD, combined with brisk competition among VMware, Microsoft and companies using the open-source Xen hypervisor, are already revving up the market.

Features in next-generation Intel and AMD processors eliminate many of the hardware issues that have stymied virtualization vendors, meaning all those smart R&D people can finally turn their attention to making server virtualization enterprise-ready by addressing performance and management problems. We examine x86 roadmaps from AMD and Intel, and discuss approaches to hypervisor technologies by VMware, Microsoft and Xen. We offer advice for those planning a virtualization initiative in "Virtualization Tips," and we suggest alternatives to full-server virtualization. We also chose to profile Virtual Iron Software because of its commitment to Xen virtualization.

Eventually, the real winners will be overburdened data centers that can transition from a physical environment to a more flexible, virtual one. But if you can, hold off for just a while: While VMware's ESX Server is still the strongest offering, this market is on the move. It promises to be a fascinating ride.
