By now, the virtualization vision has been so hyped that its concepts have been burned into our collective consciousness. Soon to be gone are the days when running a new application meant buying a new server, sticking a bunch of storage on it, and loading up the new application. The new order calls for flexible, virtualized networking, processing, and storage that can self-allocate to meet the needs of any workload without so much as a click of a mouse.
As applications are divorced from their hardware and the resources of the data center are sliced and diced in any way necessary, the theory goes that utilization will rise, processing will speed up, and the data center will become the nimble, cost-effective business driver everyone had hoped for. In the perfect data center, each application runs on a virtual machine (VM) sized perfectly for it. If the application's resource needs grow, the VM simply grows with it.
As Linux becomes the heir apparent to proprietary Unix offerings, it's these virtualization capabilities that will be the measure of whether Linux is fit for the data center throne. But just how fit is Linux for the crown, and who's driving its progress?
Three different approaches to Linux virtualization have taken the limelight. The first and best-known is VMware's, which calls for full virtualization and allows for such capabilities as running Windows and Linux on the same server. The Xen open-source project takes an approach called paravirtualization, in which the kernel is modified so that the OS knows it's running virtualized. While you won't be running Windows alongside Linux with Xen just yet, you'll see some performance advantages over the VMware approach.
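That "the OS knows it's running virtualized" is more than a figure of speech: a Xen-aware guest kernel exposes its hypervisor through the /sys filesystem. As a minimal sketch (assuming the standard /sys/hypervisor interface, which may be absent on bare metal or under other hypervisors), a script can check for it like so:

```python
import os

def detect_hypervisor():
    """Best-effort check of whether this Linux guest is paravirtualized.

    Xen-aware kernels expose /sys/hypervisor/type (typically containing
    "xen"); on bare metal, or under hypervisors that don't populate this
    path, the file is simply absent.
    """
    path = "/sys/hypervisor/type"
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip() or "unknown"
    return "none detected"

if __name__ == "__main__":
    print(detect_hypervisor())
```

On a paravirtualized Xen guest this prints "xen"; on an unvirtualized server it prints "none detected" — a small illustration of the difference between a guest that knows about its hypervisor and one that is kept in the dark, as with full virtualization.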