Virtualization's Promise And Problems

The virtual desktop and data center are here, but for this technology to continue taking over the enterprise, I/O and security issues need to be addressed.

May 17, 2008

Virtualization is all over the data center and is rapidly moving onto users' desktops. As it spreads out, it's cutting hardware upgrade costs, simplifying administration from central servers, and giving users the desktops they prefer.

To continue its sweep through the enterprise, the technology needs to overcome I/O performance problems caused by running a lot of virtual machines on one server. Once the I/O issues are solved, virtualization will be more useful for both production server and end-user applications. However, in order to function well in both these areas, virtualization security must be improved as well.

Despite looming I/O and security issues, server virtualization is well established. According to Forrester Research, it has reached a "tipping point," with 23% of businesses having at least two years' experience implementing the technology; by 2009, more than half (51%) are expected to have that level of experience. Today, 24% of servers have been virtualized, says Forrester analyst Frank Gillett, and by 2009, 45% are expected to be.

NEXT UP: THE DESKTOP
There are several approaches to desktop virtualization, with no one emerging as the best. Once you commit to an approach, it's hard to reverse course, so companies are moving cautiously. One thing spurring them on is the potential savings, which can match or exceed those in the data center, says Sumit Dhawan, senior manager for Citrix Systems' desktop product marketing group.

Experienced desktop virtualization vendors such as Citrix, VMware, Virtual Iron, and Hewlett-Packard offer a range of options. They can generate virtual desktops on central servers and let thousands of end users access them there, or stream them to end-user machines, though that's more resource-intensive. They also can generate virtualized applications and offer them as software as a service, or stream just what's needed to users on demand.


Arnett has tested the virtual desktop waters but isn't ready to let his entire company jump in

Sun Microsystems joins the vendor lineup with software that translates Microsoft's RDP networking protocol into user presentations on top of VMware, giving users Solaris, Linux, and thin-client options.

But the trick to successfully implementing virtual desktops isn't which technology you pick, say two early adopters; it's starting out with small, well-defined groups of users and developing a plan as you go.

Tony Arnett, senior systems engineer at Pentair Water Pool and Spa, has been testing virtual desktops with various groups of users for more than a year. For each target group, he builds a customized desktop, or "golden image," of a virtual machine suited to that group's needs. A golden image for accounting will have different applications and perhaps a different version of Windows than one for sales or manufacturing, though, he says, for testing purposes, he's made the images "very vanilla."

Arnett has implemented the virtual desktops on a set of three high-availability servers running VMware's ESX hypervisor and Virtual Desktop Infrastructure 3, with its tools for generating and managing VMs. Users get a Wyse V10L thin-client machine, a diskless presentation device that links to VMware's Connection Server. They self-install the thin client, connecting it to the virtual desktop using Connection Server, which manages users' access to VMs through the company's identity management system, Microsoft Active Directory.

Arnett figures if he can successfully automate the provisioning process for the first 10 users, then a hundred can easily follow.
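
The mechanics of that flow are simple enough to sketch. Below is a toy model, using illustrative stand-in classes rather than VMware's actual VDI or Active Directory interfaces: each user in a group gets a clone of the group's golden image and an entitlement the connection broker can resolve at login.

```python
# A toy model of Arnett's provisioning flow: one golden image per user
# group, cloned per user and entitled through the directory. The
# classes here are illustrative stand-ins, not VMware's VDI API.

GOLDEN_IMAGES = {
    "accounting":    "golden-accounting",
    "sales":         "golden-sales",
    "manufacturing": "golden-mfg",
}

class ConnectionBroker:
    """Stand-in for the broker that maps users to their desktop VMs."""
    def __init__(self):
        self.assignments = {}

    def clone_and_entitle(self, image, user):
        vm_name = f"{image}-{user}"          # clone from the golden image
        self.assignments[user] = vm_name     # entitle via the directory
        return vm_name

def provision_group(broker, group, users):
    image = GOLDEN_IMAGES[group]
    for user in users:
        print(f"{user} -> {broker.clone_and_entitle(image, user)}")

# Automate the first 10 and, as Arnett figures, a hundred can follow.
provision_group(ConnectionBroker(), "accounting",
                [f"user{i:02d}" for i in range(1, 11)])
```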

Initially, he built a desktop for 10 IT staffers, tested it for a few weeks, and then tore it down, saving the golden image. Then he built one for 10 technical support workers, tested it, and tore it down, likewise saving the core image. He's gone through the same process with manufacturing as well as shipping and receiving, but he's not yet ready to roll out virtual desktops to the company as a whole.

Arnett has limited the test groups to 10 so he doesn't get flooded with 50 users needing information and connections at the same time. So far, the tests have been "controlled and methodical, and the desktops have worked well," he says.

Arnett is still figuring out exactly which end users, and how many of them, will make the switch permanently. "Quite a few departments would be perfect candidates," he says of the 1,400-employee company.

THIN-CLIENT FLEXIBILITY
At Cincinnati Bell, Jeff Harvey also has turned to thin clients as he equips the first 800 of what he expects will eventually be about 3,300 virtual desktop users at the telecommunications provider. Over the next two quarters, he's giving most of the initial group--750 call center employees--Sun Ray thin clients from Sun.

Those users are switching from PCs running Windows 2000; the company needs to migrate them to a new platform as Windows 2000 approaches the end of Microsoft support. "We had no choice," Harvey says. Rather than buy everyone a big new PC, Cincinnati Bell opted for virtualized desktops generated by VMware's Virtual Infrastructure 3 and tapped Sun's Virtual Desktop Infrastructure to convert the Microsoft Terminal Services protocol into the thin-client presentation.

In doing so, the company has gained a measure of flexibility. "Different departments have different needs. Why give them all a 9-GB desktop?" Harvey says, referring to what a power user requires.

With Sun VDI, he can leave handfuls of workers on legacy applications, such as Lotus Notes running under Windows 2000 or 2003, without forcing them to upgrade to a new version under Windows XP. On the other hand, software quality-assurance testers can get Vista in virtual machines so they can test new software to be sure it's Vista compatible. Even though most of Cincinnati Bell has upgraded to Windows XP, Harvey expects there will be some Vista users in the future.

Both Arnett and Harvey say the savings from virtual desktops come from provisioning users automatically. Thin clients also cut costs: The software running them is easily upgraded, and with no moving parts to wear out, their life expectancy is twice that of most PCs.

Thin clients are priced at $300, plus $250 per user for the server hardware to host the software (20 users per $5,000 server, minimum), according to IDC's study Thin Computing ROI: The Untold Story. That's not so different from a new PC until you factor in the longer life span. The real thin-client savings, IDC says, lie in a 93% reduction in configuration and management costs, along with a 72% decrease in help desk calls.
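
The per-seat arithmetic is easy to check against the figures IDC cites:

```python
# Quick check of the IDC hardware math (illustrative; the list prices
# are those cited in the article).
THIN_CLIENT = 300          # $ per thin-client device
SERVER = 5_000             # $ per host server
USERS_PER_SERVER = 20      # minimum consolidation IDC assumes

per_user_server = SERVER / USERS_PER_SERVER      # $250 per user
per_user_total = THIN_CLIENT + per_user_server   # $550 per user

print(f"Hardware cost per thin-client seat: ${per_user_total:,.0f}")
# Comparable to a commodity PC up front; the payoff IDC cites comes
# from the 93% lower configuration/management cost, the 72% fewer
# help desk calls, and a device life span roughly twice a PC's.
```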

The low cost is a factor in Cincinnati Bell's decision to go with desktop virtualization, Harvey says. Among the problems he still has to solve is how to provide customized desktops to his more sophisticated users, such as those in engineering. He's expecting that the many options available from desktop virtualization will make that problem easier to tackle.

Impact Assessment: Virtualization's Leading Edge

UNCORK THE I/O BOTTLENECK
With the first push toward virtualizing servers in the data center, the number of virtual machines per server was a conservative four, five, or six, depending on the applications. Then administrators found they could safely run seven or more applications per server, using 80% of total server capacity, a huge improvement over the 5% to 15% utilization typical of unvirtualized servers.
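
Rough arithmetic shows why that matters for I/O. A back-of-the-envelope sketch, assuming a 10% baseline utilization (a midpoint I've chosen, not a figure from the article beyond the percentages):

```python
# Rough consolidation arithmetic behind those utilization figures
# (illustrative numbers; only the percentages come from the article).
unvirtualized_util = 0.10   # 5%-15% is typical; take roughly the midpoint
virtualized_util = 0.80     # what admins found they could safely use

servers_consolidated = virtualized_util / unvirtualized_util
print(f"One virtualized host absorbs ~{servers_consolidated:.0f} old servers")
# => ~8, which is why seven-plus VMs per host became routine -- and why
# the host's I/O channels suddenly carry eight servers' worth of traffic.
```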

But running all those virtual servers puts a strain on the hardware's I/O capacity. There's traffic to and from the network, not to mention data being loaded in blocks from other applications or a back-end database. Too much I/O overwhelms the server's channels, leading to backlogs and CPUs sitting idle while they wait for data.

The solution is to virtualize server I/O: Turn normally fixed, static I/O channels, host bus adapters, and network interface cards into dynamic resources whose capacity expands and contracts based on virtual servers' needs. Virtualized I/O would resolve a persistent problem administrators face as they stack virtualized applications on the same hardware. Until it becomes commonplace, applications with heavy or fluctuating I/O demands aren't being virtualized, lest they end up causing I/O backups.

Two early solutions have emerged, and more are sure to follow. Startup Xsigo off-loads I/O traffic to an attached appliance that virtualizes it (see diagram). The approach requires replacing the server's standard HBAs and NICs with Xsigo custom cards and investing in the Xsigo appliance. Pricing starts at $30,000.

Xsigo's appliance can generate up to 16 usable channels of I/O, feeding storage traffic to a Fibre Channel network or LAN traffic to an Ethernet network. It also can monitor workloads and assign more capacity to the VMs that need it most. Virtualizing I/O helps balance the virtual machine workload, letting applications that generate heavy I/O traffic during the night work alongside applications that experience only occasional spikes in activity.

The virtual I/O appliance approach also reduces network cabling in the data center and lets IT administrators buy smaller, more energy-efficient servers with fewer networking ports and fewer bulky HBAs and NICs.

Virtualizing I/O makes data centers more efficient and balances I/O across virtual machine workloads, says Ray Lane, former president of Oracle and an investor in Xsigo. "Inflexible architectures contribute to the low resource utilization and waste scarce power, space, and cooling resources," Lane says.

I/O also can be virtualized in the standard HBA or NIC itself, without resorting to an add-on appliance. Industry group PCI-SIG came up with the SR-IOV standard, which virtualizes high-speed, 10-Gbps Ethernet for future NICs and HBAs.

A non-SR-IOV network interface card is allocated to a virtual machine or set of VMs on a server and represents a fixed resource of static capacity, say 1 Gbps. Neterion's SR-IOV-enabled X3100 Series of adapters can generate up to 16 usable channels and dynamically allocate them to VMs based on need. The card can even guarantee its full 10-Gbps capacity to one virtual machine whose service-level agreement gives it priority, rather than holding it to a limited, fixed allocation.

"For a big database backup, 1 Gbps of capacity is no longer enough," says Neterion CEO Dave Zabrowski. "We're trying to consolidate needed capacity into a single pipe." Under most scenarios, however, the 16 channels would be serving multiple VMs simultaneously.

The driver for Neterion's 10-Gbps Xframe adapters is included in VMware's ESX hypervisor, allowing it to allocate traffic between ESX-generated VMs and the Neterion card. The Neterion adapters are built for Fujitsu Computer Products of America, HP, IBM, and Sun servers.

Diagram: Virtualizing I/O

HYPERVISORS AT RISK
Challenges posed by desktop virtualization and server I/O will be resolved as virtualization works its way into the computing fabric of the enterprise. But as it spreads, virtualization security becomes increasingly important: Intruders could find a way to leap from a VM they infiltrate to the hypervisor itself, opening up sensitive data, message traffic, and the resources of the whole system to attack.

Core Security Technologies, a network security software company, showed how this could happen in its lab earlier this year. VMware client virtualization software, including VMware Player, VMware ACE, and VMware Workstation, has a Shared Folder feature that lets it write to a file on the host's operating system, where other clients can share its contents. Under some circumstances, the shared folder could be used to plant a virus or Trojan program on the host's operating system, Core Security engineers said. VMware issued a critical security advisory to customers after the exposure was aired.
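
A common workaround for this class of exposure is to disable the Shared Folders (HGFS) feature in each guest's configuration until patches are applied. Below is a minimal sketch of enforcing that across .vmx files; the setting name is the commonly documented one, but verify it against VMware's advisory for your product version.

```python
# Enforce the shared-folders-off workaround in a guest's .vmx file.
# The setting name below is the commonly documented hardening option;
# confirm it against VMware's advisory before relying on it.
from pathlib import Path

SETTING = 'isolation.tools.hgfs.disable = "TRUE"\n'

def disable_shared_folders(vmx_path):
    vmx = Path(vmx_path)
    lines = vmx.read_text().splitlines(keepends=True)
    # Drop any existing hgfs.disable line, then append the hardened one.
    lines = [l for l in lines if "isolation.tools.hgfs.disable" not in l]
    lines.append(SETTING)
    vmx.write_text("".join(lines))

# Example invocation (path is illustrative):
# disable_shared_folders("/vms/desktop01/desktop01.vmx")
```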

Mature Or Still Changing?

APPLICATION VIRTUALIZATION has reached the peak of its maturity; won't change significantly over the next 10 years.
NETWORK VIRTUALIZATION has had significant success in the enterprise and is unlikely to change much over the next 10 years.
DESKTOP VIRTUALIZATION is in early stages of development but is growing quickly; will mature into next phase of adoption over the next three years.
HYPERVISORS from VMware, Citrix Systems, and Microsoft, along with versions from Sun and Oracle, are doing well and will evolve into a more advanced stage over the next three years.
VIRTUAL APPLIANCES have caught on as a way for vendors to ship trial software but are only slowly being adopted as a means of implementing new apps in the enterprise. They should make progress in that direction over the next three to five years.
Data: Forrester Research's TechRadar: Infrastructure Virtualization, Q2 2008, by Galen Schreck

VMware has since published the VMsafe API that lets third-party security suppliers build products that monitor and protect the hypervisor from such a threat. Twenty vendors are working on virtualization security products using the VMsafe API. One of them, Apani Networks, is designing a way to extend the security zones that its EpiForce product creates in a corporate network to servers running VMs. EpiForce subdivides the network, giving each segment a security zone rating that it enforces. It can impose a much more granular level of security for virtual machines by checking user privilege and requiring encryption of data flowing from VMs that handle sensitive data.

Apani is working on making the EpiForce approach available dynamically so that VMs would be assigned to the appropriate security zone as they're created, says George Tehrani, the company's senior technology director. VMware's VMsafe API lets Apani give the Virtual Infrastructure 3 console the ability to assign EpiForce security policies and update them along with its other management functions, he says. VMsafe "will unify the management console, resulting in both time and cost savings" in administering virtual machines, he says. Instead of having an Infrastructure 3 console and a security console, all functions will be managed through Infrastructure 3.
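
Conceptually, that dynamic assignment looks something like the sketch below. None of these names are EpiForce's or VMware's actual APIs; it only illustrates mapping a new VM's workload tag to a security zone and its policies at creation time.

```python
# Hypothetical sketch of zone-at-creation assignment in the spirit of
# what Apani describes. The names and calls here are illustrative, not
# EpiForce's or VMware's actual interfaces.

class SecurityZone:
    def __init__(self, name, require_encryption):
        self.name = name
        self.require_encryption = require_encryption

# Which zone each kind of workload belongs in, and what each zone enforces:
ZONES = {"payroll": "restricted", "web": "dmz", "dev": "open"}
POLICIES = {
    "restricted": SecurityZone("restricted", require_encryption=True),
    "dmz":        SecurityZone("dmz",        require_encryption=True),
    "open":       SecurityZone("open",       require_encryption=False),
}

def on_vm_created(vm_name, workload_tag):
    """Callback a VMsafe-style hook might fire when a VM is provisioned."""
    zone = POLICIES[ZONES.get(workload_tag, "open")]
    print(f"{vm_name}: zone={zone.name}, "
          f"encrypt_traffic={zone.require_encryption}")

on_vm_created("hr-payroll-01", "payroll")   # -> restricted, encrypted
```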

VMware's security API makes sense, says Bruce McCorkendale, a distinguished engineer at Symantec, which is using the VMsafe API to extend its products to VMs. Building security products that monitor the hypervisor gives security software makers "a higher privilege perspective" than the intruders they're watching out for, he says. The corporate network is relatively flat in terms of privilege: Anyone who can assume or spoof a server administrator's role has a chance to get in. The hypervisor's perspective is more like that of the watchman in the tower: He can see others before they see him.

Through the hypervisor, security specialists can apply "an unprecedented level of instrumentation over a virtual machine"; such isolating and monitoring is harder to implement over physical servers, McCorkendale says.

WHO TO TRUST
Citrix, owner of XenSource, doesn't have a VMsafe-type plan, but its hypervisor, Xen, contains security features derived from IBM's experience in virtualization. IBM Research produced sHype hypervisor security cloaking and donated it to the Xen open source project; sHype is slated to be built into Xen and Citrix's products.

An sHype-equipped hypervisor knows which virtual machines can be trusted to share data with other VMs and which can't. It monitors a VM's components, recording "a unique fingerprint" of their correct configuration and then watching for any changes. As long as the configuration remains the same, the VM is a trusted resource.

If a running application suddenly takes on a new bit of functionality, whether because of an intruder or some other cause, sHype detects the modification and changes the application's status to an untrusted component. The same principle applies to the guest operating system running in a VM; operating systems are a frequent avenue of attack for intruders.

"We use trusted computing technology to measure the integrity of the running components," said Ron Perez, an IBM Research senior manager. The hypervisor is told which virtual machines may trust each other as they're fired up. It then watches to ensure that each of those VMs remains trustworthy.

In a management console, sHype shows virtual machines that can talk to each other in the same color. "A blue machine may talk to another blue machine, but a blue machine must never be allowed to talk to a red machine," Perez says. This approach leads to very strong isolation guarantees, he says.

VMsafe, sHype's trusted computing concepts, and other measures are helping ensure that virtualization continues to spread throughout the enterprise and, with proper management, thrives there. As virtualized desktops link with virtual servers in the data center, it will be important that each element of the infrastructure is planned to work with the others and managed effectively.

If it's done any other way, then Forrester Research's "tipping point," instead of proclaiming virtualization's rapid adoption, could come to mean something else entirely: the point where the adoption rate grew beyond IT's ability to control it.

Continue to the sidebar:
Virtualization's Uneasy Alliance
