Virtualization: Technology Architecture For The 21st Century

February 17, 2004

It's a new year, but IT sings the same old blues: Do more with less, and be well-positioned for the next big thing. Sage advice from on high. Oddly enough, one of the best prospects for meeting this mandate turns out to be an old friend in new clothes: virtualization, or rather, its latest incarnation. The principles of virtualization, a technique that abstracts (or virtualizes) functionality and management from dedicated physical devices, have been around for decades: Time-tested forms include VLANs and RAID storage, as well as virtual machine products on mainframes and on Intel platforms, such as VMware and Connectix. Long the province of server farms and storage, virtualization is now moving from the mainframe into both the data center and the network in a serious way.

Driven by the never-ending need to consolidate data center resources, IT is struggling to wrest more performance out of its servers, which are typically serial underachievers, often added ad hoc to accommodate new users or applications. Most of these machines use a third or less of their total CPU or I/O capacity, while a few max out on occasion.
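To see why consolidation pays, consider a quick back-of-the-envelope calculation. The sketch below (in Python, with utilization figures we invented for illustration) shows how a handful of one-third-utilized servers collapses onto far fewer well-provisioned hosts:

```python
import math

# Back-of-the-envelope consolidation math. The utilization figures are
# illustrative assumptions, not measurements from the article.
server_utilizations = [0.30, 0.25, 0.15, 0.33, 0.20, 0.10]  # fraction of CPU in use

total_demand = sum(server_utilizations)   # aggregate CPU demand, in "server units"
target_utilization = 0.70                 # leave headroom for occasional spikes

hosts_needed = math.ceil(total_demand / target_utilization)
print(f"{len(server_utilizations)} servers -> {hosts_needed} consolidated host(s)")
# prints: 6 servers -> 2 consolidated host(s)
```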

More servers mean more cabling; greater space, power, and cooling demands; and a tendency toward decentralization, spreading infrastructure across greater distances. Each device must be configured individually and as part of the LAN. And the more servers, the more hands to run them, and the greater the security risk from intentional or inadvertent breaches of administrative protocol.

That's where virtualization comes in. Nascent virtual switches promise to logically network a wide variety of devices, including firewalls, switches, routers, load balancers, VPN and Secure Sockets Layer (SSL) accelerators, and caches.

FEATURE SET

A number of start-ups have announced virtual switch-style products. Foremost is Inkra Networks (www.inkra.com), which appears to have the lead in customer wins, partnerships (with EDS and IBM), and marketing. Focused on data center virtualization, the company's Virtual Service Switch is the standard-bearer in breadth of feature set and management capabilities. Ranch Networks (www.ranchnetworks.com) offers similar performance, but geared toward the wiring closet. Other vendors include Nauticus Networks (www.nauticusnet.com), Array Networks (www.arraynetworks.net), and NetScaler. (Note that not all of these switches are purpose-built for the technology; some add virtualization to their usual security and load-balancing features.)

These switches are an evolution of two existing device classes: chassis-based switches, which can take on new modules or blades to add or change feature sets, and consolidated appliances, which integrate multiple functions (such as firewalling, SSL acceleration, and Web switching) into the same device. A virtual switch shares some of these attributes, but manages its components very differently.

Specifically, these switches perform three tasks. First, they create "logical racks" out of their internal resources or service components (such as firewalling, caching, and routing), so that an instance of a server farm, application, or user group service or session can be created, managed, and monitored on a single device. Dedicated devices or servers can be phased out in favor of assigning secure "sandboxes" to each user group so that many simultaneous, logically partitioned services or sessions can reside on a single box without affecting another service.
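To picture what a logical rack is, here is a minimal Python sketch of the idea: partitioned service instances sharing one device. All class and field names are hypothetical, invented for illustration; they do not reflect any vendor's actual interface.

```python
from dataclasses import dataclass, field

# Hypothetical model of a virtual switch hosting isolated "logical racks".
# Names (VirtualSwitch, LogicalRack, ServiceInstance) are invented for
# illustration and imply no vendor API.

@dataclass
class ServiceInstance:
    kind: str          # e.g. "firewall", "cache", "router"
    cpu_share: float   # fraction of device CPU guaranteed to this instance

@dataclass
class LogicalRack:
    owner: str                                   # user group owning the sandbox
    services: list[ServiceInstance] = field(default_factory=list)

@dataclass
class VirtualSwitch:
    racks: list[LogicalRack] = field(default_factory=list)

    def add_rack(self, rack: LogicalRack) -> None:
        # Racks are logically partitioned: adding or removing one never
        # touches the configuration of any other rack on the device.
        self.racks.append(rack)

switch = VirtualSwitch()
switch.add_rack(LogicalRack("hr", [ServiceInstance("firewall", 0.10),
                                   ServiceInstance("cache", 0.05)]))
switch.add_rack(LogicalRack("e-commerce", [ServiceInstance("firewall", 0.20),
                                           ServiceInstance("router", 0.15)]))
```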

Second, new services can be deployed without touching the physical plant. Normally, network infrastructure must be reconfigured when new servers or applications are added, including setting new policies and testing, which entails some risk and considerable staff-hours. With virtualization, however, these new services can simply be mapped to a logical rack without any changes to the physical infrastructure. Similarly, software modules can be updated in batches or individually through the logical rack, as each rack can run a different instance of the software. This reduces the number of servers required, as well as any load-balancing devices that front-end them, while maintaining high availability.

Third, the switch partitions services for fault containment: If one session goes down, sandboxing prevents the failure from taking the rest of the sessions on the device with it. In such cases, the switch must contain the failure, remove or "clean up" the problem, and restart the service. Each instance is backed up by multiple logical configuration instances stored in the switch.
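The contain/clean-up/restart cycle can be pictured as a simple supervisor loop. The following toy sketch (our own model, with invented names, not any vendor's implementation) shows one sandboxed instance failing and restarting while its neighbors run on unaffected:

```python
# Toy model of the contain/clean-up/restart behavior described above.
# All names are invented for illustration.

class ServiceFailure(Exception):
    pass

failed_once: set[str] = set()

def run_service(name: str) -> None:
    # Stand-in for a real service instance; "web-cache" fails on first start.
    if name == "web-cache" and name not in failed_once:
        failed_once.add(name)
        raise ServiceFailure(name)
    print(f"{name}: running")

def cleanup(name: str) -> None:
    print(f"{name}: cleaned up")  # release only this sandbox's resources

def supervise(services: list[str]) -> None:
    for name in services:
        try:
            run_service(name)
        except ServiceFailure:
            cleanup(name)      # contain: neighboring sandboxes are never touched
            run_service(name)  # restart from a backed-up logical configuration

supervise(["hr-firewall", "web-cache", "edge-router"])
```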

While important, these are lower-level functions compared to virtualization's biggest asset, one that elevates the technology over server blades and consolidated appliances: the ability to dynamically allocate resources across various components (or functions) to perform computing-intensive tasks, according to changing user and network constraints. Virtualized devices can request the processing, memory, buffering, bandwidth, and throughput resources of the switch to handle and prioritize the most pressing network chores at any given moment, as well as limit the processing and resources used. These resource allocations can be prioritized in a very granular, hierarchical way for each service instance. The IT staff only needs to overprovision one element: the virtualized switch. While consolidated appliances can also dynamically allocate CPU between their components, they don't partition resources into logical racks with full security and isolation between user groups.
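This kind of prioritized, capped sharing resembles classic weighted fair allocation. Here is a generic sketch, our own and not any shipping product's algorithm, that divides one overprovisioned pool among logical racks by priority weight while respecting per-rack caps:

```python
# Weighted allocation of a single resource pool (say, CPU seconds) across
# logical racks, with per-rack caps. A generic illustration of prioritized
# sharing, not a description of any vendor's scheduler.

def allocate(pool: float, demands: dict[str, tuple[float, float]]) -> dict[str, float]:
    """demands maps rack -> (priority_weight, cap); returns rack -> grant."""
    grants = {rack: 0.0 for rack in demands}
    remaining = pool
    active = dict(demands)
    while remaining > 1e-9 and active:
        total_weight = sum(w for w, _ in active.values())
        spent = 0.0
        still_active = {}
        for rack, (weight, cap) in active.items():
            share = remaining * weight / total_weight   # weighted slice this round
            grant = min(share, cap - grants[rack])      # never exceed the cap
            grants[rack] += grant
            spent += grant
            if grants[rack] < cap - 1e-9:               # uncapped racks go again
                still_active[rack] = (weight, cap)
        remaining -= spent
        active = still_active
        if spent <= 1e-9:
            break
    return grants

print(allocate(100.0, {"e-commerce": (3, 60), "hr": (1, 50), "dev": (1, 10)}))
# {'e-commerce': 60.0, 'hr': 30.0, 'dev': 10.0}
```

Note the water-filling behavior: once "dev" hits its cap, its leftover share is redistributed to the remaining racks by weight, which is the hierarchical prioritization the paragraph describes.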

"As servers are deployed to new applications, the creation of a new logical rack within the virtualized infrastructure switch can potentially eliminate the need to actually change the cabling or add and remove physical network devices," writes senior Burton Group analyst William Terrill in a recent report entitled "Network Infrastructure Virtualization" (see Resources on page 50). "By generating a logical rack that contains the needed routers, firewalls, load balancers, and other devices, and by setting the internal topology to match the required data flow, new functionality can be implemented in a very short time."

No need to say goodbye to your old gear, either: The virtualization switch can work with other devices that may provide different functionality, or more of the same. Traffic can move out of the switch on a given port into the necessary devices and back to the same logical rack through a different port, all without changing the physical topology of the network or otherwise altering the data flow.

THE PAYOFF

So what's the upshot? "In conjunction with the new generation of blade servers and optimized SANs [Storage Area Networks], a new data center that supports dozens of applications and servers can be created in the floor space previously dedicated to a single large wiring closet," says Terrill, who envisions a day when these devices can also help configure a remote duplicate data center. An IT manager could export a data center infrastructure template to a remote facility so that the primary configuration is pushed out to the branches, even if the second site deploys different hardware. For example, the template may stipulate a configuration range that can use multiple CPU levels depending on that secondary site's infrastructure, while still maintaining "good enough" processing to fulfill business objectives.
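A template that stipulates ranges rather than fixed numbers might look like the following hypothetical sketch, where the same template resolves differently against each site's CPU capacity (the schema is ours, invented for illustration):

```python
# Hypothetical infrastructure template with a CPU *range* per service, so
# one template can be pushed to sites with different hardware.

TEMPLATE = {
    "web-farm": {"cpu_min": 4, "cpu_max": 16},
    "firewall": {"cpu_min": 1, "cpu_max": 4},
    "database": {"cpu_min": 2, "cpu_max": 8},
}

def resolve(template: dict, site_cpus: int) -> dict[str, int]:
    """Scale each service within its range to fit the site's capacity."""
    floor = sum(s["cpu_min"] for s in template.values())
    ceiling = sum(s["cpu_max"] for s in template.values())
    if site_cpus < floor:
        raise ValueError("site too small for 'good enough' processing")
    # Interpolate linearly between the minimum and maximum footprints.
    t = min(1.0, (site_cpus - floor) / (ceiling - floor))
    return {name: int(s["cpu_min"] + t * (s["cpu_max"] - s["cpu_min"]))
            for name, s in template.items()}

print(resolve(TEMPLATE, site_cpus=28))  # primary-sized site: maxed-out footprint
print(resolve(TEMPLATE, site_cpus=12))  # smaller branch: scaled-down footprint
```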

More immediately, virtualization is seeing some action in the storage space, where it's used to carve out logical server farms and SAN gear. But storage has proven relatively easy to handle so far and doesn't necessarily mean IT is ready to take the plunge into other applications.

When IT is ready, however, the first out of the gate will likely be some form of multilayer security deployed throughout the LAN or data center. For instance, Inkra defines what it calls "defense in depth" as both deep and pervasive, incorporating multiple layers that reinforce the perimeter with additional firewalls and Intrusion Detection and Prevention Systems (IDS/IPSs). This compartmentalization approach should close backdoor holes, isolate key assets, and identify and contain attacks away from other virtualized processes.

These switches may help to simplify the tough security configuration tasks that keep most IT staff putting out fires. According to Inkra, virtualization should increase security device utilization and cut down the number of firewalls, SSL accelerators, VPN gateways, and IDS/IPSs needed. Customized, multilayer security should be easier and faster to deploy and modify, ostensibly in near real-time, without having to take a device offline. Finally, consolidated security means less cost, effort, and chance for manual error, since resources can be centralized and configuration tasks that would otherwise take days with multiple single-function devices can be automated through the logical rack.

For example, according to an Inkra white paper, a single IT operator can logically assign a firewall policy to a user-specific security threat in near real-time (or "instantly," as Inkra claims). Or that operator can respond immediately to an increase in SSL transactions on a retail Web application by deploying, or "dropping," an SSL accelerator behind the user's firewall.

Such maneuvers would allow some form of dynamic automation to establish and run service instances for a wide variety of applications. Automation demands precise, accurate user policies, prioritized according to fluid business needs. If IT takes the time to refine these templates, resources can be more easily devoted to initiatives such as grid or utility computing.
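That "drop an SSL accelerator behind the firewall" maneuver amounts to an ordered insert into a running rack's service chain, with no physical change. A toy illustration (our own model, not Inkra's software):

```python
# A rack's data path modeled as an ordered service chain; "dropping in" an
# SSL accelerator is an insert after the firewall. Purely a toy model of
# the maneuver described above.

rack = ["firewall", "load-balancer", "web-farm"]

def drop_in(chain: list[str], new_service: str, after: str) -> None:
    chain.insert(chain.index(after) + 1, new_service)

drop_in(rack, "ssl-accelerator", after="firewall")
print(rack)  # ['firewall', 'ssl-accelerator', 'load-balancer', 'web-farm']
```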

GIVE SOME TO GET SOME

Aside from actual firepower and proof of concept (read: isolating service instances), management is the most important issue to evaluate. The virtual switch should allow multiple users to manage the system and grant access rights for each service instance as defined by policy roles.

Policies that prioritize processing in each logical rack must be set hierarchically or in relation to each other so that the switch's resources go to the most important users when simultaneous demands tax the total available resources of the virtual switch. Constraints can be set by the total bandwidth used, the amount of processing power allotted to each logical rack, or the number of simultaneous connections taking place.
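Constraints like these map naturally onto a per-rack policy record. The sketch below shows one hypothetical way to express the three limits this paragraph names; the field names and admission check are ours, not any product's schema:

```python
from dataclasses import dataclass

# Hypothetical per-rack policy covering the three constraint types named
# above: bandwidth, CPU share, and simultaneous connections.

@dataclass
class RackPolicy:
    priority: int            # higher wins when simultaneous demands collide
    max_bandwidth_mbps: int
    max_cpu_share: float     # fraction of total switch CPU
    max_connections: int

POLICIES = {
    "e-commerce": RackPolicy(priority=10, max_bandwidth_mbps=400,
                             max_cpu_share=0.50, max_connections=50_000),
    "hr":         RackPolicy(priority=5,  max_bandwidth_mbps=100,
                             max_cpu_share=0.20, max_connections=5_000),
}

def admit(rack: str, open_connections: int) -> bool:
    """Admission check for a new connection against the rack's cap."""
    return open_connections < POLICIES[rack].max_connections

print(admit("hr", open_connections=4_999))  # True
print(admit("hr", open_connections=5_000))  # False
```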

The switch provides an excellent view of the overall network topology. Since the switch contains the network components that create each virtual service, it can provide robust performance and operational information at the component level, logical rack level, and device level. It can also help with IP addresses: Once the device's IP and origin server addresses are configured, the switch propagates these settings to each subsystem, eliminating manual re-entry chores and simplifying services that require address reconfiguration.

Of course, each service component in the logical rack still needs to be managed separately, probably with existing management tools. IT groups tasked with distinct chores, such as security and load balancing, retain control of their assigned functions and are isolated from each other virtually, though each team is dealing with the same physical device. Because of the higher-level expertise required, the virtual switch will probably be configured by a different team than those in charge of the individual components, allowing current IT task assignments to remain in place.

As for linking to your legacy equipment, "The main problem with infrastructure virtualization is that it really doesn't work with old gear that I'm aware of," Burton's Terrill notes. "The old equipment can be utilized and tied into the virtualized device, but it remains outside of the overall management. The data flows from the old components, once they arrive at the virtualized device, can be distributed as needed, but that is different from having those older devices truly participate in the virtualization." A partial solution is to run data from legacy devices in both directions to the virtual switch so that it appears to be controlled by the virtual device. This isn't a proven tactic, however, and still requires each legacy device, such as a cache or a firewall, to be configured separately. A virtual switch can link to common management platforms such as OpenView or NetView via SNMP and make the data from each virtual component appear as if it's from a separate device.
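Because the switch can surface each virtual component over SNMP as if it were a separate device, a management station could poll a component like any standalone box. Here is a minimal sketch using the pysnmp library; the address, community string, and the premise that each component answers at its own address are our assumptions:

```python
# Poll sysDescr from one virtual component, treating it as a standalone
# SNMP agent. Address and community are placeholders; whether each
# component really answers at its own address depends on the switch.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public"),                  # placeholder community string
    UdpTransportTarget(("192.0.2.10", 161)),  # one virtual component's address
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication or error_status:
    print("poll failed:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```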

It's unclear how IT will feel about having to learn yet another proprietary management system, or about verifying that the switch works with the WAN and the current enterprise management system. "The fact that different IT groups will need to configure and manage their own components within the logical rack(s) as they would with individual devices, and that these components are likely to have very different management tools than what are currently being used, would also indicate that the initial learning curve will be quite steep," says Terrill in his report.

The switch poses another knotty problem for IT: Who will control the configuration of each logical rack across application and department lines? Staff may need additional training or expertise to understand each component's part in the greater network scheme.

BEST PRACTICES

Moving to a virtualized environment isn't an all-or-nothing proposition. A slow migration that allows for depreciation of assets and thorough testing is a viable option.

Transitioning to that environment will force IT to determine which servers or infrastructure devices to virtualize and which to keep in physical form. The question boils down to which physical devices are still productive and which ones can be absorbed into the virtual switch. IT may be uncomfortable taking a mission-critical device off-line and virtualizing it, so it's best to start with new projects that are kept apart from legacy gear and applications.

IT may need to tackle financial issues first. Though virtual switches generally cost less than multiple point products, justifying Return on Investment (ROI) is still a problem. "A single highly visible project would be the best way to begin implementation," advises Terrill. "If a customer is just creating a disaster recovery center, that would be an excellent place to begin to virtualize. The ability to quickly change the infrastructure and computing environment through the use of templates makes these kinds of environments ideal for a virtualized data center."

It may be difficult to reach a clear ROI figure with these switches. Some of the return may be "soft," in the form of reduced cabling, for instance, or the added simplicity of creating many logical racks that only carry a few applications instead of a sprawling mass of interconnected programs that may cause service spikes. "In this latter case, the rules for the ACLs, firewalls, load balancers, etc. can be extremely simple and easy to test: The traffic for the HR applications used exclusively by HR can be isolated and controlled much more easily when you don't have to worry about the rules for the e-commerce site, partners, developers, etc.," says Terrill.

The bottom line? Take the plunge, but start small. "Don't try and execute cohesive, enterprise-wide virtualization today," says Corey Ferengul, a Meta Group analyst. "Keeping it to the network is a reality, however. Keep a clear eye on security policy and identity management. Don't expect centralized, enterprise-wide control. Do it in specific domains and plan for some level of management complexity."

FUTURE SHOCK

The virtualization market is still a bit hazy in terms of cohesive guidelines and direction. Virtual switches are proprietary, and except for some Fibre Channel management standards and a new server management initiative by the Distributed Management Task Force, there are no standards to define interoperability or baseline functionality.

"A key missing piece is the integration of all vendors' integration technologies," says Meta's Ferengul. "Microsoft will be embedding virtualization; HP is doing the same. Now when I go with third-party vendors, they want you to use their technology. We need a way to pull embedded virtualization together and manage them centrally, as well as support across networks, servers, and storage."

The Burton report also predicts that storage virtualization, the most reality-tested application, won't achieve relative maturity until 2005 or 2006 due to the difficulty of creating truly comprehensive business rules, process management, and related storage policies or life-cycle data management. However, the evolving Storage Management Initiative Specification (SMI-S) will likely add support for Internet SCSI (iSCSI) storage virtualization and management, standardizing some of the higher levels of virtualization.

Moreover, there are vague rumblings about extending virtualization not just across similar devices (server to server), but also across different kinds of devices (for example, using a virtualized switch to dynamically allocate processing between the switch and another linked server or device). But this may be a distant dream. It's better to evaluate virtualization on its merits today, perhaps factoring in some probable standards development over the next year or two, than to hold out for slam-dunk capabilities that spread CPU over the entire LAN. Still, it's a nice thought, and in a few years we could see some limited steps in that direction, moving us ever closer to the kind of policy-based automation that will ultimately make or break widespread utility computing.

Doug Allen, senior editor, can be reached at [email protected].

Pros And Cons

Chassis-based switch/routers

Pro: Can add or change specific functions by module or blade

Con: Don't offer dynamic CPU allocation between components

Consolidated appliances

Pro: Simplify component configuration and physical management tasks; allow for dynamic CPU allocation between components

Con: Can't segment CPU resources into logically isolated service instances

Virtualization

Pro: Simplify and consolidate multidevice management

Con: May require extensive IT retraining to configure logical racks along user policy guidelines

Resources

William Terrill's report, "Network Infrastructure Virtualization," can be found at www.burtongroup.com/networkvirtualization.

For specifics on virtualization deployment and applications, check out a white paper on Inkra Networks' site called "Virtualization: A Practical Guide." Go to www.inkra.com.

For more on server consolidation from an ROI and implementation perspective, see "Server Consolidation: Why Less is More," by Steven Schuchart Jr., in the June 13, 2003 issue of Network Computing. The article is available online at www.nwc.com.
