Virtualization Eliminates Uncertainty

Users can implement virtualization without breaking the bank

September 15, 2004


Thus far in our series on dispelling the myths that surround virtualization and the new data center, we've defined how virtualization technologies work and discussed the current stage of utility computing's development (see Utility Computing: Where Is It At? and Let's Get Virtual). In this final installment, we address the last concern voiced by readers during the NDCF Webinar, Building a Next-Gen Data Center: Can we implement virtualization technology with our current IT budgets?

Virtualized computing is the first stage in achieving a true utility computing model. But first, what is virtualized computing? Essentially, virtualization is the pooling of IT resources in a way that masks the physical nature and boundaries of those resources from users. This allows companies to meet logical resource needs with fewer physical resources – in other words, do more with less.
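The pooling idea above can be made concrete with a small sketch. This is an illustrative toy, not anything from the article: the `ResourcePool` class, its host names, and its capacity units are all hypothetical. The point it demonstrates is that a caller requests logical capacity and never learns which physical hosts satisfied the request.

```python
# Illustrative sketch (hypothetical names and units): a toy pool that
# masks which physical hosts satisfy a logical capacity request.

class ResourcePool:
    """Pools capacity from several physical hosts behind one interface."""

    def __init__(self, hosts):
        # hosts: mapping of host name -> free capacity units
        self.hosts = dict(hosts)

    def total_capacity(self):
        return sum(self.hosts.values())

    def allocate(self, units):
        """Satisfy a logical request from whichever hosts have room.

        The caller never learns which physical hosts were used.
        """
        remaining = units
        used = {}
        for name, free in self.hosts.items():
            if remaining == 0:
                break
            take = min(free, remaining)
            if take:
                used[name] = take
                self.hosts[name] -= take
                remaining -= take
        if remaining:
            # Roll back partial grants: the pool cannot satisfy the request.
            for name, take in used.items():
                self.hosts[name] += take
            raise RuntimeError("pool exhausted")
        return units  # logical grant only; physical placement stays hidden

pool = ResourcePool({"host-a": 4, "host-b": 8, "host-c": 4})
pool.allocate(10)             # transparently spans host-a and host-b
print(pool.total_capacity())  # 6 units remain across the pool
```

A request for 10 units succeeds even though no single host has 10 free, which is exactly the "meet logical needs with fewer physical resources" effect described above.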

Once the networking, compute, and storage systems have been virtualized, the data center is ready for the next step: on-demand computing. Today, this can be achieved by adding an additional layer of management that automates across the different types of virtualized equipment. In order to migrate from on-demand to utility computing, however, it is necessary to add fine-grained billing services that allow organizations to charge end-users based on the usage of network resources.
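The billing layer described above can also be sketched briefly. This is a hedged illustration of fine-grained, usage-based chargeback in general, not the article's (or any vendor's) actual implementation; the metric names, rates, and record format are all assumptions.

```python
# Hedged sketch: fine-grained chargeback of the kind that turns
# on-demand computing into utility computing. Metric names and
# per-unit rates below are hypothetical.

RATES = {"cpu_hours": 0.50, "gb_transferred": 0.10}  # assumed price list

def charge(usage_records):
    """Total each end-user's bill from metered usage records."""
    bills = {}
    for rec in usage_records:
        cost = sum(RATES[metric] * qty
                   for metric, qty in rec["usage"].items())
        bills[rec["user"]] = bills.get(rec["user"], 0.0) + cost
    return bills

records = [
    {"user": "marketing", "usage": {"cpu_hours": 10, "gb_transferred": 50}},
    {"user": "marketing", "usage": {"cpu_hours": 2}},
    {"user": "finance",   "usage": {"gb_transferred": 20}},
]
print(charge(records))  # {'marketing': 11.0, 'finance': 2.0}
```

Because each charge is computed from metered usage rather than a flat allocation, departments pay only for the resources they actually consume.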

Utility computing is not going to arrive with a “big bang.” The migration to a full utility computing model is incremental, and so are the costs. It is not advisable for any organization to launch into a multimillion-dollar, all-or-nothing utility computing project, and it is not necessary to convert your entire data center to a virtualized model in one fell swoop. Instead, pick one project at a time, and begin the virtualization process by replacing old equipment and existing systems. Starting with smaller deployments of virtualization services allows you to become familiar with the technology; once you’re comfortable, you can expand the deployment into other service areas.

RealNetworks Inc.’s deployment of virtualization technology is a great example of this incremental, phased approach. When RealNetworks first decided to deploy virtualization, it needed updated load-balancing services, so it deployed virtualized load balancing first, with plans to add virtualized SSL acceleration and firewall capabilities later.

In many cases, the network is an ideal place to start deploying virtualized technology, because the network itself is an additive environment. In other words, if more firewall capacity is needed, you add another firewall to the existing mix of network appliances. This additive nature lets you deploy virtualized firewalls, load balancing, VPNs, SSL, and intrusion detection as needed, without replacing any equipment. As you gain familiarity with the technology, you can eventually use virtualized services to replace aging equipment when it is time to swap it out of the network.

The migration to a virtualized data center is not an all-or-nothing proposition. By deploying virtualization in phases, organizations can familiarize themselves with the technology on smaller projects before expanding the deployment. A phased approach also lets organizations control deployment costs incrementally, thereby eliminating the uncertainty surrounding the affordability of virtualization in the data center.

— Dave Roberts, VP Strategy and Co-Founder, Inkra Networks
