• 07/31/2014 9:06 AM

Guide To Server Virtualization

New to server virtualization or need a refresher course? This overview explains the technology that has changed the lives of IT admins and CFOs.

The terms "virtualization" and "virtual machine" have been part of the IT lingua franca for the better part of a decade, tossed around freely in every corner of the industry. If you didn't really know what they meant, you just nodded knowingly to get through the meeting.

If you're already savvy in the area of virtualization, forgive us as we refresh our memories with a look at the core concepts. Either way, feel free to look through the recent resources below and get caught up on virtualization developments from the past couple of years.

Virtualization's origins
Virtualization actually set down its roots more than 40 years ago, and it firmly joined the computing lexicon in the 1970s when IBM introduced its VM (virtual machine) as a sort of overlay operating system. In the simplest sense, VM allowed an IBM mainframe to host two operating systems at the same time, letting the users and applications on each think they were running on a dedicated machine.

Today, virtualization is a concept applied to servers, networks, desktops, and storage. But it is server virtualization that drove the idea forward -- and, no, you don't need a $12 million mainframe to do it.

As the idea of shared, networked computing resources (servers) advanced, it was common practice to dedicate a single server to an operating system and the applications that it supported, even if that software only used a tiny fraction of the hardware's capability. On the other hand, if the workload maxed out the hardware, the only solution ensuring uptime was to buy a bigger server and reinstall all of the software on the new platform.

Virtualization techniques initially focused on that hardware utilization challenge, giving IT administrators the flexibility to run applications on whichever hardware platform was available, even across multiple servers.

So some of the jobs running on a nearly maxed-out system could be shifted easily to an underused one as needed, by letting those applications run in virtual machines without users knowing which hardware they were running on.
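The consolidation decision described above can be sketched as a simple bin-packing exercise: given each VM's resource demand and each host's capacity, place every VM on the first host with room. A minimal first-fit-decreasing sketch, with invented workload numbers for illustration (real schedulers also weigh memory, I/O, and affinity):

```python
def place_vms(vm_loads, host_capacity):
    """Pack VM CPU demands (in cores) onto identical hosts using
    first-fit decreasing. Returns one list of (vm, load) per host."""
    hosts = []  # each entry: [used_cores, [(vm, load), ...]]
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for h in hosts:
            if h[0] + load <= host_capacity:  # fits on an existing host
                h[0] += load
                h[1].append((vm, load))
                break
        else:  # no host had room: bring another host into service
            hosts.append([load, [(vm, load)]])
    return [h[1] for h in hosts]

# Six lightly loaded workloads that once occupied six dedicated servers
demands = {"mail": 2, "web": 3, "db": 6, "build": 4, "dns": 1, "crm": 2}
placement = place_vms(demands, host_capacity=8)  # consolidate onto 8-core hosts
print(len(placement))  # -> 3 hosts instead of 6 servers
```

The same sketch explains the load-shifting benefit: rerunning the placement with updated demands moves VMs to whichever hosts have headroom.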

Virtualization's benefits
Server virtualization provided IT with the ability to support applications as demand grew, and to consolidate other applications from underutilized servers. Cost savings on hardware and software licenses and ease of management were the key driving factors. As server virtualization has evolved, not just with software but with specially designed hardware enabling virtualization, benefits beyond cost have emerged.

Yes, virtualization still aids consolidation and load sharing. It also can support scaling for global growth of the company, disaster recovery strategies, use of VMs for development and test, integration of digital telecommunications (VoIP phone) applications, and -- with the growth of software-defined networking -- virtual networks on the same types of servers that host email and office applications.

Today, IT managers can choose from a variety of software options to implement server virtualization, with hypervisors -- which create and manage virtual machines -- and other management tools. Key software players and platforms include VMware, Microsoft, Red Hat, the Linux-based KVM, and Citrix with its open-source Xen technology. In addition, hardware companies such as Intel are designing their processors with virtualization capabilities.
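Those hardware capabilities surface as CPU feature flags: `vmx` for Intel VT-x and `svm` for AMD-V. On Linux they appear in /proc/cpuinfo, which a short script can scan; a sketch, using an invented cpuinfo excerpt for illustration:

```python
def virt_flags(cpuinfo_text):
    """Return which hardware-virtualization flags appear in a
    /proc/cpuinfo dump: 'vmx' means Intel VT-x, 'svm' means AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):  # one 'flags : ...' line per core
            flags.update(line.split(":", 1)[1].split())
    return {f for f in ("vmx", "svm") if f in flags}

# Invented excerpt standing in for a real /proc/cpuinfo dump
sample = "model name\t: Example CPU\nflags\t\t: fpu pae msr vmx sse2"
print(virt_flags(sample))  # -> {'vmx'}
```

On a real system you would pass in `open("/proc/cpuinfo").read()`; an empty result suggests virtualization extensions are absent or disabled in firmware.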

Need more details on server virtualization? Browse these resources:

Figure 1: Server Virtualization at a Glance
Source: Intel


Good Info

I liked the section on others closing in on VMware's lead. I think that a lot of admins are still unaware of how far along others have come, especially Microsoft. I think that for the cost, Microsoft has the best value when it comes to Windows-based networks.

Re: Good Info

Server virtualization has become a necessity. No more server rooms full of hardware. Having 10 or 20 servers running on two host machines is the way to go. We use Hyper-V and it serves our needs.

Abstraction, Virtualization and Future

Server virtualization, as the author mentioned in the article, started as an idea to run multiple operating systems and multiple applications on a single piece of hardware, but it has since evolved; capabilities such as hot migration (vMotion), DRS, and so on are now very important use cases.

We can see this pattern across IT. For example, MPLS was initially about performance, but today VPNs, traffic engineering, and fast reroute are its main use cases.

Or take BGP: it is still an Internet routing protocol, but it is also used as the control plane mechanism for many overlay technologies such as MPLS VPNs, EVPN, multicast VPNs, and so on.

Network virtualization will also get the same benefits. It will be used initially as an abstraction and will give multi-tenancy capability, but in the future we will see cloning, snapshotting, and moving entire network elements/state from one place to another seamlessly.

Re: Abstraction, Virtualization and Future

"Network virtualization will also get the same benefits. It will be used initially as an abstraction and will give multi-tenancy capability, but in the future we will see cloning, snapshotting, and moving entire network elements/state from one place to another seamlessly"

This is what network virtualization is already about; where have you been?


Re: Abstraction, Virtualization and Future

That is the idea; the question is who provides the right tool for implementation. NSX comes close to the idea, in my opinion, but it relies on new tools that are not yet mature or well known, and it needs a lot of interaction with other components. Yes, I am mentioning host-based overlays: VXLAN, STT, NVGRE, Geneve, and so on. Thanks for taking the time to read my comment, by the way.

Re: Abstraction, Virtualization and Future

Mus, I also think Orhan was speaking for the bulk of networking pros out there, who are not using network virtualization or NSX on anything like a regular basis. It may exist, but percentage-wise, not that many engineers are working with it yet, and it's not an everyday task like spinning up a VM is for many sysadmins these days.

Re: Abstraction, Virtualization and Future

I guess something may have gotten lost in translation. The functionality spoken of is most prevalent in dynamic data centres, so I found it rather odd to see it described as functionality that may be incorporated in the future, especially when it's a feature that exists in most SDN products available in the market today.

Re: Abstraction, Virtualization and Future

Mus, gotcha, you are thinking on broader terms -- I always tend to have the enterprise audience in mind :)

More specifically x86 Server Virtualization

As rightly pointed out, IBM pioneered virtualization back in the 1960s on its mainframes, but many UNIX midrange machines were inherently virtual in nature as well.

Ironically, it was IBM that provided VMware with the break it needed to crack the Enterprise market before being acquired by EMC.

When you consider that an IBM mainframe today, a.k.a. System z, starts at about $60,000 (nowhere near the $12 million mentioned in the article), the real irony becomes the number of converged infrastructure bundles coming onto the market, largely based on VMware, that can easily top $1 million.