
Rollout: Vizioncore vRanger Professional 3.2

As your virtualized environment grows, however, you'll need a more cost-effective approach, one that relieves ESX servers of the load incurred by backup agents banging away at your VMs and ESX cluster.

This is where vRanger comes into play.

Like other products in this market, vRanger uses VMware Consolidated Backup (VCB), the backup enabler for ESX that gives third-party applications access to VMs. Together, vRanger and VCB move backup processing onto a proxy server outside the ESX cluster, freeing resources on production servers that would otherwise be used to host agents and store snapshots. Using a process Vizioncore calls "I/O Intercept," vRanger compresses data arriving at the proxy server in memory and writes it straight to the storage location, without using temp space on the proxy. The proxy bears the backup burden, leaving ESX hosts free to serve up VMs.
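To make that pattern concrete, here's a minimal Python sketch of the idea: blocks are compressed in memory as they arrive and written straight to their destination, with no temp file in between. It's an analogy for the technique, not Vizioncore's implementation, and the file paths are made up.

```python
import zlib

CHUNK_SIZE = 64 * 1024  # read the source in 64 KB blocks

def stream_compress(source_path, dest_path):
    """Compress a stream block-by-block in memory and write it straight
    to its destination, so no temp space is needed on the proxy."""
    compressor = zlib.compressobj()
    with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
        while True:
            block = src.read(CHUNK_SIZE)
            if not block:
                break
            dst.write(compressor.compress(block))  # compressed in memory only
        dst.write(compressor.flush())              # flush any buffered remainder

# Hypothetical paths: a VM disk exported on the proxy and the backup target.
stream_compress(r"C:\vcb-export\vm01.vmdk", r"\\backup-server\vranger\vm01.vmdk.z")
```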

vRanger relies on VCB to interact with target VMs and uses your VirtualCenter implementation to gather information about your ESX cluster and VMs. You'll first designate a physical server in your enterprise as your VCB proxy server; this is where you'll install VMware's VCB client and vRanger, in that order. Make sure VCB is licensed and enabled on each ESX server in your cluster.
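For a sense of the kind of inventory data a backup tool pulls from VirtualCenter, here's an illustrative Python sketch using VMware's pyVmomi bindings to list VMs. This is not vRanger's own code, and the hostname and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a VirtualCenter/vCenter server.
context = ssl._create_unverified_context()  # skip cert checks for the sketch
si = SmartConnect(host="virtualcenter.example.com",
                  user="administrator", pwd="secret",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Walk the inventory for every VM, the way a backup tool would
    # when building its list of machines to protect.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
finally:
    Disconnect(si)
```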

Do yourself a favor and develop a thorough understanding of VCB's functionality, infrastructure needs, and architectural requirements prior to diving into any third-party backup tool. You'll save yourself and your storage administrators a slew of time. Specifically, in an ESX cluster with SAN storage on the back end, the VCB proxy needs to see the same LUNs as your ESX hosts. While VMware and Vizioncore documentation state that your VMFS LUNs also need to have the same ID for both ESX and the proxy (essentially, the volumes should be visible in the proxy's Windows Computer Management), our tests show this isn't strictly necessary. We simply connected the proxy server to the same SAN fabric to which the cluster was attached, and it worked just fine. This means you don't have to expose your VMFS volumes directly to your proxy, which invites disaster if an uninformed administrator tries to mount the volumes on the proxy.

The Inner Workings