Why? Moving new applications to a virtualized environment is easy; moving legacy applications and services is tough. Given the application-by-application analysis required to move workloads to a virtual environment, whether a hypervisor in the data center or a private or public cloud, many enterprises struggle with a physical-to-virtual (P2V) migration.
In this multipart series, we present a cookbook approach to the steps and tools required to successfully migrate applications from a traditional physical data center environment to a virtual one, based on a migration we conducted for a national telecommunications company. We will also highlight some of the pitfalls along the way and how to overcome them.
The client's migration environment consists of a Cisco network with several 7609s and a handful of 3750G switches configured with multiple VLANs to separate management, user and server traffic. The server farm comprises more than 80 Dell servers, mostly dual-socket, eight-core systems with average memory and disk space. Most of these servers are 3 years old or older. More than 100 applications run on these servers: a mix of standard COTS applications such as SharePoint, Oracle and Citrix; several file servers; print servers; and workstations for remote access. There are also custom applications on these servers, many of which are not fully documented.
Many of the COTS applications running on the systems are multi-tiered, and the logical connections of these tiers reside in the heads of the client's IT staff (in other words, we have limited documentation to rely on as we determine a path for this P2V migration). Most storage resides on the servers, but the client does have a NetApp array connected to servers via Fibre Channel.
P2V is not a slam-dunk for every organization, so it's important to analyze whether moving to a virtual environment makes sense. If so, you must also determine how the migration will affect the overall architecture of the data center. We will not discuss all the details of our client's business case, but instead focus on just a few of the key decisions.
First, the client's data center was maxing out. Although the client saw increasing demand for services and applications, its rack space and HVAC thresholds were nearing their limits. Given these constraints, virtualization was a sensible option. For certain workloads, the ability to virtualize CPU, RAM, disk and network connections would let the client consolidate multiple physical servers onto a single machine. When utilization spikes, or when a workload grows gradually over time, virtualized platforms would also let the client tune key attributes to compensate for the higher demand, instead of having to purchase a bigger server or simply accept the performance hit and deal with disgruntled users.
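To make the consolidation argument concrete, here is a minimal back-of-the-envelope sizing sketch. All figures and field names are illustrative assumptions, not the client's actual inventory; a real assessment would use measured peak utilization from monitoring tools.

```python
import math

def hosts_needed(servers, host_cpu_ghz, host_ram_gb,
                 cpu_headroom=0.8, ram_headroom=0.8):
    """Estimate how many virtualization hosts are needed to absorb the
    given workloads, sizing by peak CPU (GHz) and RAM (GB) demand.
    The headroom factors reserve capacity for utilization spikes."""
    total_cpu = sum(s["peak_cpu_ghz"] for s in servers)
    total_ram = sum(s["peak_ram_gb"] for s in servers)
    usable_cpu = host_cpu_ghz * cpu_headroom
    usable_ram = host_ram_gb * ram_headroom
    # Whichever resource runs out first drives the host count.
    return max(math.ceil(total_cpu / usable_cpu),
               math.ceil(total_ram / usable_ram))

# Hypothetical inventory: 8 lightly loaded physical servers.
servers = [{"peak_cpu_ghz": 4.0, "peak_ram_gb": 12.0} for _ in range(8)]

# Hypothetical host: dual-socket, 16 x 2.5 GHz cores = 40 GHz, 256 GB RAM.
print(hosts_needed(servers, host_cpu_ghz=40.0, host_ram_gb=256.0))  # → 1
```

The point of the sketch is the shape of the calculation, not the numbers: consolidation ratios fall out of peak demand versus usable host capacity, and the binding constraint (CPU or RAM) varies by workload mix.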
The high availability (HA) and disaster recovery architecture was not ideal. The client had a number of third-party tools to enable cold-standby systems, but they didn't offer the service level the client wanted to provide to customers. Like other companies that have gone down the virtualization road, the client wanted to take advantage of the possibilities that emerge when you decouple applications from hardware.
For instance, many hypervisor vendors, including VMware and Microsoft, allow a systems or software administrator to move running virtual machines from one server to another, minimizing or even eliminating downtime for maintenance and outages. Capabilities like virtual machine HA protect against physical machine failures, and resource checks ensure capacity is available to restart VMs in case of a hardware failure. Centralized management services provided by the virtual platform, and by some hardware vendors, give a consolidated view into all servers and virtual machines. This can streamline administrative tasks like troubleshooting, configuration, cloning and patch management.
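The resource check mentioned above is often called admission control. The sketch below shows the underlying logic, not any vendor's actual API: for every host in a cluster, verify that the remaining hosts have enough spare RAM to restart that host's VMs. The cluster data is a made-up example.

```python
def tolerates_host_failure(hosts):
    """Return True if, for any single host failure, the RAM consumed by
    that host's VMs fits within the spare RAM of the surviving hosts.
    Each host is a dict with 'capacity_gb' and 'used_gb' (RAM)."""
    for failed in hosts:
        spare = sum(h["capacity_gb"] - h["used_gb"]
                    for h in hosts if h is not failed)
        if failed["used_gb"] > spare:
            # The other hosts cannot absorb this host's VMs.
            return False
    return True

# Hypothetical three-host cluster, 256 GB RAM each.
cluster = [
    {"capacity_gb": 256, "used_gb": 180},
    {"capacity_gb": 256, "used_gb": 160},
    {"capacity_gb": 256, "used_gb": 170},
]
print(tolerates_host_failure(cluster))  # → True
```

Production admission control also weighs CPU, reservations and per-VM placement constraints, but the principle is the same: HA is only as good as the spare capacity the cluster keeps in reserve.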
Next Page: Drawing a Map