
The Physical-to-Virtual Cookbook Part 1

While many organizations have signed on to SaaS or piloted cloud computing deployments, lots of businesses continue to run critical production enterprise services in traditional data centers with applications running on dedicated hardware.

Why? Moving new applications into a virtualized environment is easy; moving legacy applications and services is tough. Given the application-by-application analysis required to move services to a virtual environment, whether to a hypervisor in the data center or to a private or public cloud, many enterprises struggle with a physical-to-virtual (P2V) migration.


In this multipart series, we present a cookbook approach to the steps and tools required to successfully migrate applications from a traditional physical data center environment to a virtual one, based on a migration we conducted for a national telecommunications company. We will also highlight some of the pitfalls along the way and how to overcome them.

The Environment

The client's migration environment consists of a Cisco network with several 7609s and a handful of 3750G switches configured with multiple VLANs to separate management, user and server traffic. The server farm comprises more than 80 Dell servers, mostly dual-socket, eight-core systems with average memory and disk space. Most of these servers are three years old or older. More than 100 applications run on these servers. They are a mix of standard COTS applications such as SharePoint, Oracle and Citrix; several file servers; print servers; and workstations for remote access. There are also custom applications on these servers, many of which are not fully documented.

Many of the COTS applications running on the systems are multi-tiered, and the logical connections of these tiers reside in the heads of the client's IT staff (in other words, we have limited documentation to rely on as we determine a path for this P2V migration). Most storage resides on the servers, but the client does have a NetApp array connected to servers via Fibre Channel.
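With documentation this thin, one practical way to recover the tier-to-tier connections is to capture established TCP sessions on each server (for example, `netstat -an` output) and aggregate them into a dependency map. The sketch below illustrates this idea; the parsing and host names are assumptions for illustration, not part of the client's actual tooling.

```python
import re
from collections import defaultdict

def parse_netstat(hostname, netstat_output):
    """Extract established TCP connections from `netstat -an`-style output."""
    edges = []
    for line in netstat_output.splitlines():
        m = re.match(
            r"\s*tcp\s+\d+\s+\d+\s+(\S+):(\d+)\s+(\S+):(\d+)\s+ESTABLISHED",
            line, re.IGNORECASE)
        if m:
            local_ip, local_port, remote_ip, remote_port = m.groups()
            edges.append((hostname, local_ip, int(local_port),
                          remote_ip, int(remote_port)))
    return edges

def build_dependency_map(per_host_edges):
    """Group remote endpoints by host to reveal inter-tier connections."""
    deps = defaultdict(set)
    for host, _local_ip, _lport, remote_ip, rport in per_host_edges:
        deps[host].add((remote_ip, rport))
    return deps
```

Run over a few days of samples, the remote (ip, port) pairs per host quickly separate well-known tiers (a database listener, a Citrix broker) from one-off admin sessions, giving a first draft of the dependency map to verify with staff.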

Why P2V?

P2V is not a slam-dunk for every organization, so it's important to analyze whether moving to a virtual environment makes sense. If so, you must also determine how the migration will affect the overall architecture of the data center. We will not discuss all the details of our client's business case, but instead focus on just a few of the key decisions.

First, the client's data center was maxing out. Although the client saw increasing demand for services and applications, its rack space and HVAC capacity were nearing their limits. Given these constraints, virtualization was a sensible option. For certain workloads, the ability to virtualize CPU, RAM, disk and network connections would let the client consolidate multiple physical servers onto a single machine. When utilization spikes or a workload grows steadily, virtualized platforms would also let the client tune key attributes to compensate for the higher demand, instead of having to purchase a bigger server or simply accept the performance hit and deal with disgruntled users.
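The consolidation case can be sanity-checked with simple arithmetic: sum the measured (not provisioned) CPU and RAM use of the physical servers, reserve headroom for spikes, and divide by host capacity. A minimal sketch, with all figures invented for illustration rather than drawn from the client's environment:

```python
import math

def hosts_needed(servers, host_cores, host_ram_gb,
                 cpu_headroom=0.25, ram_headroom=0.25):
    """Estimate how many virtualization hosts can absorb the measured load
    of a list of physical servers.

    `servers` is a list of (avg_cores_used, avg_ram_gb_used) tuples taken
    from monitoring data. Headroom reserves capacity for spikes; these
    defaults are illustrative assumptions, not sizing guidance.
    """
    usable_cores = host_cores * (1 - cpu_headroom)
    usable_ram = host_ram_gb * (1 - ram_headroom)
    total_cores = sum(c for c, _ in servers)
    total_ram = sum(r for _, r in servers)
    # Whichever resource runs out first dictates the host count.
    return max(math.ceil(total_cores / usable_cores),
               math.ceil(total_ram / usable_ram))
```

For example, 80 servers each averaging 1.5 cores and 6 GB of RAM would fit on five hypothetical 32-core, 256 GB hosts under these assumptions, and the same math shows which resource (here, CPU) is the binding constraint.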

The high availability (HA) and disaster recovery architecture was not ideal. The client had a number of third-party tools to enable cold-standby systems, but they didn't offer the service level the client wanted to provide to customers. Like other companies that have gone down the virtualization road, the client wanted to take advantage of the possibilities that emerge when you decouple applications from hardware.

For instance, many hypervisor vendors, including VMware and Microsoft, allow a systems or software administrator to move running virtual machines from one server to another, minimizing or even eliminating downtime for maintenance and outages. Capabilities like virtual machine HA protect against physical machine failures, and resource checks ensure capacity is available for a possible restart of the VM in case of a hardware failure. Centralized management services provided by the virtual platform, and by some hardware vendors, give a consolidated view into all servers and virtual machines. This can streamline administrative tasks like troubleshooting, configuration, cloning and patch management.
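The "resource checks" mentioned above amount to an admission-control question: if one host fails, do the survivors have enough spare capacity to restart its VMs? A simplified, RAM-only sketch of that check follows; real hypervisor HA policies also weigh CPU reservations and slot sizes, and the host names and numbers here are hypothetical.

```python
def ha_restart_capacity_ok(hosts, failed_host):
    """Return True if the remaining hosts have enough spare RAM to restart
    every VM from `failed_host` (a simplified N+1 admission check).

    `hosts` maps host name -> {"ram_gb": total RAM,
                               "vms": [per-VM RAM reservations in GB]}.
    """
    to_restart = sum(hosts[failed_host]["vms"])
    spare = sum(h["ram_gb"] - sum(h["vms"])
                for name, h in hosts.items() if name != failed_host)
    return spare >= to_restart
```

Running this check for each host in turn tells you whether the cluster truly tolerates any single failure, or only the failure of its most lightly loaded member.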

Next Page: Drawing a Map

