Joe Onisick


Forget Multiple Hypervisors

The concept of managing multiple hypervisors in the data center isn't new--companies have been running mixed environments, or considering them, for some time. Changes in licensing schemes and other events have brought the issue back to the forefront as customers look to avoid new costs. VMware recently acquired DynamicOps, a cloud automation/orchestration company that supports multiple hypervisors as well as Amazon Web Services. When a hypervisor vendor itself invests in multihypervisor support, the topic deserves a fresh look.

The arguments for multiple hypervisors are typically cost-based. If price weren't a factor, very few data centers would run anything other than VMware. VMware is the 800-pound gorilla in the virtualization market, with a slew of robust features that are time-tested with production workloads. That being said, cost is an issue--and VMware is at the top end of that scale.

The other side of the cost argument for multiple hypervisors involves leverage and lock-in. The leverage argument is based on pitting hypervisor vendors against one another in a pricing war to maximize discounts. From a lock-in perspective, the thinking is that using multiple hypervisors helps companies avoid being locked in and, therefore, beholden to one vendor.

On the surface, these arguments are sound and a multihypervisor environment makes sense. The problem is that each argument exists in a vacuum; when you expand your view to the big picture, they begin to fall apart. Examined in that wider context, a multihypervisor environment is rarely the better choice.

The cost argument is the most common, so we'll start there. It rests on hypervisor licensing and support contract costs, and is therefore capex related. There's no arguing that capital can be saved by choosing less expensive hypervisors for workloads that don't require advanced management or reliability features. Opex is where the issue lies: at a minimum, running two hypervisors requires two separate IT skill sets, deployment methods and management models. That ongoing expense will quickly outweigh the capex savings. You'll need data relevant to your business to back this up: salary data for the responsible admins, training costs to ramp staff up on new systems, additional hiring requirements, and so on.
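To make that capex-versus-opex trade-off concrete, here is a minimal back-of-the-envelope sketch. All of the figures (license costs, salaries, training budgets, the extra hire) are hypothetical placeholders, not data from the article; substitute numbers from your own environment.

```python
# Hedged sketch: yearly TCO comparison, single- vs. dual-hypervisor shop.
# Every dollar figure below is an illustrative assumption -- plug in your
# own licensing quotes, admin salaries, and training costs.

def annual_tco(license_cost, admin_salaries, training_cost):
    """Total yearly cost: licensing/support + staff + training."""
    return license_cost + sum(admin_salaries) + training_cost

# Single hypervisor: pricier licenses, but one skill set to staff and train.
single = annual_tco(
    license_cost=200_000,          # premium hypervisor licensing/support
    admin_salaries=[110_000] * 3,  # three admins, one skill set
    training_cost=10_000,
)

# Dual hypervisor: cheaper licensing overall, but duplicated skill sets,
# deployment methods, and management tooling drive opex up.
dual = annual_tco(
    license_cost=120_000 + 40_000,             # smaller premium footprint + budget hypervisor
    admin_salaries=[110_000] * 3 + [110_000],  # extra hire for the second stack
    training_cost=10_000 + 15_000,             # ramp-up on the new system
)

print(f"single: ${single:,}  dual: ${dual:,}")
```

With these placeholder numbers the dual-hypervisor shop saves $40,000 in licensing but spends $125,000 more on staffing and training, illustrating the article's point that opex can swamp the capex savings.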

While the cost argument at least makes sense from an isolated capex perspective, the leverage argument holds little weight on its own. You don't need to actually deploy a multihypervisor environment to gain vendor leverage. If required, vendors can be pitted against one another during the sales cycle without deploying more than one product--think of it as playing poker. For the most part, hypervisor vendors are well aware that the competition has become stiff and that pricing plays a big role.

Like the leverage argument, deploying multiple hypervisors to avoid lock-in holds little to no weight. It's true that using multiple hypervisors prevents you from being beholden to one vendor, but you still incur the additional operational costs of managing them both. And you'll still suffer much the same pain if you later ditch one of the vendors completely, if perhaps on a smaller scale: the virtual machines and/or apps will have to be moved onto whichever hypervisor is staying around. You'd then need to bring in a new second hypervisor to preserve your lock-in avoidance--after all, if the strategy made sense before, why would that have changed?

The most common multihypervisor deployment is VMware as the production system, with Microsoft Hyper-V or Citrix XenServer running test/development systems. VMware is chosen for the performance and features needed in production, while another hypervisor is chosen to lower costs elsewhere. Applications are developed and tested on one system, then migrated to the production system. This is a dangerous idea. In the Marines we had a saying: "Train like we fight, fight like we train." It applies well outside of combat, too. A system thoroughly tested on one hypervisor has not been properly vetted for another. Testing should be done on identical systems, down to the firmware revision.

Regardless of which hypervisor is chosen, most environments will incur the least cost and gain the most overall benefits from a single hypervisor deployment. Standardizing on a single hypervisor reduces complexity and configuration points and, therefore, opex. Ensure that you look at the big picture when making hypervisor decisions, as it’s easy to get wrapped up in myopic views that lead to poor decisions.

Disclaimer: In my primary role I work closely with several hypervisor vendors. This post is not intended as an endorsement of any vendor or product mentioned.

Joe Onisick is the Founder of Define the Cloud. You can follow his angry rants at or on Twitter @jonisick.
