George Crump
Commentary

Dealing With VMware's I/O Challenges

One of the key themes at VMworld this week is dealing with the I/O challenges that a physical host loaded with a dozen or more virtual machines places on storage and the storage infrastructure. These challenges are caused by consolidating hundreds of I/O-friendly standalone systems into a few dozen hosts. While virtualization reduces the number of physical servers, it turns every remaining server into an I/O nightmare.

These I/O challenges can be addressed at several layers in the virtual environment. One layer is the infrastructure itself. The obvious suggestion here is simply to make it faster: companies offering 10GbE and 8Gb Fibre Channel cards and switches are out in full force at the show. Those cards are also getting smarter, with the ability to subdivide or prioritize their bandwidth on an as-needed basis for specific virtual machines.

Also gaining in popularity is I/O virtualization (IOV). As we discussed in our recent article "Using Infrastructure Bursting To Handle Virtual Machine Peaks," IOV provides the ability to shift I/O resources as needed, not just among the virtual machines on a single host but across physical hosts. While IOV is sometimes viewed as a cost-savings mechanism because it shares bandwidth across multiple physical hosts, it also provides data center flexibility: you can move bandwidth between physical servers as needed without having to touch those servers.

The second area that needs to be contended with is the storage system itself, and there are two concerns here. First, how fast can the storage media--disk or solid state--respond to the I/O demand? Second, how much of that I/O can the storage controller handle? This is an area where a lot of confusion can be created by walking the trade show floor. Adding solid state storage to an array does not solve all your problems.

There are four questions to ask as you look for faster storage to address your I/O challenges. First, are my physical hosts generating enough I/O to justify a move to solid state or a faster storage mechanism? Thanks to virtualization, it's more likely that they are, but you need to be sure.
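To make the first question concrete, here is a minimal sketch (not a VMware tool) of the arithmetic involved: sum the per-VM I/O demand you measured before consolidation and compare it against what the existing disk group can deliver. All the numbers below--per-VM IOPS, per-spindle IOPS, spindle count--are hypothetical examples, not measured figures.

```python
# Hypothetical back-of-the-envelope check: does consolidating many
# I/O-friendly standalone servers onto one host create enough aggregate
# demand to justify faster storage?

def host_iops_demand(vm_iops):
    """Sum the steady-state IOPS of every VM placed on one physical host."""
    return sum(vm_iops)

def justifies_faster_storage(vm_iops, spindle_iops=180, spindle_count=24):
    """Rough check: does aggregate demand exceed what the existing disk
    group (spindle_count drives at ~spindle_iops each) can deliver?"""
    return host_iops_demand(vm_iops) > spindle_iops * spindle_count

# Twelve formerly standalone servers at ~400 IOPS each, now on one host:
vms = [400] * 12
print(host_iops_demand(vms))          # 4800 IOPS of aggregate demand
print(justifies_faster_storage(vms))  # True: 4800 > 24 * 180 = 4320
```

The point of the exercise is the aggregation itself: each standalone workload was easy to serve, but a dozen of them behind one host exceeds a sizable disk group.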

Second, can my infrastructure transport data fast enough to put pressure on the storage? See the discussion of infrastructure I/O above; this is not limited to having an 8Gb FC or 10GbE environment. Enough 4Gb FC or even 1GbE connections can also put pressure on the storage.
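The second question is also simple arithmetic: add up the usable throughput of your links and compare it to the storage system's rated throughput. A hedged sketch, using rough rule-of-thumb conversions (roughly 100 MB/s of usable payload per gigabit of line rate; the storage rating of 1,200 MB/s is a made-up example):

```python
# Approximate usable throughput per link type, in MB/s. These are coarse
# rules of thumb for illustration, not vendor specifications.
USABLE_MB_PER_S = {"1GbE": 100, "4Gb FC": 400, "8Gb FC": 800, "10GbE": 1000}

def fabric_throughput(links):
    """Aggregate usable MB/s across links, given (link_type, count) pairs."""
    return sum(USABLE_MB_PER_S[link_type] * count for link_type, count in links)

def fabric_can_pressure_storage(links, storage_mb_per_s):
    """Can the fabric deliver at least the storage system's rated rate?"""
    return fabric_throughput(links) >= storage_mb_per_s

# Four 4Gb FC connections can pressure storage without any 8Gb upgrade:
print(fabric_throughput([("4Gb FC", 4)]))                  # 1600 MB/s
print(fabric_can_pressure_storage([("4Gb FC", 4)], 1200))  # True
```

This is why the article notes that enough 4Gb FC or 1GbE connections can stress storage just as a faster single link can.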

Third, can my storage controller/NAS head support the I/O rates that I am transferring? This may be more critical than the underlying storage itself. If the controller that is receiving all of this data can't process it quickly enough, it does not matter how fast the underlying storage is.

The final question, once all of the above questions are answered "yes," is how much and what type of storage should I add to my storage system? Until you can move that data to and through the storage system, worrying about SSD versus 15K SAS or anything else is a waste of time. You can address these components individually, or all at once by improving network bandwidth, storage processing capability, and storage device speed in a single system.
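The ordering implied by the four questions above can be sketched as a simple bottleneck check: walk the layers in the order data flows and stop at the first one that cannot keep up with host demand. The capacities below are hypothetical MB/s figures chosen for illustration.

```python
# Hypothetical sketch of the article's assessment order: find the limiting
# layer before spending on faster storage media.

def bottleneck(host_demand, fabric, controller, devices):
    """Return the first layer whose capacity (MB/s) falls below host
    demand, checked in the order data flows: fabric -> controller ->
    devices. Return None if demand can move to and through the system."""
    for name, capacity in (("fabric", fabric),
                           ("controller", controller),
                           ("devices", devices)):
        if capacity < host_demand:
            return name
    return None

# A controller-bound system: adding SSD (faster devices) would not help,
# because data never reaches the devices fast enough to matter.
print(bottleneck(host_demand=1500, fabric=1600, controller=1100, devices=2000))
```

This is the article's caution about the trade show floor in miniature: if `bottleneck` returns "controller," faster media behind that controller is wasted money.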

Performance problems are going to be the new reality in server virtualization. As servers are consolidated, so is the performance demand. Understanding how to deal with these challenges is a critical component in increasing VM density and driving even more cost out of the data center.
