
The Inhibitors To I/O Virtualization

In my entry "I/O Virtualization: When, Not If," I discussed the merits of I/O virtualization (IOV) and the value of offloading I/O responsibilities from the hypervisor with SR-IOV (single-root IOV). While SR-IOV and IOV may seem like great strategies, there are some inhibitors to overcome. The first is OS and hypervisor support; the second is dealing with the disruption to the network and storage infrastructure when the technology is first implemented.

When it comes to SR-IOV-enabled cards, the primary challenge is timing: when will operating systems and hypervisors support the technology? To take advantage of all that SR-IOV has to offer, that support needs to be in place. As we discussed, IOV gateways (IOGs) largely solve this problem by presenting SR-IOV cards as "regular" cards to the servers that attach to them. Vendors may also forgo SR-IOV altogether and develop their own multi-host cards for their gateways, so they don't have to wait for OS or hypervisor support, or for SR-IOV itself. The trade-off is that they have to develop the card itself, and potentially an OS driver, or require an IOG.
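As a concrete illustration of the support question, on a Linux host you can check whether a card and its driver actually expose SR-IOV before building a strategy around it. The minimal sketch below reads the standard sysfs attributes that SR-IOV-capable PCI devices publish; the interface name is an assumption for the example.

```python
# Minimal sketch: probe Linux sysfs for SR-IOV support on a NIC.
# Assumes a Linux host; sriov_totalvfs and sriov_numvfs are the
# standard sysfs attributes exposed by SR-IOV-capable PCI devices.
from pathlib import Path

def sriov_status(iface):
    """Return SR-IOV capability info for a network interface, or {} if none."""
    dev = Path(f"/sys/class/net/{iface}/device")
    total = dev / "sriov_totalvfs"   # max virtual functions the card supports
    active = dev / "sriov_numvfs"    # virtual functions currently enabled
    if not total.exists():
        return {}                    # card (or its driver) offers no SR-IOV
    return {
        "total_vfs": int(total.read_text()),
        "active_vfs": int(active.read_text()),
    }

if __name__ == "__main__":
    # "eth0" is a hypothetical interface name for the example.
    print(sriov_status("eth0"))      # e.g. {'total_vfs': 63, 'active_vfs': 0}
```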

If the IOG is the way to go, then the bigger challenge is implementing the IOG itself. As we discussed in our article "What Is I/O Virtualization?", this is infrastructure: the gateway device has to sit between the storage, the networks and the servers they connect to. In part, this is simply a reality of infrastructure, where changes are slow to take place. Still, steps can be taken to minimize the downtime associated with implementing the IOG by leveraging the high availability already in the environment; the changeover to the IOG can be made one link at a time.
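To make the one-link-at-a-time idea concrete, here is a toy sketch of such a staged cutover. The server and link names are hypothetical; the point is simply that each path moves to the gateway only while its redundant partner keeps carrying traffic.

```python
# Toy sketch of a staged, one-link-at-a-time cutover to an IOV gateway.
# The inventory below is hypothetical illustrative data.

inventory = {
    "server-a": ["nic0", "nic1"],   # redundant links, HA-capable
    "server-b": ["nic0"],           # single link, needs a maintenance window
}

def plan_cutover(inventory):
    """Order the moves so no server ever loses both of its links at once."""
    steps = []
    for server, links in inventory.items():
        if len(links) < 2:
            steps.append(f"{server}: no redundant link, schedule downtime")
            continue
        for link in links:
            steps.append(f"{server}/{link}: move to IOG while partner carries traffic")
    return steps

for step in plan_cutover(inventory):
    print(step)
```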

The other inhibitor to IOV goes beyond the speed at which infrastructure changes, though. Some of the initial forays into the IOV market compounded the problem by introducing both a new type of card installed in the server and a new connection methodology from the server to the gateway, often PCIe or InfiniBand. While the I/O performance of these solutions was ideal, they required installing a new class of connection into the environment. For some servers it is reasonable to assume the advanced I/O performance of these technologies was needed, but for many others it was not. What was needed was a less disruptive way to bring IOV to servers with more modest I/O requirements.

The next step in IOV is to leverage a connection technology that is already widespread in the data center, and Ethernet is the most likely candidate. While today it would be limited to a 10GbE connection speed, that will increase significantly over the next few years. The advantage of leveraging Ethernet is that the infrastructure is already in place, and the move to 10GbE is already happening in many servers. As administrators install 10GbE cards, why not pick one that also supports IOV? This allows maximum flexibility in dealing with the I/O demands that server virtualization places on the infrastructure, as well as in choosing future storage and network protocols. Moving to virtualized I/O can be somewhat disruptive; choosing the right time to make the move makes it less so. The right time may very well be as you upgrade the environment to 10GbE.
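As a rough illustration of that flexibility, the sketch below carves a single 10GbE IOV-capable link into virtual interfaces for different traffic classes. The classes and shares are illustrative assumptions, not measurements, but they show why one software-partitioned 10GbE port can replace several fixed-purpose adapters.

```python
# Back-of-the-envelope sketch: carving one 10GbE IOV link into virtual
# interfaces. The traffic classes and shares are hypothetical examples.

LINK_GBPS = 10.0

shares = {
    "vm_network": 0.4,       # guest networking
    "iscsi_storage": 0.4,    # storage traffic
    "live_migration": 0.2,   # hypervisor migration traffic
}

for name, share in shares.items():
    print(f"{name}: {LINK_GBPS * share:.1f} Gbps virtual interface")

# Unlike discrete 1GbE NICs and HBAs, the split above can be changed
# in software as I/O demands on the virtualized hosts shift.
```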

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.

