George Crump
Commentary

Thinly-Provisioned Bandwidth

Thin provisioning of storage came about to address the problem of having to hard-allocate storage capacity to servers. It allows more efficient utilization of a resource that many data centers have plenty of but, before thin provisioning, had mostly allocated and left unused. As we move into an era of increased data-center bandwidth, we are going to need the ability to do the same thing. Think about it: today, most bandwidth is hard-allocated to a single server.

Most data centers today have 1Gb Ethernet and 2Gb or 4Gb Fibre Channel connectivity. Most are looking to move to 10Gb Ethernet and either 8Gb Fibre Channel or 10Gb FCoE. However, when I speak to companies that monitor infrastructure utilization, one of the first things they point out is that overall bandwidth utilization, especially on the Fibre Channel side, is typically less than 10 percent. Basically, a few servers require that 4Gb Fibre Channel pipe; most do not.

The other factor is that many bandwidth upgrades come as a result of a manufacturing reality: 4Gb Fibre Channel ended up at the same price as 2Gb, so even if you only needed 2Gb, you either couldn't get it or buying it made no sense. The same situation will hold for many data centers this time around. Even though you don't need 10Gb Ethernet or 8Gb Fibre Channel/10Gb FCoE, you're going to get it anyway because it's as cheap as or cheaper than the older technology. Net this out and we have a bandwidth allocation issue. A few servers in the environment can take advantage of the increased bandwidth, while most can't. If we could thinly provision that bandwidth, allocating only what a given server needs at that moment in time, we could dramatically reduce per-server costs. This, to a large degree, is the problem that I/O virtualization suppliers are looking to address with their solutions.
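To make the allocation gap concrete, here is some back-of-envelope arithmetic. The server count, link speed and utilization figures are illustrative assumptions in the spirit of the numbers above, not measurements from any specific data center:

```python
# Hypothetical rack: 20 servers each hard-allocated a 4 Gb/s link,
# versus a shared pool sized for ~10% average utilization plus
# full-rate headroom for the few servers that truly need it.

SERVERS = 20
LINK_GBPS = 4.0

# Hard allocation: every server gets a dedicated full-rate link.
hard_allocated = SERVERS * LINK_GBPS  # 80 Gb/s provisioned

# Thin provisioning: size the pool for observed behavior instead.
avg_utilization = 0.10   # assumed ~10% utilization on most links
busy_servers = 2         # assumed handful of servers needing full rate
thin_pool = (busy_servers * LINK_GBPS
             + (SERVERS - busy_servers) * LINK_GBPS * avg_utilization)

print(f"hard-allocated: {hard_allocated:.0f} Gb/s")  # 80 Gb/s
print(f"thin pool:      {thin_pool:.1f} Gb/s")       # 15.2 Gb/s
```

Under these assumptions the shared pool is roughly a fifth of the hard-allocated total, which is where the cost savings come from.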

In its first phase, I/O virtualization inserts an I/O consolidation gateway at the top of a rack. A single interface card (or two for redundancy) is placed in each server and connected to the top-of-rack gateway. Various I/O cards (IP, storage and others) are then installed in the gateway and shared across all the servers in the rack; each I/O card becomes a shared resource for the rack. Vendors differ in how they connect these I/O gateways to the servers, and we've seen PCIe, Ethernet and InfiniBand used so far. They also differ in how they manage QoS and bandwidth contention.
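As a rough sketch of the idea, and not any vendor's actual design, a gateway that thin-provisions a shared uplink might grant bandwidth on request, cap each grant by whatever is still unallocated, and reclaim bandwidth when a server releases it:

```python
class IOGateway:
    """Toy model of a top-of-rack I/O gateway that thin-provisions
    a shared uplink across the servers in a rack. The class name,
    policy and numbers are illustrative assumptions only."""

    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.grants = {}  # server name -> currently granted Gb/s

    def request(self, server, gbps):
        """Grant up to `gbps`, capped by the unallocated remainder."""
        in_use = sum(self.grants.values()) - self.grants.get(server, 0.0)
        grant = max(min(gbps, self.capacity - in_use), 0.0)
        self.grants[server] = grant
        return grant

    def release(self, server):
        """Return a server's bandwidth to the shared pool."""
        self.grants.pop(server, None)


gw = IOGateway(capacity_gbps=10.0)
print(gw.request("web01", 4.0))  # 4.0 -- full ask granted
print(gw.request("db01", 8.0))   # 6.0 -- capped by remaining capacity
gw.release("web01")
print(gw.request("db01", 8.0))   # 8.0 -- reclaimed bandwidth reallocated
```

The key behavior is in the last two lines: once web01 releases its grant, db01 can grow into the freed capacity, which is exactly what hard allocation forbids.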

I/O virtualization, or I/O provisioning, is going to have the same attacks hurled at it as thin provisioning did, the most common being: what if all the servers need I/O at the same time? In fairness, bandwidth is not as static as storage. You know when 10TB of data is going to be loaded in, and it takes time to do that; bandwidth demand is far more dynamic. But just as thin-provisioned storage systems provide monitoring and reporting of actual utilization, bandwidth provisioning should provide the same information and alert you before the environment exceeds capacity. I/O virtualization could be a tremendous cost saver in the data center. It allows the move to higher-speed, higher-priced protocols today, and as prices come down later, the user can choose between continuing to drive cost out or moving to the next fastest technology.
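The monitoring piece could be as simple as an alert when aggregate demand crosses a headroom threshold, mirroring what thin-provisioned storage arrays already do for capacity. A minimal sketch, with hypothetical server names and an assumed 80 percent threshold:

```python
def check_headroom(samples_gbps, pool_gbps, threshold=0.8):
    """Return a warning string when aggregate measured demand crosses
    the threshold fraction of the shared pool, else None.
    Threshold and sample format are illustrative assumptions."""
    demand = sum(samples_gbps.values())
    if demand >= threshold * pool_gbps:
        return (f"WARNING: {demand:.1f}/{pool_gbps:.1f} Gb/s in use "
                f"({demand / pool_gbps:.0%})")
    return None


# One busy database server pushes the rack near its shared pool.
samples = {"web01": 0.4, "web02": 0.3, "db01": 7.5}
print(check_headroom(samples, pool_gbps=10.0))
# WARNING: 8.2/10.0 Gb/s in use (82%)
```

An alert like this gives the administrator the same early warning that storage thin provisioning relies on: time to add capacity before oversubscription actually bites.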

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, ...