External IOV and Data Center Politics
October 16, 2009
I, like some others, have been concerned that the convergence of storage and data networking could in many organizations end up bogged down as the storage and networking teams fight it out over turf. Some may decide that reducing from two cables each for 10gigE and Fibre Channel to just two 10gig cables isn't worth the trouble, but that means running duplicate infrastructures end to end, which will hit the budget pretty hard. External I/O virtualization could be the answer.
The problem with converged networking from a political point of view is ownership. The converged adapters and switches have to be run by someone and neither the storage nor networking group wants to give up ground to the other.
I can hear the usual arguments in my sleep. The storage guys think the network folks are either incompetent or too blasé about outages and network loss. They look at how spanning tree re-convergence takes the net down for a few minutes, or the .002% packet loss the network guys think is acceptable, and see the corrupt data that would come if that level of service were applied to a SAN.
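For a sense of scale on that .002%, here's a rough back-of-the-envelope calculation. The 10 Gbit/s line rate and 1,500-byte frame size are my assumptions for illustration, not numbers from either team:

```python
# Rough illustration: what a 0.002% loss rate means on a busy 10GbE link.
# Link speed and frame size are assumed for the example, not measured.
link_bps = 10e9            # 10 Gbit/s Ethernet
frame_bits = 1500 * 8      # typical full-size frame, ignoring framing overhead
loss_rate = 0.00002        # 0.002% expressed as a fraction

frames_per_sec = link_bps / frame_bits      # ~833,000 frames/s at line rate
lost_per_sec = frames_per_sec * loss_rate   # ~17 frames/s dropped

print(f"{lost_per_sec:.0f} frames lost per second at line rate")
# On a LAN that's a retransmit; on a storage fabric each one is an I/O
# that has to time out and be retried, which is why the SAN team flinches.
```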
The network guys think the storage guys live in a dream world where 1,000 nodes is a very large network, and that the storage guys get to spend many times as much for everything as they do. They're sure that if anyone let them spend half what the storage guys spend, they'd have a problem-free network too.
Now the truth is each group has skills and management systems that should be monitoring the converged switch. Today the network team uses OpenView or some other SNMP manager and NetFlow to monitor the network, while the storage team has SANview or some other Fibre Channel management system doing the same on their net. One big difference is that the SAN team considers the HBAs in servers part of their domain, while the network team leaves NIC configuration to the server folks.

Both sides argue against external IOV, whether the connection is InfiniBand as in Xsigo's I/O Director or extended PCIe in the solutions from NextIO, Virtensys and Aprius, saying there's no reason for an additional fabric to manage in the data center.
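Coming back to the monitoring overlap for a moment: to either team's tools, a converged switch is just one more device to poll. Here's a minimal sketch of that kind of poll, assuming the classic synchronous pysnmp API; the switch name, community string and interface index are placeholders, and the FC-side counters would come from whatever MIBs the switch vendor exposes:

```python
# Minimal sketch: poll error and traffic counters from a converged switch
# the same way an SNMP manager like OpenView would. Placeholder target.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),            # SNMP v2c, placeholder community
        UdpTransportTarget(('converged-sw-01', 161)),  # hypothetical switch name
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifInErrors', 1)),    # port 1 errors
        ObjectType(ObjectIdentity('IF-MIB', 'ifHCInOctets', 1)),  # port 1 traffic
    )
)

if errorIndication:
    print(errorIndication)
elif errorStatus:
    print(errorStatus.prettyPrint())
else:
    for name, value in varBinds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```

The point isn't the code; it's that whoever ends up owning the converged switch already has a management system that can watch it.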
I counter that it's not another company-wide, or even data center-wide, fabric. It's part of the server infrastructure, just like blade chassis, internal RAID controllers and DAS JBODs. Since the I/O cabinet takes standard cards, or in Xsigo's case cards that use standard drivers, and those cards are shared across the servers in the rack, the server team should treat them the same way it treats the cards in its servers today.
That should make everyone unhappy.