Virtual I/O? Not A Bad Idea

Xsigo Systems has a novel approach to I/O hassles in virtualized server environments, and it seems to make a heck of a lot of sense.

Joe Hernick

October 16, 2007


I had an interesting chat with John Toor and Ariel Cohen from Xsigo Systems late last week. Xsigo has a novel approach to I/O hassles in virtualized (and traditional) server environments: virtualize everything (network connections, HBAs) for a host, aggregate all the bits via one InfiniBand adapter, run the traffic from your servers over one cable to an "I/O Director" box with a massive internal data fabric, and then connect everything on the back end (GigE, FC, iSCSI, copper, optical, you name it) to the I/O Director. You know what? Virtualizing I/O makes a heck of a lot of sense... Charlie Babcock from InformationWeek touched on this shortly after Xsigo's Sept. 10 press release, and the concept has been scratching at the back of my brain for a bit now. I hate managing I/O. I hate cabling. I know it's petty, but I really hate troubleshooting connectivity/constraint issues; Xsigo's solution speaks to the part of me that believes there has to be a simpler way.
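If it helps to picture the consolidation as data, here's a rough sketch of the mapping being described. The structure and every name in it are mine, not anything Xsigo publishes.

```python
# Rough sketch of the consolidation described above: each server keeps a
# single InfiniBand link to the I/O Director, and all of its Ethernet and
# FC connectivity is carved out of that link as virtual devices mapped to
# back-end ports. Every name here is illustrative, not Xsigo's.

io_director = {
    "back_end_ports": ["10gige-1", "10gige-2", "fc-1", "fc-2", "iscsi-1"],
    "servers": {
        "esx01": {
            "uplink": "infiniband-hca-0",      # one cable to the I/O Director
            "vnics": {"vnic0": "10gige-1"},    # virtual NIC -> back-end Ethernet port
            "vhbas": {"vhba0": "fc-1"},        # virtual HBA -> back-end FC port
        },
        "esx02": {
            "uplink": "infiniband-hca-0",
            "vnics": {"vnic0": "10gige-2"},
            "vhbas": {"vhba0": "fc-2"},
        },
    },
}

# "Add another NIC" becomes a mapping change instead of a hardware change:
io_director["servers"]["esx01"]["vnics"]["vnic1"] = "10gige-2"
```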

We all know that managing data and SAN connections for even mildly complex host platforms can be a hassle. Managing connectivity for a large ESX or XenEnterprise environment requires careful planning to reduce resource constraints at every potential bottleneck. What do you do if you need additional or dedicated network bandwidth for a new VM? Add another NIC. Need better disk I/O? How about another FC HBA in the mix, with all the assorted configuration hassles that come with densely packed rigs?

So here's the simplify pitch from Xsigo: Take all your InfiniBand-capable enterprise-class servers and replace all existing I/O cards with one $300 host channel adapter. Use two for redundancy. Their HCAs are rated at 10 Gbps, with faster bandwidth on the horizon. Replace the traditional rat's nest with one home-run cable back to the I/O Director box. You get 15m for copper and 300m for optical runs, and each box supports 24 server connections. The I/O Director can be configured with up to 15 modules (10 GigE, 2x4 Gb FC, or 4x GigE) to connect to network and data resources. A GUI, CLI, or open API handles configuration of virtual NICs and virtual HBAs (up to 32 per physical server) as well as management of vNICs and vHBAs across the installation. Vetted support from Hewlett-Packard, IBM, ESX 3.5, and Xen on Red Hat R5 and SUSE. QoS per vNIC and vHBA. Much rejoicing.
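I haven't seen Xsigo's API, so treat the snippet below as nothing more than a mock-up of the bookkeeping that kind of interface implies: virtual devices per server, the 32-device ceiling, and a QoS value on each vNIC and vHBA. All class names, method signatures, and units are hypothetical.

```python
# Mock-up of the bookkeeping implied by the pitch above: per-server vNICs
# and vHBAs carved out of one HCA link, a 32-virtual-device ceiling per
# physical server, and a QoS setting on every device. Names, signatures,
# and units are hypothetical, not Xsigo's API.

class IODirectorConfig:
    MAX_VIRTUAL_DEVICES_PER_SERVER = 32   # per the spec quoted above

    def __init__(self):
        # server name -> list of (kind, device name, back-end port, QoS cap in Mbps)
        self.devices = {}

    def add_device(self, server, kind, name, backend_port, qos_mbps):
        devs = self.devices.setdefault(server, [])
        if len(devs) >= self.MAX_VIRTUAL_DEVICES_PER_SERVER:
            raise ValueError(f"{server} already has the maximum "
                             f"{self.MAX_VIRTUAL_DEVICES_PER_SERVER} virtual devices")
        devs.append((kind, name, backend_port, qos_mbps))


cfg = IODirectorConfig()
cfg.add_device("esx01", "vnic", "vnic-prod", "slot3-10gige-1", qos_mbps=2000)
cfg.add_device("esx01", "vhba", "vhba-san", "slot5-fc-2", qos_mbps=4000)
```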

The catch? $30K base up to a max config price of $300K. This isn't a play for the small end of the market. Xsigo's cost-justification numbers of 50% capital savings vs. traditional architecture make sense on paper when you include removal of edge switches and in-server gear. Can you make this pitch in your existing shop? It'd be a tough sell. Is it worth upgrading your year-old servers to an HP, Dell, or IBM InfiniBand platform? Probably not. Is it worth investigating for a planned site expansion or pending large-scale refresh? Yup. I'd do it if I could justify the case. I'd look extra hard if I were planning a big ESX rollout; vNIC and vHBA configs will follow a VMotioned VM from box to box...
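Purely as a sanity check on that 50% figure, here's the back-of-envelope shape of the math. Only the $300 HCA, the two-HCA redundancy option, the 24-servers-per-Director count, and the $30K-$300K Director range come from above; every cost in the traditional column is a placeholder I made up, and with those placeholders the savings happens to land in the same ballpark.

```python
# Back-of-envelope shape of the capital-savings pitch. Figures marked
# "assumed" are placeholders for illustration; only the $300 HCA, the
# two-HCA redundancy option, the 24-servers-per-Director limit, and the
# $30K-$300K I/O Director range come from the article.

SERVERS = 24                                  # one fully loaded I/O Director

# Traditional build, per server (all assumed):
NICS, NIC_COST = 4, 150                       # assumed GigE NICs
HBAS, HBA_COST = 2, 1000                      # assumed FC HBAs
EDGE_PORT_COST = 500                          # assumed blended LAN/SAN edge-switch port
traditional = SERVERS * (NICS * NIC_COST + HBAS * HBA_COST
                         + (NICS + HBAS) * EDGE_PORT_COST)

# Virtual I/O build: two $300 HCAs per server plus a shared I/O Director.
HCA_COST, HCAS = 300, 2
IO_DIRECTOR_COST = 60_000                     # assumed point in the $30K-$300K band
virtual_io = SERVERS * HCAS * HCA_COST + IO_DIRECTOR_COST

print(f"traditional build: ${traditional:,}")
print(f"virtual I/O build: ${virtual_io:,}")
print(f"capital savings:   {1 - virtual_io / traditional:.0%}")
```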

Xsigo's board is a who's who from Oracle, Sun, and Juniper, and the company appears to be well backed by Kleiner Perkins, Khosla Ventures, and Greylock Partners. It has been operating as a skunkworks for years, employs 100 people around the globe, and seems hungry, in the best way possible. Xsigo has only one customer site in production, with "dozens" in the channel for evaluation. If it can deliver the goods, my bet is that it will have a solid base as 2008 comes to a close.
