Alas, Poor Virtensys, I Knew Virtual I/O Horatio

I must admit I was one of those folks who was intrigued by the idea of I/O virtualization. I led sessions at conferences exploring the various ways one could connect servers and peripherals to each other. The very idea that I could provision expensive resources like RAID controllers and network connections from a shared pool seemed like a path to the flexibility I always wanted. Apparently, most of you disagreed, as at least one I/O virtualization pioneer, Virtensys, bit the dust this week. I don't think virtual I/O will ever go mainstream, so I am sticking with 10-Gbps Ethernet and iSCSI.

Of course, the whole thing brought back memories of the early days of the LAN industry, when we installed ARCnet and Cheapernet LANs so users could share expensive peripherals like hard disks and laser printers. The I/O virtualization vendors, from Aprius to Xsigo, all promised to give us access to peripherals from Ethernet and Fibre Channel ports to RAID controllers, not to mention storage and GPUs, while sharing the cost across multiple servers.

These vendors were trying to bring the promise of the PCI SIG's I/O virtualization standards to market. The PCI SIG developed standards for how multiple processes, or even multiple servers, could share resources on I/O cards. SR-IOV, the standard for sharing resources among multiple processes on a single server, has gotten a lukewarm reception in the industry, with important players like VMware still not fully supporting it. MR-IOV, which allows multiple servers to share I/O cards, never took off because the I/O card vendors realized supporting MR-IOV could mean selling fewer cards.
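
For those who haven't run into SR-IOV in practice: on a modern Linux host, SR-IOV-capable adapters expose their virtual functions (VFs) through sysfs, and a few lines of Python are enough to carve a physical NIC into several virtual ones. This is a minimal sketch, not a product recipe; the interface name "eth0" and the VF count are assumptions you'd swap for your own hardware.

    # Minimal sketch: enable SR-IOV virtual functions on a Linux NIC via sysfs.
    # Assumes an SR-IOV-capable adapter visible as "eth0"; adjust for your hardware.
    from pathlib import Path

    def enable_vfs(iface: str, num_vfs: int) -> None:
        dev = Path(f"/sys/class/net/{iface}/device")
        total = int((dev / "sriov_totalvfs").read_text())  # hardware's VF limit
        if num_vfs > total:
            raise ValueError(f"{iface} supports at most {total} VFs")
        # The kernel rejects changing a nonzero VF count directly, so reset first.
        (dev / "sriov_numvfs").write_text("0")
        (dev / "sriov_numvfs").write_text(str(num_vfs))

    if __name__ == "__main__":
        enable_vfs("eth0", 4)  # split one physical port into four virtual NICs

Each VF then appears to the OS, or to a virtual machine via PCI passthrough, as its own PCIe device, which is exactly the within-one-server resource sharing the standard promises.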

Virtensys, Aprius and NextIO all worked on building a solution that would let users put any PCIe I/O card into their I/O concentrators. Virtensys and NextIO used low-cost ($200) PCIe extension cards to connect servers to their concentrators, while Aprius moved to 10-Gbps Ethernet for the server-concentrator connection, which was neat but raised the cost of each connection.

The last remaining I/O virtualization vendor, Xsigo, kept its focus on what most customers actually needed--scalable, manageable 10-Gbps Ethernet and Fibre Channel connectivity at the right price. While it may be cool to share a RAID controller and allocate its logical drives to a group of servers, SAN technology already does that, and it also allows multiple servers to access the same volume at the same time to support clustering and VMotion.

By using 40-Gbps InfiniBand and/or 10-Gbps Ethernet for the connections to its I/O Director, Xsigo can put InfiniBand or Ethernet switches between the I/O Director and the servers. One I/O Director can support 250 servers, and a cluster of four I/O Directors can support 1,000 servers. That's a significant step up from the 16 servers Virtensys could support with a single system. NextIO similarly concentrated on just making I/O virtualization work at rack scale.

Virtensys was founded in 2006 as a spinoff from Xyratex and burned through about $40 million in venture funds over its short life. In October, Virtensys and Micron announced plans to share Micron SSDs over the Virtensys systems. Last week Micron picked up the assets, primarily intellectual property, and staff of Virtensys. While details of the deal are being kept secret, word on the street is that the purchase price was more on the order of a sack of magic beans than the $160 million the VCs would have considered a win.

Rumors also indicate that Aprius has been absorbed by Fusion-io for a song. I tried contacting the folks I've worked with at Virtensys and Aprius, but have gotten no response.

While losing two of the four players isn't a good sign for those that remain, there is a market for their gear at telcos, hosting providers and other organizations that run large, highly virtualized environments with a high rate of change. Hopefully Micron will come up with a PCIe SSD sharing system. Till then, it's 10-Gbps Ethernet and iSCSI for me.

Disclosure: I've followed all the companies mentioned here for a few years. I'm sure drinks, meals and promotional tchotchkes were involved, but that is the extent of business I have done with them.