Alas, Poor Virtensys, I Knew Virtual I/O Horatio


Howard Marks

January 25, 2012


I must admit I was one of those folks who was intrigued by the idea of I/O virtualization. I led sessions at conferences exploring the various ways one could connect servers and peripherals to each other. The very idea that I could share expensive resources like RAID controllers and network connections from a shared pool seemed like a path to the flexibility I always wanted. Apparently, most of you disagreed, as at least one I/O virtualization pioneer, Virtensys, bit the dust this week. I don't think virtual I/O will ever go mainstream, so I am sticking with 10-Gbps Ethernet and iSCSI.

Of course, the whole thing brought back the early days of the LAN industry, when we installed ARCnet and Cheapernet LANs so users could share expensive peripherals like hard disks and laser printers. The I/O virtualization vendors, from Aprius to Xsigo, all promised to give us shared access to peripherals, from Ethernet and Fibre Channel ports to RAID controllers, storage and GPUs, while spreading the cost across multiple servers.

These vendors were trying to bring the promise of the PCI SIG's I/O virtualization standards to market. The PCI SIG developed standards for how multiple processes, or even multiple servers, could share resources on I/O cards. SR-IOV, the standard for sharing resources among multiple processes on a single server, has gotten a lukewarm reception in the industry, with important players like VMware still not fully supporting it. MR-IOV, which allows multiple servers to share I/O cards, never took off because the I/O card vendors realized that supporting MR-IOV could mean selling fewer cards.

Virtensys, Aprius and NextIO all worked on building a solution that would let users put any PCIe I/O card into their I/O concentrators. Virtensys and NextIO used low-cost ($200) PCIe extension cards to connect servers to their concentrators, while Aprius used 10-Gbps Ethernet for the server-to-concentrator connection, which was neat but raised the cost of each connection.

The last remaining I/O virtualization vendor, Xsigo, has kept its focus on what most customers actually needed: scalable, manageable 10-Gbps Ethernet and Fibre Channel connectivity at the right price. While it may be cool to share a RAID controller and allocate its logical drives to a group of servers, SAN technology already does that, and it allows multiple servers to access the same volume at the same time to support clustering and VMotion.

By using 40-Gbps InfiniBand and/or 10-Gbps Ethernet for the connections to its I/O Director, Xsigo can put InfiniBand or Ethernet switches between the I/O Director and the servers. One I/O Director can support 250 servers, and a cluster of four I/O Directors can support 1,000 servers. That's a significant step up from the 16 servers Virtensys could support with a single system. NextIO similarly concentrated on just making I/O virtualization work at rack scale.

Virtensys was founded in 2006 as a spinoff from Xyratex and burned through about $40 million in venture funds over its short life. In October, Virtensys and Micron announced plans to share Micron SSDs over the Virtensys systems. Last week Micron picked up the assets, primarily intellectual property, and staff of Virtensys. While details of the deal are being kept secret, word on the street is that the purchase price was more on the order of a sack of magic beans than the $160 million the VCs would have considered a win.

Rumors also indicate that Aprius has been absorbed by Fusion-io for a song. I tried contacting the folks I've worked with at Virtensys and Aprius but have gotten no response.

While losing two of the four players isn't good for those that remain, there is a market for their gear at telcos, hosting providers and other organizations that run large, highly virtualized environments with a high rate of change. Hopefully, Micron will come up with a PCIe SSD sharing system. Till then, it's 10-Gbps Ethernet and iSCSI for me.

Disclosure: I've followed all the companies mentioned here for a few years. I'm sure drinks, meals and promotional tchotchkes were involved, but that is the extent of business I have done with them.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
