
What's Right And Wrong About IOV On The Motherboard: Page 2 of 2

The cost savings of Xsigo's implementation of RoCEE at the server are offset at the expansion chassis, where IT will need to run custom modules rather than generally available PCI cards. Though Xsigo generally doesn't like to talk about pricing for its IOV Director modules, the solution is seen as being on the costlier side.

What the industry needs to do is locate all standard PCI boards in external card cages. By doing so, server manufacturers could eliminate all expansion slots in their servers, enabling much thinner, cooler servers while dramatically increasing server density in the rack. What's more, IOV deployments would become more cost-effective, since organizations could use off-the-shelf cards with no changes to drivers or silicon.

Aprius claims its technology will enable such ultra-thin servers today. With Aprius, an adapter in the server encapsulates PCIe in Ethernet frames, which are sent to external card cages housing off-the-shelf PCIe cards. You get the benefit of IOV on the motherboard without the high costs associated with proprietary hardware.
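To make the encapsulation idea concrete, here is a minimal sketch of wrapping a PCIe Transaction Layer Packet (TLP) in an Ethernet frame. Aprius has not published its wire format, so the EtherType (0x88B5 is an IEEE experimental value), the header layout, and the function names here are assumptions for illustration, not the actual protocol.

```python
import struct

# Assumed EtherType for the tunnel; 0x88B5 is reserved for local experimental
# use, not an Aprius-registered value.
PCIE_OVER_ETH_ETHERTYPE = 0x88B5

def encapsulate_tlp(dst_mac: bytes, src_mac: bytes, tlp: bytes) -> bytes:
    """Wrap a raw PCIe TLP in an Ethernet II frame (hypothetical framing)."""
    header = dst_mac + src_mac + struct.pack("!H", PCIE_OVER_ETH_ETHERTYPE)
    frame = header + tlp
    # Ethernet requires a minimum 60-byte frame before the 4-byte FCS,
    # so short TLPs get zero-padded.
    if len(frame) < 60:
        frame += b"\x00" * (60 - len(frame))
    return frame

def decapsulate_tlp(frame: bytes, tlp_len: int) -> bytes:
    """Recover the TLP at the card cage (TLP length carried out of band here)."""
    assert struct.unpack("!H", frame[12:14])[0] == PCIE_OVER_ETH_ETHERTYPE
    return frame[14:14 + tlp_len]

# A 16-byte TLP round-trips through the encapsulation unchanged.
tlp = bytes(range(16))
frame = encapsulate_tlp(b"\xaa" * 6, b"\xbb" * 6, tlp)
assert decapsulate_tlp(frame, len(tlp)) == tlp
```

The point of the sketch is that the server-side adapter and the card cage only need to agree on a framing convention; the PCIe traffic itself rides over commodity Ethernet, which is what lets the cage hold unmodified, off-the-shelf cards.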

The problem is that while Xsigo's solution might be expensive, Aprius has yet to ship product. There are other players--Next I/O and Virtensys, for example--but their solutions tend to be based on the Multi-Root IOV (MR-IOV) standard, which never took off. Without an affordable SR-IOV solution based on Ethernet that leverages the investment in existing host resources, IOV will continue to play best in very large VMware implementations or in greenfield installations. That's a shame, because the benefits of IOV mean this technology has so much more to offer.