Are InfiniBand's Days Numbered?

The emergence of RDMA over Ethernet endangers InfiniBand's future as a storage interface.

Jim O'Reilly

February 10, 2014


InfiniBand (IB) has been something of an anomaly for years now. While it's clearly the fastest storage interface, it's supplied mainly by Mellanox, with Intel providing some products. The reason is that IB serves a few niches where ultimate performance and low latency matter, such as high-performance computing (HPC) and financial systems. The broader market is served by Fibre Channel, which in turn is gradually being supplanted by Ethernet on cost and performance grounds.

IB technology leads the storage industry, with 56Gb/sec links available that outpace Fibre Channel's 16Gb/sec product by a wide margin. InfiniBand is priced fairly aggressively against Fibre Channel, but the latter's entrenched position a few years back, coupled with IB's ease-of-deployment issues, slowed IB's acceptance in the market.

With HPC, financial systems, and big data growing at good rates, it would seem that IB should enjoy a glowing future, but there are dark clouds on the horizon.

IB gets its performance from Remote Direct Memory Access (RDMA), which allows an application to write data directly into the memory of the target system, bypassing the intermediate memory copies and I/O stack software that slow conventional transfers. That technology is now appearing in the Ethernet market, provided by Chelsio and, surprisingly, Mellanox itself.
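To make the mechanism concrete, here is a minimal sketch of the data-path side of an RDMA write through the libibverbs API, the common verbs interface used by IB and Ethernet RDMA alike. Connection setup and the exchange of the remote buffer's address and rkey, normally handled via rdma_cm, are assumed to have happened already; the helper function and its parameters are illustrative, not from any particular product.

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Write 'len' bytes from a locally registered buffer directly into the
 * remote host's memory. The remote CPU and its I/O stack never touch
 * the transfer: the NIC places the data at remote_addr on its own. */
int rdma_write_example(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
                       size_t len, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,  /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,        /* key from ibv_reg_mr() */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE; /* one-sided write */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED; /* ask for a completion */
    wr.wr.rdma.remote_addr = remote_addr;       /* learned at connect time */
    wr.wr.rdma.rkey        = rkey;

    /* Hand the work request to the NIC; the completion is later reaped
     * from the completion queue with ibv_poll_cq(). */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```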

Chelsio and Mellanox have rival architectures: Mellanox's RoCE (RDMA over Converged Ethernet) requires a lossless Ethernet infrastructure, while Chelsio's iWARP runs over ordinary TCP/IP, though RoCE may perform slightly faster. Both Ethernet solutions are available in 40Gb/sec configurations; Mellanox also supports a 56Gb/sec capability.
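One reason the rivalry matters less to applications than it might seem: InfiniBand, RoCE, and iWARP all present the same verbs API, so software written for one transport generally runs over the others. The short listing below (buildable with -libverbs on a Linux host with libibverbs installed) simply enumerates the RDMA devices present and the transport each reports; note that RoCE adapters report the IB transport type, since RoCE carries the InfiniBand transport layer over Ethernet.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        /* Same API regardless of wire; only the reported transport differs. */
        const char *transport =
            devs[i]->transport_type == IBV_TRANSPORT_IB    ? "IB/RoCE" :
            devs[i]->transport_type == IBV_TRANSPORT_IWARP ? "iWARP"   :
                                                             "unknown";
        printf("%-16s %s\n", ibv_get_device_name(devs[i]), transport);
    }
    ibv_free_device_list(devs);
    return 0;
}
```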

Performance benchmarking indicates that Ethernet RDMA performs close enough to IB that the advantage enjoyed by IB in the past is negated.

Both Ethernet solutions at 40Gb/sec are relatively new, though both companies have years of experience in RDMA designs. The issue is projecting whether we will see Ethernet take business from IB, or if IB will sail serenely onward.

[Read Jim O'Reilly's thoughts on the rise of 10GbE in "Will 2014 Be The Year Of 10 Gigabit Ethernet?"]

Mellanox is firmly on both sides of the debate, with a versatile solution that can be tailored to either technology; it should thrive whichever way the decision goes. At the same time, Mellanox's recent emphasis on Ethernet and its creation of RoCE indicate the company's sense of the market.

With a version of Fibre Channel over Ethernet (FCoE) and cheap alternatives such as iSCSI providing good performance for block I/O, the SAN space is moving to Ethernet, and Fibre Channel looks to be fading. Object storage, again built on Ethernet interfaces, is also expanding massively. The net result is that in about five years, Ethernet will be the dominant storage interface by a wide margin.

That isolates IB, and the fact that Ethernet is now close in performance and latency removes IB's strongest raison d'être. Ethernet should enjoy the benefits of huge volumes, and has the advantage of many vendors driving its evolution. As a result, IB also looks to be a candidate for Ethernet convergence.

The next step in the Ethernet community appears to be 100Gb/sec using four connections per link (four 25Gb/sec lanes). I suspect IB will have a product with those parameters, but the cost of PHY development likely means that there won't be a "140Gb/sec" IB product -- no repeat, in other words, of the 1.4x lane-speed edge that 56Gb/sec IB holds over 40Gb/sec Ethernet today. This is probably the cross-over point where Ethernet becomes preferable to IB.

The Fibre Channel folks are talking up 32Gb/sec and beyond, but on their schedule that will arrive in maybe two years, and it will still be slower than today's 40Gb/sec Ethernet and IB. To get to anything like parity, Fibre Channel has to go to multiple connections per link, and that is even further out. And even when it gets there, there's that pesky RDMA advantage that both Ethernet and IB enjoy. Enough said.

There will still be IB (and Fibre Channel) around a decade from now, since the industry is surprisingly conservative and technologies have passionate adherents, but the writing is on the wall, and Ethernet looks likely to own storage at some point.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
