Are InfiniBand's Days Numbered?

By Jim O'Reilly, Commentary

The emergence of RDMA over Ethernet endangers InfiniBand's future as a storage interface.

InfiniBand (IB) has been something of an anomaly for years now. Though it is clearly the fastest storage interface, it is supplied almost entirely by Mellanox, with Intel providing some products. The reason is that IB serves a few niches where ultimate performance and low latency matter, such as high-performance computing (HPC) and financial systems. The broader market is served by Fibre Channel, which is gradually being supplanted by Ethernet on cost and performance grounds.

IB leads the storage industry in link speed, with 56Gb/sec products available that outpace Fibre Channel's 16Gb/sec offering by a wide margin. InfiniBand is priced fairly aggressively against Fibre Channel, but the latter's entrenched position, coupled with IB's ease-of-deployment issues, slowed IB's acceptance in the market a few years back.

With HPC, financial systems, and big data growing at good rates, it would seem that IB should enjoy a glowing future, but there are dark clouds on the horizon.

IB gets its performance from Remote Direct Memory Access (RDMA), which allows an application to write data directly into the memory of the target system, with no intermediate in-memory copies and no I/O stack software to slow it down. That technology is now appearing in the Ethernet market, provided by Chelsio and, surprisingly, Mellanox itself.
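
To make that concrete, here is a minimal sketch of an RDMA write as it looks through the verbs API (libibverbs) used by IB and, in large part, by the Ethernet RDMA variants as well. The function name rdma_write_example is illustrative only, and all connection setup is assumed to have happened already: the queue pair qp is connected, and the peer's buffer address and rkey have been exchanged out of band.

```c
/* Minimal sketch of an RDMA write via libibverbs. Assumes a connected
 * queue pair and an out-of-band exchange of the peer's address/rkey;
 * error handling and completion polling are reduced to the essentials. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       void *buf, size_t len,
                       uint64_t remote_addr, uint32_t remote_rkey)
{
    /* Register the local buffer so the adapter can DMA directly from it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    /* RDMA_WRITE: the adapter places the payload straight into the remote
     * node's memory. No remote CPU involvement, no intermediate copies. */
    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = remote_rkey;

    /* The caller should poll the completion queue before reusing or
     * deregistering the buffer (ibv_poll_cq on the QP's send CQ). */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```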

Chelsio and Mellanox have rival architectures. Mellanox's RoCE (RDMA over Converged Ethernet) requires a lossless Ethernet infrastructure, but may perform slightly faster than Chelsio's iWARP, which runs over ordinary TCP/IP and so tolerates lossy networks. Both Ethernet solutions are available in 40Gb/sec configurations; Mellanox also supports a 56Gb/sec capability.
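
A point worth noting is that RoCE and iWARP present essentially the same verbs API as native IB, so RDMA applications port between the transports with little change. The short program below, a sketch against the standard libibverbs interface, lists each RDMA device in a system and reports whether it is iWARP, native InfiniBand, or RoCE (verbs over an Ethernet link layer); for simplicity it assumes single-port adapters.

```c
/* Lists the RDMA devices on a host and classifies each transport.
 * Build with: cc devlist.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        const char *kind = "unknown";

        if (list[i]->transport_type == IBV_TRANSPORT_IWARP) {
            kind = "iWARP";
        } else if (list[i]->transport_type == IBV_TRANSPORT_IB &&
                   ibv_query_port(ctx, 1, &port) == 0) {
            /* "IB" transport running over an Ethernet link layer is RoCE. */
            kind = (port.link_layer == IBV_LINK_LAYER_ETHERNET)
                       ? "RoCE" : "InfiniBand";
        }

        printf("%s: %s\n", ibv_get_device_name(list[i]), kind);
        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```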

Performance benchmarking indicates that Ethernet RDMA performs close enough to IB that the advantage enjoyed by IB in the past is negated.

Both 40Gb/sec Ethernet solutions are relatively new, though both companies have years of experience in RDMA design. The question is whether Ethernet will take business from IB, or whether IB will sail serenely onward.

[Read Jim O'Reilly's thoughts on the rise of 10GbE in "Will 2014 Be The Year Of 10 Gigabit Ethernet?"]

Mellanox is firmly on both sides of the debate, with a versatile solution that can be tailored to either technology, so it should thrive whichever way the market goes. At the same time, Mellanox's recent emphasis on Ethernet and its creation of RoCE indicate the company's sense of where the market is heading.

With Fibre Channel over Ethernet (FCoE) and cheap alternatives such as iSCSI providing good block-I/O performance, the SAN space is moving to Ethernet, and Fibre Channel looks to be fading. Object storage, which also uses Ethernet interfaces, is expanding massively as well. The net result is that in about five years, Ethernet should be the strongly dominant storage interface.

That isolates IB, and the fact that Ethernet is now close in performance and latency removes IB's strongest raison d'être. Ethernet should enjoy the benefits of huge volumes, and it has the advantage of many vendors driving its evolution. As a result, IB also looks to be a candidate for Ethernet convergence.

The next step in the Ethernet community appears to be 100Gb/sec, using four 25Gb/sec lanes per link. I suspect IB will have a product with those parameters, since it can ride the same lane technology, but the cost of developing faster PHYs likely means there won't be a "140Gb/sec" IB product to restore IB's lead. This is probably the crossover point where Ethernet becomes preferable to IB.

The Fibre Channel folks are talking up 32Gb/sec and beyond, but on their schedule that will deliver, in maybe two years, a product still slower than today's 40Gb/sec Ethernet and IB. To reach anything like parity with Ethernet and IB, Fibre Channel would have to move to multiple connections per link, and that is even further out. And even when it gets there, there's that pesky RDMA advantage that both Ethernet and IB enjoy. Enough said.

There will still be IB (and Fibre Channel) around a decade from now, since the industry is surprisingly conservative and technologies have passionate adherents, but the writing is on the wall, and Ethernet looks likely to own storage at some point.

Comments
Tier1Storage, 2/19/2014:
Jim, nice overview of the market, protocols, and RDMA. The Mellanox VPI adapters and switches already support both 40Gb Ethernet and FDR 56Gb InfiniBand. Also my personal observation is that in big clouds, everything is Ethernet or InfiniBand--no Fibre Channel. This is because only Ethernet and InfiniBand allow a large scale converged network.
Disclosure: I work for Mellanox Technologies.
John Lauro, 2/16/2014:
56Gb IB and 40GbE share a lot of components, so lower prices for 40GbE will likely lower the price of IB too. The silicon is lighter weight on IB (and much lighter weight than an Ethernet TOE), so Ethernet is likely not to undercut IB on price, only to catch up with it.
joreilly925, 2/13/2014:
@soldack, I expect 10G and 40G kit prices to fall drastically over the next two years. At some point they'll cross over and undercut IB by a large margin. IB and RoCE TOEs will be competing against native 10G on the motherboard, and that will move the equation in favor of Ethernet (a $10 board feature versus a $400 TOE).
soldack, 2/12/2014:
What about switch cost and performance? This is where IB has really beaten up Ethernet and FC. The Ethernet vendors are just getting into handling arbitrary fat-tree topologies, which IB subnet managers and switches handle very well. iSCSI is a non-starter for anything that needs performance unless you use an offload device to help it out or are willing to give up a lot of host CPU; anything that depended on FC probably could not take the performance and CPU hit of iSCSI without an offload board. For Ethernet to take over, switch and adapter prices have to come down, and there needs to be more Ethernet-attached storage. That is how FC has hung on so long: it runs all the way from host adapters to switches to the disk arrays to the drives themselves. IB has some of that; Ethernet has even less than IB here.
kmarko, 2/11/2014:
Yes, it's called iWARP and implemented in Intel's server adapters.
http://www.intel.com/content/w...
MarciaNWC, 2/10/2014:
Hi Jim -- Has Intel made any moves in the Ethernet space similar to Mellanox?