Mellanox on FCoE

Mellanox believes there are two technologies designed for convergence: Ethernet and InfiniBand

Frank Berry

April 20, 2009


FCoE and CEE represent the unification of networking technology, and what is shaping up to be a chaotic convergence of three vendor communities that previously served the neatly ordered data, storage, and server networking markets. I recently asked three questions to find out whether there is consensus in the industry about the adoption of FCoE. Today we look at the response from Mellanox, the InfiniBand pioneer and dominant market leader, and one of the early technology leaders in the race for converged networking using 10-Gbit/s CEE.

Is convergence on FCoE going to happen? Yes, but Mellanox believes there is not one but two technologies designed for convergence: Ethernet and InfiniBand. John Monson, vice president of marketing for Mellanox, deftly explained how FCoE is designed to deliver on the promise of convergence because data centers can support most applications at 10-Gbit/s fabric speeds, and most applications are not latency-sensitive. And Monson made clear that Mellanox expects capabilities such as RDMA for low-latency Ethernet to be added to CEE.
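For readers who want a feel for what RDMA programming looks like on Mellanox adapters, here is a minimal sketch of my own (not Mellanox sample code) using the open-source libibverbs API from the OFED stack. It simply enumerates RDMA-capable devices and reports the state, link width, and link speed of each port:

    /* Minimal sketch: list RDMA-capable devices via libibverbs (OFED stack).
       Build with: gcc -std=c99 -o list_rdma list_rdma.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        if (!dev_list || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(dev_list[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr dev_attr;
            if (ibv_query_device(ctx, &dev_attr) == 0) {
                printf("%s: %d port(s)\n",
                       ibv_get_device_name(dev_list[i]), dev_attr.phys_port_cnt);

                /* Report the active link parameters of every physical port. */
                for (int p = 1; p <= dev_attr.phys_port_cnt; p++) {
                    struct ibv_port_attr port_attr;
                    if (ibv_query_port(ctx, p, &port_attr) == 0)
                        printf("  port %d: state=%d width=%d speed=%d\n",
                               p, port_attr.state,
                               port_attr.active_width, port_attr.active_speed);
                }
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(dev_list);
        return 0;
    }

The same verbs interface is used whether the transport underneath is InfiniBand or, eventually, a low-latency RDMA service running over CEE.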

But Monson was quick to point out that some fundamentals are changing and will lead to the use of InfiniBand as a converged fabric. For example, Mellanox sees the average number of cores per server growing to 32, and the use of substantially denser compute nodes will move the bottleneck from the server to the network. In data centers running I/O-intensive applications, such as large database-backed CRM systems or latency-sensitive financial applications, Mellanox argues that only InfiniBand will deliver the converged network performance needed to support these environments.
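To make that bottleneck argument concrete, here is a quick back-of-the-envelope sketch in C. The per-core I/O figure is my own assumption for illustration, not a Mellanox number; the link rates are the nominal 10-Gbit/s Ethernet and 40-Gbit/s QDR InfiniBand rates:

    /* Back-of-the-envelope: aggregate I/O demand of a dense server vs. link rates.
       The per-core demand below is an assumed figure for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        const double cores_per_server   = 32.0;  /* projected core count        */
        const double gbps_per_core      = 0.5;   /* assumed I/O demand per core */
        const double ethernet_link_gbps = 10.0;  /* 10-Gbit/s CEE link          */
        const double infiniband_gbps    = 40.0;  /* QDR InfiniBand link         */

        double demand = cores_per_server * gbps_per_core;

        printf("aggregate demand per server: %.0f Gbit/s\n", demand);
        printf("10-GbE links needed:         %.1f\n", demand / ethernet_link_gbps);
        printf("QDR InfiniBand links needed: %.1f\n", demand / infiniband_gbps);
        return 0;
    }

Even with that modest per-core assumption, a 32-core node saturates a single 10-Gbit/s pipe while fitting comfortably inside one QDR InfiniBand link.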

My opinion: Mellanox is one of the first vendors to develop CEE-based networks and has a head start on many other vendors in terms of RDMA technology and real-world experience with its products. I also believe that InfiniBand offers four times more bandwidth and 10 times lower latency than 10-Gbit/s Ethernet. If the economy hasn't killed the budgets of the big-spending financial institutions and industrials, there will be a place for the distinctly superior performance of converged InfiniBand fabrics in enterprise data centers that deploy clusters, support the biggest and baddest transaction-oriented applications, and use IT to deliver millions in competitive advantage for their companies.

How is Mellanox positioned to support the transition to converged fabrics? Monson explained that the company strongly believes in I/O-agnostic fabric convergence, and its products are designed to support unified data, storage, and server networking on InfiniBand as well as Ethernet. My opinion is that Mellanox must keep working with leading users in the enterprise data center and keep moving quickly to leverage its strong OEM relationships in order to establish a presence beyond high-performance technical computing and into high-performance business computing. I believe enterprise data centers can use what may be the most advanced converged fabric gateway, based on its ability to provide extremely low-latency bridging between InfiniBand, Ethernet, and Fibre Channel. Mellanox ConnectX and BridgeX products support useful Data Center Bridging enhancements that are not yet widely available, such as priority flow control, enhanced transmission selection, and congestion management.

Mellanox expects it will take some time for converged storage networks to become mainstream. Monson added that Mellanox expects Fibre Channel and iSCSI to make up the majority of customer SAN infrastructures, and he reminded me that Enhanced Ethernet is still going through the standards process; it takes a number of years for a standard to be widely implemented, especially in the very conservative storage market.

I'm intrigued both by the potential of Mellanox to gain a foothold in the mainstream Ethernet market and by the potential of InfiniBand as a second converged fabric. The technology has a big performance advantage and a foothold in technical computing, plus the new million-IOPS SSDs and incredibly dense multi-core servers need a faster interconnect. I say InfiniBand is a technology player in converged networks with a role somewhere between a high-end HPC niche and the mainstream enterprise data center.
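For the curious, here is one more minimal sketch of my own (not a Mellanox API) showing how a Linux application can tag its traffic with an 802.1p-style priority using the standard SO_PRIORITY socket option; a DCB-capable NIC or switch configured for priority flow control can then map that priority into a lossless traffic class. The priority value of 3 is an assumption chosen for illustration:

    /* Minimal sketch: tag a socket's traffic with a priority so a PFC-enabled
       fabric can place it in a lossless class.  The value 3 is illustrative;
       the actual priority-to-class mapping lives in the NIC/switch DCB config. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        int prio = 3; /* priority class this flow should be scheduled into */
        if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0) {
            perror("setsockopt(SO_PRIORITY)");
            close(fd);
            return 1;
        }

        printf("socket priority set to %d\n", prio);
        close(fd);
        return 0;
    }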

Frank Berry is CEO of IT Brand Pulse, a company that surveys enterprise IT managers about their perceptions of vendors and their products. Berry is a 30-year veteran of the IT industry, including senior executive positions with QLogic and Quantum.

InformationWeek Analytics has published an independent analysis of the challenges around enterprise storage. Download the report here (registration required).

