Network Computing is part of the Informa Tech Division of Informa PLC

Mellanox on FCoE

FCoE and CEE represent the unification of networking technology and what is going to be a chaotic convergence of three vendor communities that previously served neatly ordered data, storage, and server networking markets. I recently asked three questions to find out if there is consensus in the industry about the adoption of FCoE. Today we look at the response from Mellanox, the InfiniBand technology pioneer, dominant InfiniBand market leader, and one of the early technology leaders in the race for converged networking using 10-Gbit/s CEE.

Is convergence on FCoE going to happen? Yes, but Mellanox believes there is not one but two technologies designed for convergence: Ethernet and InfiniBand. John Monson, vice president of marketing for Mellanox, deftly explained that FCoE is designed to deliver on the promise of convergence because data centers can support most applications with 10-Gbit/s fabric speed, and most applications are not latency-sensitive. Monson also made clear that Mellanox expects capabilities such as RDMA for low-latency Ethernet to be added to CEE.

But Monson was quick to point out that some fundamentals are changing and will lead to the use of InfiniBand as a converged fabric. For example, Mellanox sees the average number of cores per server growing to 32. Substantially denser compute nodes will move the bottleneck from the server to the network. In data centers running I/O-intensive applications, such as large database-backed CRM or latency-sensitive financial applications, only InfiniBand will deliver the converged network performance needed to support these environments.

My opinion: Mellanox is one of the first vendors to develop CEE-based networks and has a head start on many other vendors in RDMA technology and real-world experience with its products. I also believe that InfiniBand offers four times more bandwidth and 10 times lower latency than 10-Gbit/s Ethernet. If the economy hasn't killed the budgets of the big-spending financial institutions and industrials, there will be a place for the distinctly superior performance of converged InfiniBand fabrics in enterprise data centers that deploy clusters, support the biggest and baddest transaction-oriented applications, and use IT to deliver millions in competitive advantage for their companies.
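To put those ratios in rough numbers, here is a back-of-the-envelope sketch. It assumes representative figures, not measured results: QDR InfiniBand at 40-Gbit/s signaling with 8b/10b encoding (about 32 Gbit/s of usable data rate) and illustrative end-to-end latencies of roughly 1 microsecond for InfiniBand versus 10 microseconds for 10-Gbit/s Ethernet; real numbers vary by adapter, switch, and workload.

```python
# Back-of-the-envelope fabric comparison using assumed, representative
# figures; actual latency and throughput depend on the adapter and switch.
GBIT = 1e9

fabrics = {
    # name: (usable data rate in bit/s, assumed one-way latency in seconds)
    "10GbE":          (10 * GBIT, 10e-6),  # assumed ~10 us end-to-end
    "QDR InfiniBand": (32 * GBIT, 1e-6),   # 40 Gbit/s signaling, 8b/10b -> ~32 Gbit/s
}

message_bytes = 1_000_000  # a 1-MB transfer

for name, (rate, latency) in fabrics.items():
    # total time = fixed latency + serialization time for the payload
    transfer_s = latency + (message_bytes * 8) / rate
    print(f"{name:>15}: {transfer_s * 1e6:8.1f} us for 1 MB")
```

Under these assumptions the 1-MB transfer is dominated by serialization time, where the bandwidth advantage shows up directly; for small messages the fixed latency dominates instead, which is where the 10x latency gap matters for trading-style workloads.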

How is Mellanox positioned to support the transition to converged fabrics? Monson explained that the company strongly believes in I/O-agnostic fabric convergence, and its products are designed to support unified data, storage, and server networking on InfiniBand as well as Ethernet. My opinion is that Mellanox must keep working with leading users in the enterprise data center and keep moving quickly to leverage its strong OEM relationships in order to establish a presence beyond high-performance technical computing and into high-performance business computing. I believe enterprise data centers can use what may be the most advanced converged fabric gateway, based on its ability to provide extremely low-latency bridging between InfiniBand, Ethernet, and Fibre Channel. Mellanox ConnectX and BridgeX products support useful Data Center Bridging enhancements that are not yet widely available, such as priority flow control, enhanced transmission selection, and congestion management.
