Ten Gigabit Ethernet finally hit its stride in 2013, and waves of innovation are rapidly advancing the technology. The new quad-lane 40 Gigabit Ethernet spreads data over four 10 Gigabit links, supporting applications such as big data and mobile broadband.
These products are beyond the research stage and can be applied across the entire infrastructure. Forty Gigabit Ethernet is available in copper and low-cost short-haul fiber versions, making it ideal for inter-switch backbones and for connecting fast storage appliances such as all-flash arrays.
The new Ethernet spectrum includes RDMA support in the 40 Gigabit Ethernet family. This means Ethernet can now match much of the performance edge InfiniBand has enjoyed for years. Admittedly, InfiniBand still has some advantages over Ethernet and may be the optimal choice for users pushing the limits. Mellanox has capitalized on this with a software-definable NIC/switch combination that can run either protocol, allowing users to compare operation in both environments.
Protocol-wise, Ethernet already hosts a Fibre Channel derivative, FCoE, as well as iSCSI; both provide an alternative to Fibre Channel itself, and iSCSI already runs on 40 Gigabit Ethernet. Other storage protocols run natively on Ethernet, including FTP, NAS, and object storage.
Fibre Channel, on the other hand, has been losing ground to Ethernet-compatible alternatives. Fibre Channel drives are no longer available in the mainstream. A protocol that once went all the way through an array to the drive now ends at the inlet to the array box.
In the connectivity stakes, the Fibre Channel committee chose a doubling strategy, which at the time allowed it to leapfrog Ethernet in performance. We had 4 Gigabit Fibre Channel when Ethernet could muster only 1 Gigabit. But the rules of the game have changed: Fibre Channel and Ethernet are now tied to the same physical-layer structures, simply because developing, building, and testing a new physical layer is hugely expensive.
Here, the doubling idea works against Fibre Channel. Eight Gigabit Fibre Channel hit the market a year after 10 Gigabit Ethernet, and 16 Gigabit Fibre Channel is just getting going. Ethernet, on the other hand, has leapt forward to 40 Gigabit, which has already gone mainstream.
Fibre Channel's answer is a 32 Gigabit link speed. QLogic says it will release NICs in 2015, while switches will have to wait until 2016. That amounts to general availability of working product in late 2016, giving 40 Gigabit Ethernet a four-year lead.
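The roadmap gap described above can be sketched with a bit of arithmetic. The year figures below are inferred from this article's claims (40 Gigabit Ethernet shipping well before its 2013 mainstream status, 32 Gigabit Fibre Channel generally available in late 2016) and are illustrative only, not official roadmap dates:

```python
# Illustrative roadmaps: year of availability -> link speed in Gbit/s.
# Fibre Channel doubles each generation; Ethernet jumped from 10G to 40G.
# Years are rough inferences from this article's claims, not official dates.
fibre_channel = {2005: 4, 2008: 8, 2013: 16, 2016: 32}
ethernet = {2004: 10, 2012: 40}

# First year each fabric offers at least 32 Gbit/s on a link.
fc_year = min(y for y, s in fibre_channel.items() if s >= 32)
eth_year = min(y for y, s in ethernet.items() if s >= 32)

print(f"Fibre Channel reaches 32G+ in {fc_year}, Ethernet in {eth_year}")
print(f"Ethernet lead: {fc_year - eth_year} years")  # the four-year lead
```

Under these assumed dates, the calculation reproduces the four-year lead the article cites for 40 Gigabit Ethernet.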
Ethernet isn't standing still. Single-link native 40 Gigabit and 100 Gigabit products are already in testing, and a multi-lane 100 Gigabit product is available. The near-term 100 Gigabit landscape is still somewhat confused, however, with a number of incompatible alternatives on the market, which suggests the market needs time to stabilize; serious volumes won't ship until 2015.
To add to the complexity of comparison, a four-lane 128 Gigabit Fibre Channel is mooted for the 2016 timeframe, although it may be pushed into 2017.
All of this puts Ethernet in the catbird seat. A four-year lead at the 40 Gigabit level, and at least two years at 100 Gigabit speeds, clearly favors industry convergence on Ethernet. InfiniBand and RDMA-capable Ethernet are picking off the highest-performance use cases, while the cloud, unified storage, and the move away from block I/O also favor Ethernet.
Ethernet wins on schedule, cost, and performance. It's also far easier to manage than a traditional SAN, which requires specially trained SAN technicians. Fibre Channel will be on the defensive in a very tough fight for the market over the next few years. It's not down, and certainly not out, but Ethernet is giving it a heck of a pummeling, with no end in sight.