Fibre Channel Just Can't Keep Up

All-flash storage arrays are now on the market and in great demand by cloud data centers. Many deliver in the million-IOPS range, with several gigabytes per second of streaming performance. What's more, given the pace of flash technology evolution, we can expect double that performance within a year.

This sort of performance is putting a huge strain on SAN infrastructure. The preferred fabric has been 8Gbit/s Fibre Channel, but we are now looking at fabrics that must run at more than four times that speed just to keep up.
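
A quick back-of-envelope calculation shows the size of that gap. The 8b/10b line coding of 8Gbit/s Fibre Channel is a standard figure; the 4KB transfer size and treating the array as sustaining a full million IOPS are illustrative assumptions:

```python
# Back-of-envelope: can 8Gbit/s Fibre Channel keep up with a million-IOPS array?
# The 4KB I/O size is an illustrative assumption, not a vendor figure.

iops = 1_000_000              # small-block performance claimed for the array
io_size = 4 * 1024            # assume 4KB transfers

needed_gbps = iops * io_size * 8 / 1e9
print(f"Bandwidth needed:  {needed_gbps:.1f} Gbit/s")      # ~32.8 Gbit/s

# 8GFC runs at 8.5GBaud with 8b/10b coding, so 80% of the baud rate is data.
fc8_gbps = 8.5 * 0.8
print(f"One 8GFC link:     {fc8_gbps:.1f} Gbit/s")         # 6.8 Gbit/s
print(f"Links to keep up:  {needed_gbps / fc8_gbps:.1f}")  # ~4.8 links
```

Call it roughly five saturated 8GFC links per array before any fabric contention, which is why "more than four times" is, if anything, conservative.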

It’s worth considering the interfaces on these new boxes. The Violin array, for instance, offers eight ports of 10Gig Ethernet, with options for eight ports of 40Gig InfiniBand or four x8 PCIe 2.0 connections. 40Gig Ethernet can’t be far behind.

That prompts a question: Is Fibre Channel being left in the dust?

It’s worth a brief look at the technologies at this point. The physical interfaces of all of these fabrics share a great deal, using many of the same elements, from cables and fiber-optic links to connectors, driver circuits, and signal shaping. In many ways, we’ve reached the point of multiple protocols sharing the same bottom layer of the OSI stack. However, the narrowness of the supplier base, especially in Fibre Channel, means that pricing differs widely, and upper-layer features are evolving at different rates.

One other implication of the hardware "sameness" is that any speed differences lie in the protocol stacks involved. These have all grown more efficient over several generations, and the result is only a few percent of difference among the protocols in delivered streaming or IOPS rates.
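
As a rough illustration of why the framing itself accounts for so little, compare nominal wire overheads. The header sizes below come from the respective specs, but delivered rates depend heavily on the software stack, so treat this as a sketch rather than a benchmark:

```python
# Nominal framing efficiency: payload bytes delivered per byte on the wire.
# Header sizes are per-spec; real throughput also depends on the software stack.

fc_payload = 2112                         # max Fibre Channel frame payload
fc_wire = 4 + 24 + fc_payload + 4 + 4     # SOF + header + payload + CRC + EOF

eth_data = 9000 - 20 - 20                 # jumbo frame payload minus IP + TCP
eth_wire = 8 + 14 + 9000 + 4 + 12         # preamble + header + FCS + gap

print(f"Fibre Channel framing: {fc_payload / fc_wire:.1%}")  # ~98.3%
print(f"iSCSI on jumbo frames: {eth_data / eth_wire:.1%}")   # ~99.1%
```

Both land within a couple of percent of each other, which is why the real differentiation has moved up the stack.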

The protocols themselves are diverging as we look at scaling storage to cloud size. The block-I/O model used by Fibre Channel is a holdover from the early days of storage, when applications operated directly on data blocks. Today, most of what we do involves moving objects around and sharing data among many servers.

In this environment, object and file protocols are more effective. Essentially, they put the file system in the storage appliance, rather than trying to disperse it across the servers and then somehow keep it in sync. Fibre Channel doesn’t support these file modalities.
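
To make the contrast concrete, here is a minimal sketch of the two access models. The device path and object URL are hypothetical placeholders, and real object stores use richer APIs, but the division of labor is the point:

```python
# Contrast of access models. The device path and object endpoint below are
# hypothetical placeholders, not real infrastructure.
import os
import urllib.request

# Block I/O (the Fibre Channel model): the server addresses raw blocks and
# must itself know which blocks make up a file; that is, each server runs
# the file system, which is hard to keep coherent across many servers.
def read_blocks(device: str, offset_blocks: int, count: int, block=512) -> bytes:
    fd = os.open(device, os.O_RDONLY)
    try:
        return os.pread(fd, count * block, offset_blocks * block)
    finally:
        os.close(fd)

# Object/file access: the appliance owns the namespace, so any server can
# request data by name and coherence is handled in one place.
def read_object(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# data = read_blocks("/dev/sdb", offset_blocks=2048, count=8)
# data = read_object("http://storage.example/bucket/report.csv")
```

The block path forces every server to agree on the layout of the device; the object path centralizes that knowledge in the appliance, which is exactly what scales to many servers.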

So, looking over the performance-driven world of the all-flash array, we see Fibre Channel at a major disadvantage to both Ethernet and InfiniBand. The latest Fibre Channel incarnation, at 16Gbit/s, has been slow to gain acceptance, while its competitors are rapidly winning positions in high-performance computing and in data center connectivity in general.

It’s hard not to conclude that Fibre Channel has had its day, and that newer, faster, and more mainstream interfaces are going to supplant it. Ethernet and InfiniBand are evolving too, but at a faster pace than Fibre Channel, and both are able to migrate to the object/file model; in fact, they already have. Fibre Channel’s attempt to embrace Ethernet fabrics, FCoE, has seen little market interest, and that is telling.

All in all, it’s beginning to look as if storage in a few years will shape up into a battle between 40 or 100Gigabit Ethernet and InfiniBand links running object storage protocols. I suspect they may borrow heavily from the PCIe world and use a modified NVMe protocol because of its inherently low overhead and speed of operation, but that is speculation right now.
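
That low overhead is easy to see in NVMe's command format: every I/O is a fixed 64-byte submission-queue entry that a controller can pull straight off a memory-resident queue. Below is a simplified sketch of packing a read command; the field layout follows the NVMe spec's SQE format, while the IDs, buffer address, and LBA are made-up values for illustration:

```python
# Simplified sketch of an NVMe read command: one fixed 64-byte
# submission-queue entry. All parameter values here are illustrative.
import struct

def nvme_read_sqe(cid, nsid, prp1, slba, nlb):
    OPC_READ = 0x02
    cdw0 = OPC_READ | (cid << 16)          # opcode + command identifier
    sqe = struct.pack(
        "<IIQQQQIIIIII",
        cdw0, nsid,
        0,                                 # CDW2-3: reserved
        0,                                 # MPTR: no metadata
        prp1, 0,                           # DPTR: PRP1/PRP2, one data page
        slba & 0xFFFFFFFF, slba >> 32,     # CDW10-11: starting LBA
        nlb - 1,                           # CDW12: block count (0-based)
        0, 0, 0,                           # CDW13-15: unused here
    )
    assert len(sqe) == 64                  # one cache line per command
    return sqe

# Submitting an I/O is just this 64-byte write into a memory-mapped queue
# plus a doorbell register update; far leaner than a SCSI command path.
sqe = nvme_read_sqe(cid=1, nsid=1, prp1=0x1000, slba=2048, nlb=8)
```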

The cloud world demands greater speed every day in retrieving stored data. And every day Fibre Channel seems less ready to give it.