Last week's "Ethernet Has a Goldilocks Problem" post generated some thought-provoking responses, but one really caught my attention: If we are to suggest that iSCSI is a viable alternative storage protocol for converged networking, where does that leave NFS? After all, storage vendors are increasingly pushing NFS as an alternative to iSCSI for the storage of virtual machine environments. If it's good enough for VMware ESX, doesn't it deserve a place at the adult table when discussing converged networking?
First, let us consider the market position of NFS. Most surveys suggest that file-based storage holds its own against block protocols like Fibre Channel and iSCSI in terms of disk capacity shipped. NFS has only recently gained acceptance for supporting virtual machine environments, but it already has quite a bit of traction.
A recent survey by Chad Sakac of EMC suggests that NFS is currently used in 42 percent of server virtualization environments. Although Chad did not ask about the relative prevalence of these protocols in terms of the capacity supporting virtual machines, it is clear that NFS has a place in the enterprise data center. For comparison, iSCSI, Fibre Channel, direct-attached storage (DAS) and Fibre Channel over Ethernet (FCoE) are used in 76 percent, 56 percent, 19 percent and 9 percent of environments, respectively; the figures overlap because many environments use more than one protocol.
NFS has a reputation as a difficult and poorly performing storage protocol, but this is unfair in light of recent advances. NFS version 4 includes many simplifications and performance enhancements, and parallel NFS (pNFS), introduced in NFS version 4.1, can be a performance monster. This is one reason EMC acquired Isilon and is now touting its capabilities in so-called big data environments.
Many NFS-haters are familiar only with the basic servers bundled with operating systems like Linux and with low-end storage devices from companies like Iomega, Data Robotics and Buffalo. But enterprise storage arrays from NetApp, BlueArc, EMC and many others are in an entirely different league when it comes to performance, scalability and reliability.
If NFS is this common, supporting infrastructure applications like VMware, it must be taken seriously in any discussion of converged networking strategy. Like iSCSI, it simply does not require the advanced features found in the new Data Center Bridging extensions to 10 Gigabit Ethernet. But both protocols can benefit from advanced Ethernet features such as Priority Flow Control, bandwidth management and congestion notification.
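To illustrate just how little these protocols demand of the network, here is a rough sketch of bringing up NFS and iSCSI storage over ordinary 10 Gigabit Ethernet on a Linux host. The server names, export path and target portal address are hypothetical placeholders, and the mount options are one common tuning, not a recommendation:

```shell
# Hypothetical example: mount an NFS export over a standard 10 GbE
# network. No DCB-capable switches or converged adapters are needed.
mount -t nfs -o vers=3,tcp,rsize=65536,wsize=65536 \
    nas01.example.com:/vol/vmstore /mnt/vmstore

# iSCSI is similar: discover targets and log in over the same plain
# Ethernet fabric using the standard open-iscsi tools.
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node --login
```

The point is that both protocols ride on ordinary TCP/IP, so any switch that forwards Ethernet frames will do; lossless-Ethernet features like Priority Flow Control are an optimization, not a prerequisite.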
The potential of alternative protocols like NFS and iSCSI presents a serious challenge to the proponents of FCoE. Network and storage traffic can be converged today using inexpensive 10 Gigabit Ethernet ports and switches, with no need for an expensive FCoE/Fibre Channel infrastructure.
While it is unlikely that any shop committed to Fibre Channel would swap it all out for NFS or iSCSI, current users of these protocols will see much value in continuing to use them.