
The Great Storage Protocol Debate

As the "Great Debate: iSCSI Beats Fibre Channel" session I'm moderating at Interop NY next month will demonstrate, passions run high when you get storage and server administrators talking about their favorite storage protocols. While the iSCSI vs. Fibre Channel debate gets the most attention, the even older battle between file and block protocols is again heating up.

My problem with the file vs. block arguments is that most take far too narrow a view. Storage protocols don't live in a vacuum, after all. While it's true that building an NFS packet uses more CPU than handing a block request to a Fibre Channel HBA that offloads FCP (Fibre Channel Protocol) processing, CPU utilization is a small part of the difference between running a vSphere infrastructure on NFS and running it on Fibre Channel storage.

It should be obvious, if you think about it, that the real difference between file and block storage protocols is who manages the file system. If you choose a block protocol like Fibre Channel to connect your virtual server hosts to their storage, you're also choosing to use VMware's VMFS clustered file system. So while NFS may require a bit more work on the storage protocol side, your vSphere host also has to manage VMFS, and that doesn't come free.
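To make that concrete, here's a toy Python sketch of who keeps the books under each model. Every name in it is invented for illustration; nothing here is vSphere code. A file server owns allocation, while a block target hands the client a numbered run of blocks and leaves the allocation table, the job VMFS does for vSphere, to the host:

```python
# Toy contrast between a file protocol and a block protocol.
# File side: the server (NAS array) owns allocation; clients just name files.
# Block side: the client sees a flat run of blocks and must run its own
# file system (VMFS, in vSphere's case) to track which blocks hold what.

BLOCK_SIZE = 4096

class FileArray:
    """NAS-style target: clients hand over whole files; allocation is ours."""
    def __init__(self):
        self.files = {}

    def write(self, name, data):
        self.files[name] = data      # the array decides where the bytes land

class BlockArray:
    """SAN-style target: a dumb run of numbered blocks, nothing more."""
    def __init__(self, blocks):
        self.blocks = [b"\0" * BLOCK_SIZE] * blocks

    def write_block(self, lba, data):
        self.blocks[lba] = data      # client must know which LBA means what

class ClientSideFS:
    """The extra work a block client takes on: its own allocation table."""
    def __init__(self, array):
        self.array = array
        self.table = {}              # file name -> list of LBAs
        self.next_free = 0

    def write(self, name, data):
        lbas = []
        for off in range(0, len(data), BLOCK_SIZE):
            lba = self.next_free
            self.next_free += 1
            self.array.write_block(lba, data[off:off + BLOCK_SIZE])
            lbas.append(lba)
        self.table[name] = lbas      # metadata the host, not the array, maintains

nas = FileArray()
nas.write("vm1.vmdk", b"x" * 10000)     # one call, no bookkeeping on the client

san = ClientSideFS(BlockArray(blocks=16))
san.write("vm1.vmdk", b"x" * 10000)     # same data, plus an allocation table
print(san.table)                        # {'vm1.vmdk': [0, 1, 2]}
```

The point isn't that either model is wrong. It's that with block storage, that allocation table lives on every host and has to be kept consistent across the cluster, which is exactly the work VMFS does.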

While some folks with a VMware focus, like my friend Rick Vanover, will claim that VMFS is the best file system for vSphere because it's "designed for virtualization," those of us who have been working in the storage biz for a while know that good file systems take a long time to mature. Designing a file system for a single application is easier than designing a great general-purpose file system, but there are certainly advantages to using a mature file system like WAFL or the Isilon file system.

The primary advantage is, of course, ease of management. Even with vSphere 5, a VMFS file system tops out at 64 Tbytes. WAFL volumes can be twice that size, and scale-out solutions like Isilon's can have single volumes spanning petabytes. File systems inherently thin provision the files they contain, and they don't need VAAI to host hundreds of VMs or to reclaim unused space.
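That thin provisioning isn't exotic; it's just sparse files, and you can watch any Unix-style file system do it. A minimal Python sketch, assuming an OS that reports allocated blocks via st_blocks (Linux does):

```python
import os

# A file system thin provisions automatically: it records a file's logical
# size but allocates blocks only where data has actually been written.
path = "thin_demo.img"
logical_size = 1024 ** 3  # ask for 1 GiB

with open(path, "wb") as f:
    f.seek(logical_size - 1)  # jump past the end without writing anything
    f.write(b"\0")            # a single byte pins the logical size

st = os.stat(path)
print(f"logical size : {st.st_size / 1024 ** 2:,.0f} MiB")     # 1,024 MiB
print(f"space in use : {st.st_blocks * 512 / 1024:,.0f} KiB")  # a few KiB
os.remove(path)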

Traditionally, storage folks have considered file protocols to be unreliable and high-overhead alternatives to block protocols that work close to the hardware. In part, this comes from the fact that LANs have historically been significantly slower, and less reliable in packet delivery, than storage interconnects from SCSI to Fibre Channel. As data center LAN technology evolves to be as fast and as reliable as old-fashioned SAN hardware, the ease of management of file-based systems becomes more attractive.

Truth is, the vast majority of today's applications and hypervisors are managing files. Yes, you can squeeze the best performance out of your Oracle RAC system by running it against raw devices over Fibre Channel, but Exchange, SQL Server, vSphere and the like are accessing files and, all things being equal, could be easier to manage over file protocols. In fact, with Windows 8, Microsoft will finally support Exchange and Hyper-V over SMB.

Finally, let me quote my friend J Metz, who tweeted, "Arguing about performance differences between NFS, iSCSI and FC for VMware is like arguing about whether the Ferrari or the Lamborghini will get you to work faster." Personally, I'd rather drive the BMW. I'll get there just as fast in traffic, or with the 55 MPH speed limit, and be a lot more comfortable.