The Great Storage Protocol Debate

As the "Great Debate: iSCSI Beats Fibre Channel" session I'm moderating at Interop NY next month will demonstrate, passions run high when you get storage and server administrators talking about their favorite storage protocols. While the iSCSI vs. Fibre Channel debate gets the most attention, the even older battle between file and block protocols is again heating up.

Howard Marks

September 26, 2011


My problem with the file vs. block arguments is that most take way too narrow a view. Storage protocols don’t live in a vacuum, after all. While it’s true that building an NFS packet uses more CPU than sending a block request to a Fibre Channel HBA that offloads FCP (Fibre Channel Protocol), processing CPU utilization is a small part of the difference between running a vSphere infrastructure on NFS or Fibre Channel storage.

It should be obvious if you think about it that the real difference between running file and block storage protocols is management of the file system. If you choose to use a block protocol like Fibre Channel to connect your virtual server hosts to their storage, you’re also choosing to use VMware’s VMFS clustered file system. So while NFS may require a bit more work on the storage protocol side, your vSphere host also has to manage VMFS, and that doesn’t come free.

While some folks with a VMware focus, like my friend Rick Vanover, will claim that VMFS is the best file system for vSphere because it's "designed for virtualization," those of us who have been working in the storage biz for a while know that good file systems take a long time to mature. Designing a file system for a single application is easier than designing a great general-purpose file system, but there are certainly advantages to using a mature file system like WAFL or the Isilon file system.

The primary advantage is, of course, ease of management. Even with vSphere 5, VMFS file systems are limited to 60 Tbytes or so. WAFL volumes can be twice that size, and scale-out solutions like Isilon's can have single volumes spanning petabytes. File systems inherently thin provision the files they contain and don't require VAAI to host hundreds of VMs or to reclaim unused space.
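That thin-provisioning point is easy to demonstrate: on any sparse-capable file system (ext4, XFS, WAFL and the like), a file's apparent size and the blocks actually allocated to it are independent. A minimal sketch in Python, with the path and sizes made up for the demo:

```python
import os
import tempfile

# Create a "10 MB" file by seeking out and writing a single byte.
# A sparse-capable file system allocates blocks only for data actually
# written -- the same property that lets a file-based datastore hold
# hundreds of thin VM disk images without reserving their full size.
path = os.path.join(tempfile.mkdtemp(), "demo-disk.img")

with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024 - 1)  # seek to the 10 MB mark...
    f.write(b"\0")                # ...and write one byte

st = os.stat(path)
logical = st.st_size            # apparent size: 10 MB
physical = st.st_blocks * 512   # blocks actually allocated on disk

print(f"logical size:  {logical} bytes")
print(f"physical size: {physical} bytes")
```

On a sparse-capable file system the physical size comes back as a handful of blocks, not 10 MB; block storage only gets the same behavior when the array implements thin LUNs and the hypervisor tells it what to reclaim via VAAI.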

Traditionally, storage folks have considered file protocols to be unreliable and high-overhead alternatives to block protocols that work close to the hardware. In part, this comes from the fact that LANs have historically been significantly slower, and less reliable in packet delivery, than storage interconnects from SCSI to Fibre Channel. As data center LAN technology evolves to be as fast and as reliable as old-fashioned SAN hardware, the ease of management of file-based systems becomes more attractive.

Truth is, the vast majority of today's applications and hypervisors are managing files. Yes, you can get the best performance out of your Oracle RAC system running on raw devices over Fibre Channel, but Exchange, SQL Server, vSphere and the like are accessing files and, all things being equal, could be easier to manage over file protocols. In fact, with Windows 8, Microsoft will finally support Exchange and Hyper-V over SMB.

Finally, let me quote my friend J Metz, who tweeted, "Arguing about performance differences between NFS, iSCSI and FC for VMware is like arguing about whether the Ferrari or the Lamborghini will get you to work faster." Personally, I'd rather drive the BMW. I'll get there just as fast in traffic, or with the 55-MPH speed limit, and be a lot more comfortable.

About the Author(s)

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., and concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
