The Outer Limits of NAS

As NAS gets bigger, so do the potential bottlenecks

October 13, 2006


The world has accepted server consolidation via NAS, hands down. What isn't clear is how far the consolidation envelope can be pushed before performance suffers.

"There is a growing market for environments that need to address or avoid I/O bottlenecks as a result of overconsolidation of storage capacity, including consolidating multiple smaller NAS filers or servers," maintains Greg Schulz of the StorageIO consultancy.

So where are the performance pain points in NAS scalability, and how are folks coping?

A key focal point is performance: the rate of input/output operations per second (IOPS) between the servers and the NAS, as well as throughput on the storage network.
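
The two metrics are linked by average I/O size. A quick back-of-the-envelope sketch (all numbers assumed purely for illustration) shows how a filer can be IOPS-bound long before its network link fills:

```python
# IOPS vs. throughput: linked by average I/O size.
# All numbers here are assumptions for illustration only.

iops = 50_000            # op rate the filer can sustain
avg_io_bytes = 8 * 1024  # assumed 8-KByte average operation size

throughput_mbytes = iops * avg_io_bytes / 1_000_000
print(f"{iops:,} IOPS x 8 KBytes = {throughput_mbytes:.0f} MBytes/s")
# 50,000 IOPS x 8 KBytes = 410 MBytes/s
```

Halve the operation size and the same op-rate ceiling delivers half the throughput, which is why small-file CIFS workloads stress a filer differently than large sequential transfers.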

A simple example illustrates the problem. "If you are consolidating ten Windows storage servers to a NAS... you may be getting approximately 12,000 CIFS IOPS per server. This means just to equal the throughput you're getting from the ten servers, you would need at least 120,000 CIFS IOPS from the NAS server," states Marc Staimer, president of DragonSlayer Consulting.

Predictably, NAS vendors squawk at the suggestion their wares might not scale. Though scalability has traditionally been considered a weak point of NAS, a number of suppliers are now touting NAS platforms with no theoretical limit on either physical storage or global namespace. (See NAS Roadmap and 2004: Top Ten Trends to Watch.)
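
Staimer's arithmetic is easy to check. The sketch below restates it; the headroom factor is our assumption, not part of his example:

```python
# Staimer's consolidation math: ten Windows storage servers at roughly
# 12,000 CIFS IOPS each. The 20% headroom factor is an assumption
# added here, not part of his example.

servers = 10
cifs_iops_per_server = 12_000
headroom = 1.2  # assumed margin for growth and bursts

break_even = servers * cifs_iops_per_server
print(f"Break-even: {break_even:,} CIFS IOPS")                   # 120,000
print(f"With headroom: {break_even * headroom:,.0f} CIFS IOPS")  # 144,000
```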

In recent figures published on the website of the Standard Performance Evaluation Corp. (SPEC), a non-profit industry group that acts as an umbrella organization for various benchmarks, Network Appliance boasts over 1 million IOPS using 24-node clustered FAS6070s running Data ONTAP GX operating system software. But will anyone really buy a NAS cluster with 96 cores and 96 chips?
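
Dividing the headline figure evenly across the cluster gives a rough per-brick number; the even split below assumes linear scaling, which real clusters only approximate:

```python
# Rough per-node share of the SPEC result. The even split assumes
# linear scaling, which real clusters only approximate.

cluster_iops = 1_000_000  # "over 1 million" headline figure
nodes = 24                # clustered FAS6070s in the benchmark

print(f"~{cluster_iops / nodes:,.0f} IOPS per node")  # ~41,667 IOPS per node
```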

It doesn't matter, according to Ravi Parthasarathy, GM and director of the NAS Business Unit at NetApp. The point is that NetApp can demonstrate its ability to maintain high performance as its NAS scales in increments. "Nobody will buy one big expensive brick," Parthasarathy maintains. Instead, it's important to offer smaller elements, aka bricks, that customers will be confident using as NAS building blocks.

NetApp and EMC, as well as smaller players such as BlueArc, Isilon, Panasas, and Terrascale, have also been deploying virtualization and clustering to beef up their NAS efforts. (See Special Report: NAS Clustering and Virtualization, Petco Uses EMC, BlueArc: The Game Has Changed, Isilon Taps the Accelerator, Panasas Wins Cluster Deal, and Terrascale Integrates New England.) There is also a range of third-party file systems, clustering products, and virtualization offerings to help. And at least one startup, Gear6, purports to be solving the server-to-storage bottleneck with new technology. (See Gear6.)

But virtualization and even storage capacity don't solve the NAS scalability challenge for everyone. Managing all the virtualized file systems can be a headache. "There are some NAS systems out there that scale physically to the size that we want, but it gets very complicated when you start throwing in replication and connectivity to the end-servers," says a storage engineer at a financial services firm in the southeastern U.S., who asked not to be named.

The engineer told Byte and Switch that, in reality, his firm's existing Fibre Channel SAN, which uses HP hardware and replication software from CA, is much more scalable. "We already have multiple enterprise switches in multiple sites," he says, adding that a NAS deployment would require additional staff to manage the infrastructure.

Another storage consumer won't even consider moving certain kinds of traffic to the NAS. Rich Taylor, senior systems programmer for the IT department of Clark County, Nevada, says his group's massive server consolidation project involves NAS only for files in public and user directories, not for applications or database work. (See Clark County, Nev.) "It was never our intention to run applications on NAS," he insists.

What's more, Taylor says that while the government ITers are pleased with their EMC NAS gear, they find a difference in NAS performance with Fibre Channel versus SATA disks. While end users might not notice, the performance of backups and restores threatens to become an issue when cheaper disks are used.

There are several ways, experts say, to ensure that NAS maintains its performance despite an increased load. First is the use of silicon inside the NAS to offload reads and writes. Second is ensuring that the back-end disks themselves don't create a bottleneck. Third involves caching in memory to avoid unnecessary I/O, as the sketch below illustrates. And there's always the clustering option for high-performance computing (HPC) apps.

NAS suppliers vary in their ability to offer any or all of these attributes. "Some NAS systems have robust performance (IOPS and bandwidth) capabilities along with large storage capacity capabilities, while others support large storage capacities with okay-to-moderate performance capabilities," notes Schulz.
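
As a concrete illustration of the third technique, here is a minimal sketch of memory caching in front of a NAS mount: repeat reads of hot files are served locally and never become network I/O. The mount path, cache size, and helper function are hypothetical, not any vendor's implementation, and a production cache would also need invalidation to stay coherent with the filer.

```python
import tempfile
from functools import lru_cache
from pathlib import Path

# Minimal sketch of read caching in front of a NAS mount. The temp
# directory stands in for a real filer mount such as /mnt/nas; a real
# cache would also invalidate entries when files change on the back end.

NAS_MOUNT = Path(tempfile.mkdtemp())  # stand-in for a NAS mount point
(NAS_MOUNT / "handbook.txt").write_text("hot file contents")

@lru_cache(maxsize=256)  # hold up to 256 hot files in memory
def read_file(name: str) -> bytes:
    """First call reads from the mount; repeat calls are memory hits."""
    return (NAS_MOUNT / name).read_bytes()

read_file("handbook.txt")      # miss: I/O against the mount
read_file("handbook.txt")      # hit: served from memory, no I/O
print(read_file.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=256, currsize=1)
```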

For SMBs, the issue may come down to cost. For those who don't want to shell out for expensive add-on hardware, some appliances and software packages are available to help keep NAS I/O hopping. DataCore Software, for instance, has quietly been selling a product called UpTempo Performance for $700 per server and $100 per desktop workstation. The software runs on a Windows server and uses memory to cache and accelerate I/O traffic. One of its potential uses is to ensure that data performance is optimized before traffic hits the NAS.

DataCore CEO George Teixeira says some OEMs are pairing UpTempo with NAS engines to create cheap Windows-based NAS systems.

Sometimes, though, the issue of scalability may force a firm to discard NAS entirely. Over in Texas, Shlomi Harif, director of network systems and support at the Austin Independent School District, told Byte and Switch that he is much more comfortable with the concept of a large-scale SAN than a large-scale NAS. "A K-12 institution has tens of thousands of users that need to hit their file systems," he explains. "It needs to be extremely available and resilient - a SAN provides that more than a NAS."

SANs, explains Harif, typically benefit from larger disk systems than NAS, and are also much easier to manage, something that is critical for a school district serving 91,000 students. "The NAS needs to be treated as I would treat a file server -- I have to worry about upgrades and operating systems."

The school district relies on three SANs to provide around 38 Tbytes of storage, which makes up the bulk of the organization's storage architecture. Harif also has a number of smaller NAS deployments dotted around his organization, totaling less than 10 Tbytes. These, he told Byte and Switch, are used for specific applications, such as video surveillance.

— Mary Jander, Site Editor, and James Rogers, Senior Editor, Byte and Switch

  • BlueArc Corp.

  • EMC Corp. (NYSE: EMC)

  • Gear6

  • Hewlett-Packard Co. (NYSE: HPQ)

  • Isilon Systems Inc. (Nasdaq: ISLN)

  • Network Appliance Inc. (Nasdaq: NTAP)

  • Panasas Inc.

  • Terrascale Technologies Inc.
