Is NetApp SANbagging?

Publishes results claiming its FC arrays beat EMC and HP, then calls such tests 'misleading.' Que?

March 20, 2003


Network Appliance Inc. (Nasdaq: NTAP) has engaged in an unusual bit of marketing jujitsu: It's bragging that its Fibre Channel-attached storage arrays match or exceed the performance of its competitors, while at the same time telling users that the results are "misleading."

Is NetApp trying to have its cake and eat it too?

Here's the story: NetApp earlier this month published performance test results on its Website showing its F825C, FAS940C, and FAS960C FC-attached storage systems performing on a par with, or beating, the midrange systems from EMC Corp. (NYSE: EMC) and Hewlett-Packard Co. (NYSE: HPQ). NetApp's FAS systems, launched last fall, represent its first foray into the SAN-attached storage market (see NetApp Does the Storage Two-Step).

In the white paper, dated March 8, NetApp compares its own internal benchmark testing to the results published by HP and EMC on their Websites. It charts the performance of its midrange F825C and its higher-end FAS940C and FAS960C systems against HP's Enterprise Virtual Array (EVA) and EMC's Clariion CX200, CX400, and CX600 on two tests: I/O operations per second (IOPS) and throughput (cached read data rates).

On each of the tests, NetApp's boxes appear to stack up well against those of its two main rivals:

Figure 1: Source: Network Appliance Inc.

Figure 2: Source: Network Appliance Inc.

But NetApp also tells users to take such performance tests with a grain of salt. With regard to the IOPS test, which NetApp says used 512-byte chunks of data, "we urge customers to use extreme care when making purchasing or sizing decisions with this data," writes the paper's author, Stephen Daniel. "A conventional IOPS test has been designed to produce the largest possible number, regardless of applicability to the real world."

Daniel similarly discounts the throughput test:

  • Network Appliance again compares quite favorably. However, these measurements are also misleading. Published throughput measurements are taken in a way that ensures that data is transferred from storage system memory, not from disks.
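The pattern Daniel describes can be sketched in a toy model (all the latencies and block sizes below are invented for illustration; they are not measurements of any of the systems in the paper). Tiny 512-byte requests maximize the IOPS figure, while serving reads from the array's cache rather than its disks maximizes the throughput figure:

```python
# Toy model of how benchmark configuration drives the headline number.
# Latencies are assumptions for illustration only.

def iops(latency_s: float) -> float:
    """I/O operations per second for one outstanding request."""
    return 1.0 / latency_s

def throughput_mb_s(block_bytes: int, latency_s: float) -> float:
    """Sustained data rate in MB/s at a given block size and latency."""
    return (block_bytes / 1e6) / latency_s

small_block, large_block = 512, 64 * 1024          # 512B vs 64KB requests
cache_latency, disk_latency = 0.0001, 0.005        # 100us cache hit vs 5ms disk seek (assumed)

# Small cached reads produce the biggest possible IOPS number...
print(f"512B cached IOPS:    {iops(cache_latency):,.0f}")
print(f"512B disk IOPS:      {iops(disk_latency):,.0f}")
# ...and cache-served reads produce the biggest possible throughput number.
print(f"64KB cached MB/s:    {throughput_mb_s(large_block, cache_latency):,.1f}")
print(f"64KB disk MB/s:      {throughput_mb_s(large_block, disk_latency):,.1f}")
```

The point of the sketch: neither headline number says anything about how the array behaves under a real workload mix of larger, uncached I/Os.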

In other words, the company appears to be saying: We hold our own on performance as shown by these tests, but the results are quite possibly meaningless. It even includes this disclaimer at the end of the paper: "Network Appliance is not responsible for any errors that may be contained in this data."

EMC, for one, dismisses this as a marketing stunt. "This smells like an attempt to claim relevance in the SAN market," says EMC spokesman Dave Farmer. "Details on how these numbers are achieved are nowhere to be found."

Indeed, NetApp does not provide any information about the configuration of its systems on the two tests in its white paper. When we asked NetApp for more detail, the company provided only a few additional pieces of information: The storage was fully RAID protected and was tested using shared cache in the storage system (no caching was used on the host or application). The NetApp systems were connected via Brocade Communications Systems Inc. (Nasdaq: BRCD) switches and Emulex Corp. (NYSE: ELX) host bus adapters.

How can we be sure it's an apples-to-apples comparison to EMC and HP? "NetApp set up the configuration so we could achieve maximum results, and we are confident our competitors would not post results on their Websites without doing the same," says company spokeswoman Jaime Le.

Analysts say the only credible vendor-independent test for measuring the performance of storage subsystems is the Storage Performance Council's SPC-1 benchmark.

"Performance 'benchmarketing' is a long-practiced art in this business," says Randy Kerns, senior analyst at Evaluator Group. "Don't believe anything regarding performance benchmarks from vendors. Skepticism is a good thing. Use the SPC benchmarks, and if a company doesn't have any, they probably don't compare favorably with the competition."

NetApp actually does mention the SPC benchmark in its paper, but then -- confusingly -- appears to imply that it's simply too much work to run this battery of tests: "Formal benchmarks such as SPC-1... require the publication of dozens of pages of charts and graphics to describe the system's performance." Wow. Dozens of pages?! For the record, the NetApp white paper that includes this statement weighs in at a light'n'lean 4.5 pages. [Ed. note: We should be gracious and add a word of appreciation to NetApp for including a link to a Byte and Switch story -- thanks, guys!]

Would NetApp consider publishing SPC results at some future date? "We are open to this possibility if our customers deem it valuable," says Le.

Note that EMC has categorically refused to test its systems using the SPC benchmark. Hitachi Data Systems (HDS) says it supports the benchmark, but the company has not actually published any results. Vendors that have released SPC test results include HP, IBM Corp. (NYSE: IBM), Sun Microsystems Inc. (Nasdaq: SUNW), LSI Logic Corp. (NYSE: LSI), and 3PARdata Inc. (see 3PAR Claims Benchmark Title, HP Fiddles With Cache, LSI Screams Past IBM, Sun, SPC Test Results Lack Context, and Does EMC's DMX Measure Up?).

Still, NetApp deserves credit for acknowledging the limits of such metrics as IOPS, says Dianne McAdam, senior analyst with Data Mobility Group: "I like the fact that NetApp is not 'shouting' that they have the fastest arrays. They caution how one should interpret the results."

But, she adds, "I think it does a good job of positioning their subsystems... in the FC space." And that was probably NetApp's main intent.

Todd Spangler, US Editor, Byte and Switch
