Bandwidth Brawl

Hitachi won't concede that EMC's DMX beats it at its own marketing game

February 19, 2003

Since EMC Corp. (NYSE: EMC) launched the Symmetrix DMX, it has been trading volleys of performance claims and counterclaims with its chief rival in the high-end enterprise storage space, Hitachi Data Systems (HDS).

"It's a pissing war," says Steve Duplessie, senior analyst at Enterprise Storage Group Inc.

EMC says the DMX, based on its proprietary Direct Matrix Architecture of point-to-point serial connections, raises the system's maximum internal data bandwidth to 64 GByte/s, forty times that of the Symmetrix 5.5. Including message (or "control") bandwidth of 6.4 GByte/s, EMC asserts the DMX provides an aggregate of 70.4 GByte/s (see Does EMC's DMX Measure Up? and EMC Soups Up Symm).

Hitachi, which had previously touted an industry-leading aggregate internal bandwidth of 15.9 GByte/s for the Lightning 9980V (10.6 GByte/s of data bandwidth plus 5.3 GByte/s of control bandwidth), was rocked back on its heels. Nevertheless, its executives have refused to accept that EMC has surpassed them on performance, claiming the DMX can't handle mixed workloads as well as the Lightning and noting that it doesn't provide as much single-system capacity (Hitachi's 9980V supports up to 1,024 drives, versus 288 for the DMX 2000).
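
For readers keeping score, the aggregate figures on both sides are straightforward sums of each vendor's quoted data and control bandwidths. The sketch below (Python, using only the numbers the two vendors publish) shows the arithmetic:

```python
# Aggregate internal bandwidth = data bandwidth + message/control bandwidth.
# Figures are the vendors' own marketing numbers, in GByte/s.

def aggregate_bandwidth(data_gbps: float, control_gbps: float) -> float:
    """Sum a system's claimed data and control bandwidths."""
    return data_gbps + control_gbps

# EMC Symmetrix DMX: 64 GByte/s data + 6.4 GByte/s control
print(round(aggregate_bandwidth(64.0, 6.4), 1))   # 70.4

# Hitachi Lightning 9980V: 10.6 GByte/s data + 5.3 GByte/s control
print(round(aggregate_bandwidth(10.6, 5.3), 1))   # 15.9
```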

"Overall we feel their weakness is being able to truly scale multiple applications... and in terms of capacity and performance, they're basically saying they don't have the ability to scale," says Phil Townsend, senior director of product marketing at HDS.Now, EMC is accusing HDS of upwardly revising the maximum cache bandwidth it can provide, to 10.6 GByte/s, after the DMX's launch. Cache bandwidth is the total throughput that's achievable between the back-end disks and the front-end external ports of a system. According to EMC, the Hitachi Lightning 9980V supports a maximum of 3.2 GByte/s, compared with the DMX 2000 at 16 GByte/s.

"HDS has four cache regions per system and is therefore only capable of performing four concurrent DRAM [dynamic random access memory] transfers at 800 MByte/s each," says EMC spokesman A.J. Ragosta.

But HDS says that's a blatant misrepresentation of its Hi-Star crossbar switching architecture. Here's Hitachi's math: It claims each of the Lightning 9980V's four cache switches (CSW) has eight ports, for a total of 32 data paths. Each of those 32 paths is clocked at 166 MHz, yielding 332 MByte/s per path for a total of 10.6 GByte/s. "EMC's logic only applies to a bus-based architecture," says an HDS insider.
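
To make the two competing tallies concrete, here is the same disputed cache-bandwidth figure computed both ways, as a quick sketch using only the numbers each side cites. The two-bytes-per-cycle path width in the HDS calculation is inferred from the stated 166 MHz and 332 MByte/s, not something either company spells out.

```python
# The Lightning 9980V's cache bandwidth, tallied two ways (GByte/s).

# EMC's characterization: four cache regions, each limited to one
# 800-MByte/s DRAM transfer at a time.
emc_view_gbps = 4 * 0.8                    # 3.2 GByte/s

# HDS's characterization of Hi-Star: 4 cache switches x 8 ports = 32 paths,
# each clocked at 166 MHz and (by inference) moving 2 bytes per cycle,
# i.e. roughly 332 MByte/s per path.
paths = 4 * 8                              # 32 concurrent paths
per_path_gbps = 166e6 * 2 / 1e9            # ~0.332 GByte/s
hds_view_gbps = paths * per_path_gbps      # ~10.6 GByte/s

print(round(emc_view_gbps, 1), round(hds_view_gbps, 1))   # 3.2 10.6
```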

Duplessie, however, backs EMC's interpretation. "When HDS came out originally [with the Lightning 9900 in June 2000], they started making performance claims of 6.4 GByte/s – which is how fast you can go to the cache," he says. "In that sense, EMC can go 64 GByte/s to the cache. In the real world, you have to go through the cache."

HDS says it's sticking to its claims. "We stand by our numbers, and we've proven our performance and capabilities to our customers," says spokeswoman Jodi Reinman. She also notes that EMC refuses to publish performance figures for its systems using the Storage Performance Council's benchmark. [Ed. note: Although, technically, neither has HDS, yet.]

Furthermore, it's likely that EMC is overstating its claims on this front, too, says Gary Helmig, an analyst with SoundView Technology Group. "The actual performance that comes out of the back end of the [DMX] cache is about half of what EMC says," he says. "That's still huge, but it's around 8 GByte/s."

The debate, at a certain point, starts to turn on semantics. [Ed. note: If it hasn't bored one to tears first!] Hewlett-Packard Co. (NYSE: HPQ), which resells the Hitachi Lightning 9980V as the xp1024, quotes its "peak [bandwidth] from cache" as 3.2 GByte/s. However, says an HP spokesman, "this is not the same as cache bandwidth. The internal bandwidth of the xp1024 and 9980V is identical... We refer to the sum of cache bandwidth and shared memory bandwidth as internal bandwidth. That number for the xp1024 is 15.9 GByte/s. The cache bandwidth part of that is approximately 10.6 GByte/s." How thrilling.
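
In other words, HP and HDS are describing the same box with different labels. A rough mapping of HP's terms onto the figures above (the 5.3 GByte/s shared-memory number is simply the difference between HP's two published figures, which also happens to match HDS's control-bandwidth claim):

```python
# HP's terminology for the xp1024 (a rebadged Lightning 9980V), in GByte/s.
xp1024 = {
    "peak bandwidth from cache": 3.2,    # HP's spec-sheet figure
    "cache bandwidth": 10.6,             # what HDS markets as data bandwidth
    "shared memory bandwidth": 5.3,      # inferred: 15.9 - 10.6
}

# HP's "internal bandwidth" is the sum of the last two.
internal = xp1024["cache bandwidth"] + xp1024["shared memory bandwidth"]
print(round(internal, 1))                # 15.9
```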

Meanwhile, IBM Corp. (NYSE: IBM) – the third major player in this market segment – is more or less on the sidelines of this particular brouhaha. Its Enterprise Storage Server 800 (a.k.a. Shark) delivers aggregate internal bandwidth of 1.28 GByte/s, via eight 160-MByte/s Serial Storage Architecture (SSA) loops, according to IBM's Redbook on the system (see IBM Gives Shark a Kick).
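
For comparison, IBM's figure is the simplest of the three to reproduce:

```python
# IBM ESS 800 ("Shark"): eight SSA loops at 160 MByte/s each, per IBM's Redbook.
ssa_loops = 8
per_loop_mbyte_s = 160
print(ssa_loops * per_loop_mbyte_s / 1000)   # 1.28 GByte/s aggregate internal bandwidth
```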

Why It Matters

The more important question is: How much does all this matter? In other words, since these performance limits will likely never come close to being reached in actual usage, who cares about the theoretical upper bounds anyway?

Actually, it matters a lot. The marketing claims are a key part of the overall fight for share in the multibillion-dollar high-end enterprise storage market. Vendors must persuade customers to commit to a platform they'll be living with for at least the next two or three years. And now that it's armed with the DMX, EMC is trying to shoot down the HDS Lightning 9900V as an architecture that's past its prime.

"This is just the beginning for us with this line," said Joe Tucci, EMC's president and CEO, in an interview earlier this month with Byte and Switch. "They [Hitachi] are now past their sweet spot." (See the full interview here).

Enterprise users agree that system performance is important, but they're wary of any vendor's performance claims.

"Bottom line: It's 'marketecture' in both cases," says Barry Brazil, enterprise SAN architect at Reliant Energy, a power company based in Houston. "Both arguments illustrate each company's proficiency in mathematical theory."

Having said that, Brazil, who is evaluating both HDS Lightning and EMC Symmetrix DMX systems, agrees with EMC that bandwidth into cache is not nearly as important as the round trip through cache, which delivers the requested data from the spindle back to the host. "In the current world of in-order delivery, aggregate throughput far outweighs bandwidth in real-world situations," he says.

On the other hand, Bill Bender, an infrastructure manager at Lucent Technologies Inc. (NYSE: LU), which runs several Symmetrix boxes, says performance is certainly not the only criterion for evaluating these systems.

"We really do not go to that depth of analysis [in terms of cache bandwidth] on the performance of these products," he says. Other factors are much more important, Bender says, including service, software, and the fact that EMC is able to tailor its offering to Lucent's needs.
