NTAP Disses VTL De-Dupe

NetApp adds hardware compression, not de-duplication, citing cost and speed

October 16, 2006


Network Appliance is adding compression capabilities to the virtual tape library (VTL) platform it rolled out in February 2006 in an attempt to boost performance and capacity as demand for VTL heats up. (See NetApp Readies Virtual Tape.)

NetApp's late to the party. Most of its competitors already offer features designed to improve VTL scalability, performance, and efficiency. However, NetApp is going against the grain by adding compression in hardware. Most of the vendor's VTL competitors have instead focused on improving VTL efficiency by adding data de-duplication software.

Data Domain and Diligent Technologies, for instance, have offered VTLs with de-duplication for months. FalconStor, whose software runs on VTLs sold by EMC, IBM, and Sun, recently announced a de-duplication product. (See FalconStor Extends VTL and FalconStor Plots De-Dupe Debut.) So did VTL startup Sepaton. (See Sepaton Readies De-Dupe.) Quantum plans to offer de-duplication in its VTLs later this year. (See IBM Accelerates SOA Use.)

What’s the difference between de-duplication and compression? Compression algorithms reduce a file’s size by removing redundant data within that file. De-duplication eliminates redundant files spread across a storage system, saving one instance of each file. Both are features VTL vendors are offering to reduce the amount of data and increase the speed of backups.
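The de-duplication described above can be sketched in a few lines of Python. This is a minimal, hypothetical file-level illustration (commercial VTL products typically de-duplicate at the block or sub-file level): each file is hashed, and only one copy per unique hash is actually stored, while every file name keeps a small pointer to its content.

```python
import hashlib

def dedupe(files):
    """File-level de-duplication sketch: store one copy per unique file.

    `files` maps file names to raw contents. Returns a content store
    keyed by hash, plus an index mapping each name to its hash.
    """
    store = {}   # hash -> single stored copy of the content
    index = {}   # file name -> hash (a tiny pointer, not a full copy)
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # keep the content only once
        index[name] = digest
    return store, index

# Three nightly backups of the same report cost only one stored copy.
files = {
    "mon/report.doc": b"quarterly numbers",
    "tue/report.doc": b"quarterly numbers",
    "wed/report.doc": b"quarterly numbers",
}
store, index = dedupe(files)
print(len(store))  # 1 unique copy instead of 3
```

The restore-side cost Padmanabhan raises later in the story is visible even here: reading a file back means an extra lookup through the index rather than a straight sequential read.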

However, de-duplication achieves a far greater reduction in the amount of data a VTL must store. Most de-duplication suppliers claim to squeeze data at a ratio of 20 to 1; NetApp claims its compression reduces data by a factor of two or three. But Krish Padmanabhan, GM of NetApp's heterogeneous data protection business unit, says hardware compression is faster because doing data reduction in software uses CPU cycles and slows performance. Restoration with de-duplication could also take longer, he asserts, because the application has to collect thousands of pieces of files spread across multiple disks.
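To make the gap between the two claims concrete, here is a back-of-the-envelope comparison. The 10-Tbyte figure is borrowed from the VTL700's usable capacity mentioned later in the story, and the low end of each vendor claim is assumed:

```python
# Compare the two data-reduction claims on a 10-Tbyte backup set.
backup_tb = 10

dedupe_ratio = 20    # "20 to 1" claimed by de-duplication vendors
compress_ratio = 2   # low end of NetApp's claimed 2x-3x compression

print(backup_tb / dedupe_ratio)    # Tbytes stored after de-duplication
print(backup_tb / compress_ratio)  # Tbytes stored after compression
```

Under these assumptions, de-duplication would leave 0.5 Tbytes on disk versus 5 Tbytes for hardware compression, which is why the capacity argument in the administrator's quote below carries so much weight.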

"What VTLs need, first and foremost, is screaming fast performance on the front end," Padmanabhan says. "We do compression that minimizes the hit on performance. The reason we're not doing de-duplication is that those products have significantly lower performance and are too slow for the backup window. Customers typically pressure us to go faster and faster. They’re also aware of cost.”

Notably, Padmanabhan won't rule out the possibility that NetApp might add its own take on de-duplication at a future date.

One analyst thinks de-duplication software could emerge as a companion to NetApp's hardware compression. "It's a good first step for them," Enterprise Strategy Group analyst Heidi Biggar says of NetApp's compression. "One of the biggest obstacles to VTL adoption is cost, and one way to get cost down is to reduce capacity requirements. We're waiting for data de-duplication; they say it will happen sometime next year."

Organizations using VTLs will make their own decisions on the relative merits of efficiency features. One administrator at a health benefits firm using a VTL disagrees with NetApp's contention that hardware compression is superior. The admin, who plans to implement de-duplication in his Sepaton VTL, says de-duplication's superior reduction ratio makes up for any performance hit.

"De-duplication may have some effect on speed, but the capacity increases are worth it," says the administrator, who asked that his name and company remain anonymous. "If I have to take a 5 percent loss to reduce my capacity by 50 percent, I'll take that. If my backup window is going from six hours to 40 minutes, I'll take a little bit of a hit with de-duplication to not have those long backup windows."

It is likely all VTLs will have to offer some form of data reduction soon. Rob Stevenson, managing director of market research firm TheInfoPro (TIP), says interviews with Fortune 1000 firms reveal enterprises have strict requirements for VTL.

"First, [VTL] has to seamlessly integrate with the backup process,” Stevenson says of feedback he’s getting from enterprise storage customers. “The second thing is performance. It has to scale and has to meet the backup window on weekends. Third is de-duplication, and fourth is replication for disaster recovery."

NetApp entered the VTL space well after its major competitors, and it appears to have a way to go to catch up on customers. When asked how many VTL customers NetApp has, Padmanabhan says "that's confidential. We have been very surprised by the size of this market. We went in not knowing how significant the VTL market is."

NetApp's NearStore VTL700 replaces the single-controller VTL600, with a starting price of $154,000 for 10 usable Tbytes. The dual-controller VTL1400 replaces the VTL1200 and starts at $238,000 with 20 Tbytes.

— Dave Raffo, News Editor, Byte and Switch

  • EMC Corp. (NYSE: EMC)

  • Data Domain Inc. (Nasdaq: DDUP)

  • Diligent Technologies Corp.

  • Enterprise Strategy Group (ESG)

  • FalconStor Software Inc. (Nasdaq: FALC)

  • Hewlett-Packard Co. (NYSE: HPQ)

  • IBM Corp. (NYSE: IBM)

  • Network Appliance Inc. (Nasdaq: NTAP)

  • Quantum Corp. (NYSE: QTM)

  • Sepaton Inc.

  • TheInfoPro Inc. (TIP)
