Network Computing is part of the Informa Tech Division of Informa PLC


Primary Storage Deduplication: NetApp

It is up to the user to weigh whether post-process deduplication is a limitation, and how much value compression might bring versus the potential for performance loss, which is critical in primary storage. In addition, older versions of Data ONTAP restrict the amount of deduplicated data that any single volume can contain to 16TB.

In the future, NetApp will raise this limit to about 50TB per volume. Most of its current performance limitations should also become less of an issue as it rides the same faster-processor wave that benefits any Intel-based storage system. Deduplication operates only at the volume level, not across volumes; as a result, data on one volume that is identical to data on another volume will not be identified as redundant.
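The per-volume scope described above can be illustrated with a toy sketch. This is a hypothetical model of block-level deduplication, not NetApp's actual Data ONTAP implementation: each volume keeps its own fingerprint index, so a duplicate block is detected within a volume but the same block written to a second volume is stored again.

```python
import hashlib

class Volume:
    """Toy volume with its own per-volume fingerprint index (hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.index = {}          # fingerprint -> stored block, scoped to this volume
        self.blocks_stored = 0   # unique blocks physically kept

    def write_block(self, data: bytes):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.index:
            # New data for *this* volume: store it physically.
            self.index[fp] = data
            self.blocks_stored += 1
        # Otherwise: duplicate within this volume, reference the existing block.

vol_a, vol_b = Volume("vol_a"), Volume("vol_b")
block = b"same 4KB of data" * 256

vol_a.write_block(block)
vol_a.write_block(block)   # duplicate inside vol_a: deduplicated
vol_b.write_block(block)   # same data, different volume: stored again

print(vol_a.blocks_stored)  # 1 -- second write was deduplicated
print(vol_b.blocks_stored)  # 1 -- a separate physical copy lives in vol_b
```

Because the fingerprint indexes are separate, `vol_b` stores its own copy of the identical block; a cross-volume design would share one index and keep a single copy.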

For many environments these are minor limitations, and despite them NetApp clearly has the market lead in primary storage deduplication and the most customer case studies to reference. As stated earlier, the company deserves credit for leveraging the underpinnings of its existing operating system instead of re-inventing the whole process. I believe this approach gives customers greater comfort as they use the deduplication feature.