Data deduplication describes a process that identifies redundancy within a data set and stores only the unique segments. The value of investing in deduplication depends on three variables: the percentage of redundant data, the speed at which data can be transferred to the device, and the cost of the media on which the data is stored.
As we discussed in our recent article "The State of Deduplication 2012", these variables explain why backup data was the first and most obvious place to apply the technology. While disk media was relatively inexpensive, it was still more expensive than the medium it was attempting to replace: tape. Backup, of course, also had a high level of redundant data, so the effective capacity delivered by eliminating that redundancy was high. Finally, while performance mattered, disk was competing with tape, so it was less important than in other environments.
When looking at archives, these variables explain why deduplication has less value in disk-based archive appliances and why those systems have struggled to gain acceptance. Most archives store the last copy of data (maybe two), not the multitude of copies that backup generates. Where a backup system might achieve 20x data efficiency, an archive will typically see only 3x to 5x. In fact, a compression system may have more value here, since compression optimizes every file rather than just redundant ones. In the final analysis, tape may still be the ideal long-term retention area for archive, especially when augmented by a disk front end.
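The impact of these ratios on cost is simple arithmetic. As an illustrative sketch (the per-GB price below is a hypothetical assumption, not a figure from this article), a 20x backup ratio and a 4x archive ratio produce very different effective costs on the same raw media:

```python
def effective_cost_per_gb(raw_cost_per_gb: float, dedupe_ratio: float) -> float:
    """Cost per logical GB stored, given a deduplication ratio (e.g. 20 for 20x)."""
    return raw_cost_per_gb / dedupe_ratio

# Hypothetical raw disk price of $0.10/GB for both systems.
backup_cost = effective_cost_per_gb(0.10, 20)   # 20x backup dedupe -> $0.005/GB
archive_cost = effective_cost_per_gb(0.10, 4)   # 4x archive dedupe -> $0.025/GB
```

Under these assumed numbers the archive pays five times more per logical gigabyte than the backup system for the same deduplication investment, which is why the technology is a harder sell there.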
The big focus for data deduplication vendors lately is primary storage. The challenge with primary storage is that while the physical media is more expensive, the data also has a low level of redundancy. The good news is that the redundancy level is typically still higher than in the archive use case. Performance is a bigger concern on primary storage: the deduplication system has to deliver the storage efficiency while having little or no impact on performance.
Another area that is getting increasing attention from vendors is applying deduplication to solid state storage. While there is greater risk of a performance impact, the benefits of efficient capacity utilization on flash-based solid state systems can be large. The cost of solid state media is at such a premium that even a relatively modest efficiency improvement (5x) is well worth the investment in, and potential overhead of, deduplication. And as we will discuss in our upcoming webinar "The Five Lies Told About All-Flash Storage", deduplication can be applied to all-flash systems and still provide significantly better performance than a hard disk alternative.
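To see why even a modest ratio matters on flash, compare effective costs under hypothetical media prices (the dollar figures below are illustrative assumptions, not vendor numbers):

```python
# Hypothetical raw media prices per GB (illustrative assumptions only).
flash_raw_per_gb = 5.00   # solid state
disk_raw_per_gb = 0.50    # hard disk

dedupe_ratio = 5          # the "relatively modest" 5x improvement discussed above

# Effective cost per logical GB of flash once deduplication is applied.
flash_effective = flash_raw_per_gb / dedupe_ratio  # $1.00 per logical GB
```

Under these assumed prices, a 5x ratio shrinks flash's 10x raw price premium over disk to 2x, which is why the same deduplication overhead that is questionable on cheap media is clearly worthwhile on flash.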
Deduplication, like every other storage technology, can pay great dividends when used in the right situation. It clearly makes sense in backup, may make sense in primary data storage, and, I think, especially makes sense wherever flash storage is used.
Not all deduplication techniques are created equal, though, and some may perform better than others in a given situation. As always, it is important to ask a lot of questions and do your own testing on your own data. In an upcoming commentary I will cover some important questions to ask your data deduplication vendor.