Deduplication Moves Beyond Deduplication


George Crump

September 22, 2009

2 Min Read

While I don't think we ever settled the inline vs. post-process debate, the basic blocking and tackling of deduplication seems to be a foregone conclusion. Some will still argue inline vs. post-processing, but users are now looking for more. What is interesting is how the deduplication vendors are now trying to differentiate themselves from each other.

Data Domain, for example, today announces enhancements to its replication capabilities. Replication of backups is one of the more impressive side benefits of deduplication. The product can now cascade replication jobs between DR sites, handle a larger "fan-in" during many-to-one replication, and deliver improved performance in high-bandwidth situations.
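To make the mechanics concrete, here is a minimal Python sketch of why deduplication makes cascaded and many-to-one "fan-in" replication cheap: each site stores only unique chunks, so a replication pass only ships the chunks the target hasn't already seen. The `Site` class, chunking scheme, and sample data are all hypothetical illustrations, not Data Domain's implementation.

```python
import hashlib

class Site:
    """A hypothetical deduplicating store: keeps one copy of each unique chunk."""
    def __init__(self, name):
        self.name = name
        self.chunks = {}  # sha256 hex digest -> chunk bytes

    def ingest(self, data, chunk_size=4096):
        """Split data into fixed-size chunks and store only the unique ones."""
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            self.chunks.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)

    def replicate_to(self, target):
        """Send only the chunks the target does not already hold."""
        missing = set(self.chunks) - set(target.chunks)
        for digest in missing:
            target.chunks[digest] = self.chunks[digest]
        return len(missing)  # chunks actually transferred

# Fan-in: two remote offices replicate into one DR site; chunks they share
# cross the wire only once, because the DR store already holds them.
office_a, office_b, dr = Site("office-a"), Site("office-b"), Site("dr")
office_a.ingest(b"common OS image " * 500 + b"office A data " * 100)
office_b.ingest(b"common OS image " * 500 + b"office B data " * 100)
sent_a = office_a.replicate_to(dr)
sent_b = office_b.replicate_to(dr)  # fewer chunks: duplicates are skipped

# Cascade: the DR site forwards the consolidated store to a second-tier site.
dr2 = Site("dr-2")
sent_cascade = dr.replicate_to(dr2)
print(sent_a, sent_b, sent_cascade)
```

The same property that shrinks backup storage shrinks the replication stream, which is why fan-in scales: the hub only ever receives each unique chunk once, no matter how many sites send it.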

Nexsan, alternatively, recently added power-managed deduplication, a first as far as I know. Leveraging a relationship with FalconStor, the product can power down hard drives during off cycles. Power-managed deduplication means that the backup jobs and the post-process deduplication cleanup work have to finish soon enough that the drives can be idled for the rest of the day. In the past, power efficiency in deduplication has been measured as the power consumed per unit of usable disk backup capacity. If your environment allows for quick backups, the power efficiency of deduplication can move beyond the efficiency of capacity to the efficiency of powered-down capacity.
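A back-of-the-envelope sketch makes the scheduling trade-off visible. All the numbers below are assumptions for illustration (a 6-hour backup window, 2 hours of cleanup, 48 drives at roughly 10 W active and 1 W spun down), not Nexsan or FalconStor figures:

```python
# Back-of-the-envelope model of power-managed deduplication.
HOURS_PER_DAY = 24.0

def idle_fraction(backup_hours, cleanup_hours):
    """Fraction of the day the backup disks can be spun down, assuming the
    drives must stay active for the backup window plus the post-process
    deduplication cleanup, and can idle the rest of the time."""
    busy = backup_hours + cleanup_hours
    return max(0.0, (HOURS_PER_DAY - busy) / HOURS_PER_DAY)

def daily_energy_kwh(num_drives, active_watts, idle_watts, idle_frac):
    """Average daily energy draw with drives idled for idle_frac of the day."""
    avg_watts = num_drives * (active_watts * (1 - idle_frac) + idle_watts * idle_frac)
    return avg_watts * HOURS_PER_DAY / 1000.0

frac = idle_fraction(backup_hours=6, cleanup_hours=2)  # drives busy 8h/day
always_on = daily_energy_kwh(48, active_watts=10, idle_watts=1, idle_frac=0.0)
managed   = daily_energy_kwh(48, active_watts=10, idle_watts=1, idle_frac=frac)
print(f"drives idle {frac:.0%} of the day: {always_on:.1f} kWh -> {managed:.1f} kWh")
```

With those assumed numbers the drives sit idle two-thirds of the day and daily energy drops from about 11.5 kWh to about 4.6 kWh, which is why finishing the backup and cleanup quickly matters as much as the deduplication ratio itself.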

In backup jobs, high deduplication rates are almost assured. In primary storage, where there is, or at least should be, less duplicate data, the going gets a little tougher. It seems that any solution in this space should offer compression, as Nexenta does with its ZFS-based product, Storwize with its inline appliance, or Ocarina Networks with its out-of-band optimizer. Ocarina adds deduplication to the process, as well as content-specific optimizers that provide a greater understanding of the file formats being processed. In addition, it can migrate data and track its location while optimizing it.
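As a rough illustration of why compression and deduplication are complementary on primary data, here is a toy Python optimizer that deduplicates fixed-size chunks and then compresses the unique copies. The chunking scheme and sample data are assumptions for illustration, not how Nexenta, Storwize, or Ocarina actually implement their products:

```python
import hashlib
import zlib

def optimize(data, chunk_size=4096):
    """Toy capacity optimizer: fixed-size chunk deduplication followed by
    per-chunk compression (the general pattern, not any vendor's design)."""
    store = {}  # sha256 digest -> compressed unique chunk
    total_chunks = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        total_chunks += 1
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:                    # dedup: one copy per chunk
            store[digest] = zlib.compress(chunk)   # then compress that copy
    stored = sum(len(c) for c in store.values())
    return total_chunks, len(store), len(data), stored

chunks, unique, raw, stored = optimize(b"block-of-primary-data-0123456789" * 4000)
print(f"{chunks} chunks ({unique} unique): "
      f"{raw} logical bytes -> {stored} stored ({raw / stored:.0f}:1)")
```

On data with little block-level duplication, the dedup pass finds few hits and compression does most of the work, which is why compression is table stakes for primary storage optimization.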

Finally, from companies like NEC, Permabit, and Tarmin, we are seeing more complete disk archive products that can leverage the deduplication engine to improve replication and compliance and to address storage scaling issues. While capacity efficiency will remain at the heart of the next era of deduplication, the next generation of products will have to leverage that deduplication investment to move deduplication beyond just deduplication.

The next era of deduplication is going to be a market filled with options for the data center. For now, expect to have two or three different deduplication solutions in your environment, but also expect those solutions to do more than just optimize capacity; expect them to add value to other services by leveraging their investment in deduplication.
