Storage

04:12 PM
George Crump
Commentary
Deduplication Moves Beyond Deduplication

While I don't think the inline vs. post-process debate was ever settled, the basic blocking and tackling of deduplication seems to be a foregone conclusion. Users now take the core capability for granted and are looking for more. What is interesting is how the deduplication vendors are now trying to differentiate themselves from each other.
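Whether it happens inline or as a post process, the basic mechanism both camps share is the same: split incoming data into chunks, fingerprint each chunk, and store only the chunks not already seen. A minimal sketch of that inline-style flow, with hypothetical names and a fixed chunk size chosen for illustration:

```python
import hashlib


def dedupe_inline(stream, chunk_size=4096, store=None):
    """Inline-style dedup sketch: fingerprint each fixed-size chunk as it
    arrives and write only chunks not already in the store."""
    store = {} if store is None else store
    recipe = []  # ordered list of fingerprints needed to rebuild the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = chunk    # new data: pay the write once
        recipe.append(fp)        # duplicates cost only a pointer
    return recipe, store


def rebuild(recipe, store):
    """Reassemble the original stream from its recipe of fingerprints."""
    return b"".join(store[fp] for fp in recipe)
```

A post-process design would write the raw stream first and run the same hashing pass later; the space savings are identical, only the timing differs.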
Data Domain, for example, today announced enhancements to its replication capabilities. Replication of backups is one of the more impressive side benefits of deduplication. The product can now cascade replication jobs between DR sites, handle a larger fan-in during many-to-one replication, and deliver improved performance in high-bandwidth situations.
Nexsan, by contrast, recently added power-managed deduplication, a first as far as I know. Leveraging a relationship with FalconStor, the product can power down hard drives during off cycles. Power-managed deduplication means that the backup jobs and the deduplication cleanup work have to finish soon enough that the drives can be idled. In the past, power efficiency in deduplication was measured as the power consumed per unit of real disk backup capacity. If your environment allows for quick backups, the power efficiency of deduplication can move beyond the efficiency of capacity and on to the efficiency of powered-down capacity.
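The arithmetic behind that claim is simple: the drives only need power while backups and dedup cleanup are running, so the shorter that combined window, the larger the fraction of the day the capacity can be spun down. A back-of-the-envelope sketch, with hypothetical hours:

```python
def idle_fraction(backup_hours, cleanup_hours, period_hours=24.0):
    """Fraction of each day the backup disks can be spun down, assuming
    (hypothetically) they draw power only while backup jobs and dedup
    cleanup are actually running."""
    active = backup_hours + cleanup_hours
    return max(0.0, (period_hours - active) / period_hours)


# e.g. 4 hours of backups plus 2 hours of dedup cleanup would leave the
# drives idle 75% of the day
print(idle_fraction(4, 2))  # → 0.75
```

If backups and cleanup stretch to fill the whole day, the idle fraction falls to zero and power-managed deduplication buys nothing, which is why the quick-backup caveat matters.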
In backup jobs, high deduplication rates are almost assured. In primary storage, where there is, or at least should be, less duplicate data, the going gets tougher. Any solution in this space should also offer compression, as Nexenta does with its ZFS-based product, Storwize with its inline appliance, and Ocarina Networks with its out-of-band optimizer. Ocarina adds deduplication to the process as well as content-specific optimizers that provide a deeper understanding of the file formats being processed. In addition, it can migrate data and track its location while optimizing it.
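The reason dedup and compression pair well on primary data is that they attack different redundancy: dedup removes repeated chunks across files, while compression shrinks whatever unique chunks remain. A minimal sketch of that layering, using generic chunk hashing and zlib rather than any vendor's actual engine:

```python
import hashlib
import zlib


def optimize(data, chunk_size=4096):
    """Out-of-band-style sketch: dedupe fixed-size chunks, then compress
    each unique chunk, and report the bytes actually stored."""
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            # compression catches redundancy within the chunk
            # that dedup (which works across chunks) misses
            store[fp] = zlib.compress(chunk)
        recipe.append(fp)
    stored_bytes = sum(len(c) for c in store.values())
    return recipe, store, stored_bytes


def restore(recipe, store):
    """Decompress and reassemble the original data."""
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)
```

A content-aware optimizer in the Ocarina style would go further, substituting format-specific transforms for the generic zlib step, but the dedupe-then-compress layering is the same.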
Finally, from companies like NEC, Permabit and Tarmin, we are seeing more complete disk archive products that leverage the deduplication engine to improve replication, meet compliance requirements and address storage scaling issues. While capacity efficiency will remain at the heart of the next era of deduplication, the next generation of products will have to leverage that deduplication investment to move beyond deduplication alone.
The next era of deduplication is going to be a market filled with options for the data center. For now, expect to have two or three different deduplication solutions in your environment, but also expect those solutions to do more than just optimize capacity; expect them to add value to other services by leveraging their investment in deduplication.

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, ...