The Reality Of Primary Storage Deduplication

"A lot has to occur for this level of data optimization to become a reality. First, the primary storage vendors need to offer a deduplication engine in their storage solution. Second the deduplication process and its handling of the meta data will also need to prove its reliability," Crump said.

If all of that sounds a lot like compression, that's because it is. Strip away the marketing, and the difference is largely one of scale: deduplication finds redundancies across files and blocks, while compression finds redundancies within a file. A number of vendors offer both, deduplicating first and then compressing for added benefit. What's less simple is figuring out exactly who will benefit from deduplication and compression. A test at Edugeek, for example, showed that compression might save you five or six percent on storage, but write times also went up by five or six percent. (Obviously, the numbers will vary with the algorithm and the data set.)
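To make the distinction concrete, here is a minimal Python sketch contrasting the two approaches, followed by the combined dedupe-then-compress step some vendors take. The 4 KB block size and SHA-256 hashing are illustrative assumptions, not any particular vendor's design.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def deduplicate(data: bytes):
    """Block-level dedup: identical blocks across the data set are stored once."""
    store = {}   # content hash -> unique block
    recipe = []  # ordered hashes needed to rebuild the original
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy
        recipe.append(digest)
    return store, recipe

def compress(data: bytes) -> bytes:
    """Compression: squeeze redundancy out of a single byte stream."""
    return zlib.compress(data)

# Three identical 4 KB blocks plus one unique block.
data = b"A" * 4096 * 3 + b"B" * 4096

# Dedup first, then compress each unique block -- the combined approach.
store, recipe = deduplicate(data)
packed = {h: compress(b) for h, b in store.items()}

print(len(data))                              # 16384 bytes written
print(sum(len(b) for b in packed.values()))   # far fewer bytes stored
```

The trade-off in the Edugeek numbers shows up here too: every write now pays for a hash, a lookup, and a compression pass before it can land on disk.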

Solving the performance issue may well be the deciding question for enterprises weighing deduplication of their primary storage. Solid state disks (SSDs), with their high performance, could help here, and Crump thinks SSDs will be the perfect complement to deduplication technologies. Some vendors already use high-speed SSDs as a cache, acknowledging writes from the application immediately and destaging them to disk in the background. NetApp and startup StorSimple, for example, are two vendors that have integrated SSDs into their deduplication platforms. Whether these tweaks are enough to give enterprises the performance they need remains to be seen.
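The pattern Crump describes can be sketched as a simple write-back cache: the application's write is acknowledged as soon as it lands on the fast tier, and the expensive hash-and-lookup work of deduplication is deferred to a background destage. This is an illustrative model only; the class and method names are hypothetical, and real arrays implement the equivalent in controller firmware.

```python
import hashlib

class WriteBackCache:
    """Sketch of the SSD-as-write-cache pattern: acknowledge writes from a
    fast staging tier, deduplicate to slower disk in the background."""

    def __init__(self):
        self.ssd_staging = []  # fast tier: raw writes, no dedup work yet
        self.disk_store = {}   # slow tier: content hash -> unique block

    def write(self, block: bytes) -> None:
        # The application sees only this cheap append; the hash-and-lookup
        # cost is deferred off the write path.
        self.ssd_staging.append(block)

    def flush(self) -> None:
        # Background destage: dedup runs while the application is unaffected.
        while self.ssd_staging:
            block = self.ssd_staging.pop(0)
            digest = hashlib.sha256(block).hexdigest()
            self.disk_store.setdefault(digest, block)

cache = WriteBackCache()
for _ in range(3):
    cache.write(b"x" * 4096)  # three identical writes, acknowledged instantly
cache.flush()                 # destage stores the block only once
print(len(cache.disk_store))  # 1
```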