
When Hashes Collide

If there were any doubt in my mind that data deduplication is a mainstream technology, it was wiped out when I saw, in the business section of The New York Times last week, a full-page ad from Symantec touting its deduplication technology. Even so, I still occasionally run into people who consider deduplication a dangerous form of black magic that is likely to mangle their data and end their careers. That attitude overestimates both the likelihood of a hash collision in deduplication and the reliability of more traditional backup media.

First, let's look at the reliability of the other components in your storage system. Today's hard drives are rated for one unrecoverable read error per 10^14 to 10^16 bits read (roughly 12 to 1,250TB). To detect read errors so the array controller can rebuild the data from an ECC (Error Checking and Correction) stripe, enterprise drives add a 16-bit CRC (Cyclic Redundancy Check) in the T10 Data Integrity Field (DIF), and that CRC will itself fail to detect about one in 64K (65,536) errors. As your data travels across an Ethernet or Fibre Channel network, it is error-checked using a 32-bit CRC, which will return the right value for the wrong data about once in four billion (2^32) times.
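To put those figures side by side, here's a quick back-of-the-envelope sketch in Python (my own arithmetic, not anything from a vendor spec) that converts the quoted bit-error rates into terabytes read and shows how many corrupted blocks an n-bit CRC lets slip through:

```python
# A back-of-the-envelope sketch (my own arithmetic, not from any vendor spec)
# that turns the error rates quoted above into concrete figures.

def bits_to_terabytes(bits: float) -> float:
    """Convert a count of bits into terabytes (10^12 bytes)."""
    return bits / 8 / 1e12

# Unrecoverable read error ratings: one bad read per 10^14 to 10^16 bits.
for rate in (1e14, 1e16):
    print(f"1 read error per {rate:.0e} bits ~= one per {bits_to_terabytes(rate):,.0f} TB read")

# An n-bit CRC lets roughly 1 in 2^n corrupted blocks through undetected.
for width in (16, 32):
    print(f"{width}-bit CRC misses about 1 in {2**width:,} corrupted blocks")
```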

Finally, if you're avoiding deduplication because you don't trust it, you write the data to an LTO-5 tape, which has a rated bit error rate of one in 10^17. Well, one in 10^17 sounds great! I mean, the odds of winning the Powerball lottery are two in 10^8. LTO-5 error rates are a billion times better than that! Of course, the spec sheet also says that figure covers non-media errors only, so errors caused by tape mishandling, overuse and the like aren't included, or even calculable.

So how do those reliability levels compare to a typical deduplicating backup target? Among hash-based deduplicating systems, SHA-1 is the most commonly used hash function. With a 20-byte (160-bit) hash value, the odds of two given blocks with different data generating the same hash are about one in 2^160, or roughly one in 10^48, which anyone will admit is a really big number. Of course, what we're actually worried about is the odds of any two blocks in our data center generating a hash collision, and that depends on the amount of data in the deduplication universe.
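The standard way to estimate that is the birthday-problem approximation: for n unique blocks and a b-bit hash, the chance of any collision is roughly n squared divided by 2^(b+1). Here's a minimal sketch of that math (my own, not from the article), assuming SHA-1's 160-bit output behaves like a uniformly random value and picking an 8KB block size purely for illustration:

```python
# A minimal sketch of the birthday-problem math behind hash-collision odds.
# It assumes SHA-1's 160-bit output behaves like a uniformly random value and
# uses the standard approximation P(collision) ~= n^2 / 2^(b+1) for n unique
# blocks and a b-bit hash.

def collision_probability(num_blocks: float, hash_bits: int = 160) -> float:
    """Approximate odds that any two of num_blocks distinct blocks hash alike."""
    return num_blocks ** 2 / (2 ** (hash_bits + 1))

# Example: an exabyte of unique data stored as 8KB blocks (the block size is
# an assumption chosen purely for illustration).
exabyte = 10 ** 18             # bytes
block_size = 8 * 1024          # bytes per block
blocks = exabyte / block_size  # ~1.2 x 10^14 blocks

print(f"{blocks:.1e} blocks -> collision probability ~ {collision_probability(blocks):.1e}")
```

Even with an exabyte of unique data, the result comes out around 5 x 10^-21, many orders of magnitude below the undetected-error rates of the drives, networks and tapes above.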

As my friend W. Curtis Preston says, it's more likely that, on any given day, Jessica Alba will come running to me to be mine forever than that two blocks in my data will wrongly generate the same hash. The former is, after all, possible: Ms. Alba and I are both alive. But given that I'm an old, fat, geeky guy in New Jersey and she's, well, Jessica Alba, it's highly improbable.
