When Hashes Collide

Howard Marks

December 15, 2010

If there was any doubt in my mind that data deduplication is a mainstream technology, it was wiped out when I saw--in the business section of The New York Times last week--a full-page ad from Symantec touting its deduplication technology. Even so, I still occasionally run into people who consider deduplication to be a dangerous form of black magic that is likely to mangle their data and end their careers. This attitude represents an overestimation of the likelihood of a hash collision in deduplication and of the reliability of more traditional backup media.

First, let's look at the reliability of the other components in your storage system. Today's hard drives are rated to fail to read a sector once every 10^14 to 10^16 bits, which works out to one unrecoverable read error per 12.5TB to 1,250TB read. To detect read errors and allow the array controller to rebuild the data from an Error Checking and Correction (ECC) stripe, enterprise drives add a 16-bit CRC (Cyclic Redundancy Check) in the T10 Data Integrity Field (DIF), which will itself fail to detect about one in 64K (65,536) errors. As your data travels across an Ethernet or Fibre Channel network, it is error-checked using a 32-bit CRC, which will return the right value for the wrong data about once in 4x10^9 (2^32) times.
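
If you want to check that arithmetic yourself, a few lines of Python will do it (a quick sketch of my own using the round spec-sheet figures above, not anything from a vendor):

```python
# Convert the quoted bit error rates into terabytes read per failure,
# and CRC widths into undetected-error odds.

def bits_to_tb(bits):
    """Decimal terabytes (10^12 bytes) represented by a bit count."""
    return bits / 8 / 1e12

for ber in (10**14, 10**16):
    print(f"one unrecoverable read per {ber:.0e} bits "
          f"= one per {bits_to_tb(ber):,.1f}TB read")
# -> one per 12.5TB and one per 1,250.0TB, respectively

# A b-bit CRC lets a random corruption through about 1 in 2^b times
print(f"16-bit CRC misses ~1 in {2**16:,} errors")   # 1 in 65,536
print(f"32-bit CRC misses ~1 in {2**32:,} errors")   # 1 in ~4.3 billion
```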

Finally, if you're avoiding deduplication because you don't trust it, you write the data to an LTO-5 tape, which has a bit error rate of one in 10^17. Well, one in 10^17 sounds great! I mean, the odds of winning the Powerball jackpot are about one in 2x10^8. LTO-5 error rates are still roughly half a billion times better than that! Of course, the spec sheet also says that figure covers only non-media errors, so errors caused by tape mishandling, overuse and the like aren't included--or even calculable.

So how do those reliability levels compare with a typical deduplicating backup target? Among hash-based deduplicating systems, SHA-1 is the most commonly used hash function. With a 20-byte (160-bit) hash value, the odds that two different blocks will generate the same hash are about one in 2^160, or roughly one in 10^48, which anyone will admit is a really big number. Of course, what we're worried about is the odds of two blocks in our data center generating a hash collision, and that depends on the amount of data in the deduplication universe.
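
If you've never looked inside one of these systems, the core idea is simple enough to fit in a toy Python sketch (my own simplification--real products add reference counting, a persistent index, compression and much more):

```python
import hashlib

def dedupe(blocks):
    """Store each unique block once, keyed by its SHA-1 digest."""
    store = {}    # 20-byte digest -> block contents
    recipe = []   # ordered digests needed to rebuild the stream
    for block in blocks:
        digest = hashlib.sha1(block).digest()  # 160-bit fingerprint
        # A block whose digest is already in the index is assumed to
        # be a duplicate and is never stored again -- this assumption
        # is exactly where a hash collision would corrupt a restore.
        if digest not in store:
            store[digest] = block
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original stream from the deduplicated store."""
    return b"".join(store[d] for d in recipe)
```

Every duplicate 8K block costs 20 bytes in the recipe instead of 8K in the store, which is where the capacity savings come from.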

As my friend W. Curtis Preston says, it's more likely that, on any given day, Jessica Alba will come running to me to be mine forever than that two blocks in my data will wrongly generate the same hash. The former is possible, after all. Ms. Alba and I are both alive, but, given the fact that I'm an old, fat, geeky guy in New Jersey and she's, well, Jessica Alba, it's highly improbable.

Curtis even had a math Ph.D. create a spreadsheet to calculate the odds of a hash collision, which you can download from his website at BackupCentral.com/hashodds.xls. For the probability of a hash collision to equal the one-in-10^15 odds of a disk read error, you would need 5x10^16 data blocks, or 432 exabytes of data in 8K blocks. I cheated and used the high-precision calculator at www.ttmath.org/online_calculator to compute that, for a deduping system with four petabytes of stored data in 8K blocks, the probability of a hash collision is 4.5x10^-26, or about the same as a tape read error with perfect media.
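
You can sanity-check that with the standard birthday-problem approximation, p ≈ n^2 / 2^(b+1) for n blocks and a b-bit hash--a formula of my own choosing here, and one that lands in the same order of magnitude as the spreadsheet:

```python
from math import log2

def collision_probability(num_blocks, hash_bits=160):
    """Birthday-bound approximation: p ~ n^2 / 2^(b+1)."""
    # Work in log space so 2^160-scale numbers don't overflow floats
    return 2.0 ** (2 * log2(num_blocks) - (hash_bits + 1))

blocks = 4 * 10**15 // 8192                     # 4PB in 8K blocks
print(f"{collision_probability(blocks):.1e}")   # prints ~8.2e-26
```

Plug in 5x10^16 blocks instead and the same formula comes out right around the one-in-10^15 disk read error rate.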

Now, it's true that people tend to shun catastrophic events, even very unlikely ones, while accepting much higher probabilities of events with lesser consequences. As a result, we mine coal for electricity knowing miners will die and people will get asthma, but we won't build nuclear power plants. But a hash collision doesn't ruin all your backup data. It just means that one block will be restored with the wrong contents, just like a tape or disk read error.

One hash collision, one corrupt file to restore, once in every 10^26 times you back up 4PB of data. Seems like a reasonable risk to me. After all, I cross the street every morning to walk the dog, and I could get run over by a streetcar--or by Jessica Alba, if she reads this blog. No, I won't calculate the probability of that.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
