Storage
Howard Marks
Commentary
The Truth About Storage Reliability

A pair of studies on storage technology reveals some disturbing facts about disk drive performance and reliability.

Two papers presented at the USENIX File and Storage Technologies (FAST) conference in February challenge many of the assumptions we've long used as the basis of our storage-system designs, most significantly the 1-million-hour or higher MTBF found on the spec sheet of almost every disk drive sold for server use.

In both "Disk Failures in the Real World: What Does an MTTF of 1 Million Hours Mean to You," by Bianca Schroeder and Garth A. Gibson of Carnegie Mellon University, and "Failure Trends in a Large Disk Drive Population," by Eduardo Pinherio, Wolf-Deitrich Weber and Luis André Barroso of Google, the actual failure rate of drives was typically more than four times the 0.88 percent AFR (Annual Failure Rate) that a million-hour MTBF represents.

Each group studied the replacement history of more than 100,000 disk drives over data center lifetimes. CMU's samples included both enterprise (SCSI and Fibre Channel) and high-capacity drives with SATA interfaces. Google used desktop-style ATA and SATA drives with spec-sheet MTBFs of 400,000 hours in their custom servers. Both studies used the same definition of a drive failure that you or I would use: If a drive had to be replaced by the data center maintenance team for any reason, it was declared a failed drive.

As a charter member of the "you can tell a vendor is lying if his lips are moving" club, I wasn't all that surprised that drives fail more than once every million hours. I was a bit surprised, though, by some of the studies' other findings. In the CMU study, SATA drives failed at about the same rate as the enterprise SCSI and Fibre Channel (FC) drives, contrary to the conventional wisdom that enterprise drives are 50 percent to 100 percent more reliable than their SATA counterparts.

Even more surprising, drive-failure rates increased as drives aged, even within the five years most of us consider the reasonable service life of a disk drive, and there was no observed peak in failures early in the drives' lives due to infant mortality. In fact, drive failures in years 4 and 5 ran up to 10 times the rate predicted by the vendor spec sheets.

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., and concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage ...