
The Truth About Storage Reliability

Two papers presented at the USENIX File and Storage Technologies (FAST) conference in February challenge many of the assumptions we've long used as the basis of our storage-system designs, most significantly the 1-million-hour (or higher) MTBF found on the spec sheet of almost every disk drive sold for server use.

In both "Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?," by Bianca Schroeder and Garth A. Gibson of Carnegie Mellon University, and "Failure Trends in a Large Disk Drive Population," by Eduardo Pinheiro, Wolf-Dietrich Weber and Luis André Barroso of Google, the actual failure rate of drives was typically more than four times the 0.88 percent AFR (Annual Failure Rate) that a million-hour MTBF represents.
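The MTBF-to-AFR conversion behind that 0.88 percent figure is simple arithmetic. A quick sketch (assuming drives run continuously, 24 hours a day, 365 days a year):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def mtbf_to_afr(mtbf_hours: float) -> float:
    """Convert a spec-sheet MTBF (hours) to an annual failure rate.

    Approximation valid when MTBF >> one year: the expected fraction
    of drives failing per year is hours-per-year / MTBF.
    """
    return HOURS_PER_YEAR / mtbf_hours

# 1,000,000-hour enterprise drive: ~0.88% per year
print(f"{mtbf_to_afr(1_000_000):.2%}")

# 400,000-hour desktop-class drive (as in the Google study): ~2.19% per year
print(f"{mtbf_to_afr(400_000):.2%}")
```

Multiply those figures by four or more and you get the replacement rates the two studies actually observed.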

Each group studied the replacement history of more than 100,000 disk drives over data center lifetimes. CMU's samples included both enterprise (SCSI and Fibre Channel) and high-capacity drives with SATA interfaces. Google used desktop style ATA and SATA drives with spec-sheet MTBFs of 400,000 hours in their custom servers. Both studies used the same definition of a drive failure that you or I would use: If a drive had to be replaced by the data center maintenance team for any reason, it was declared a failed drive.

As a charter member of the "you can tell a vendor is lying if his lips are moving" club, I wasn't all that surprised that drives fail more than once every million hours. I was a bit surprised, though, by some of the studies' other findings. In the CMU study, SATA drives failed at about the same rate as the enterprise SCSI and Fibre Channel (FC) drives, contrary to the conventional wisdom that enterprise drives are 50 percent to 100 percent more reliable than their SATA counterparts.

Even more surprising was that drive-failure rates increased as drives aged, even within the five years most of us consider the reasonable service life of a disk drive. There was also no observed peak in failures at the beginning of the drives' lives from infant mortality. In fact, drive failures in years 4 and 5 ran up to 10 times the rate predicted by the vendor spec sheets.