In fact, Backblaze reports that about 22% of its disk drives fail in their first four years -- an annual failure rate of over 5%. As expected, the distribution of drive failures over time forms a bathtub-shaped curve: some drives suffer infant mortality, as manufacturing defects of one sort or another cause parts to fail within the first year or so; others fail at random throughout their lives; and as drives age, parts and lubricants wear out, pushing the failure rate back up.
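The cumulative figure and the annual rate are two views of the same number. A quick back-of-the-envelope sketch in Python, assuming a constant per-year failure rate (the function name is mine, not Backblaze's):

```python
# Convert a cumulative failure fraction over several years into the
# equivalent constant annual failure rate (AFR), assuming each drive's
# survival in one year is independent of the others.
def annual_failure_rate(cumulative_failed, years):
    surviving = 1.0 - cumulative_failed
    return 1.0 - surviving ** (1.0 / years)

# Backblaze's figure: ~22% of drives fail within their first four years.
afr = annual_failure_rate(0.22, 4)
print(f"Equivalent AFR: {afr:.1%}")  # works out to roughly 6% per year
```

The real curve is bathtub-shaped rather than constant, so the true rate is higher than this in years one and four and lower in between, but the averaged figure is what matters for capacity and spares planning.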
I’m sure some of you are thinking, “But Howard, Backblaze doesn’t only -- or even primarily -- use enterprise-class disk drives. My array vendor does qualification and testing of every drive it sells me. That’s why I’m happy to pay two to five times what a Seagate or Western Digital drive would cost from Newegg.com for an HP/HDS/EMC nearline SAS or SATA drive.” Those “special” drives must have higher reliability, mustn’t they?
When disk drives were in short supply due to the floods in Thailand last year, Backblaze went so far as to encourage users to buy Seagate USB drives from Costco and send them to Backblaze to be shucked of their cases and USB controllers.
While a casual observer may believe, to paraphrase Gertrude Stein, that a 4TB disk is a 4TB disk is a 4TB disk, there are some significant differences between nearline and consumer disk drives.
[Read about Seagate's new Kinetic Open Storage platform, an architecture that creates smart disk drives with Ethernet interfaces in "Seagate Boosts Disk Drive Intelligence."]
The most significant are in the firmware: nearline drives -- which are presumably connected to a RAID controller of some sort -- report unrecoverable errors to the controller quickly, while consumer drives keep retrying the failed sector on their own. Nearline drives are also programmed to cope better with the sympathetic vibrations set up by large groups of drives in arrays, and some vendors use more powerful magnets in their nearline drive positioners. These differences can result in better performance, as demonstrated in a video by Sun Microsystems.
At least one vendor has started shipping its shingled recording drives in USB cases under the theory that the USB port acts as a bottleneck that hides the shingled drive's lower random I/O performance. If Backblaze used some of those disks, they would perform quite differently from conventional drives.
Google’s experience demonstrated that there wasn’t a significant reliability difference between consumer and enterprise disks. Clearly, Google, which has been known to use Velcro to hold disk drives to motherboards, represents a less-than-optimal environment for disk drives. The same holds for Backblaze, which offered a first-generation pod that -- in my opinion -- was a bit lacking in the vibration mitigation department. Both make up for it in the application software layer.
Still, as your organization’s data custodian, you should plan for a 5% AFR and count yourself lucky if you have fewer drive failures. That means not only keeping spares and service contracts up to date, but also moving beyond single-parity RAID 5 to more advanced data protection schemes so that failed drives can be rebuilt even if another disk fails along the way.
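That 5% planning figure translates directly into spares and rebuild math. A back-of-the-envelope sketch in Python, assuming a constant failure rate -- the fleet size, RAID group size, and rebuild window below are illustrative numbers of my own, not from the article:

```python
# Rough spares and rebuild-risk planning under a 5% AFR assumption.

def expected_failures(fleet_size, afr=0.05):
    """Expected drive failures per year across a fleet."""
    return fleet_size * afr

def p_second_failure_during_rebuild(group_drives, rebuild_days, afr=0.05):
    """Probability that at least one surviving drive in a RAID group
    fails while a rebuild is running (constant-rate approximation) --
    the scenario single-parity RAID 5 cannot survive."""
    daily = 1.0 - (1.0 - afr) ** (1.0 / 365.0)    # per-drive daily failure chance
    exposure = (group_drives - 1) * rebuild_days  # drive-days at risk
    return 1.0 - (1.0 - daily) ** exposure

print(expected_failures(200))                  # ~10 failures a year in a 200-drive shop
print(p_second_failure_during_rebuild(12, 2))  # ~0.3% per rebuild, 12-drive group
```

A fraction of a percent per rebuild sounds small, but multiplied across many groups and many years of rebuilds -- and stretched by the multi-day rebuild times of today's large drives -- it is exactly why dual-parity and erasure-coded schemes exist.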
Once you accept that disk drives are less reliable than you thought, the more predictable failures SSDs can suffer as they face write endurance exhaustion should be a bit less scary.