02:09 PM
Howard Marks
Commentary

On Large Drives And RAID

Xyratex's recent announcement that they've qualified Hitachi's 2 TB nearline drive for their disk systems got me thinking about how the RAID techniques of the past don't really address the needs of systems with many ginormous drives. As drives get bigger I worry that basic RAID-5 protection isn't sufficient for these beasts.

For a company that isn't a household name in even the geekiest of households, Xyratex plays a strategic role in the storage industry. Many of the big names in the business, most significantly former parent IBM, OEM Xyratex RAID arrays as their low- to midrange products. Even more vendors use Xyratex as a supplier of JBODs and SBODs or as a contract manufacturer. We should start seeing 2TB drives in arrays from better-known vendors in the next few months.

My concerns aren't based on the quality or failure rate of big drives, but on the time it takes to rebuild onto a hot spare after a drive failure. Just as scary, the quantity of data that has to be read, processed, and written to the replacement drive is large enough that hitting the drive's unrecoverable read error rate during the rebuild becomes a realistic possibility.
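To put a rough number on that, here's a back-of-envelope sketch. The unrecoverable read error (URE) spec of one per 10^14 bits is my assumption for a nearline-class drive, not a figure from the article:

```python
import math

# Probability of hitting at least one unrecoverable read error (URE)
# while reading an entire 2 TB drive, e.g. during a rebuild.
# Assumes a typical nearline-class spec of 1 URE per 1e14 bits read.

DRIVE_BYTES = 2 * 10**12          # 2 TB drive
BITS_READ = DRIVE_BYTES * 8       # bits read in a full-drive pass
URE_RATE = 1e-14                  # errors per bit read (assumed spec)

# P(at least one URE) = 1 - (1 - p)^n, well approximated by 1 - exp(-n*p)
p_ure = 1 - math.exp(-BITS_READ * URE_RATE)
print(f"P(URE reading one full 2 TB drive): {p_ure:.1%}")
```

And that is for reading a single drive; a RAID-5 rebuild has to read every surviving drive in the set, which multiplies the exposure accordingly.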

In the few short years that capacity-oriented drives, mostly but not exclusively with SATA interfaces, have worked their way into the data center, their capacity has increased eightfold while their throughput has barely doubled. The ~130MB/s sustained data transfer rate that 1 and 2TB drives deliver is sufficient for the backup and archiving applications enterprises use them for.

However, even if a RAID controller could rebuild a failed drive at 130MB/s, it would take over 4 hours to rebuild a 2TB drive. In the real world, I'd expect it to take at least 12 hours, even longer if the array is busy, since rebuilding is a lower priority task.
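The arithmetic behind those figures is straightforward; the one-third throttling factor is my assumption to illustrate the real-world case, not a number from the article:

```python
# Rough rebuild-time estimate for a 2 TB drive at full sustained rate.

DRIVE_BYTES = 2 * 10**12      # 2 TB
RATE = 130 * 10**6            # 130 MB/s sustained, best case

best_case_hours = DRIVE_BYTES / RATE / 3600
print(f"Best case: {best_case_hours:.1f} hours")

# If the array throttles rebuild I/O to roughly a third of that while
# serving production workloads (an assumed factor), the window triples:
throttled_hours = best_case_hours * 3
print(f"Throttled: {throttled_hours:.1f} hours")
```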

With an MTBF of 1.2 million hours, one could be lulled into a false sense of security by calculating that the probability of two of the 5-20 drives in a RAID set failing is somewhat lower than that of winning the Publishers Clearing House Sweepstakes. But someone wins the sweepstakes every year. Drive failures come in bunches because the environmental problems, either in manufacturing or in deployment, that cause drive failures affect not just one drive but often a whole array or data center.
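A sketch of that reassuring-but-naive calculation shows why it misleads. Assuming independent, exponentially distributed failures (the very assumption that correlated failures break), an 8-drive RAID-5 set and a 12-hour rebuild window are my illustrative choices:

```python
import math

MTBF_HOURS = 1.2e6        # vendor spec, independence assumed
REBUILD_HOURS = 12        # pessimistic rebuild window from above
SURVIVORS = 7             # remaining drives in an assumed 8-drive RAID-5 set

# Per-drive failure rate under the exponential model
lam = 1 / MTBF_HOURS

# P(at least one surviving drive fails during the rebuild window)
p_second = 1 - math.exp(-lam * REBUILD_HOURS * SURVIVORS)
print(f"P(second failure during one rebuild): {p_second:.5%}")
```

The answer comes out vanishingly small, on the order of a few in 100,000 rebuilds, which is exactly the false comfort the independence model provides. Correlated failures from a bad manufacturing batch or a shared environmental problem can push the real-world odds orders of magnitude higher.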

Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage ...