For a company that isn't a household name in even the geekiest of households, Xyratex plays a strategic role in the storage industry. Many of the big names in the business, most significantly former parent IBM, OEM Xyratex RAID arrays as their low- to midrange products. Even more vendors use Xyratex as a supplier of JBODs and SBODs or as a contract manufacturer. We should start seeing 2TB drives in arrays from better-known vendors over the next few months.
My concerns aren't based on the quality or failure rate of big drives, but on the time it must take to rebuild onto a hot spare after a drive failure. Just as scary: the quantity of data that has to be read, processed and written to the replacement drive is large enough to run up against the drive's unrecoverable read error rate.
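To put numbers on that second worry, here's a quick sketch. The 1-per-10^14-bits unrecoverable read error (URE) rate is a common spec for capacity SATA drives of this class, not a figure from the article, so treat it as an assumption and check your own drive's datasheet.

```python
# Sketch: expected unrecoverable read errors (UREs) while reading an
# entire large drive end to end, assuming the commonly specified SATA
# URE rate of 1 error per 10^14 bits read (an assumption, not a figure
# from the article).

URE_RATE = 1e-14          # errors per bit read (assumed typical spec)
DRIVE_BYTES = 2e12        # 2 TB drive, decimal as drive vendors count

bits_read = DRIVE_BYTES * 8
expected_ures = bits_read * URE_RATE
print(f"Expected UREs reading one 2TB drive: {expected_ures:.2f}")

# In a parity rebuild, every surviving drive must be read in full, so
# the expected error count scales with the number of surviving drives.
surviving = 6             # hypothetical 7-drive RAID set
print(f"Expected UREs across {surviving} surviving drives: "
      f"{surviving * expected_ures:.2f}")
```

Under those assumptions, a single full read of a 2TB drive expects about 0.16 UREs, and reading six surviving drives during a rebuild pushes the expectation close to one, which is why the data volume "rivals" the error rate.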
In the few short years that capacity-oriented drives, mostly but not necessarily with SATA interfaces, have worked their way into the data center, their capacity has increased eightfold while their throughput has barely doubled. The ~130MB/s sustained data transfer rate that 1 and 2TB drives deliver is sufficient for the backup and archiving applications enterprises use them for.
However, even if a RAID controller could rebuild a failed drive at 130MB/s, it would take over 4 hours to rebuild a 2TB drive. In the real world, I'd expect it to take at least 12 hours, even longer if the array is busy, since rebuilding is a lower-priority task.
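The arithmetic behind those figures is straightforward. The 130MB/s best case is from the text; the throttled rate that produces a ~12-hour rebuild is my assumption, chosen only to show what fraction of sequential bandwidth a busy array might leave for rebuilding.

```python
# Back-of-the-envelope rebuild times for a 2 TB drive: the best-case
# 130 MB/s sustained rate from the text, versus an assumed throttled
# rate on a busy array (the 12-hour figure implies roughly a third of
# the sequential rate; that ratio is my assumption).

DRIVE_MB = 2_000_000      # 2 TB in MB (decimal, as drive vendors count)

def rebuild_hours(rate_mb_s):
    """Hours to write the full drive at a given sustained rate."""
    return DRIVE_MB / rate_mb_s / 3600

print(f"Best case  (130 MB/s): {rebuild_hours(130):.1f} h")  # ~4.3 h
print(f"Busy array  (45 MB/s): {rebuild_hours(45):.1f} h")   # ~12.3 h
```

The point the numbers make: even flat-out, a 2TB rebuild takes over four hours, and any contention with production I/O multiplies that quickly.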
With an MTBF of 1.2 million hours, one could be lulled into a false sense of security by calculating that the probability of two of the 5-20 drives in a RAID set failing is somewhat lower than that of winning the Publishers Clearing House Sweepstakes. Yet someone wins the sweepstakes every year. Drive failures come in bunches because the environmental problems, whether in manufacturing or in deployment, that cause drive failures affect not just one drive but often a whole array or data center.
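The lulling math looks something like the sketch below. It uses the 1.2M-hour MTBF from the text, a 12-hour rebuild window, and the article's 5-20 drive set sizes, and it assumes independent, exponentially distributed failures, which is exactly the bad assumption the paragraph warns about: correlated failures make the real risk far higher.

```python
# Sketch of the "false sense of security" calculation: probability that
# a second drive fails during a 12-hour rebuild, assuming independent
# exponential failures at a 1.2M-hour MTBF. This is the optimistic
# model the article criticizes, not a realistic risk estimate.
import math

MTBF_HOURS = 1.2e6
REBUILD_HOURS = 12

# Chance one given drive fails during the rebuild window.
p_one = 1 - math.exp(-REBUILD_HOURS / MTBF_HOURS)

for drives in (5, 20):
    # Chance at least one of the remaining drives fails in that window.
    p_any = 1 - (1 - p_one) ** (drives - 1)
    print(f"{drives}-drive set: P(second failure during rebuild) "
          f"~ {p_any:.1e}")
```

The independent-failure model puts the second-failure probability around 10^-5 to 10^-4 per rebuild, which sounds negligible; the article's point is that shared environmental causes invalidate the independence assumption, so the lottery analogy cuts the other way.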