Choosing High-Capacity Drives

From hard drives for specific uses to hybrid drives and various capacity-increasing technologies, there are a lot of options. How do you decide which is best?

Jim O'Reilly

January 26, 2015


Do you long for a return to the days when server hard drives came in four speeds (5400, 7200, 10K, and 15K RPM) and two classes (desktop and enterprise)? Today, only the two slower speeds remain (along with a couple of 10K holdouts from Western Digital), but more importantly, we're heading toward a single drive class of high-capacity bulk storage, with the fast drives replaced by solid-state drives.

If you think this simplifies hard drive taxonomy, think again. Increasing capacity is proving to be a tough task for all three remaining hard drive vendors. Perpendicular Magnetic Recording (PMR) technologies have reached their limit, and capacity advances now come from read-mostly shingled magnetic recording, helium-filled drives, or pre-formatted media. The pace of capacity increases has halved and will likely slow even more.

To further complicate matters, margin-challenged hard drive providers are looking for boutique markets to create differentiation and avoid the race to the bottom on price. Drives optimized for surveillance systems, big data, and object storage are entering the market, and vendors are trying to lift their margins with a variety of hybrid drives.

All of this is confusing to IT professionals trying to figure out the best option for storing corporate data. Are those specialty drives worth the extra cost? Which drive technology is the best value for money? Which type of drive will be most reliable?

Specialized drives
Buying drives optimized for a task sounds plausible on the surface, but the touted improvements are only incremental. A specialty drive may be 10% faster than a standard bulk drive, but with standard drives at $40 to $60 per terabyte, the price premium is a lot more than 10%. That alone raises the question of whether it makes more sense to simply buy an extra standard drive or two. Moreover, these performance improvements are benchmark measurements, and as the small print says, “Your actual results may differ.”
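To put the trade-off in concrete terms, here is a back-of-the-envelope sketch in Python. The dollars-per-terabyte midpoint and the 10% figure come from the numbers above; the 30% specialty premium is an assumption for illustration, not a quoted price.

    # Back-of-the-envelope: specialty drive vs. an extra standard drive.
    # Figures are illustrative, based on the rough numbers in the text.
    STANDARD_PER_TB = 50.0      # midpoint of the $40-$60/TB range
    SPECIALTY_PREMIUM = 0.30    # assumed 30% price premium (hypothetical)
    PERF_GAIN = 0.10            # ~10% faster, per vendor benchmarks

    specialty_per_tb = STANDARD_PER_TB * (1 + SPECIALTY_PREMIUM)

    # Dollars per terabyte, per unit of relative performance
    standard_cost_per_perf = STANDARD_PER_TB / 1.0
    specialty_cost_per_perf = specialty_per_tb / (1.0 + PERF_GAIN)

    print(f"Standard:  ${standard_cost_per_perf:.2f}/TB per unit of performance")
    print(f"Specialty: ${specialty_cost_per_perf:.2f}/TB per unit of performance")

Unless the price premium falls below the performance gain, the standard drive wins on simple dollars per throughput.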

Potentially, these new specialized drives could add some extra life to an existing installation, but in many cases there may be alternatives that do a better job and have some lasting usefulness.

For example, you could add a handful of SSDs or more DRAM plus auto-tiering software to your installation. That handles peak-load and startup issues while leaving bulk storage on the old drives. This is likely an inexpensive solution compared with buying all-new drives, and if extra capacity is needed, it can be added with the largest-capacity drives available.
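Tiering products differ in the details, but the core placement decision is easy to picture. Here is a minimal sketch, assuming a simple access-count threshold; the threshold and tier names are invented for illustration and match no particular product.

    # Minimal sketch of an auto-tiering placement decision (illustrative only;
    # real tiering software tracks access heat over sliding time windows).
    HOT_THRESHOLD = 100  # accesses per day before data is promoted (assumed)

    def choose_tier(accesses_per_day: int) -> str:
        """Place frequently accessed data on flash, the rest on bulk HDD."""
        return "ssd_tier" if accesses_per_day >= HOT_THRESHOLD else "hdd_tier"

    print(choose_tier(250))  # -> ssd_tier (hot data absorbs the peak load)
    print(choose_tier(3))    # -> hdd_tier (cold data stays on the old drives)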

Similarly, hybrid drives for the server space carry a reasonable amount (say, 128 GB) of on-board flash, but SSD prices have dropped considerably, and hybrids sell at a premium compared with separate SSDs and standard bulk HDDs.

One variant of the standard bulk hard drive is a radical change from today’s drives, however. This is the Ethernet-interfaced class intended for object storage and big data applications. These drives are so new that the jury is still out on use cases, but making each drive an Ethernet endpoint allows the network and switch infrastructure between drive and host to be simple and commoditized.

Ethernet drives fit the software-defined storage model really well. The first appliance-level Ethernet drive products will appear in 2015, and market receptivity will unfold from there. This rates as a revolutionary new approach, with mainstream use probably at least two years away.
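To picture what "drive as Ethernet endpoint" means for software, consider the sketch below. The address, port, and HTTP-style object interface are entirely hypothetical; each vendor defines its own protocol for these drives.

    # Illustrative only: a hypothetical object put/get against a drive that
    # is an Ethernet endpoint. The address and URL scheme are invented;
    # real Ethernet drives each define their own access protocol.
    import urllib.request

    DRIVE = "http://10.0.0.17:8080"  # hypothetical per-drive network address

    def put_object(key: str, data: bytes) -> None:
        req = urllib.request.Request(f"{DRIVE}/objects/{key}", data=data, method="PUT")
        urllib.request.urlopen(req)

    def get_object(key: str) -> bytes:
        with urllib.request.urlopen(f"{DRIVE}/objects/{key}") as resp:
            return resp.read()

    # The host talks to the drive directly over commodity Ethernet; no SAS
    # or SATA controller sits in between.
    put_object("backup-0001", b"payload")

The appeal is architectural: the same cheap switches and cabling that carry everything else also carry storage traffic, drive by drive.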

Capacity-increasing technologies
This leaves the question of which capacity-increasing technology to trust. Testing on very dense drives is a bit thin at this point, but lower-capacity versions of shingled drives (from Seagate) and helium-filled drives (from WD/HGST) have been on the market for about a year, so estimates of reliability and cost/performance are available.

Online backup provider Backblaze regularly reports on drive reliability, performance, and other issues. The company is currently deploying 6 TB bulk drives. Its preliminary assessment is that Western Digital's 5400 RPM 6 TB drive outperforms Seagate’s 7200 RPM drive, which seems counterintuitive. The likely reason is that Backblaze's I/O pattern is almost entirely sequential writes, and WD’s drives have a slightly higher sustained data rate.
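A small edge in sustained data rate compounds quickly in a sequential-write shop like Backblaze's. As a rough illustration (the MB/s figures below are assumptions, not Backblaze measurements), here is the time it takes to fill a 6 TB drive at two different rates:

    # Time to fill a 6 TB drive at a given sustained sequential write rate.
    # The MB/s figures are illustrative assumptions, not measured values.
    CAPACITY_BYTES = 6 * 10**12

    for label, mb_per_s in [("Drive A", 175), ("Drive B", 160)]:
        hours = CAPACITY_BYTES / (mb_per_s * 10**6) / 3600
        print(f"{label}: {mb_per_s} MB/s -> {hours:.1f} hours to fill 6 TB")

In that kind of workload, the drive with the higher sustained rate finishes sooner, regardless of spindle speed.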

In three months of operation with a limited set of drives, neither drive type has failed. In fact, the only notable difference is that the Seagate drive draws about a watt more power.

Long term, helium leakage is a potential issue. Helium is a very small atom, notorious for finding its way out of sealed containers. In a helium-filled drive, losing 20% or more of the helium is probably fatal, since the helium reduces both heat generation from atmospheric friction and turbulence at the recording head. It takes time for this sort of effect to show up.

This discussion has to be viewed in the light of 3D NAND SSD development. Stacking cells allows much more capacity from the same flash chip. Flash drive vendors are projecting capacity parity with bulk hard drives by the end of 2016, with Intel forecasting cost-per-gigabyte parity in 2017. The result will be the replacement of bulk hard drives with SSDs in many applications, especially archiving use cases.

Overall, there appears to be little difference between vendors' 6 TB bulk hard drives right now, although helium leakage is a concern with WD’s product. Specialized drives are of limited value, considering that retail prices for standard 3 TB drives recently fell below $90. On a three- to five-year horizon, bulk HDDs will be under siege from 3D NAND in a battle that will play out much like enterprise HDD versus SSD.

My advice is to look for array chassis and controllers with high bandwidth, so that the inevitable changeover to much faster SSDs doesn’t require a forklift upgrade.
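As a quick sanity check on why headroom matters, compare the aggregate bandwidth of an array slot-for-slot before and after an SSD changeover. The per-drive rates below are typical round numbers, not vendor specifications:

    # Aggregate bandwidth demand for a 24-slot array, before and after a
    # changeover from bulk HDDs to SSDs. Per-drive rates are rough assumptions.
    SLOTS = 24
    HDD_MB_S = 150   # typical sustained rate for a bulk hard drive (assumed)
    SSD_MB_S = 500   # typical SATA SSD rate as of this writing (assumed)

    print(f"HDD array: {SLOTS * HDD_MB_S / 1000:.1f} GB/s aggregate")
    print(f"SSD array: {SLOTS * SSD_MB_S / 1000:.1f} GB/s aggregate")
    # A controller sized only for the HDD number becomes the bottleneck the
    # day the drives are swapped -- hence the call for bandwidth headroom.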

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
