Commentary | George Crump

Hard Drives Get Squeezed

A number of developments are speeding the transition to solid-state storage.

A few years ago, as solid-state storage was becoming more prominent in the data center, especially thanks to flash memory, I predicted that hard drives would no longer be the primary destination for data by the end of this decade. At the time, I thought that might be an aggressive prediction, but now it may be a conservative one--hard drives could lose their role as primary storage well before the end of the decade, making the technology prematurely obsolete.

There are several drivers that may cause this to happen. The first is one that we could not have predicted a few years ago--the flooding that has so severely impacted hard drive production facilities in Thailand. While the resulting loss of production capacity will mostly affect desktop, laptop, and server hard drives, it will also affect enterprise storage systems--if only by keeping prices higher.

This shortage will force customers to look for other options for desktops, laptops, and servers. As we discussed in our article "The Hard Drive Shortage of 2012," we think many of these markets are ideally suited for solid-state storage technology, whether in the form of drive-form-factor SSDs, SSD DIMMs, or PCIe SSD boards. As noted in that article, each of these systems can benefit from the performance increase that solid-state storage delivers, and in many cases they no longer need the on-board capacity that hard drives offer.

The other key driver that complements the move to SSD in these systems is the availability of massive alternative capacity at affordable prices. The most obvious option here is, of course, cloud storage. While most cloud storage providers are hard drive-based, they should be able to continue to acquire the technology in large enough quantities to remain competitive. They also have a much better chance of making optimal use of the capacity they purchase, thanks to their multi-tenant architectures.

Tape cannot be left out of this discussion either, as it can be the perfect complement to a data center that is predominantly solid-state based. In this configuration, data would move across tiers of solid-state storage as it ages and then be migrated to tape for long-term retention. This architecture would clearly provide better performance for active and near-active data, while offering a readily accessible, cost-effective, and power-efficient long-term storage area. Tape systems can now be easily indexed for quick retrieval and, as we discussed in our article "What Is LTFS," formats like the Linear Tape File System now provide data portability and interchangeability.
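As a rough illustration of the tiering just described, the sketch below assigns data to a storage tier by the age of its last access. The tier names and thresholds are hypothetical assumptions for illustration, not taken from any particular product:

```python
from datetime import datetime, timedelta

# Hypothetical age thresholds for an SSD-to-tape tiering policy;
# real systems would tune these per workload.
TIERS = [
    (timedelta(days=30), "performance-ssd"),   # hot, active data
    (timedelta(days=180), "capacity-ssd"),     # near-active data
]
TAPE_TIER = "tape-archive"  # long-term tier, e.g. an LTFS-formatted library

def tier_for(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier based on how long ago the data was accessed."""
    age = now - last_access
    for max_age, tier in TIERS:
        if age <= max_age:
            return tier
    return TAPE_TIER

# Data untouched for two years lands on tape:
print(tier_for(datetime(2010, 1, 1), datetime(2012, 1, 1)))  # -> tape-archive
```

A policy engine like this runs periodically; the key design point is that tape only ever receives data that has already aged out of both solid-state tiers.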

The performance improvement that SSD-based storage can deliver will hasten the potential demise of hard disk systems in another way. With hard drive-based technology, one of the most common methods of improving performance is to aggregate several hard drives into a virtual volume or RAID group. When data is scattered across these drives, the increased number of drive heads allows for faster response times. Solid-state technology does not need this type of aggregation to provide improved performance. Our recommendation for more than a year now has been that any time you're adding hard drives to improve performance, you should stop and consider some form of solid-state storage technology.
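Some back-of-the-envelope arithmetic shows why aggregation loses to solid state for random I/O. The per-device figures below are rough, order-of-magnitude assumptions rather than vendor specifications:

```python
# Illustrative comparison: random-I/O capability of aggregated hard
# drives vs. a single SSD. Per-device numbers are rough assumptions.
HDD_IOPS_15K = 180        # one 15K RPM drive, random I/O
SSD_IOPS = 20_000         # one mainstream SSD of the era

def drives_needed(target_iops: int, per_drive_iops: int) -> int:
    """Drives to stripe together to reach a target random-I/O rate."""
    return -(-target_iops // per_drive_iops)  # ceiling division

# Matching one SSD's random I/O with 15K drives takes a large stripe set:
print(drives_needed(SSD_IOPS, HDD_IOPS_15K))  # -> 112
```

Under these assumptions, it takes over a hundred spindles to match one SSD on random I/O, which is exactly the "adding drives for performance" pattern the recommendation above warns against.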

Even in environments that adopt caching as a first step toward solid-state storage, the move typically results in the purchase of fewer hard drives. The cache can be built large enough that the chance of a cache miss is relatively low, so most active data is served from memory-based storage. This relegates the conventional storage system to almost an archive role. As that happens, the drives purchased for the system will be higher-capacity and slower, and there will be fewer of them.
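The impact of a high hit rate can be sketched with a weighted-average latency calculation. The latency figures below are illustrative assumptions, not measurements:

```python
# Effective average read latency of a cached storage system. The
# per-medium latency figures are rough illustrative assumptions.
CACHE_LATENCY_MS = 0.1   # read served from the flash cache
DISK_LATENCY_MS = 8.0    # random read from a hard drive

def effective_latency_ms(hit_rate: float) -> float:
    """Weighted-average latency for a given cache hit rate (0..1)."""
    return hit_rate * CACHE_LATENCY_MS + (1 - hit_rate) * DISK_LATENCY_MS

# A 95% hit rate brings average latency to roughly 0.5 ms,
# versus 8 ms for an uncached hard drive read.
avg = effective_latency_ms(0.95)
```

Because the disk tier is touched only on misses, its speed barely affects the average once the hit rate is high, which is why the remaining drives can be slower, higher-capacity models.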

As the decade closes, there will still be large farms of hard drive-based data centers, but these may be limited to specific use cases; cloud storage providers and large-scale data analytics are two good examples. It's reasonable to assume, though, that the typical organizational data center will be predominantly solid-state in its storage infrastructure. As we've discussed in the past, those data centers are likely to include multiple levels of solid-state technology. Clearly the move is on, and the current hard drive shortage is adding fuel to the fire, significantly speeding up the transition.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.
