As I wrote often in 2013, I believe tape remains a critical component of the datacenter, especially for backup and archive processes. Thanks to favorable economics and the popularization of three tape technologies, 2013 was a very successful year for tape.
Success driven by growth
Despite many predictions to the contrary, tape continued to be an important part of the storage infrastructure in 2013. Part of the reason is that reports of tape's extinction have been greatly exaggerated. Most enterprises continue to count on tape as a storage mechanism for backups and archives. While disk may have augmented tape, it has not replaced it.
[Why is tape still relevant in the cloud era? See 3 Roles For Tape In The Cloud.]
What makes 2013 particularly interesting is the return to tape we saw in tier-2 and midmarket datacenters. A catalyst for this is the rate at which data continues to grow. The spike in growth rate is being driven by the "Internet of Things": IP-based video cameras, smartphones, and tablets, as well as sensors that are being placed on just about everything.
All these "things" are generating data, and that data needs to be stored inexpensively, often for a very long time. Tape is still the leader in cost per GB and in power efficiency. Technologies that make disk more price-competitive, such as deduplication and compression, are far less effective on this machine-generated data because it is often unique and precompressed.
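A quick sketch illustrates why compression gains so little on this kind of data. Python's standard `zlib` shrinks highly redundant data dramatically, but gains almost nothing on unique, high-entropy data; the random bytes below stand in for precompressed video or already-encoded sensor payloads, and the sample strings are invented for illustration.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Return compressed size as a fraction of original size."""
    return len(zlib.compress(data)) / len(data)

# Highly redundant data, typical of documents and logs with repeated
# structure: compression shrinks it to a tiny fraction of its size.
redundant = b"sensor_reading,ok\n" * 10_000
print(f"redundant data: {compression_ratio(redundant):.2%}")

# Unique, high-entropy data, standing in for precompressed video or
# already-encoded sensor payloads: compression barely helps, and can
# even add a small amount of overhead.
unique = os.urandom(len(redundant))
print(f"unique data:    {compression_ratio(unique):.2%}")
```

The same effect undermines deduplication: if every block is unique, there are no duplicates to eliminate, so disk loses the capacity multiplier that made it price-competitive.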
Success driven by standardization
One of the challenges facing the continued adoption of tape was a lack of data storage standards. While Linear Tape-Open (LTO) hardware had become almost ubiquitous, the format in which data was written to LTO media was not standardized. Each backup and archive application wrote in its own proprietary tape format. This made users dependent on that application for the life of the dataset, and in the case of archives, that can be a very long time.
The first technology that made tape more attractive in 2013 was the Linear Tape File System (LTFS), which I have written about before. While LTFS was not new in 2013, its adoption picked up considerably. Widespread use of LTFS allows for application independence, which means that if an archive tape needs to be read eight years from now, it can be read directly by the operating system, with no dependence on the application that wrote it.
LTFS started with operating system drivers, so any tape inserted into a drive could be read and written right from the operating system. As 2013 came to a close, numerous archive applications and a few backup applications supported the format.
Another tape technology that was not new in 2013 but became more approachable was network-attached storage (NAS) tape. This allowed tape to be accessed via a Network File System (NFS) or Common Internet File System (CIFS) mount point, just like any other file server. Usually these gateway-type products integrated a small disk front end so that user response was immediate; as data aged, it was moved to tape for long-term retention. Finally, these gateways also began to integrate LTFS, making interchange between systems possible.
The third tape-related technology, which emerged in 2013, was a RESTful interface, thanks to Spectra Logic's Black Pearl initiative. In the past, if you wanted your application to interface directly with a tape library, you had to write complex SCSI commands or go through some sort of gateway device. Now tape can be written directly from the application via a RESTful API, the style of interface that is becoming standard in the cloud provider datacenter.
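To show why this is so much friendlier to application developers than SCSI, here is a minimal sketch of the RESTful pattern: the application archives an object with a plain HTTP PUT and restores it with a GET. The in-process server below is a generic stand-in for an object-storage endpoint, not Spectra Logic's actual API; all host names and paths are assumptions for illustration.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ObjectStoreHandler(BaseHTTPRequestHandler):
    """Toy object store: PUT stores bytes, GET returns them."""
    objects = {}  # object path -> bytes, standing in for the tape backend

    def do_PUT(self):
        length = int(self.headers["Content-Length"])
        self.objects[self.path] = self.rfile.read(length)
        self.send_response(201)  # 201 Created
        self.end_headers()

    def do_GET(self):
        body = self.objects.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind an ephemeral local port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), ObjectStoreHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The application archives an object with a single PUT...
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("PUT", "/archive/footage.mov", body=b"raw video bytes")
put_status = conn.getresponse().status
conn.close()

# ...and restores it later with a GET.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/archive/footage.mov")
restored = conn.getresponse().read()
conn.close()
server.shutdown()

print(put_status, restored)
```

Compare this handful of HTTP calls with hand-built SCSI command blocks: the application never needs to know it is talking to a tape library at all.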
Truly, 2013 was a good year for tape. Both the rate at which data is growing and the type of data created are natural fits for tape media. The addition of technologies that make tape more transportable and easier to access allowed users to overcome the biggest roadblocks to reintroducing tape into their environments.
Because data growth shows no signs of leveling off, 2014 could be an even bigger year for tape. But tape technology manufacturers must keep innovating for tape to continue being an attractive complement to disk.
George Crump is President and Founder of Storage Switzerland, an IT analyst firm focused on storage and virtualization systems. He writes InformationWeek's storage blog and is a regular contributor to SearchStorage, eWeek, and other publications.