New Media Is Not The Key To Long-Term Storage

Optical disks, data crystals and other new storage media promise capacity and longevity, but that's not all it takes to meet the needs of a large, long-term archive.

Howard Marks

August 13, 2013


It seems every six months or so some researcher announces a new storage medium that will hold several copies of the Library of Congress, fit in my shirt pocket, and will still be readable a hundred, a thousand or a Dr. Evilish "meeeeellion" years later. As cool as it would be to store our data on holographic disks, stone-like media, DNA or even 3D Kryptonian data crystals, capacity and endurance aren’t the only attributes we need from a long-term storage medium.

If we accept the assumption that our descendants should be able to see the photos you posted on Facebook from your bachelor party, or that future historians should be able to view the iPhone video of your daughter's (the Nobel laureate's) high school valedictorian address and examine it for foreshadowing of future events, then we need a preservation system, not just the right medium.

We can measure the desirability of a storage medium, or more properly of a storage system built on that medium, for a large, long-term archive in several dimensions:

• Storage density

• Reliability

• Access time (seconds to first byte)

• Streaming performance

• Retention time

• Operating costs (power, maintenance, data migration costs)

For a new medium even to be worthy of consideration, it has to beat conventional disk and tape in at least two of these dimensions, and the advantage in at least one of them has to be substantial. I don't see any of the contenders filling anything but limited niche cases for well over a decade.
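To make that test a little more concrete, here's a minimal sketch in Python of how you might score a candidate medium against an incumbent on those dimensions. Every figure in it is a placeholder I made up for illustration, not a vendor datasheet value, and the "substantially better" threshold of 2x is just an assumption.

```python
# A minimal sketch of the "worthy of consideration" test described above.
# All figures below are placeholders, not vendor specifications.

from dataclasses import dataclass


@dataclass
class Medium:
    name: str
    density_tb_per_slot: float      # storage density: higher is better
    annual_loss_probability: float  # reliability: lower is better
    access_time_s: float            # seconds to first byte: lower is better
    stream_mb_s: float              # streaming performance: higher is better
    retention_years: float          # retention time: higher is better
    cost_per_tb_year: float         # operating costs: lower is better


def compare(candidate: Medium, incumbent: Medium, substantial: float = 2.0):
    """Count dimensions where the candidate beats the incumbent at all,
    and where it wins by at least the 'substantial' factor."""
    dims = [
        (candidate.density_tb_per_slot,     incumbent.density_tb_per_slot,     True),
        (candidate.annual_loss_probability, incumbent.annual_loss_probability, False),
        (candidate.access_time_s,           incumbent.access_time_s,           False),
        (candidate.stream_mb_s,             incumbent.stream_mb_s,             True),
        (candidate.retention_years,         incumbent.retention_years,         True),
        (candidate.cost_per_tb_year,        incumbent.cost_per_tb_year,        False),
    ]
    better = substantially_better = 0
    for cand, inc, higher_is_better in dims:
        ratio = cand / inc if higher_is_better else inc / cand
        if ratio > 1.0:
            better += 1
        if ratio >= substantial:
            substantially_better += 1
    return better, substantially_better


# Entirely hypothetical figures, for illustration only.
lto_tape = Medium("LTO tape",      6.25, 1e-3,  90.0, 160.0,   30.0,  5.0)
crystal  = Medium("data crystal", 360.0, 1e-4, 120.0,  40.0, 1000.0, 20.0)

better, substantial = compare(crystal, lto_tape)
print(f"Better in {better} of 6 dimensions, substantially better in {substantial}")
print("worth a look" if better >= 2 and substantial >= 1 else "stick with disk and tape")
```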

Optical disks have long filled a niche in the archive storage space. Before disk-based storage systems like EMC's Centera were accepted for data that had to be retained unmodified, optical WORM disks were the retention media of choice. Since then, the optical disk market has been dominated by technologies originally designed for the distribution of music and movies. The latest of these is Millenniata's M-Disc, a customized DVD with a stone-like substrate that the vendor promises will last 1,000 years. M-Discs have to be written in a special drive, but they can be read in an ordinary DVD drive.

The sad truth is that optical disks, even future high-capacity disks like the 300-Gbyte version Panasonic and Sony have announced for 2015, are a dead end. Shrinking the bits further will soon require very expensive short-wavelength UV lasers, and as consumers move away from physical media, the money for optical R&D is shrinking with them.
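The wavelength problem is simple physics: an optical drive can't focus a spot much smaller than the diffraction limit, roughly wavelength / (2 x numerical aperture), so areal density scales roughly with (NA / wavelength) squared. Here's a rough, illustrative calculation using Blu-ray-class reference optics (405 nm, NA 0.85); the 200 nm deep-UV figure is purely hypothetical.

```python
# Back-of-the-envelope: the diffraction-limited spot size sets the optical
# bit size.  Spot diameter ~ wavelength / (2 * NA), so areal density scales
# roughly with (NA / wavelength) ** 2.  Figures are illustrative only.

def relative_capacity(wavelength_nm: float, na: float,
                      ref_wavelength_nm: float = 405.0, ref_na: float = 0.85) -> float:
    """Capacity per layer relative to a Blu-ray-class system (405 nm, NA 0.85)."""
    return (na / wavelength_nm) ** 2 / (ref_na / ref_wavelength_nm) ** 2

# Same NA, hypothetical 200 nm deep-UV laser:
print(f"{relative_capacity(200.0, 0.85):.1f}x a 405 nm layer")  # ~4.1x
```

A factor of four per layer for a far more expensive laser is the kind of return that makes the R&D money look for somewhere else to go.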

[Read David Hill's analysis of a cost-effective storage alternative in "How Tape and LTFS Can Relieve Storage Pressure."]

Of course, all these proposed archival media are removable. The good thing about removable media is that it keeps the incremental cost of storing more data low: even if you keep all your data online in a robotic library, media slots are cheap. However, removable media also means a minute or so of latency while the library mounts each piece of media, and even longer if there isn't a free drive in the library.

Proponents of new storage media usually make a big deal of the fact that their shiny new toys, unlike old-fashioned tape, are random access. Off the top of my head, I can't think of many applications that care about random-access latency once the media is loaded but are happy to wait a minute for the media to load in the first place. If you're loading and unloading large objects, tape could be as good a solution as a holographic disk or a DNA sample.
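A quick back-of-the-envelope sketch shows why. The numbers here are assumptions, not benchmarks: a 60-second mount, a 50-second locate on tape versus a near-instant seek on a random-access medium, and 160 MB/s of streaming in both cases.

```python
# Illustrative retrieval-time math for a robotic library.
# All numbers are assumptions, not measurements.

def retrieval_seconds(object_gb: float, mount_s: float, locate_s: float,
                      stream_mb_s: float) -> float:
    """Mount + locate + streaming time for one object."""
    return mount_s + locate_s + (object_gb * 1024.0) / stream_mb_s

# A 500 GB archive object from tape vs. a hypothetical random-access medium.
tape_time   = retrieval_seconds(500, mount_s=60, locate_s=50, stream_mb_s=160)
random_time = retrieval_seconds(500, mount_s=60, locate_s=1,  stream_mb_s=160)

print(f"tape: {tape_time / 60:.1f} min, random access: {random_time / 60:.1f} min")
# Both are dominated by the ~53 minutes of streaming; the random-access
# advantage is under a minute once you've waited for the mount anyway.
```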

The other thing we have to remember is to compare revolutionary new media like data crystals not with current versions of today’s storage technologies but with the versions that will be leading edge when the new tech actually ships.

By the time Sony and Panasonic get their 300-Gbyte optical disk off the ground in 2015, we'll be using 8-Tbyte disk drives and 6.4-Tbyte LTO-6 tapes. And by the time the recording device for a completely revolutionary technology like DNA or data crystals ships, the 35-Tbyte tapes IBM announced a year or so ago will seem old hat, and tape will be up to 100 or 200 Tbytes per cartridge. As for when holographic storage will actually arrive, who knows? It's five years away, just as it always is.
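To put those capacities in perspective, here's a trivial calculation of how many pieces of media a one-petabyte archive would need at the capacities mentioned above, ignoring redundancy and compression.

```python
# How many pieces of media does a 1 PB archive need at each capacity
# mentioned above?  Simple division, ignoring redundant copies.

ARCHIVE_TB = 1000.0  # 1 PB

media_capacities_tb = {
    "300-Gbyte optical disk": 0.3,
    "6.4-Tbyte LTO-6 cartridge": 6.4,
    "35-Tbyte future tape cartridge": 35.0,
}

for name, capacity_tb in media_capacities_tb.items():
    count = -(-ARCHIVE_TB // capacity_tb)  # ceiling division
    print(f"{name}: {int(count)} pieces of media")

# ~3,334 optical disks vs. 157 LTO-6 cartridges vs. 29 future tapes;
# slot count and robot time matter as much as raw capacity.
```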

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
