Shingled Magnetic Recording Part 1: How SMR Expands Disk Drive Capacity

Disk drive vendors have a new technique to increase disk density. Here's how it works.

Howard Marks

January 8, 2014

3 Min Read

Despite predictions every few years that we’ve reached some physical limit, the semiconductor industry has managed to double the transistor density of its products every two years, as predicted by Moore’s Law. For many years, disk drive bit density doubled every two years following Kryder’s Law, but recently drive vendors have been struggling to produce the ever-bigger disks we as users demand. Their latest trick is to overlap disk tracks in a technology known as Shingled Magnetic Recording (SMR).

The growth in disk drive bit density has stalled because bits written with the current technology, known as perpendicular recording, have gotten close enough to the superparamagnetic limit of the drive's magnetic media that there's little room for significant advancement beyond the roughly 1 Tbit/in² of today's disk drives.

The superparamagnetic limit is the point where the individual magnetic domains on the disk are so small that they no longer reliably hold data, in part because of the influence of adjacent bits that may be magnetized in the opposite direction.

One way to squeeze more data onto a given area is to switch to a magnetic medium with higher coercivity, that is, one that requires a stronger magnetic field to flip a bit from facing north to facing south. Using magnetically stiffer material reduces the minimum size a bit can reach before the superparamagnetic limit rears its ugly head, but it also requires a more powerful magnetic field to flip those stiffer bits.

The problem the drive designers face is that they can’t focus the stronger magnetic field into narrow enough tracks to pack more data on a disk. Their solution is to partially overlap, or shingle, the tracks as they write them to the disk.

Shingling lets vendors cram more tracks into the same space by eliminating the guard bands that separate tracks on a conventional disk. After data is written to track 1, the head moves over a fraction of a track width and writes track 2, partially overlapping track 1 and leaving only a narrow strip of it exposed.
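To put a rough number on that, here's a back-of-the-envelope sketch in Python. The track width, guard band, and exposed read-strip width are illustrative values I've assumed for the example, not figures from any vendor's specifications.

```python
# Rough, illustrative arithmetic: how shingling can raise track density.
# All dimensions are assumed example values (in nanometers), not vendor specs.

WRITE_TRACK_WIDTH_NM = 75   # width the write head actually lays down (assumed)
GUARD_BAND_NM = 25          # empty space between conventional tracks (assumed)
EXPOSED_READ_WIDTH_NM = 50  # narrow strip left readable after shingling (assumed)

# Conventional layout: every track needs its full width plus a guard band.
conventional_pitch_nm = WRITE_TRACK_WIDTH_NM + GUARD_BAND_NM

# Shingled layout: guard bands go away, and each new track only has to leave
# the previous track's narrow read strip exposed.
shingled_pitch_nm = EXPOSED_READ_WIDTH_NM

band_width_nm = 1_000_000  # compare over 1 mm of radial space

conventional_tracks = band_width_nm // conventional_pitch_nm
shingled_tracks = band_width_nm // shingled_pitch_nm

print(f"Conventional tracks per mm: {conventional_tracks}")
print(f"Shingled tracks per mm:     {shingled_tracks}")
print(f"Track count ratio: {shingled_tracks / conventional_tracks:.1f}x")
```

With these made-up dimensions shingling doubles the track count; real-world gains are more modest, but the arithmetic works the same way.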

[Read why Howard Marks thinks spinning disks will still be the better bargain through 2020 despite falling SSD prices in "SSDs Cheaper Than Hard Drives? Not In This Decade."]

Since the drive can read the narrow exposed portion of each track, shingled drives read data pretty much the same way conventional drives do. The problem comes when an application tries to randomly write to a track that’s already been partially overwritten by the next shingled track. Where a conventional drive can just move the heads back to track 54 and rewrite sector 17, trying that on a shingled drive would wipe out sector 17 on tracks 55 and 56 as well.
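A toy model makes the hazard concrete. This is a minimal sketch of my own, not any drive's actual firmware, and for simplicity each write damages only the single track shingled on top of it: updating one sector in place means reading and rewriting every track shingled above the one you touched.

```python
# Minimal toy model of a group of shingled tracks (not real drive firmware).
# Writing track N overwrites part of track N+1, so an in-place update to
# track N requires re-writing every later track in the group as well.

class ShingledBand:
    def __init__(self, tracks: int, sectors_per_track: int):
        self.data = [[None] * sectors_per_track for _ in range(tracks)]

    def write_track(self, t: int, payload: list):
        """Write a whole track; the wide write head clobbers track t+1."""
        self.data[t] = list(payload)
        if t + 1 < len(self.data):
            self.data[t + 1] = [None] * len(self.data[t + 1])  # clobbered

    def rewrite_sector(self, t: int, s: int, value):
        """Safely update one sector: read, modify, then rewrite track t and
        every track shingled on top of it, in order."""
        saved = [list(track) for track in self.data[t:]]   # read them first
        saved[0][s] = value
        for offset, payload in enumerate(saved):            # rewrite in order
            self.write_track(t + offset, payload)

band = ShingledBand(tracks=4, sectors_per_track=8)
for t in range(4):
    band.write_track(t, [f"T{t}S{s}" for s in range(8)])    # sequential fill is safe

# Naively calling write_track(1, ...) would wipe track 2; rewrite_sector doesn't.
band.rewrite_sector(1, 5, "new data")
print(band.data[2][0])   # still intact: 'T2S0'
```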

For some applications, such as Facebook’s cold storage where disk drives are essentially WORM (write once, read many) devices, we could just shingle tracks across the entire surface of the disk. The application would just have to know that it can write sequentially and then read randomly. However, if we’re going to take advantage of the additional capacity shingled tracks give us for more general use, simply shingling all the tracks isn’t going to be the right solution.
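As a minimal sketch of that usage model, again my own illustration rather than any real drive's command set, here's a device that accepts only sequential writes at a write pointer but serves reads from anywhere:

```python
# Illustrative model of a "shingle everything" drive used as cold storage:
# writes must land sequentially at the write pointer, reads can be random.
# This sketches the usage model only, not an actual drive interface.

class AppendOnlyDisk:
    def __init__(self, capacity_sectors: int):
        self.sectors = [None] * capacity_sectors
        self.write_pointer = 0  # next sector that may be written

    def write(self, lba: int, data):
        if lba != self.write_pointer:
            # A random (non-sequential) write would overwrite shingled
            # neighbors, so this model simply refuses it.
            raise IOError(f"write to {lba} rejected; write pointer is "
                          f"{self.write_pointer}")
        self.sectors[lba] = data
        self.write_pointer += 1

    def read(self, lba: int):
        return self.sectors[lba]   # random reads are fine

disk = AppendOnlyDisk(capacity_sectors=1000)
for i in range(10):
    disk.write(i, f"archive object {i}")    # sequential writes succeed

print(disk.read(7))                          # random read works
try:
    disk.write(3, "updated object")          # random overwrite is refused
except IOError as err:
    print(err)
```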

While dedicated disk drives for deep archives would be interesting, the drive vendors have figured out a way to make shingled drives a more general solution. I’ll look at how in my next post.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at: http://www.deepstorage.net/NEW/GBoS
