Howard Marks

Network Computing Blogger



Shingled Magnetic Recording Part 2: Using Shingled Drives

As I wrote in the first part of this two-part series, Shingled Magnetic Recording (SMR) is the latest thing drive vendors have pulled out of their bag of tricks to allow them to continue to boost capacity. Unlike earlier technologies, such as embedded servo and perpendicular recording, shingled recording changes the basic operation of the disk drive, limiting shingled drives to sequential writes.

To address these limitations, the powers that be in the hard drive business -- that is, the three-and-a-half remaining vendors and the INCITS T10 and T13 committees that define the SCSI and ATA command sets, respectively -- have come up with three models for shingled drives. Dumb -- or, more properly, restricted -- drives shingle the whole surface of each disk. These can only be written to by applications that understand that, for write purposes, they're basically sequential devices, like tape.


In order to allow applications to perform some level of random access to shingled drives, the other two solutions break the drive into zones of shingled tracks with a guard band between the zones. For each zone, the drive maintains a write pointer, or cursor, that tracks the highest-numbered block that has been written in that zone. Applications can write starting at the block following the cursor.
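To make the write-pointer rule concrete, here's a minimal sketch in Python of a single shingled zone. The names are invented for illustration, not taken from any real drive interface: writes are accepted only at the block just past the cursor, reads can land anywhere, and rewriting older data means resetting the whole zone.

```python
# Illustrative sketch only: invented names, not a real drive interface.
# One shingled zone with a write pointer (cursor). Writes are accepted only
# at the block immediately after the last written block; reads can be random;
# rewriting earlier blocks requires resetting the whole zone.

class ShingledZone:
    def __init__(self, start_lba, num_blocks):
        self.start_lba = start_lba
        self.num_blocks = num_blocks
        self.write_pointer = start_lba      # next writable block (the cursor)
        self.blocks = {}                    # written data, keyed by LBA

    def write(self, lba, data):
        if lba >= self.start_lba + self.num_blocks:
            raise IOError("zone is full; reset it before writing again")
        if lba != self.write_pointer:
            raise IOError("writes must start at the zone's write pointer")
        self.blocks[lba] = data
        self.write_pointer += 1             # the cursor only moves forward

    def read(self, lba):
        return self.blocks.get(lba)         # reads don't care about shingling

    def reset(self):
        self.blocks.clear()                 # the only way to rewrite old data
        self.write_pointer = self.start_lba

zone = ShingledZone(start_lba=0, num_blocks=4)
zone.write(0, b"a")
zone.write(1, b"b")
# zone.write(0, b"new a")  # would raise: not at the write pointer
```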

When I first heard about zoned shingled drives, I thought that they solved the random write problem by reading a whole zone into the drive’s buffer memory, updating the data in the buffer, and then writing the data back. While this might work if a zone were only a few tracks, such small zones would limit the capacity gains we get from shingling.

Those capacity gains can be significant, but aren’t as big as some folks would lead you to believe. Shingling by itself could result in a 15% or greater capacity boost, depending on the size of the zones, with bigger boosts going to restricted drives. Additional density gains come from improved linear bit density, which comes from using higher-coercivity media with its smaller superparamagnetic limit, and from using larger data blocks. Writing sequentially allows the drives to write to the disk in blocks bigger than the 512-byte -- or, more recently, 4K -- blocks that standard drives use. Larger blocks can use more efficient ECC, consuming less capacity in inter-record gaps and error-correction data.
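To see why block size matters, here's a rough back-of-the-envelope illustration; the gap and ECC overhead figures below are made up for the example, not vendor numbers.

```python
# Back-of-the-envelope illustration with made-up overhead numbers (they are
# not vendor figures): fewer, larger blocks mean fewer inter-record gaps and
# proportionally less ECC per byte, so more of each track holds user data.

def user_data_fraction(block_bytes, gap_bytes=15, ecc_base=40):
    # Assume ECC grows more slowly than linearly with block size.
    ecc_bytes = ecc_base * (block_bytes / 512) ** 0.7
    overhead = gap_bytes + ecc_bytes
    return block_bytes / (block_bytes + overhead)

for size in (512, 4096, 65536):
    print(f"{size:>6}-byte blocks: {user_data_fraction(size):.1%} of the track is user data")
```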

The drive vendors I’ve chatted with suggest that most drives will have zones of about 100 tracks. Since a track on a modern drive holds on the close order of a megabyte of data, a zone would therefore hold 100 or so MB -- too much for my simplistic read-modify-rewrite model. The T10 and T13 standards bodies are proposing a new set of commands that will allow an operating system or application to query the drive for the number of zones on it and the location of the write cursor for each zone.
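Since those commands were still in draft when this was written, the names below are invented, but this Python sketch shows the shape of the query: the host asks the drive for its zone layout and reads each zone's write cursor to decide where the next write can land.

```python
# Hypothetical sketch of the kind of zone-query commands the T10/T13 drafts
# describe; the names here are invented for illustration, not taken from the
# standards documents.

from dataclasses import dataclass

@dataclass
class ZoneInfo:
    start_lba: int       # first block of the zone
    length: int          # zone size in blocks
    write_cursor: int    # next writable block in the zone
    shingled: bool       # False for conventional, random-write zones

def report_zones(drive):
    # On real hardware this would be a SCSI/ATA command round-trip; here we
    # just assume the drive object exposes its zone table directly.
    return list(drive.zone_table)

def next_writable_block(zone):
    if zone.write_cursor >= zone.start_lba + zone.length:
        raise IOError("zone full: reset it before writing again")
    return zone.write_cursor
```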

Vendors could even make drives that have some zones with normal track layouts and other zones with shingled tracks. A file system could query the drive, discover the standard-track zones, and use them for its metadata while writing file data to the shingled zones. Of course, standards bodies move at their own, usually rather glacial, pace, so the projected date for a full first draft of the T10 zoned-device standard is November 2016.
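As a rough sketch of that idea, here's one way (with hypothetical field names) a file system allocator might steer metadata to conventional zones and bulk file data to shingled zones:

```python
# Rough sketch (hypothetical field names) of a file system allocator for a
# drive that mixes conventional and shingled zones: metadata goes to
# random-write-capable zones, bulk file data to shingled zones.

def pick_zone(zones, for_metadata):
    """zones: dicts like {"start": int, "length": int, "wp": int, "shingled": bool}."""
    want_shingled = not for_metadata            # metadata -> conventional zones
    for zone in zones:
        is_full = zone["wp"] >= zone["start"] + zone["length"]
        if zone["shingled"] == want_shingled and not is_full:
            return zone
    raise IOError("no suitable zone with free space")

zones = [
    {"start": 0,     "length": 25600, "wp": 100,   "shingled": False},
    {"start": 25600, "length": 25600, "wp": 25600, "shingled": True},
]
print(pick_zone(zones, for_metadata=True))      # picks the conventional zone
print(pick_zone(zones, for_metadata=False))     # picks the shingled zone
```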

[Read about Seagate's recent $374 million acquisition in "Seagate Inks Deal To Acquire Xyratex."]

Some of the drive vendors are also planning, and in fact shipping, drives that, except for lower random write performance, look to the computers they're connected to like normal drives. Like SSD controllers -- which, if you think about it, face a similar problem storing data in flash pages that have to be completely erased before they can be rewritten -- these transparent SMR drives use a log-based data layout, so they can constantly write new data to free space in a shingled zone.
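The remapping idea is easiest to see in miniature. The sketch below uses invented names and ignores garbage collection, but shows the core trick: an overwrite never touches the old physical block; the new data is appended at the drive's current write position and a logical-to-physical map is updated, much as an SSD's flash translation layer does.

```python
# Minimal sketch of the log-based remapping behind "transparent" SMR drives;
# all names are invented and garbage collection is ignored. An overwrite of a
# logical block is appended at the drive's current write position and the
# logical-to-physical map is updated; the old copy is simply left behind as
# garbage to be cleaned up later.

class TransparentSMRDrive:
    def __init__(self, total_blocks):
        self.media = [None] * total_blocks       # physical blocks, written in order
        self.write_head = 0                      # next free physical block
        self.l2p = {}                            # logical block -> physical block

    def write(self, logical_block, data):
        if self.write_head >= len(self.media):
            raise IOError("out of free space; garbage collection would run here")
        self.media[self.write_head] = data       # always a sequential, shingle-safe write
        self.l2p[logical_block] = self.write_head  # any old copy is now garbage
        self.write_head += 1

    def read(self, logical_block):
        physical = self.l2p.get(logical_block)
        return None if physical is None else self.media[physical]

drive = TransparentSMRDrive(total_blocks=8)
drive.write(5, b"version 1")
drive.write(5, b"version 2")                     # lands in a new physical block
print(drive.read(5))                             # b'version 2'
```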

Obviously, managing a log-based data layout requires a bit more intelligence in each drive than simple LBA addressing, and since the logical-to-physical block map will normally be stored in the drive’s RAM, each drive will need a little bit of non-volatile memory and enough capacitance to dump that table to it in the event of a power failure. The additional couple of bucks in software and electronics should be worth it for the additional capacity.

Shingled recording should be a good solution for capacity-oriented drives where random I/O performance isn’t important. For more performance-oriented uses, and even greater capacity, we’ll have to wait for heat-assisted magnetic recording and/or bit-patterned media to make it out of the lab and into our datacenters.

