Howard Marks

Network Computing Blogger



The Promise of LTFS

A bit more than a year ago, the folks behind the LTO tape format standards (primarily IBM, with some contributions from HP and Quantum) added the Linear Tape File System to LTO's feature list. While some niche markets, primarily the media and entertainment business, have adopted LTFS, it won't live up to its promise without support from archiving and eDiscovery vendors.

LTFS divides a tape into two partitions: one that holds the file system metadata and another that holds the file data. With a little bit of software on a server, LTFS tapes look to the server like disks, and any application can write files to tape just as it would write them to disk. LTFS isn't the first attempt to make tape look like disk; I remember the Backup Exec group at Seagate Software showing me a tape file system back in the '90s.
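That "any application can write files" point really is just ordinary file I/O once the tape is mounted. Here's a minimal Python sketch; the mount point and helper name are hypothetical (on Linux the open-source ltfs tool mounts a cartridge via FUSE, after which it behaves like any other directory):

```python
import shutil
from pathlib import Path

# Hypothetical mount point for an already-mounted LTFS volume.
LTFS_MOUNT = Path("/mnt/ltfs")

def archive_to_tape(src: Path, mount: Path = LTFS_MOUNT) -> Path:
    """Copy a file onto an LTFS volume using ordinary file I/O --
    no backup application or proprietary tape format involved."""
    dest = mount / src.name
    shutil.copy2(src, dest)  # plain filesystem copy, metadata included
    return dest
```

The point of the sketch is that nothing in it is tape-specific: the same code works against any directory, which is exactly what makes LTFS-stored data portable between applications.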

The difference is that LTFS is standardized, which makes LTFS tapes a standard medium for data storage and exchange. Granted, you and I have switched from mailing floppy disks and DVD-Rs to using Dropbox, SugarSync and YouSendIt, but when you need to move many gigabytes of data from one place to another, it's hard to beat the effective bandwidth of a box of tapes.

A box of 20 LTO-5 tapes holding 24TB of data will take roughly 12 hours to get from New York to San Francisco via overnight courier. That works out to an effective transfer rate of 2TB/hr, or 4.4Gbps. If we allow another 12 hours to spool the data to tape, which is about how long it would take to move the data from a disk stage to tape with a six-drive tape library, the effective bandwidth is still 2.2Gbps.
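To sanity-check those numbers, here's a small Python sketch; the 24TB payload and the 12- and 24-hour windows come from the figures above, and the rest is straight arithmetic using decimal units (1TB = 10^12 bytes):

```python
def effective_gbps(terabytes: float, hours: float) -> float:
    """Effective transfer rate, in gigabits per second, of moving
    `terabytes` of data in `hours` (decimal units: 1 TB = 1e12 bytes)."""
    bits = terabytes * 1e12 * 8   # total payload in bits
    seconds = hours * 3600
    return bits / seconds / 1e9

# 24TB with courier transit only (~12 hours)
print(round(effective_gbps(24, 12), 1))   # prints 4.4

# Add 12 hours to spool the data to tape first (24 hours total)
print(round(effective_gbps(24, 24), 1))   # prints 2.2
```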

Even if you were getting 20:1 data reduction through deduplication and compression, you'd need a 100Mbps link to match the bandwidth of that small box of tapes when replicating that much data across a network. Twenty-to-one reduction may be achievable for backup data, but archives don't have nearly as much duplicate data as backup repositories, since an archive keeps just one copy of each data object. Archives of rich media, be they check images, photos from insurance claims adjusters' digital cameras, or medical images, hardly reduce at all, making that FedEx box even more attractive.
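As a rough check on that 100Mbps claim: even assuming an optimistic 20:1 reduction ratio, matching the box's 2.2Gbps effective bandwidth still takes a link in the hundred-megabit range, and with no reduction (the rich-media case) the requirement balloons:

```python
def link_needed_mbps(effective_gbps: float, reduction_ratio: float) -> float:
    """Network link speed (Mbps) needed to match a given effective
    transfer rate, assuming the data shrinks by `reduction_ratio`
    before it hits the wire."""
    return effective_gbps * 1000 / reduction_ratio

print(round(link_needed_mbps(2.2, 20)))   # prints 110  (20:1 reduction)
print(round(link_needed_mbps(2.2, 1)))    # prints 2200 (no reduction)
```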

Without LTFS, you'd have to be running the same backup application at both sites to exchange data via tape, because each backup or archiving application writes data to tape in its own proprietary format.

In addition to providing a standard interchange format, LTFS promises big advantages for storing data in a standard format over the long term. If your archive program stores each object it archives as a native file in an LTFS file system, you're not dependent on a single vendor for the data mover, indexer, search engine and litigation-hold functions. If your archiving vendor discontinues your current product, as EMC did with DiskXtender a few years ago, you can switch to another product and have it index the existing data without having to regurgitate it to disk and ingest it into a new archive. If you have trouble locating data, you could point a Google appliance at the LTFS repository and use Google search to find the relevant data.

We as customers should start pressuring our archiving vendors to support native LTFS as a repository option. Some vendors will respond that they already support LTFS because they support any NAS storage, but most archiving solutions store their data on disk in proprietary container files. While compressed, single-instanced containers may have made sense on disk, tape's lower cost per gigabyte makes the flexibility of a standard storage format worth the extra space it takes up.

