
Automatic Tiering - It Isn't HSM/ILM 2.0

Ever since NetApp's Tom Georgens said "I think the entire concept of tiering is dying" in an analyst call last month, the blogosphere has been all a-twitter about automated storage tiering; George Crump alone got three blog entries out of it. Unfortunately, many of those writing about automated tiering are thinking about storage as strictly unstructured file data, arguing that better file management with an ILM-like solution would be a better idea. Is in-array tiering the cost/performance answer for the problems ILM can't solve?

I will grant my fellow bloggers (and bloggerettes) that most organizations don't manage their unstructured data well. The average NAS in corporate America is loaded down with home directories of users who were fired in the last century, backups of users' iPods and multiple copies of the menus from every takeout joint in a 10-block radius. A good data classification and archiving system could migrate that data to an archive tier and off the primary storage. The resulting savings would come from keeping far fewer copies of the archive as opposed to the 5-10 copies we keep of primary data. The OPEX savings from fewer snapshots and less frequent full backups will be bigger than the $/GB savings from moving the data from 15K to 5400 RPM drives, or even from a FAS6080 to $3-5/GB storage on a low-end NAS.
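To put rough numbers on that, here's a back-of-the-envelope sketch in Python. Every figure except the $3-5/GB low-end NAS price from above is an assumption made up for illustration (the primary $/GB, the copy counts, the dataset size), not anything from a real price list.

```python
# Back-of-the-envelope comparison of primary vs. archive storage costs.
# All figures are illustrative assumptions, not vendor pricing; only the
# $3-5/GB low-end NAS range comes from the article.

PRIMARY_COST_PER_GB = 15.0   # assumed fully burdened cost of 15K FC primary storage
ARCHIVE_COST_PER_GB = 4.0    # mid-point of the $3-5/GB low-end NAS range
PRIMARY_COPIES = 7           # middle of the 5-10 copies (snapshots, replicas, backups)
ARCHIVE_COPIES = 2           # assumed: an archive needs far fewer protection copies

def total_cost(gb: float, cost_per_gb: float, copies: int) -> float:
    """Raw capacity cost of one dataset including all protection copies."""
    return gb * cost_per_gb * copies

stale_data_gb = 1000  # assumed: 1 TB of stale home-directory data

on_primary = total_cost(stale_data_gb, PRIMARY_COST_PER_GB, PRIMARY_COPIES)
on_archive = total_cost(stale_data_gb, ARCHIVE_COST_PER_GB, ARCHIVE_COPIES)

print(f"Kept on primary:  ${on_primary:,.0f}")
print(f"Moved to archive: ${on_archive:,.0f}")
print(f"Savings:          ${on_primary - on_archive:,.0f}")
```

Even with generous fudging of the assumed numbers, the copy count dominates the $/GB difference, which is the point.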

In theory, ILM is a great idea, and we as an industry should have made more progress toward systems that can classify and migrate data transparently, or at least translucently, to the users. But the limited information found in typical file system metadata, and the fact that much of that data goes essentially unmanaged by IT pros, with users deciding what to save and where to save it, have made classification difficult and kept ILM out of the mainstream.
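To see how little a classifier has to work with, here's a minimal sketch of the metadata-driven approach: walk a tree and flag archive candidates by last-access time. The root path and the one-year threshold are assumptions for illustration, and note that timestamps, size and ownership are about all a standard file system gives you (with atime often disabled outright).

```python
# A minimal sketch of metadata-driven classification. The only signals a
# typical file system exposes are size, timestamps and ownership, which is
# exactly why classification is hard. Path and threshold are assumptions.
import os
import time

ARCHIVE_AGE_DAYS = 365  # assumed policy: untouched for a year -> archive candidate

def archive_candidates(root: str):
    """Yield (path, size) for files whose last access predates the policy cutoff."""
    cutoff = time.time() - ARCHIVE_AGE_DAYS * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            # st_atime is all we have to judge "business value" -- no owner
            # department, no retention class, no application context. Many
            # systems mount noatime, making even this signal unreliable.
            if st.st_atime < cutoff:
                yield path, st.st_size

for path, size in archive_candidates("/srv/nas/home"):  # assumed NAS mount point
    print(f"{size:>12} {path}")
```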

Even if our organizations embraced ILM the way I would embrace Jennifer Connelly, we would still have disk I/O hotspots. The places where SSDs would really help performance, or reduce costs by replacing short-stroked FC drives, aren't user folders and the like but databases and similar structured data.

Take, for example, an application like Exchange. An Exchange data store is an atomic object: a system administrator can choose to put the data store on FC or SATA drives, but the whole database has to be in one logical volume in the Windows volume manager. In a typical Exchange database, 80 percent or more of the data is relatively static while a small percentage is several times busier.
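That skew is exactly what block-level automated tiering exploits. Below is a toy simulation, not any vendor's algorithm: it tracks I/O heat per fixed-size extent of a single volume and promotes only the hottest extents to a small SSD tier. The extent size, volume size, SSD budget and workload skew are all assumed numbers chosen to echo the "mostly static, a few busy regions" pattern above.

```python
# Toy illustration of sub-volume automated tiering: count I/Os per
# fixed-size extent, then promote the hottest extents to SSD. Real
# arrays do this in firmware; every constant here is an assumption.
from collections import Counter
import random

EXTENT_MB = 64          # assumed tiering granularity
VOLUME_EXTENTS = 1024   # a 64 GB volume at 64 MB extents
SSD_BUDGET = 64         # assumed: SSD tier holds ~6% of the volume

heat = Counter()

# Simulate a skewed workload: ~80% of I/Os land on ~5% of the extents.
hot_set = random.sample(range(VOLUME_EXTENTS), VOLUME_EXTENTS // 20)
for _ in range(100_000):
    if random.random() < 0.8:
        heat[random.choice(hot_set)] += 1          # busy region
    else:
        heat[random.randrange(VOLUME_EXTENTS)] += 1  # background I/O

# Promote the hottest extents up to the SSD capacity budget.
promoted = [ext for ext, _count in heat.most_common(SSD_BUDGET)]
captured = sum(heat[ext] for ext in promoted) / sum(heat.values())
print(f"Promoting {SSD_BUDGET} of {VOLUME_EXTENTS} extents "
      f"captures {captured:.0%} of the I/O")
```

The takeaway: a tiny SSD tier can absorb the bulk of the I/O precisely because the array works below the file and volume level, where HSM/ILM tools can't reach an atomic object like an Exchange store.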
