Issues With Automated Tiering

George Crump

March 10, 2010


While the industry, myself included, has been busy extolling the virtues of automated tiering, it's important to understand that it's not a be-all and end-all for the storage manager. Certainly there is plenty to like, but there are a few caveats you should be aware of. From a performance perspective, most (if not all) automated tiering systems leverage SSD or RAM to accelerate I/O and reduce latency. The upside, as we have discussed, is that this provides an automated way for storage managers to take advantage of SSD. The downside is that the rest of the environment has to be fast enough to take advantage of it. Putting a really fast drive at the end of a wire will not necessarily deliver better performance.
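To see why the wire matters, consider a back-of-envelope latency model. The sketch below uses purely illustrative numbers of my own, not measurements from any product: the host sees the fabric's round-trip time plus the device's service time, so as the fabric slows down, the SSD's advantage shrinks.

    # Back-of-envelope I/O latency model: the device is only one leg of the trip.
    # All figures are illustrative assumptions, not measurements from any product.

    FABRIC_LATENCY_MS = 0.5   # assumed round-trip time across the SAN fabric
    HDD_LATENCY_MS = 6.0      # assumed average service time for a 15K FC drive
    SSD_LATENCY_MS = 0.1      # assumed average service time for an SSD

    def end_to_end(device_ms, fabric_ms=FABRIC_LATENCY_MS):
        """Total latency seen by the host: fabric round trip plus device service time."""
        return fabric_ms + device_ms

    hdd = end_to_end(HDD_LATENCY_MS)   # 6.5 ms
    ssd = end_to_end(SSD_LATENCY_MS)   # 0.6 ms
    print(f"HDD: {hdd:.1f} ms, SSD: {ssd:.1f} ms, speedup: {hdd / ssd:.1f}x")

    # On a slower or congested fabric, the wire caps the benefit of the fast drive.
    slow = 2.0  # assumed fabric round-trip time in ms
    print(f"Slow fabric speedup: "
          f"{end_to_end(HDD_LATENCY_MS, slow) / end_to_end(SSD_LATENCY_MS, slow):.1f}x")

With the fast fabric the SSD looks roughly ten times quicker to the host; with the slow fabric the same drive delivers less than a four-fold gain.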

For some environments, this means the amount of data that can justifiably be placed on SSD is relatively small, and investing in an entire automated tiering system for that modest performance boost may not be worth it. For example, if you have one application on one server that needs an I/O boost, it may be simpler and less expensive to install a PCIe-based SSD in that server and be done with it. This is not the case for all data centers; make sure your environment can support, and justify, the broad performance increase that automated tiering can deliver.
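A rough cost comparison makes the point. The figures below are hypothetical placeholders, not vendor pricing; the tiering array only pays off once enough workloads share the benefit.

    # Rough per-application cost comparison; both prices are assumptions, not quotes.
    PCIE_SSD_COST = 5_000          # assumed price of one PCIe SSD card
    TIERING_SYSTEM_COST = 60_000   # assumed price of a tiering-enabled array

    apps_needing_boost = 1         # how many workloads actually need the acceleration
    per_app_tiering = TIERING_SYSTEM_COST / apps_needing_boost
    print(f"Per-app cost: PCIe SSD ${PCIE_SSD_COST:,} "
          f"vs. automated tiering ${per_app_tiering:,.0f}")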

The other goal of some automated tiering solutions is to drive down costs by moving inactive data to SATA-based storage, limiting how much Fibre Channel or SAS capacity you have to buy. The performance capabilities of the rest of the environment are not the issue here; the automated tiering system is simply looking for old data, and most data centers have plenty of that. The challenge is that your primary storage system was not really designed to store old data for an extended period of time. Most systems lack the retention capabilities to make sure data is locked down and secured. In this case, I still prefer the archive-to-disk method, possibly leveraging file virtualization to automate the moves to these archive tiers. The reality is that you may not want to, or may not be able to, get users to buy into the archive tier concept. If that's the case, then leveraging automated tiering to drive out some of the primary storage cost is valid.
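For illustration, here is a minimal sketch of what an age-based demotion pass amounts to: scan for data that hasn't been touched within some threshold and relocate it to the cheaper tier. The mount points and 90-day threshold are hypothetical, and a real tiering engine works at the block or sub-volume level rather than on whole files, but the logic is the same: it simply hunts for old data.

    import os
    import shutil
    import time

    # Hypothetical mount points and threshold; a real engine operates at the
    # block/sub-volume level and would leave a stub or pointer behind.
    PRIMARY_TIER = "/mnt/primary"    # fast (FC/SAS) tier
    ARCHIVE_TIER = "/mnt/archive"    # SATA archive tier
    MAX_AGE_DAYS = 90                # assumed inactivity threshold

    def demote_inactive(primary, archive, max_age_days):
        """Move anything not accessed within the threshold to the archive tier."""
        cutoff = time.time() - max_age_days * 86400
        for dirpath, _dirnames, filenames in os.walk(primary):
            for name in filenames:
                src = os.path.join(dirpath, name)
                if os.path.getatime(src) < cutoff:  # last access older than cutoff
                    dst = os.path.join(archive, os.path.relpath(src, primary))
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)

    demote_inactive(PRIMARY_TIER, ARCHIVE_TIER, MAX_AGE_DAYS)

Note what the sketch does not do: enforce retention, lock data down, or secure it, which is exactly the gap that makes a purpose-built archive tier attractive.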

The other concern, with the exception of file virtualization, is having data broken up into small chunks and then scattered across different tiers of storage. To address this, look for a vendor with experience manipulating data at a finer granularity rather than focusing on the whole volume. Also, when testing, make sure you test what happens during a large recovery of a volume or database: how does the automated tiering intelligence handle that sudden ingest of data?
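One way to explore that question before you buy is with a toy model. The sketch below treats the fast tier as an LRU cache with a fixed chunk capacity; all the sizes are assumptions. A large restore floods in as "recently touched" data and can evict the chunks that were genuinely hot.

    from collections import OrderedDict

    # Toy model of a fast tier under a bulk restore. Capacities and chunk
    # counts are illustrative assumptions, not any vendor's behavior.

    class FastTier:
        def __init__(self, capacity_chunks):
            self.capacity = capacity_chunks
            self.chunks = OrderedDict()  # chunk id -> True, ordered by recency

        def touch(self, chunk_id):
            """Record an access, evicting the least recently used chunk if full."""
            self.chunks.pop(chunk_id, None)
            self.chunks[chunk_id] = True
            while len(self.chunks) > self.capacity:
                self.chunks.popitem(last=False)  # evict least recently used

    tier = FastTier(capacity_chunks=100)
    hot = [f"hot-{i}" for i in range(50)]
    for c in hot:
        tier.touch(c)              # working set in place before the restore

    for i in range(500):           # bulk recovery floods the fast tier
        tier.touch(f"restore-{i}")

    survivors = [c for c in hot if c in tier.chunks]
    print(f"Hot chunks still on the fast tier after restore: {len(survivors)}/50")

In this simple model the restore evicts the entire working set, which is precisely the behavior you want to verify, or rule out, during an evaluation.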

Automated tiering is a popular concept, to say the least, and it can provide real value for some data centers. As you approach this new technology, make sure you understand which parts of your environment can take advantage of it and what your long-term data-retention needs are.
