The economics of Information Lifecycle Management didn't make sense, but primary storage optimization is different

George Crump

December 20, 2008


11:00 AM -- Let's face it: A lot of money and hype went into Information Lifecycle Management, or ILM, and for many the acronym came to stand for an Incredibly Large Marketing effort. In 2009, we are going to see a lot of discussion around Primary Storage Optimization, or data reduction. I believe this discussion is real and that the practice will be commonplace in most data centers within the next year.

ILM, for those new to IT storage, was and still is the process of identifying old data and moving it to a SATA-based storage system or to tape. Ironically, the percentage of inactive data was about the same then as it is now (around 80 percent), but most data centers had already overbought storage to the point that they had plenty of excess space anyway.

The economics didn't make sense. If you had 20 TB of storage and were using only 10 TB of it, the fact that we could identify 8 TB of that data as old and a candidate for storage somewhere else didn't matter. Paying for 8+ TB of new storage when you already had 10 TB of free space made no sense, even if that tier was cheaper by the gigabyte.
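The arithmetic behind that argument can be made concrete. The sketch below uses illustrative per-terabyte prices that I'm assuming for a 2008-era environment; they are not figures from any vendor and exist only to show why buying a cheaper tier still loses to free space you already own.

```python
# Hedged sketch of the ILM economics argument above.
# All dollar figures are assumptions for illustration only.

total_primary = 20   # TB of purchased primary storage
used_primary = 10    # TB actually in use
old_data = 8         # TB identified as inactive (~80% of used)

archive_cost_per_tb = 5_000   # assumed $/TB for a SATA archive tier

# Capacity already paid for and sitting idle:
free_primary = total_primary - used_primary   # 10 TB

# Moving the old data requires buying a new archive tier:
archive_purchase = old_data * archive_cost_per_tb   # new spend

# Leaving the data in place costs nothing extra -- the free
# primary space is a sunk cost.
incremental_cost_of_staying = 0

print(f"Free primary capacity: {free_primary} TB")
print(f"New spend to archive:  ${archive_purchase:,}")
```

Even though the archive tier is cheaper per gigabyte, it is all incremental spend, while the half-empty primary array costs nothing more to keep using.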

In 2009, we're going to have a different scenario. More data centers are at a much higher level of capacity utilization, and that 80 percent of data that is old has gone up a little bit. Some surveys now point to 90 percent. And Primary Storage Optimization is more than just moving old data off of primary storage -- it also includes better optimization of data on primary storage that is not moved.

The first big difference between PSO and ILM is that PSO has a better target to move data to than was available in the past: disk-based archives. Designed to optimize capacity, to scale, and to simplify management, these systems are a significant step ahead of the target tiers of 2001.

Second, the data you want to keep on primary storage can be better optimized -- Storwize Inc.'s in-line compression appliance and NetApp Inc. (Nasdaq: NTAP)'s de-dupe are good early examples. Riverbed Technology Inc. (Nasdaq: RVBD), EMC Corp. (NYSE: EMC), and others have all indicated that primary storage optimization is on their radar screens as well.
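To see why de-dupe reclaims primary capacity, consider a minimal sketch of block-level deduplication: fixed-size blocks hashed into an index, with duplicate blocks stored only once. This is a toy model of the general technique, assuming fixed-size blocks; it is not how Storwize's or NetApp's products actually work internally.

```python
# Minimal sketch of block-level deduplication: store each unique
# block once, keep an ordered "recipe" of hashes to rebuild the data.
import hashlib

def dedupe(data: bytes, block_size: int = 4):
    store = {}    # hash -> block (the physical storage)
    recipe = []   # ordered hashes that reconstruct the original
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # duplicate blocks are stored once
        recipe.append(h)
    return store, recipe

data = b"AAAABBBBAAAABBBBCCCC"       # 5 logical blocks, only 3 unique
store, recipe = dedupe(data)

# The original data is fully recoverable from the recipe:
assert b"".join(store[h] for h in recipe) == data
print(len(recipe), "logical blocks ->", len(store), "stored")  # 5 -> 3
```

The more repetitive the data, the larger the gap between logical and stored blocks -- which is exactly the capacity a primary-storage de-dupe system gives back.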

Third, and often left out of a PSO discussion, is the optimization of captive capacity, or storage that is allocated but not used. You can't de-dupe what isn't there. Here, thin provisioning, pioneered by companies like 3PAR Inc. and Compellent Technologies Inc., can reduce the problem. EMC, Hitachi Data Systems (HDS), and just about every other storage manufacturer have implemented some form of thin provisioning. With the latest Symantec Corp. (Nasdaq: SYMC) and 3PAR announcements, and Compellent's announcement of Free Space Recovery last summer, the technology is being extended not only to provision volumes thin initially, but also to keep them thin as files are deleted.
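The idea above can be sketched with a toy volume model: the host sees a large logical size, physical blocks are consumed only on first write, and a free-space-recovery step returns blocks to the pool on delete. This is an illustration of the concept under a simple block-map assumption, not any vendor's implementation.

```python
# Toy model of thin provisioning with free-space recovery.
# Assumes a simple set-based block map; real arrays differ widely.

class ThinVolume:
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks  # size the host sees
        self.allocated = set()                # blocks with real backing

    def write(self, block: int) -> None:
        # Physical capacity is consumed only when a block is first written.
        self.allocated.add(block)

    def delete(self, block: int) -> None:
        # Free-space recovery: return the physical block to the pool
        # so the volume stays thin as files are deleted.
        self.allocated.discard(block)

    @property
    def physical_used(self) -> int:
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1000)  # host sees 1,000 blocks
for b in range(100):
    vol.write(b)           # only 100 blocks consume real capacity
for b in range(50):
    vol.delete(b)          # recovery releases 50 of them

print(vol.physical_used)   # 50
```

Without the recovery step, deleted blocks would stay captive forever -- the volume would only ever grow, which is the "thin at provisioning time only" limitation the newer announcements address.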

In 2009, the need is legitimate. You are running out of capacity, and the systems to optimize both the archive tier and the primary tier are effective. As a result, Primary Storage Optimization will not go the way of ILM.

George Crump is founder of Storage Switzerland, which provides strategic consulting and analysis to storage users, suppliers, and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
