Things to Keep in Mind With Thin Provisioning

Thin provisioning of storage has evolved into one of the key features that new storage systems must offer

January 28, 2009

11:00 AM -- Thin provisioning of storage has evolved into one of the key features that new storage systems must offer. It allows you to create array volumes and assign them to applications at the capacity the application is projected to need. But, as I discussed in my article on Thin Provisioning Basics, actual capacity is allocated only when new data is written to the system. This can mean dramatic savings in storage acquisition costs.
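To make the mechanics concrete, here is a minimal sketch of allocate-on-write behavior in Python. All of the names here (ThinVolume, Pool, BLOCK_SIZE) are invented for illustration; real arrays do this bookkeeping in firmware, not host code, but the idea is the same in spirit.

```python
# Illustrative sketch of allocate-on-write thin provisioning.
# ThinVolume, Pool, and BLOCK_SIZE are invented names, not any vendor's API.

BLOCK_SIZE = 4096  # bytes per block

class Pool:
    """A shared pool of physical blocks that all thin volumes draw from."""
    def __init__(self, physical_blocks):
        self.free = list(range(physical_blocks))

    def allocate(self):
        if not self.free:
            raise RuntimeError("pool exhausted: the over-provisioning emergency")
        return self.free.pop()

class ThinVolume:
    def __init__(self, virtual_size, pool):
        self.virtual_size = virtual_size  # capacity promised to the application
        self.pool = pool                  # shared pool of physical blocks
        self.block_map = {}               # virtual block number -> physical block

    def write(self, offset, data):
        """Claim a physical block from the pool only on first write."""
        block = offset // BLOCK_SIZE
        if block not in self.block_map:
            self.block_map[block] = self.pool.allocate()
        # ... data would then be written to the mapped physical block ...

    def allocated_bytes(self):
        """Physical capacity actually consumed, regardless of virtual_size."""
        return len(self.block_map) * BLOCK_SIZE
```

The application sees the full virtual_size from day one; the shared pool only shrinks as data is actually written.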

As is true with most technologies, there are some things to keep in mind when you deploy thin provisioning -- and they don't necessarily include the knee-jerk concern about over-provisioning yourself out of actual disk space. Between the reporting that is available and responsible administration practices, the capacity-emergency scenario rarely, if ever, rears its head. Instead, there are other things storage administrators should think about -- including expectations for how thin provisioning works in conjunction with data lifecycle practices.

One such area is block space reclamation, or rather, the lack thereof. The idea behind block space reclamation is to return blocks of storage that are no longer occupied by data so that they can be used again (reclaimed) and assigned to other volumes.
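Continuing the sketch above, the missing piece is easy to demonstrate: the array sees writes but never sees deletes, so allocated capacity only ratchets upward.

```python
# The array only sees writes, never deletes (continuing the sketch above).
pool = Pool(physical_blocks=1_000_000)
vol = ThinVolume(virtual_size=500 * 2**30, pool=pool)  # a 500-GB thin volume

vol.write(0, b"x" * BLOCK_SIZE)           # first block allocated on write
vol.write(BLOCK_SIZE, b"x" * BLOCK_SIZE)  # second block allocated
print(vol.allocated_bytes())              # 8192

# The file system now deletes the second file. It flips a bit in its own
# free-space bitmap but tells the array nothing, so the mapping persists:
print(vol.allocated_bytes())              # still 8192 -- nothing reclaimed
```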

Let's say you create a 500-GB thin volume and things start off as predicted, using 10 percent of that capacity, or 50 GB. Good! That's what thin is for -- it saved you 450 GB or so right off the bat. Way to go.

Over several years, let's assume the volume grows and the used capacity approaches the expected 500 GB. As part of normal lifecycle processes, you archive 100 GB of data that is more than three years old to a disk archive or even tape. Good for you! But because the array has no way to reclaim the blocks you archived, the volume that started thin is now the size it would have been if you had created a fat volume originally. Still, the capacity purchases you made over the years cost less per GB than if you had hard-provisioned all the capacity up front. And of course, if the capacity only grows to 350 GB instead of 500 GB, you never have to spend money on 150 GB of unnecessary capacity. Any way you slice it, it's an excellent lifecycle story.

Now let's look at a different scenario -- one where you expect data to grow quickly or to have a high churn rate of transient data, data that is created and deleted frequently. Database log files sometimes fit this profile, as do user accounts on file servers. Although thin provisioning may save you money on storage capacity for these volumes initially, you might prefer to hard-provision them and manage them in accordance with data lifecycle practices and policies. Keeping end users in line through hard capacity limits is an excellent way of maintaining strict control over your storage environment.

In a perfect world, the capacity consumed by moved or deleted files could be reclaimed and used by any other volume in the system that needed it. Frankly, this is an area where file system developers need to step up to the plate. But progress is being made. For example, Symantec Corp. (Nasdaq: SYMC) worked with 3PAR Inc. to develop functionality within its Storage Foundation (VxFS) file system that communicates back to the thin-provisioning storage array which deleted blocks are candidates for reclamation. Considering that industry support for this initiative includes both Hitachi Data Systems (HDS) and Hewlett-Packard Co. (NYSE: HPQ), it's becoming clear that this kind of intelligence belongs in advanced storage software like Storage Foundation. Storage Foundation knows which blocks hold live data, which hold deleted data, and which are free space. Its new Smart Move technology will use this information to let you migrate data from a fat volume to a much more efficient thin volume.
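The article doesn't spell out the VxFS/3PAR interface, but the shape of the idea is a reclamation hint passed from the file system down to the array, similar in spirit to a SCSI UNMAP or ATA TRIM command. Here is a hypothetical sketch extending the classes above; reclaim() and its extent arguments are invented for illustration, not Symantec's or 3PAR's actual API.

```python
# Hypothetical sketch of a file-system-to-array reclamation hint.
# reclaim() is an invented method, not a real VxFS or 3PAR interface.

class ReclaimableVolume(ThinVolume):
    def reclaim(self, offset, length):
        """Called by the file system for an extent it knows holds dead data."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for block in range(first, last + 1):
            physical = self.block_map.pop(block, None)
            if physical is not None:
                self.pool.free.append(physical)  # capacity returns to the pool

# On delete, a reclamation-aware file system would walk the freed extents
# and hint the array, so any other thin volume can reuse the capacity.
```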

Data lifecycle management will soon include fat-to-thin volume conversions as a way to cut costs. There will be multiple ways to do them: host-based approaches that work with technologies like Smart Move, or array-based fat-to-thin conversions such as those under development by storage manufacturers.
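Once something in the stack knows which blocks hold live data, a fat-to-thin conversion falls out naturally. Here is a hypothetical sketch of the Smart Move idea under that assumption; live_blocks and fat_volume.read() are invented stand-ins for that file-system knowledge, not Symantec's actual interface.

```python
# Hypothetical sketch of a Smart Move-style fat-to-thin migration.
# live_blocks and fat_volume.read() are invented stand-ins.

def migrate_fat_to_thin(fat_volume, thin_volume, live_blocks):
    """Copy only the blocks the file system reports as holding live data."""
    for block in live_blocks:
        data = fat_volume.read(block * BLOCK_SIZE, BLOCK_SIZE)
        thin_volume.write(block * BLOCK_SIZE, data)
    # Free space and dead data on the fat source are never copied, so the
    # thin target allocates exactly the live data set and nothing more.
```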

Thin provisioning is already reducing storage acquisition costs and power and cooling costs. New "thin" approaches to data lifecycle management will allow you to get greater efficiencies out of your storage infrastructure.

— George Crump is founder of Storage Switzerland, which provides strategic consulting and analysis to storage users, suppliers, and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.
