
Measuring The State Of Primary Storage Deduplication

I've often been asked to rank various vendors' primary storage deduplication capabilities. That's a dangerous exercise because any ranking is inherently subjective. What I can do is offer some ways to measure each vendor's primary storage deduplication abilities so you can weigh them in order of importance to your data center.

First, however, we need to discuss the risks of deduplication.

All deduplicated data carries some risk -- after all, you don't get the capacity savings for nothing. Deduplication works by segmenting incoming data and creating a unique ID, or fingerprint, for each segment. Each new ID is compared to the IDs already stored. If there is a match, the redundant data is not stored -- instead, a link is established to the original segment, thereby saving you capacity.
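
To make that flow concrete, here is a minimal sketch in Python. The fixed 8 KB segment size, the SHA-256 fingerprint and the DedupStore class are illustrative assumptions of mine -- production systems often use variable-length segmentation and their own data structures -- but the store-once, link-on-repeat logic is the same idea.

```python
import hashlib

SEGMENT_SIZE = 8 * 1024  # assumed fixed 8 KB segments; real systems may vary segment length


class DedupStore:
    def __init__(self):
        self.segments = {}   # fingerprint -> segment bytes (each unique segment stored once)
        self.refcounts = {}  # fingerprint -> number of logical references to that segment

    def write(self, data):
        """Split incoming data into segments, store only the unique ones,
        and return the ordered list of fingerprints (the 'recipe') for this write."""
        recipe = []
        for offset in range(0, len(data), SEGMENT_SIZE):
            segment = data[offset:offset + SEGMENT_SIZE]
            fingerprint = hashlib.sha256(segment).hexdigest()
            if fingerprint not in self.segments:
                self.segments[fingerprint] = segment        # new segment: store it
            self.refcounts[fingerprint] = self.refcounts.get(fingerprint, 0) + 1
            recipe.append(fingerprint)                      # redundant segment: just a link
        return recipe


store = DedupStore()
recipe = store.write(b"A" * 32 * 1024)   # 32 KB of identical data
print(len(recipe), "segments written,", len(store.segments), "actually stored")
# -> 4 segments written, 1 actually stored
```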

All the IDs are stored in a meta-data table. This table is essentially a roadmap showing which segments belong to which data so that the data can be reassembled when requested. If this table is somehow corrupted, you have more than likely lost the map to your data. Even though the data itself is still there, you can't access it, at least not easily.
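
This small sketch shows why the map matters at read time. The fingerprints, the recipes and the read function below are hypothetical, but they illustrate how data is reassembled from the meta-data, and what is lost if that meta-data goes away.

```python
def read(recipe, segments):
    """Reassemble data from its recipe (ordered fingerprints) and the
    fingerprint -> segment mapping held in the meta-data table."""
    return b"".join(segments[fp] for fp in recipe)


# Hypothetical meta-data: two files sharing one common segment.
segments = {"fp1": b"hello ", "fp2": b"world", "fp3": b"there"}
file_a = ["fp1", "fp2"]          # recipe for file A
file_b = ["fp1", "fp3"]          # recipe for file B

print(read(file_a, segments))    # b'hello world'
print(read(file_b, segments))    # b'hello there'

# Lose the recipes or the mapping and the segment bytes are still on disk,
# but nothing says which segments belong to which file, or in what order --
# the map to the data is gone.
```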


The size of the meta-data table is a concern for deduplication systems. Each unique segment adds an entry to that table, and each redundant segment adds another reference pointing back to an existing entry. The table's size can cause problems, especially when you consider the speed at which it must be accessed and updated.
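
For a rough sense of scale, here's an illustrative back-of-the-envelope calculation. The 100 TiB capacity, 8 KB segment size and 48-byte entry size are assumptions for the sake of the arithmetic, not any vendor's actual numbers.

```python
# Back-of-the-envelope sizing for a fingerprint table (all figures assumed).
capacity_tib = 100                     # logical capacity being deduplicated
segment_size = 8 * 1024                # 8 KB segments
entry_size = 48                        # bytes per entry: fingerprint + location + reference count

entries = capacity_tib * 1024**4 // segment_size
table_bytes = entries * entry_size

print(f"{entries:,} entries")                             # -> 13,421,772,800 (~13.4 billion)
print(f"~{table_bytes / 1024**3:.0f} GiB of meta-data")   # -> ~600 GiB
```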

Think of the meta-data table as a relatively simple database that needs to be updated and searched quickly. This is especially important in primary storage because you don't want write performance to suffer while the table is searched for redundancy. To avoid this problem, most vendors place the table in RAM.
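
As a rough illustration of why RAM placement matters on the write path, the sketch below times a lookup in an in-memory hash table against one in a disk-backed SQLite index. No storage array actually uses SQLite for this, and the operating system's cache flatters the on-disk numbers; the point is only the relative cost each incoming segment pays before its write can be acknowledged.

```python
import hashlib
import sqlite3
import timeit

# Build the same small fingerprint index two ways: in RAM and on disk.
fingerprints = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(100_000)]
ram_index = {fp: i for i, fp in enumerate(fingerprints)}

db = sqlite3.connect("fp_index.db")   # hypothetical on-disk index for comparison
db.execute("CREATE TABLE IF NOT EXISTS fp (hash TEXT PRIMARY KEY, loc INTEGER)")
db.executemany("INSERT OR IGNORE INTO fp VALUES (?, ?)",
               [(fp, i) for i, fp in enumerate(fingerprints)])
db.commit()

probe = fingerprints[54321]

ram = timeit.timeit(lambda: probe in ram_index, number=100_000)
disk = timeit.timeit(
    lambda: db.execute("SELECT loc FROM fp WHERE hash = ?", (probe,)).fetchone(),
    number=100_000,
)
print(f"RAM lookup:  {ram / 100_000 * 1e6:.2f} microseconds")
print(f"disk lookup: {disk / 100_000 * 1e6:.2f} microseconds")
# Every incoming segment pays this lookup cost inline, which is why vendors
# try to keep the table memory-resident.
```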

However, in the case of a large primary storage system that houses dozens -- or even hundreds -- of terabytes of information, the entire table can't fit in memory. To get around this problem, the table is split between RAM and disk. The problem there is that deduplication lookups are not cache-friendly: fingerprints are effectively random, so a first-in, first-out method of keeping part of the table in RAM won't generate viable hit rates. To overcome this, some vendors deploy their tables in flash; others perform deduplication as an off-hours process rather than in real time.
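
The sketch below models that split: a small RAM-resident cache in front of a simulated disk-resident table. The TieredFingerprintIndex class, the cache size and the table size are all hypothetical. The takeaway is that because fingerprints are effectively random, a recency-based cache misses most of the time unless it holds a large share of the table -- which is why flash-resident tables or post-process deduplication start to look attractive.

```python
import hashlib
import random
from collections import OrderedDict


class TieredFingerprintIndex:
    """Sketch of a fingerprint table split between RAM and disk (illustrative only)."""

    def __init__(self, cache_entries):
        self.cache = OrderedDict()   # small, RAM-resident slice of the table
        self.cache_entries = cache_entries
        self.on_disk = {}            # stand-in for the full, disk-resident table
        self.disk_lookups = 0

    def insert(self, fingerprint, location):
        self.on_disk[fingerprint] = location
        self._cache(fingerprint, location)

    def lookup(self, fingerprint):
        if fingerprint in self.cache:
            self.cache.move_to_end(fingerprint)   # fast hit in RAM
            return self.cache[fingerprint]
        self.disk_lookups += 1                    # slow path: disk (or flash) I/O
        location = self.on_disk.get(fingerprint)
        if location is not None:
            self._cache(fingerprint, location)
        return location

    def _cache(self, fingerprint, location):
        self.cache[fingerprint] = location
        if len(self.cache) > self.cache_entries:
            self.cache.popitem(last=False)        # evict the oldest entry


# Cache covers 1% of a 100,000-entry table; because fingerprints are random,
# most lookups for previously seen segments still miss the RAM tier.
index = TieredFingerprintIndex(cache_entries=1_000)
fingerprints = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(100_000)]
for i, fp in enumerate(fingerprints):
    index.insert(fp, i)
for fp in random.sample(fingerprints, 10_000):
    index.lookup(fp)
print(f"{index.disk_lookups:,} of 10,000 lookups went past RAM")
```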

Understanding deduplication meta-data is an important first step in assigning a grade to a vendor's deduplication efforts. Most of the problems described here can be overcome, as discussed in this recent webinar.

In my next column, I'll discuss what to look for in a vendor to ensure that your primary storage deduplication technology is safe, fast and scalable.
