In my last entry, "Can We Get to a Single Point of Deduplication?", I looked at which vendors had the capability for a single point of deduplication: consolidating the deduplication engine so that one supplier and one dedupe engine manage all the optimized data regardless of storage tier (primary, secondary, archive, and backup). Another question is: do you really need it?
As I pointed out, consolidating the deduplication process offers some theoretical gains. The more data that passes through a single engine, the better the odds that it has seen a given block before and can optimize it. Also, if each tier has its own deduplication engine, the data has to be expanded, or rehydrated, every time it moves between tiers of storage.
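To make those two effects concrete, here is a minimal sketch of a toy block-level dedupe engine in Python. The class, the fixed 4 KB chunking, and the scenario are illustrative assumptions, not any vendor's implementation: a second copy of already-seen data costs almost nothing in a shared engine, while leaving the engine requires rehydrating the full data stream.

```python
import hashlib

class DedupeEngine:
    """Toy block-level deduplication engine (hypothetical, for illustration).

    Stores each unique fixed-size chunk once, keyed by its SHA-256 hash,
    and represents ingested data as a list of chunk hashes (a "recipe").
    """

    CHUNK_SIZE = 4096  # illustrative fixed chunk size

    def __init__(self):
        self.store = {}  # hash -> chunk bytes

    def ingest(self, data: bytes) -> list:
        recipe = []
        for i in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[i:i + self.CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            self.store.setdefault(h, chunk)  # only new chunks consume space
            recipe.append(h)
        return recipe

    def rehydrate(self, recipe: list) -> bytes:
        # Re-expanding the data: required whenever it leaves this engine,
        # e.g. when moving to a tier served by a different dedupe engine.
        return b"".join(self.store[h] for h in recipe)

    def stored_bytes(self) -> int:
        return sum(len(c) for c in self.store.values())


# One engine sees data from two "tiers": the backup copy shares every
# chunk with the primary copy, so it consumes no additional space.
engine = DedupeEngine()
primary = b"A" * 8192 + b"B" * 4096
backup = primary  # a backup of the same data

r1 = engine.ingest(primary)
size_after_primary = engine.stored_bytes()
r2 = engine.ingest(backup)
size_after_backup = engine.stored_bytes()

assert size_after_backup == size_after_primary  # duplicate data costs nothing
assert engine.rehydrate(r2) == backup           # full expansion to leave the engine
```

With separate engines per tier, the `rehydrate` step would run at every tier boundary, and each engine would rebuild its chunk index from scratch, which is exactly the overhead a single consolidated engine avoids.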