The more compelling the event, the faster the adoption. For example, even though SANs were becoming commonplace, look at how the rate of adoption increased when VMware and VMotion showed up. The same happened with disk backup -- Avamar and then Data Domain leveraged de-duplication to make disk backup simple and efficient, while VTLs were seen as hard and cumbersome.
We are seeing the same scenario play out in data archiving. There is plenty of justification to move all of your old data from primary to archive storage, using solutions like those offered by Permabit, Tarmin, and Bycast. We all know the number -- 80 percent of your data is inactive, so get it off of primary. Yet while adoption is picking up, many data centers are simply expanding the storage they already have.
The problem is that you have to factor an archive into your plan as you map out your next storage purchase. I think disk archiving today is like the pre-VMware days of SAN: there is plenty of justification -- and the economy certainly qualifies as a compelling event -- but the moment someone nails the software to transparently move all this data off of primary is when the accelerator hits the floor. Part of this will be optimizing the data as it is moved, as Ocarina Networks does. What de-dupe did for backups can now be done for archives, further widening the gap between the costs of primary and secondary storage.
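To make the idea concrete, here is a minimal Python sketch -- my own illustration, not any vendor's implementation -- of the two steps such archiving software would automate: finding inactive files on primary storage, then de-duplicating them before they land on the archive tier. The 180-day inactivity threshold and the whole-file SHA-256 hashing are assumptions for the example; real products typically work at sub-file (block or chunk) granularity.

```python
import hashlib
import os
import time

def archive_candidates(root, max_age_days=180):
    """Yield files under root not modified within max_age_days (assumed policy)."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path

def dedupe_index(paths):
    """Map content hash -> first path seen; later identical files are duplicates.

    Only one copy per hash would need to move to the archive tier.
    """
    index = {}
    duplicates = []
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in index:
            duplicates.append(path)
        else:
            index[digest] = path
    return index, duplicates
```

The point of the sketch is the division of labor: the age scan decides *what* leaves primary storage, and the hash index decides *how little* of it actually has to be stored on the archive side.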
We are at a similar early stage with Fibre Channel over Ethernet. The general consensus is to roll FCoE in as you deploy new servers, but the theory breaks down because you have to deploy enough servers to justify the new switch that performs the FCoE-to-FC conversion, and you have to be out of, or low on, physical FC port capacity. For incremental additions of a few servers, it makes more sense to simply fill out your existing network and, where there are performance problems, address them on a spot basis by upgrading either the Ethernet side to 10 Gbit/s or the FC side to 8 Gbit/s.