For a leading-edge industry, IT can suffer from some strange lags. I don't have a cool name for this one, so "adoption lag" will have to do: it's the time that elapses between the market generally acknowledging that some feature or tool is a good thing and the point at which most users have actually moved to adopt it.
Of course, some delay is inevitable because of the irregularity of buying cycles, and perhaps also because of some natural cynicism (let's be kind and say pragmatic conservatism) when you're dealing with the organization's crown data jewels. But I believe there's another thing at play too: the adoption lag gets elongated by our collective tendency to shift our sights to the next bright, shiny object … even before we've fully harvested the potential of the prior wave. These days there's so much talk of clouds and software-defined-everything that the simple value of maximizing the efficiency of storage infrastructures through extensive virtualization can get overlooked. This article is a reminder to look in the storage mirror (no pun intended) and check that you're not missing out on what can be fairly low-hanging fruit.
Growing Infrastructure Efficiency
Organizations are struggling to manage the combined weight of rapidly proliferating server farms and the commensurately growing dedicated storage pools. Server farms of hundreds and even thousands of individual machines have proven nearly impossible to manage … at least effectively and economically. End users are seeking solutions to these growing complexities and inefficiencies. The value of continually procuring individual components is diminishing, which makes this a well-timed opportunity to implement integrated computing and storage platforms. "Integrated IT infrastructure" is certainly all the rage right now (often under the label of convergence), and it is a direct result of the success of virtualizing servers, networks, and storage.
Achieving overall infrastructure efficiency has never been easy, but one good start is to take the onus off end users to figure it out themselves. Server virtualization has been the catalyst for the evolution of the compute infrastructure. Consolidating multiple application workloads onto far fewer hardware platforms has revolutionized users' ability to be efficient in terms of "compute-ability." The benefits are well documented and significant: many users have reduced their server hardware by 50%, 60%, even 80%, enabling both capital and operational cost savings.
However, the aggregation of server workloads has put tremendous stress on both networking and storage. The challenges in storage are extreme, with industry measures and anecdotal evidence consistently confirming that anything from 40% to 50% or more of disk capacity sits idle (i.e., wasted), locked behind a particular server. This can even happen while another storage "pool" is running out of capacity. Just buying more storage is clearly (at best) inefficient, and (at worst) a sheer waste of resources and a poor business decision when unused capacity is sitting idle elsewhere in the same environment. In some cases, storage has thus become an unwanted poster child for inefficiency, driving up acquisition costs, consuming data center floor space and energy, and increasing management time.
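The stranded-capacity problem described above is really just arithmetic, and a toy sketch makes it concrete. The numbers below are hypothetical, chosen only to illustrate the pattern: overall utilization can look low even while one dedicated pool is full enough to trigger a new purchase.

```python
# Toy illustration (hypothetical numbers): capacity stranded in
# dedicated, per-server storage pools vs. the same disks virtualized
# into one shared pool.

# Each tuple: (total TB, used TB) for a pool dedicated to one server.
dedicated_pools = [(10, 9), (10, 3), (10, 4), (10, 2)]

total = sum(t for t, _ in dedicated_pools)
used = sum(u for _, u in dedicated_pools)

# Overall utilization is only 45%, yet pool 0 is 90% full and would
# drive a new storage purchase -- the free space elsewhere is locked
# behind other servers and cannot help.
print(f"overall utilization: {used / total:.0%}")
for i, (t, u) in enumerate(dedicated_pools):
    print(f"pool {i}: {u / t:.0%} full")

# Virtualized into one shared pool, the same hardware offers
# genuinely usable headroom instead of stranded capacity.
print(f"shared-pool free capacity: {total - used} TB")
```

Run against these numbers, the sketch reports 45% overall utilization and 22 TB of free capacity that a shared, virtualized pool could actually put to work.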
Harvesting The Benefits of Storage Virtualization
Storage virtualization offers the storage infrastructure benefits similar to those that server virtualization delivers on the compute side. In other words, storage virtualization breaks the mold so that the true value of data services is no longer limited by hardware characteristics. Instead of dedicated pools of storage capacity that cannot be shared, a virtualized storage pool can enable: