One of the benefits of some storage virtualization systems is that they allow you to use any vendor's hardware and bring it under a single storage services umbrella. The basic idea is that you are not locked into any one vendor in particular. This sounds like nirvana, but so far it hasn't really lived up to expectations. That may change thanks to server virtualization.
The concept of abstracting the services that a storage controller provides, like LUN management, snapshots, and thin provisioning, has been around for more than a decade. Most storage systems today are not really a tight integration between hardware and software. Vendors, with a few exceptions, are software developers first, and they often use off-the-shelf hardware. You are really buying the software, or what I call the storage services, and with those services comes hardware the vendor selected but probably did not design.
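To make the abstraction concrete, here is a minimal sketch, in Python, of two of those storage services decoupled from any hardware: a thin-provisioned volume that only consumes backing capacity on write, and a snapshot implemented as a cheap copy of the block map. The class and method names are purely illustrative, not any vendor's actual API.

```python
# Illustrative sketch: "storage services" as software, independent of hardware.
# A thin volume advertises a large virtual size but only written blocks
# consume physical capacity; a snapshot copies the block map, not the data.

class ThinVolume:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks   # advertised (virtual) capacity
        self.blocks = {}                 # only written blocks consume space

    def write(self, block_no, data):
        if not 0 <= block_no < self.size_blocks:
            raise IndexError("block out of range")
        self.blocks[block_no] = data     # replaces, never mutates, old data

    def read(self, block_no):
        # unwritten blocks read back as zeros
        return self.blocks.get(block_no, b"\x00")

    def allocated(self):
        # physical blocks actually consumed, regardless of advertised size
        return len(self.blocks)

    def snapshot(self):
        # point-in-time copy: duplicate the map, not the data payloads
        snap = ThinVolume(self.size_blocks)
        snap.blocks = dict(self.blocks)
        return snap


vol = ThinVolume(size_blocks=1_000_000)  # advertise ~1M blocks up front
vol.write(7, b"hello")
snap = vol.snapshot()                    # capture state at this instant
vol.write(7, b"world")                   # later writes don't alter the snapshot

print(vol.allocated())   # 1  -- only one block physically consumed
print(snap.read(7))      # b'hello'
print(vol.read(7))       # b'world'
```

The point of the sketch is that nothing in it cares what disks sit underneath, which is exactly the promise, and the problem, discussed next.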
The goal of vendor-agnostic storage virtualization is to break that model. This traditionally meant buying a relatively powerful set of servers, clustering them for availability, and running the storage software vendor's product. From there you could essentially attach any vendor's disk system, giving you leverage when it came time to buy. Again, this sounds like nirvana, and while some users bought into the idea, most did not.
The reason for the lack of adoption was that there was a bit of a "kit" nature to this approach. You had to assemble the products, connect them to the servers running the storage services software, and get it all working. When implemented, these systems are impressive. They can migrate between storage vendors, replicate to different ones, and even stripe volumes across different manufacturers' systems.
If something went wrong, though, you had to go to your hardware vendors and ask for help. This was sometimes difficult to do since you were not using their software. Basically, the lights were on, so they considered their job done. While the storage software companies tried to help out, there was only so much they could do, and often the customer was left to figure it out on his or her own.
This led to the systems that currently dominate the storage virtualization market: single-manufacturer systems that provide the software complement of virtualization, like abstracting volume creation from disk spindle management, thin provisioning, snapshots, replication, and so on. The hardware, though, came from the same manufacturer to eliminate the "kit" nature described above. The user community has voted with its dollars that this was an acceptable compromise, and vendor-agnostic storage virtualization is a relatively niche market today.
I've said many times that we have only scratched the surface of how server virtualization will change the way IT operates. One of those changes may be at the storage layer. The hypervisor may end up virtualizing storage just as it virtualizes CPUs and network connectivity.
The hypervisor may make the "kit" nature of vendor-agnostic storage virtualization seem more manageable. Just as users are becoming less concerned about what brand of server they use, they may become less concerned about the brand of storage they use. You will get to focus on the reliability and performance of the storage system instead of who has the best snapshot capabilities.
In fairness, today's hypervisors lack the complete set of capabilities needed to perform all the storage service functions, like replication, snapshots, and clones, but as we discuss in our article "The VDI Storage Trade Off", software is now available to fill those gaps.
Letting the hypervisor handle macro storage services like data location, and then using software to provide the more granular services like scalable snapshots, may be a viable alternative. For many, this may be an ideal path to making storage a more cost-effective part of a server or desktop virtualization project.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.