VMware has been good for the storage industry, but maybe not so good for the storage administrator. Provisioning storage to new hosts and virtual machines (VMs) remains one of the more time-consuming tasks in the enterprise. The speed at which storage can respond to the random workloads of the virtual environment is one of the biggest performance bottlenecks. And storing all the data that VMs create remains one of the biggest costs of the virtual infrastructure.
Storage vendors have flooded the market with solutions, and the options can be overwhelming. As we will discuss in our upcoming webinar "The Requirements of VM Aware Storage", while every data center is unique, there are some basics you should now expect from the storage system that supports your virtual infrastructure.
Solid State Disk
Solid state disks (SSDs) are potentially the best answer to the random I/O that the virtual environment creates, and there is now almost universal agreement on that point. Where the disagreement occurs is in how to implement SSD in the environment. It is also clear that, for the time being, hard disk drive (HDD) storage systems will continue to be a mainstay of the data center; the cost per GB of HDD capacity is simply too good to ignore.
The SSD system you select will depend largely on where your current HDD storage system is in its lifecycle and on how severe your performance problem is. For organizations that need to get a few more years out of their hard disk-based storage systems, a caching appliance or a standalone SSD appliance can be an ideal option. For organizations that are ready for a storage refresh, a tightly integrated hybrid SSD system, as we discussed in "Hybrid SSD Storage vs. Unified Storage", may strike the right cost/performance balance. Or it may be time to step up to an all-flash array, which leverages deduplication and compression to deliver top-end but still affordable performance.
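The caching-appliance approach keeps hot blocks on fast flash while cold data stays on HDD. A minimal sketch of that idea, using a least-recently-used (LRU) policy and illustrative names only (no real product implements it exactly this way):

```python
from collections import OrderedDict

class SSDReadCache:
    """Toy LRU read cache standing in for an SSD caching tier
    placed in front of a slower HDD backing store."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, oldest first
        self.hits = 0
        self.misses = 0

    def read(self, block_id, read_from_hdd):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = read_from_hdd(block_id)       # slow path: go to the HDD tier
        self.cache[block_id] = data          # populate the flash tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used block
        return data
```

The appeal of this design for an aging HDD array is that it is purely additive: repeated reads are served from flash, while the backing system is untouched.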
Before virtualization, troubleshooting consisted of monitoring the LUN or volume assigned to the connecting server. Now those servers are hosts, each running dozens of VMs. And a VM itself is more than a collection of blocks on disk; it has distinct internal components. There is the system state (the operating system and configuration of the server itself), which tends to generate write-heavy I/O traffic, and there is the server's data, which is typically read-heavy. All of this means that the storage system needs to understand what is going on not only inside the host but also inside each VM. This information is critical to making sure that the right data is on the right storage at the right time.
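The read-heavy vs. write-heavy distinction above is something a VM-aware system can derive from simple per-disk I/O counters. A hedged sketch, with a hypothetical helper name and an assumed 70% threshold:

```python
def classify_vm_io(read_ops, write_ops, threshold=0.7):
    """Label a VM disk's workload from its cumulative I/O counters.

    Illustrative only: a write-heavy disk (e.g. system state) is a
    candidate for fast write media, a read-heavy one (e.g. the
    server's data) for read caching. The 0.7 cutoff is an assumption.
    """
    total = read_ops + write_ops
    if total == 0:
        return "idle"
    read_ratio = read_ops / total
    if read_ratio >= threshold:
        return "read-heavy"
    if read_ratio <= 1 - threshold:
        return "write-heavy"
    return "mixed"
```

Run per virtual disk rather than per LUN, a classification like this is what lets the array place each component of a VM on the right tier.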
Performance is not the only challenge; managing capacity is equally important. In the typical virtual environment, all data has been moved to a shared storage device of some kind, and that storage needs to be optimized as much as possible. Techniques like deduplication, cloning, and thin provisioning should all be leveraged to extract the maximum value from every GB stored.
The intelligent use of SSD, the ability to understand what VMs are doing, and the ability to optimize the capacity being consumed are foundational for the virtualized architecture. They are critical to pushing the data center closer to the 100% virtualized goal while at the same time making sure that the virtualization ROI is maintained.