A month ago, I received a phone call from my mother-in-law. This means one of two things: a family birthday is approaching, or she is having technical problems. It's generally the latter, and this call was no exception. Last Christmas, we purchased a new laptop for her. It had a decent processor, with 4 GB of memory and a 320 GB hard drive. Being a storage guy, I should have looked at the hard drive specs more closely, but she is a pretty light user. Within just a few months, her statement of the problem was simple and direct: "My laptop is so slow I can't use it."
Upon inspection, I discovered that the hard drive was pegged, running at 80% utilization or higher most of the time. The first thing I did was replace the hard drive with a 500 GB hybrid drive. The difference was immediate and dramatic. From a purely storage standpoint, the old drive was delivering at best around 50 IOPS. The new hybrid drive spins at 7,200 RPM, which should bump that to around 80 IOPS. Add in the 4 GB of NAND flash fronting the hybrid drive, and the effective IOPS were reportedly up to 80% better.
How does all this relate to enterprise deployments? The No. 1 measure of success is the end-user experience. If your end users' reaction mirrors that of my mother-in-law, you will have another failed VDI deployment on your hands. VDI storage problems are why we have seen next-generation storage companies coming out of the woodwork. Many of these companies claim that they will fix the VDI problem without ever really tackling the root cause.
The first part of the problem is that many IT admins deploy virtual desktops in the same manner as servers, and the two are not the same. For servers, what really matters is adding enough memory and CPU; after that, the storage just needs to be highly available. When deploying 15, 30, or even 40 servers per virtualization host, the IO and latency of most arrays can support the need.
The difference shows up when deploying 100, 150, or 200 desktops per host. Desktops normally need memory in the 2 GB to 4 GB range, whereas many servers need 16 GB or more. The next step is to add up the IO needs per system. Every OS requires some baseline IO, whether it is a server operating system or a desktop operating system.
This is where scale comes into play. Most virtual desktops consume around 20 IOPS each. At 100 desktops per host -- a reasonable consolidation rate -- we need around 2,000 IOPS, which would require roughly a dozen 15K drives per host. To get the shared storage required for high availability, that workload also has to travel across some storage network. Most VM deployments now run over IP, and latency becomes a concern. Consider 10 hosts with 100 desktops per host -- the equivalent of 1,000 machines -- all vying for storage network bandwidth. Adding fast storage will help, but as long as there is still a network to cross, you will have latency concerns.
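The sizing math above can be sketched in a few lines. The 20 IOPS-per-desktop figure comes from the article; the ~175 IOPS estimate for a single 15K drive is a common rule-of-thumb assumption, not a vendor spec, and the calculation ignores RAID write penalties:

```python
import math

# Assumptions: ~20 IOPS per virtual desktop (from the article) and
# ~175 IOPS per 15K spindle (a typical rule-of-thumb estimate).
IOPS_PER_DESKTOP = 20
IOPS_PER_15K_DRIVE = 175

def iops_needed(desktops_per_host: int) -> int:
    """Steady-state IOPS demanded by one host's desktops."""
    return desktops_per_host * IOPS_PER_DESKTOP

def drives_needed(total_iops: int) -> int:
    """15K spindles required to serve that demand (no RAID penalty)."""
    return math.ceil(total_iops / IOPS_PER_15K_DRIVE)

host_iops = iops_needed(100)     # 100 desktops per host -> 2,000 IOPS
print(host_iops)                 # 2000
print(drives_needed(host_iops))  # 12 -- the "dozen drives" in the text
```

Scaling the same function to 10 hosts (1,000 desktops) gives the 20,000 IOPS figure used later in the article.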
Every new storage vendor has the next great answer to virtual desktop deployments, from adding caching cards to traditional arrays to building dedicated arrays. The latest trend is the all-flash array, which delivers incredible speed if you can push the full workload to it, but comes at a significant cost. In between is the hybrid array, similar to the drive I installed in my mother-in-law's laptop, which provides some flash acceleration without the expense. Another alternative is a software solution that brings the desktop workload closer to the hypervisor: using the memory available in the host, you can cache some of the desktop, if not all of it.
In most scenarios, I would argue that all flash is overkill. If each flash drive pushes 20,000 IOPS and you have 12 to 24 drives, you have 240,000 IOPS or more. Those are impressive numbers, but are they really needed? We calculated earlier that 100 desktops per host, across 1,000 users, works out to 20,000 IOPS. All flash makes fast storage, but at a high cost. This does not mean you should ignore flash. Instead, look at a hybrid array that includes a few flash drives providing 20,000+ IOPS; combined with traditional 15K drives, you get the necessary capacity as well.
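A quick sketch makes the overkill argument concrete. The 20,000 IOPS-per-flash-drive figure is from the article; the drive counts in the hybrid mix and the ~175 IOPS 15K estimate are illustrative assumptions:

```python
# Compare all-flash vs. hybrid sizing for the 1,000-desktop example.
FLASH_IOPS = 20_000   # per flash drive (figure from the article)
HDD_15K_IOPS = 175    # per 15K spindle (rule-of-thumb assumption)

def array_iops(flash_drives: int, hdd_drives: int) -> int:
    """Aggregate IOPS of a simple mixed-drive array."""
    return flash_drives * FLASH_IOPS + hdd_drives * HDD_15K_IOPS

demand = 1000 * 20                # 1,000 desktops at ~20 IOPS each
all_flash = array_iops(12, 0)     # 240,000 IOPS -- 12x the demand
hybrid = array_iops(2, 12)        # a little flash for speed, 15K for capacity

print(all_flash >= demand, hybrid >= demand)  # True True
```

Even a modest hybrid mix clears the 20,000 IOPS requirement, while the 15K drives supply the bulk of the raw capacity.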
The last step is to address the latency in the storage network. A software solution brings value in this phase by bringing the desktop as close as possible to the compute layer. For the majority of VDI deployments, combining software with a hybrid array can provide an optimal deployment. With minimal latency and the proper amount of IOPS, you can keep the cost per desktop in range with physical desktops while avoiding the mother-in-law problem of "my system is slow."

Michael Letschin has more than 15 years of experience in the IT industry, ranging from systems engineer to IT director. Most recently, he has held roles as sales engineer and now as director of product management at Nexenta Systems, a software-defined storage company.