The needs of a virtual desktop environment are different: you have thousands of virtual desktops instead of hundreds of virtual servers, and you need consistently good performance across all of those desktops rather than occasionally very high performance. There is one exception, though: login. In the virtual server world, systems are typically always on. You don't shut your servers down for the night and then boot them when you come back in the morning, so the virtual server world never sees a login storm. In the desktop world, for security purposes, you want your users to log in and log out, and the problem is that they all log in around the same time. You also can't pre-stage logins; no user is going to want the server to pre-log in to the system, since that would break security protocols. The result is that at some point a large number of users are going to log in at the same time.
Solid-state disk (SSD) is a perfect fix for this: it can handle the sudden I/O demand that a login storm creates. The problem is that desktop virtualization is supposed to drive down desktop computing costs, and putting all those desktops on silicon was not part of the game plan. Capacity optimization in the virtual desktop world is going to be critical, and it is something I'll discuss in our next entry. The ability to auto-tier data sets to SSD will also be important. The result should be a system in which, as virtual desktops begin to log in, the images of those desktops and the specific user profiles are staged into SSD so that users see a rapid login. Then, as the login storm dies down, those images are destaged from SSD back to regular hard disk, and the SSD tier is given back to databases during the work day and evening.
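To make the stage/destage idea concrete, here is a minimal sketch of a time-based placement rule. Everything in it is an assumption for illustration: the window times, the tier names, and the function itself are hypothetical, not any vendor's API.

```python
from datetime import time

# Hypothetical login/logout storm windows -- these would be tuned to your
# organization's actual schedule, not hard-coded like this.
LOGIN_WINDOW = (time(7, 30), time(9, 30))
LOGOUT_WINDOW = (time(16, 30), time(18, 0))

def desktop_image_tier(now: time) -> str:
    """Return 'ssd' during the login/logout storms so desktop images and
    user profiles stage onto fast storage, and 'hdd' the rest of the day
    so the SSD tier can be given back to databases."""
    for start, end in (LOGIN_WINDOW, LOGOUT_WINDOW):
        if start <= now <= end:
            return "ssd"
    return "hdd"
```

During the morning storm, `desktop_image_tier(time(8, 0))` places images on SSD; at midday, `desktop_image_tier(time(12, 0))` sends them back to hard disk.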
This is going to add a new wrinkle to auto-tiering. Moving data between tiers is typically driven by policy rather than by access patterns. You'll want the ability to override what the storage system thinks it should do and force a move of key virtual desktop components to the SSD tier when needed. Ideally, I'd like to see auto-tiering technology learn how data is accessed and when it should put which data into which tier; until then, you are going to need monitoring tools to help you make those moves.
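The override-plus-learning idea above can be sketched as a small placement engine. This is a toy model under stated assumptions: the class, its method names, and the hot-access threshold are all hypothetical, and the "learning" here is just a simple access counter standing in for the smarter behavior the article hopes future auto-tiering will have.

```python
from collections import defaultdict

class TieringEngine:
    """Toy tiering model: learned access counts pick the tier, but an
    administrator pin (a forced move) always overrides that choice."""

    def __init__(self, hot_threshold: int = 100):
        self.access_counts = defaultdict(int)  # crude "learned" access pattern
        self.pins = {}                         # dataset -> admin-forced tier
        self.hot_threshold = hot_threshold

    def record_access(self, dataset: str) -> None:
        self.access_counts[dataset] += 1

    def pin(self, dataset: str, tier: str) -> None:
        # Override what the engine thinks it should do, e.g. force the
        # virtual desktop images onto SSD ahead of the login storm.
        self.pins[dataset] = tier

    def unpin(self, dataset: str) -> None:
        self.pins.pop(dataset, None)

    def tier_for(self, dataset: str) -> str:
        if dataset in self.pins:               # forced move wins
            return self.pins[dataset]
        if self.access_counts[dataset] >= self.hot_threshold:
            return "ssd"                       # hot data promoted
        return "hdd"                           # cold data stays on disk
```

For example, pinning `"desktop-images"` to `"ssd"` before 8 a.m. guarantees placement regardless of how cold that data looked overnight; unpinning afterward lets the access-driven logic take over again.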
Interestingly, I had just completed an article that discusses many of these issues; I like being on the same page with someone as distinguished as Mr. Levine. Next up in fixing desktop virtualization storage is capacity management. Again, the goal of desktop virtualization is to drive down costs, and moving all the users' storage off of inexpensive local hard drives and onto expensive SAN storage is not going to work. Capacity optimization is going to be a must. Stay tuned for more on that.