Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization

Second, on the read cache side, Virsto provides storage tiering, which supports the option to create an SSD-based tier 0 where shared binaries, such as the golden masters used in VDI environments, can be placed to ensure extremely high read performance. Taken together, Virsto feels that these features extract a great deal of benefit from a very small amount of SSD capacity. In contrast, when SSD is used simply as a cache, the capacity required is usually a larger percentage of the overall primary storage capacity. In today’s virtual environments, it’s not uncommon to have tens or hundreds of terabytes in the primary data store, which means a lot of SSD can be required to get the performance speedups you want. While SSD costs may be coming down, SSDs are still easily at least 10x more expensive than spinning disks.
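To see why a small tier 0 can be so much cheaper than a percentage-of-capacity cache, a rough back-of-the-envelope calculation helps. All of the numbers below (primary capacity, cache sizing, image counts, and per-gigabyte prices) are illustrative assumptions, not Virsto figures:

```python
# Illustrative comparison: SSD sized as a cache (a slice of primary capacity)
# versus SSD used only as a small tier 0 holding shared golden-master images.
# Every number here is an assumption made for the sake of the arithmetic.

PRIMARY_TB = 100          # assumed primary data store (tens to hundreds of TB)
CACHE_FRACTION = 0.10     # caches are often sized as a fraction of primary capacity
GOLDEN_MASTERS = 20       # assumed number of shared master images
MASTER_SIZE_GB = 40       # assumed size of each golden master
SSD_COST_PER_GB = 1.00    # assumed SSD price per GB (roughly 10x spinning disk)

cache_gb = PRIMARY_TB * 1024 * CACHE_FRACTION
tier0_gb = GOLDEN_MASTERS * MASTER_SIZE_GB

print(f"SSD as a cache : {cache_gb:,.0f} GB  (~${cache_gb * SSD_COST_PER_GB:,.0f})")
print(f"SSD as tier 0  : {tier0_gb:,.0f} GB  (~${tier0_gb * SSD_COST_PER_GB:,.0f})")
```

With these assumptions, the cache approach needs roughly 10 TB of SSD while the tier 0 approach needs well under 1 TB, which is the gap Virsto is pointing to.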

The third element of Virsto’s logging architecture is the configuration of a managed, virtualized pool of capacity on primary storage called vSpace. This provides a single pool of block-based storage, the ability to assign QoS at the VM level, and the ability to manage up to four tiers of storage within the vSpace. The pool of capacity can span multiple arrays, including heterogeneous arrays (i.e., not just one storage product family from one vendor, but different arrays from different vendors as well). Note that IT organizations may continue to use familiar software management services, such as remote mirroring or replication. They may also retain their existing snapshotting and thin provisioning capabilities if they choose, but Virsto offers its own snapshotting and thin provisioning, which it feels provide significant performance and scalability advantages over more conventional technologies. IT would have to choose which set of snapshotting software to use, since it would make no sense to run both.
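Conceptually, the vSpace model might be sketched as something like the following. This is only a sketch to make the description concrete; the class names, tier numbering, and QoS labels are hypothetical and do not reflect Virsto’s actual interfaces:

```python
# Conceptual sketch of a single block pool spanning heterogeneous arrays, with up
# to four tiers and QoS assigned per VM. Names and structure are hypothetical and
# are not Virsto's API.
from dataclasses import dataclass, field

@dataclass
class BackingArray:
    vendor: str
    tier: int                 # 0 = SSD tier, 1-3 = progressively slower tiers
    capacity_tb: float

@dataclass
class VSpacePool:
    arrays: list = field(default_factory=list)
    vm_qos: dict = field(default_factory=dict)    # VM name -> QoS class

    def add_array(self, array: BackingArray) -> None:
        if not 0 <= array.tier <= 3:
            raise ValueError("the pool manages at most four tiers (0-3)")
        self.arrays.append(array)

    def assign_qos(self, vm_name: str, qos_class: str) -> None:
        self.vm_qos[vm_name] = qos_class          # QoS is set per VM, not per LUN

    def total_capacity_tb(self) -> float:
        return sum(a.capacity_tb for a in self.arrays)

pool = VSpacePool()
pool.add_array(BackingArray("VendorA", tier=0, capacity_tb=2))    # SSD tier 0
pool.add_array(BackingArray("VendorB", tier=2, capacity_tb=50))   # a different vendor's array
pool.assign_qos("vdi-desktop-001", "gold")
print(pool.total_capacity_tb(), pool.vm_qos)
```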

Speaking of the snapshotting capabilities, Virsto can generate a virtually unlimited number of snapshots very quickly. This is the fourth, and perhaps most innovative, piece of the Virsto architecture. With thick VMDKs, it is common for each VM to take 20 to 30 minutes to provision as all of the data is copied over to the new VM. With Virsto, through the use of its writable clones, each vDisk becomes a 4 KB marker that is created in seconds. Virsto feels its snapshot architecture avoids the significant degradation that some alternative architectures suffer as many snapshots accumulate over time. Note that these pooling and tiering capabilities also lead to faster provisioning and prevent over-provisioning, so storage management generally improves.
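The difference between copying a thick VMDK and creating a small marker can be illustrated with a generic copy-on-write clone, sketched below. This shows the general technique only; it is not Virsto’s implementation, and the names are made up:

```python
# Generic copy-on-write (writable clone) sketch: a full clone copies every block
# of the golden master, while a writable clone starts as a tiny reference and
# stores only the blocks that diverge after creation. Illustration only; this is
# not Virsto's implementation.

class GoldenMaster:
    def __init__(self, blocks: dict):
        self.blocks = blocks                      # block index -> data

class WritableClone:
    def __init__(self, master: GoldenMaster):
        self.master = master                      # the tiny "marker": just a reference
        self.delta = {}                           # only blocks written after cloning

    def read(self, idx: int) -> bytes:
        return self.delta.get(idx, self.master.blocks[idx])

    def write(self, idx: int, data: bytes) -> None:
        self.delta[idx] = data                    # copy-on-write: the master is untouched

master = GoldenMaster({i: b"base" for i in range(1000)})
clone = WritableClone(master)                     # created instantly, no data copied
clone.write(42, b"patched")
print(clone.read(42), clone.read(0))              # b'patched' b'base'
```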

Altogether, then, Virsto strives to improve storage performance, capacity utilization, provisioning speed, and management. In fact, Virsto claims (your mileage may vary) up to a 10x improvement in end users’ perceived performance over the alternative, a 90% reduction in capacity used versus the standard approach, and up to a 99% reduction in the provisioning times associated with high-performance storage. Note that the Virsto storage hypervisor does this with no disruption to, or changes in, how a server virtualization infrastructure is managed.

VDI as an Illustration of Where Virsto Plays
Let’s illustrate the use of Virsto with a key use case: virtual desktop infrastructure (VDI).
To many IT administrators, implementing a VDI would be a dream come true. The theory is that business users need to access applications and use data, but that all of this can be done over a network from a centralized IT facility (such as a data center) to thin clients (user computing devices that neither run large applications nor store much data) rather than thick devices, such as desktops or laptops, which store both applications and data. Many IT organizations are in love with the idea because of the perceived administrative efficiencies (such as updating one copy of an OS rather than having to update the OS on thousands of endpoint devices) as well as tighter security (sensitive data stored centrally is easier to secure than data on a user device).

However, though the concept of VDI is simple enough, the reality too often becomes a nightmare. There are a number of stumbling blocks, but let’s look at just two of them. The first is cost, and a major component of that cost is storage. My good friend John Webster of the Evaluator Group stated in a February 14, 2012 blog post that the typical virtual desktop running Windows applications moves almost 1 MB of data per second at an access rate of 10 I/Os per second (averaging 100 KB per I/O). In addition, data block sizes can vary rapidly and accesses are very random. To support that performance, John recommends SSDs at the array level, the ability to create thousands of writable snapshots, thin provisioning, and automated provisioning assistance. Of course, standard storage environments can do some of this, but not all of it easily.
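Those per-desktop numbers add up quickly at scale. A quick calculation, using Webster’s figures and an assumed desktop count, shows why:

```python
# Aggregate load using the per-desktop figures cited above (10 I/Os per second at
# roughly 100 KB per I/O, i.e., about 1 MB/s per desktop). The desktop count is an
# assumed example, not a figure from the article.

IOPS_PER_DESKTOP = 10
KB_PER_IO = 100
DESKTOPS = 2000                                   # assumed deployment size

total_iops = DESKTOPS * IOPS_PER_DESKTOP
total_mb_per_s = total_iops * KB_PER_IO / 1024

print(f"{total_iops:,} IOPS, ~{total_mb_per_s:,.0f} MB/s of mostly random I/O")
# 2,000 desktops -> 20,000 IOPS and roughly 2 GB/s, which is why SSDs, writable
# snapshots, and thin provisioning matter at the storage layer.
```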

The second stumbling block is the consumerization of IT, where users want to use more than one device, including tablets and smartphones. That requires supporting not only the applications and data common to all devices, but also applications and data that may be specific to each. All of this can be done with VDI using what is called the persistent approach, but that approach has typically meant keeping a copy of everything for each user. The alternative, the non-persistent approach, keeps a single copy of everything that is in common. Although the non-persistent approach is appropriate for some classes of users (such as an overseas help desk), it is simply not going to fly with knowledge workers who demand “any device, anytime, anywhere” and whose needs change rapidly. The persistent approach, however, is more costly.
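The capacity gap between the two approaches is easy to see with another rough calculation; the image and delta sizes below are assumptions for illustration only:

```python
# Illustrative capacity comparison (all numbers assumed): persistent desktops keep
# a full image per user, while non-persistent desktops share one golden image plus
# a small per-user writable delta.

USERS = 2000
IMAGE_GB = 40             # assumed full desktop image size
DELTA_GB = 2              # assumed per-user delta in the shared-image model

persistent_tb = USERS * IMAGE_GB / 1024
non_persistent_tb = (IMAGE_GB + USERS * DELTA_GB) / 1024

print(f"Persistent:     ~{persistent_tb:,.0f} TB")
print(f"Non-persistent: ~{non_persistent_tb:,.1f} TB")
```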