Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization

Now, three of the main proponents of a storage hypervisor are DataCore, IBM, and Virsto, and, as might be expected, each has a different perception of what a storage hypervisor is. Today we will examine the storage hypervisor through Virsto’s lens.

Virsto installs software in each physical host as a virtual storage appliance, with one master VM that provides a central management point for installing and configuring the Virsto datastores. To the guest VMs, Virsto datastores look like NFS datastores, although they are in fact running on block-based storage. In addition, Virsto presents a new virtual hard disk option for server and virtualization administrators called a Virsto vDisk, which looks to VMware’s vSphere exactly like a native VMDK (Virtual Machine Disk Format). Virsto vDisks are storage targets for guest VMs just like native VMDKs, and the guest VMs don’t even know they are using Virsto storage — they just see the benefits.
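
To make that layering concrete, here is a minimal Python sketch of the arrangement described above. All class and method names are hypothetical illustrations, not Virsto's actual software interfaces: a per-host appliance presents vDisks that the hypervisor sees as ordinary VMDKs, while a single master VM acts as the central management point.

```python
# Hypothetical model of the layering described above; names are illustrative
# only and do not reflect Virsto's real software or APIs.

class VirstoVDisk:
    """A virtual disk that the hypervisor sees as a native VMDK."""
    def __init__(self, name, logical_size_gb):
        self.name = name
        self.logical_size_gb = logical_size_gb   # what vSphere is told
        self.physical_used_gb = 0                # grows only as data is written

    def describe_to_hypervisor(self):
        # To vSphere this looks like an ordinary (EagerZeroedThick-like) VMDK.
        return {"type": "vmdk", "size_gb": self.logical_size_gb}


class HostAppliance:
    """Virsto software installed in one physical host as a virtual storage appliance."""
    def __init__(self, host_name):
        self.host_name = host_name
        self.vdisks = []

    def create_vdisk(self, name, size_gb):
        vdisk = VirstoVDisk(name, size_gb)
        self.vdisks.append(vdisk)
        return vdisk


class MasterVM:
    """Central management point for installing and configuring Virsto datastores."""
    def __init__(self):
        self.appliances = {}

    def register_host(self, host_name):
        self.appliances[host_name] = HostAppliance(host_name)
        return self.appliances[host_name]


master = MasterVM()
esx1 = master.register_host("esx01")
disk = esx1.create_vdisk("vm42-boot", size_gb=40)
print(disk.describe_to_hypervisor())   # looks like a plain 40 GB VMDK to the guest's host
```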

In VMware speak (with equivalent terms for the Microsoft and Citrix worlds), a Virsto vDisk appears to vSphere as if it were an EagerZeroedThick (EZT) VMDK (hey, I didn't name it), the highest-performance storage object in a VMware environment. But that performance comes at a price, including long provisioning times and inefficient storage capacity consumption. Each native VMDK allocates a fixed amount of storage, say 30 to 40 GB, for each VM, and that capacity is reserved whether or not the VM actually uses it. This is still the most common way to provision VMs for production workloads, since it meets performance requirements with the fewest disk spindles. The space-efficient alternative is another native VMDK option from VMware called Linked Clones, which consume far less capacity but lack the performance characteristics of an EZT VMDK.
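
To see why fixed allocation gets expensive, here is a back-of-the-envelope calculation. The VM count and usage figures are made-up examples, not measurements from the article; only the 30-to-40 GB per-VM allocation size comes from the text above.

```python
# Illustrative numbers only: 100 VMs, each given a fixed 40 GB VMDK,
# but each actually writing about 12 GB of data on average (hypothetical).
vm_count = 100
reserved_gb_per_vm = 40      # pre-allocated, EagerZeroedThick-style
actual_used_gb_per_vm = 12   # assumed average real usage

reserved_total = vm_count * reserved_gb_per_vm
used_total = vm_count * actual_used_gb_per_vm

print(f"Reserved up front:  {reserved_total} GB")              # 4000 GB set aside
print(f"Actually used:      {used_total} GB")                  # 1200 GB of real data
print(f"Stranded capacity:  {reserved_total - used_total} GB") # paid for, never written
```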

In essence, a Virsto vDisk provides the performance of an EZT VMDK along with the rapid provisioning times and space efficiency of a Linked Clone. The Virsto vDisk appears to pre-allocate a disk just as a native EZT VMDK would, but the disk is allocated only logically to the server platform rather than physically carving primary storage into fixed partitions. One of the big operational benefits of Virsto is that it ends the prevailing practice of carving out a separate LUN for each VM. Instead, the blocks from each VM are intelligently placed on the physical disk capacity that has been set up as a single Virsto datastore, and capacity is consumed only when data is actually written to the storage. This approach is, of course, an instance of the increasingly popular storage virtualization technique known as thin provisioning, and, done properly, it eliminates the chronic problem of over-provisioning storage.
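
The paragraph above is essentially describing thin provisioning. A minimal sketch of the idea follows (hypothetical structure, not Virsto code): the datastore hands out logical capacity freely and consumes physical blocks only when a VM actually writes.

```python
class ThinDatastore:
    """Toy thin-provisioned datastore: physical space is consumed only on write."""
    def __init__(self, physical_capacity_gb):
        self.physical_capacity_gb = physical_capacity_gb
        self.physical_used_gb = 0
        self.logical_allocated_gb = 0

    def allocate_vdisk(self, logical_size_gb):
        # Logical allocation costs nothing physically; it is only a promise.
        self.logical_allocated_gb += logical_size_gb

    def write(self, gb_written):
        # Physical capacity is consumed only as data actually lands on disk.
        if self.physical_used_gb + gb_written > self.physical_capacity_gb:
            raise RuntimeError("datastore out of physical capacity")
        self.physical_used_gb += gb_written


ds = ThinDatastore(physical_capacity_gb=1000)
for _ in range(100):                 # 100 VMs each "given" a 40 GB vDisk
    ds.allocate_vdisk(40)
ds.write(300)                        # but only 300 GB of real data written so far
print(ds.logical_allocated_gb, "GB promised,", ds.physical_used_gb, "GB consumed")
```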

Virsto relies on a single dedicated Virsto vLog configured for each physical host: each host that runs Virsto software writes to a 10GB LUN that has been set up as its vLog. How does this logging architecture accelerate writes? The Virsto vDisks deliver the write I/Os from each VM to the appropriate vLog. The vLog then acknowledges the write back to the VM, telling it that it can proceed immediately to its next operation, without waiting for the I/O to reach primary storage and for confirmation of a successful write to come back. Instead, the vLog asynchronously de-stages logged writes to primary storage every 10 seconds, which allows Virsto’s data map and optimization algorithms to lay down the blocks from a single VM intelligently so as to avoid fragmentation and ensure high sequential read performance.
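
Here is a rough sketch of the logging behavior just described. The structure and names are hypothetical, not Virsto's implementation: writes are appended sequentially to a log and acknowledged immediately, while a background loop de-stages them to primary storage on a fixed cadence, grouping each VM's blocks so they land contiguously.

```python
import threading
import time
from collections import deque

class WriteLog:
    """Toy write log: append sequentially, ack immediately, de-stage asynchronously."""
    def __init__(self, destage_interval_s=10):
        self.log = deque()                     # sequential, append-only log
        self.lock = threading.Lock()
        self.destage_interval_s = destage_interval_s
        threading.Thread(target=self._destage_loop, daemon=True).start()

    def write(self, vm_id, block):
        with self.lock:
            self.log.append((vm_id, block))    # one sequential append, no random seek
        return "ack"                           # the VM is told it can continue right away

    def _destage_loop(self):
        while True:
            time.sleep(self.destage_interval_s)
            with self.lock:
                pending, self.log = list(self.log), deque()
            # Group each VM's blocks so they land contiguously on primary storage,
            # keeping later sequential reads fast and avoiding fragmentation.
            by_vm = {}
            for vm_id, block in pending:
                by_vm.setdefault(vm_id, []).append(block)
            for vm_id, blocks in by_vm.items():
                write_to_primary_storage(vm_id, blocks)   # placeholder for real I/O


def write_to_primary_storage(vm_id, blocks):
    print(f"de-staged {len(blocks)} blocks for {vm_id} contiguously")


log = WriteLog(destage_interval_s=2)     # shorter interval just for the demo
for i in range(5):
    log.write("vm42", f"block-{i}")      # each ack returns immediately
time.sleep(3)                            # give the background de-stage a chance to run
```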

In other words, the Virsto vLog takes a highly random I/O stream from the physical host and turns it into an almost 100% sequential stream, committing writes to the log one after another without suffering the rotational and seek latency that random writes to disk would normally incur. Writing sequential rather than random I/Os to spinning disk makes far better use of each spindle, and therefore delivers better performance, because it avoids paying that rotational-latency and seek-time penalty on every operation.

With Virsto, customers may not need to add SSD at all to meet performance requirements. If they do want to use SSD, Virsto makes much more efficient use of it than conventional approaches that use it as a write cache (other solutions focus on read caching, which is important, but many use cases need write acceleration as well). First, customers can choose to put the Virsto vLogs on SSD. The fact that Virsto sequentializes write I/O is critical here, since random writes can seriously degrade SSD performance. And a write log needs significantly less SSD capacity to accelerate all writes than a cache does, so the available SSD is used far more efficiently. In Virsto’s architecture, the logs must reside on external shared storage to support functions like failover, so the SSD capacity must be located in the SAN rather than on a host-based SSD card.
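
A quick illustration of why the log needs so little SSD: it only has to hold roughly one de-stage interval's worth of writes. The 10 GB vLog size and 10-second de-stage interval come from the article; the per-host write rate below is a made-up example for the sake of the arithmetic.

```python
# The vLog only buffers ~one de-stage interval of writes, so it stays small.
host_write_rate_mb_s = 300          # hypothetical sustained write rate for one host
destage_interval_s = 10             # de-stage cadence from the article
vlog_size_gb = 10                   # vLog LUN size from the article

buffered_gb = host_write_rate_mb_s * destage_interval_s / 1024
print(f"~{buffered_gb:.1f} GB buffered per interval vs a {vlog_size_gb} GB vLog")
# A cache, by contrast, must be sized to the working set it is trying to hold,
# which is typically far larger than a few seconds of write traffic.
```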