Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization


David Hill

April 5, 2012


On the whole, server virtualization has been a blessing due to physical machine to logical machine consolidation that leads to better system and CPU utilization and attendant improved IT cost economics. But servers are only part of broader interlocked “systems” of hardware components (including networks and storage) and software (including operating systems and applications) that must work in harmony together. And any change in one component (in this case, servers) may cause complications or problems (such as performance and capacity utilization) with another component (in this case, storage). This is yet another instance of the law of unintended consequences. However, Virsto’s chosen role is to repeal that instance of the law through the use of its storage hypervisor.

Back to Basics — First Server Virtualization
In IT, we tend to use the word “server” loosely. In one sense, a server is a computer hardware system, i.e. a physical box with CPU(s), motherboard, memory, network cards/connections, etc. In another sense, a server is a computer program that responds to the requests of other programs. These other programs are called clients, hence the concept of the client-server paradigm. In traditional models, each physical server ran one OS and one application supporting a specific process. The introduction of the server hypervisor changed that paradigm.

A server hypervisor is a virtual machine manager (VMM) which enables multiple instances of operating systems (homogeneous — all the same, such as Windows, or a heterogeneous — a mix, such as Windows and Linux) to operate concurrently as "guest" virtual machines (VMs) on a single host physical computer. The hypervisor provides the necessary isolation for each OS and its associated applications, as well as making sure resources (such as memory and CPU cycles) are allocated without conflict among the VMs.

The basic premise for going to the extra work of installing a hypervisor and managing the herd of VM cats is that the CPU resources of many physical servers are greatly under-utilized. Consolidating multiple VMs, applications and workloads on a single physical server initially means that no longer used servers can be redeployed, decommissioned, or held in inventory to meet future needs, thus reducing the demand to buy new servers. Overall, the cost economics of IT on the server side are improved. But if that were the entire story, we wouldn’t have learned anything new. However, there is a saying that you can’t ever do just one thing. In this case, too much of a good thing (server consolidation) can lead to a bad thing — storage-related performance, capacity utilization, and manageability issues. Let’s see why.

Back to Basics — Second, Server Virtualization’s Negative Impact Upon Storage
Now instead of one OS with its associated applications reading and writing to storage, virtualized systems support multiple instances of OSs and their attendant applications, all of which need I/O access to storage. If that happens without any coordination among the various VMs, then I/Os are heavily randomized in that no one has knowledge of which disk spindles are going to be hit harder than others and when. That means that storage performance (i.e., perceived user response time) can become degraded to the point of being unacceptable, as some disk spindles are overloaded, whereas others are twiddling their thumbs, so to speak.

Of course, IT has a standard solution to this problem, which is to throw more hardware (in this case storage) at it. That leads to over-provisioning (which is a polite way of saying that you spent too much money on too much of something) of storage, in that extra spindles, which help address the performance issue, are used. However, much of the disk space that was intended to store data (and is what you theoretically pay for) is left emptier than is preferable. If the net result of server virtualization is an overly expensive storage infrastructure, that obviously undermines the core benefits of server virtualization.

Virsto’s Storage Hypervisor — Balances without Over-Provisioning
Is there a solution to this problem? SSD vendors would raise their right hands and say that they have one way to deal with it. However, let’s explore another possible solution (noting that SSDs may play a complementary role): Virsto, a Silicon Valley-based company that says it wants to become the “VMware of storage,” believes that its storage hypervisor solves this problem. Three of the main proponents of a storage hypervisor are DataCore, IBM, and Virsto and, as might be expected, all have different perceptions of what a storage hypervisor is. Today we will examine the storage hypervisor through Virsto’s lens.

Virsto installs software in each physical host as a virtual storage appliance, with one master VM that provides a central management point for installing and configuring the Virsto datastores. To the guest VMs, Virsto datastores look like NFS datastores, although they are in fact running on block-based storage. In addition, Virsto presents a new virtual hard disk option for server and virtualization administrators called a Virsto vDisk, which looks to VMware’s vSphere exactly like a native VMDK (Virtual Machine Disk Format). Virsto vDisks are storage targets for guest VMs just like native VMDKs, and the guest VMs don’t even know they are using Virsto storage — they just see the benefits.

In VMware speak (with equivalent terms for the Microsoft and Citrix worlds), a Virsto vDisk appears to vSphere as if it were an EagerZeroedThick (EZT) (hey, I didn't name it) VMDK, the highest performance storage object in a VMware environment. But that performance comes at a price, including long provisioning times and inefficient storage capacity consumption. Each native VMDK allocates a fixed amount of storage, say 30 to 40 GB, for each VM. This capacity is reserved regardless of whether it is used by each VM. This is the most common choice for provisioning VMs for production workloads, since it meets performance requirements with the fewest disk spindles. A space-efficient alternative is to use another native VMDK option from VMware called Linked Clones, which are very space-efficient but lack the performance characteristics of an EZT VMDK.

In essence, Virsto vDisk provides the performance benefits of an EZT VMDK, as well as the rapid provisioning times and storage space efficiency of a Linked Clone. The Virsto vDisk appears to pre-allocate a disk just as a native EZT VMDK would, but that disk is allocated only logically to the server platform, rather than physically partitioning primary storage with fixed allocations. One of the big operational benefits of Virsto is that it stops the prevailing practice of carving up a single LUN for each VM. Instead, the blocks from each VM are intelligently placed on the physical disk capacity that has been set up as one single Virsto datastore, and capacity is consumed only when data is written to the storage. This approach is, of course, an instance of the ever-more-popular part of storage virtualization called thin provisioning, and done properly, it removes the critical problem of over-provisioning storage.
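To make the thin-provisioning idea concrete, here is a minimal sketch in Python. It is illustrative only (the class and method names are hypothetical, not Virsto's actual software): a vDisk reports its full logical size upward, but blocks in the shared datastore pool are consumed only when data is first written.

```python
# Hypothetical sketch of thin provisioning as described above.
# A vDisk advertises its full logical size, but physical blocks in the
# shared pool are allocated lazily, on first write only.

class DatastorePool:
    """A single shared pool of block storage (a vSpace-like datastore)."""
    def __init__(self):
        self.next_block = 0
        self.blocks = {}

    def allocate_block(self):
        physical = self.next_block
        self.next_block += 1
        return physical

    def store(self, physical_block, data):
        self.blocks[physical_block] = data


class ThinVDisk:
    """Looks like a fully pre-allocated disk, consumes space only on write."""
    def __init__(self, pool, logical_size_gb):
        self.pool = pool
        self.logical_size_gb = logical_size_gb   # what the hypervisor sees
        self.block_map = {}                      # logical -> physical block

    def write(self, logical_block, data):
        # Capacity is consumed only the first time a logical block is written.
        if logical_block not in self.block_map:
            self.block_map[logical_block] = self.pool.allocate_block()
        self.pool.store(self.block_map[logical_block], data)

    def physical_usage_blocks(self):
        return len(self.block_map)


pool = DatastorePool()
vdisk = ThinVDisk(pool, logical_size_gb=40)  # appears as a 40-GB "EZT" disk
vdisk.write(0, b"boot sector")
vdisk.write(1024, b"app data")
vdisk.write(0, b"boot sector v2")            # rewrite: no new allocation
print(vdisk.physical_usage_blocks())         # only 2 blocks actually consumed
```

The key property, as in the article: the guest sees a full-sized disk, but the pool only gives up capacity as data lands.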

Virsto relies on a single dedicated Virsto vLog configured for each physical host. Each physical host that runs Virsto software writes to a 10-GB LUN that has been set up as the Virsto vLog. How does this logging architecture accelerate write performance? The Virsto vDisk delivers the write I/Os from any VM to the appropriate vLog. The vLog then acknowledges the write back to the VM, which means the VM can go on immediately to its next operation; the vLog has not waited for the I/O to be written to primary storage and a confirmation of a successful write to be returned. Instead, the vLog asynchronously de-stages logged writes to primary storage every 10 seconds, which allows Virsto’s data map and optimization algorithms to lay down blocks from a single VM intelligently so as to avoid fragmentation and ensure high sequential read performance.
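The log-then-destage pattern described above can be sketched in a few lines of Python. This is a conceptual model, not Virsto's implementation (the `WriteLog` class and its methods are invented for illustration): writes append sequentially to a log and are acknowledged at once, and a later de-stage pass lays the blocks down on primary storage grouped per VM.

```python
# Conceptual sketch of log-based write acceleration (names are illustrative).
# Writes are appended sequentially to a log and acknowledged immediately;
# a periodic de-stage flushes them to primary storage in an ordered layout.

class WriteLog:
    def __init__(self, primary):
        self.log = []           # sequential log (e.g., a small dedicated LUN)
        self.primary = primary  # primary storage, keyed by (vm_id, block)

    def write(self, vm_id, block, data):
        # Append sequentially and acknowledge at once; the VM proceeds
        # without waiting on primary-storage latency.
        self.log.append((vm_id, block, data))
        return "ack"

    def destage(self):
        # Periodically (e.g., every 10 seconds) flush logged writes to
        # primary storage, grouping by VM and block so each VM's data
        # stays contiguous and later reads are sequential.
        for vm_id, block, data in sorted(self.log):
            self.primary[(vm_id, block)] = data
        self.log.clear()


primary = {}
vlog = WriteLog(primary)
# Interleaved, effectively random writes arriving from two VMs:
assert vlog.write("vm2", 7, b"x") == "ack"
assert vlog.write("vm1", 3, b"y") == "ack"
assert vlog.write("vm2", 2, b"z") == "ack"
vlog.destage()
print(sorted(primary))  # blocks now grouped per VM on primary storage
```

The point the article makes is visible here: the acknowledgment path touches only the sequential log, while the expensive placement work happens later, in the background.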

In other words, the Virsto vLog takes a very random I/O stream from the physical host and turns it into an almost 100% sequential stream, committing writes to the log one after another without suffering the acknowledgment latency (rotational and seek time) that writing randomly to any disk would normally endure. Writing sequential, rather than random, I/Os to spinning disk makes better use of disk spindles and, consequently, improves performance.

With Virsto, customers may not need to add SSD to meet performance requirements, but if they do want to use SSD, Virsto provides for much more efficient use of it than conventional approaches that use it as a cache. (Other solutions focus on read caching, which is important, but many use cases need write acceleration as well.) First, customers can choose to use SSD for the Virsto vLogs. The fact that Virsto sequentializes the write I/O is critical, since random writes can seriously reduce the performance of SSDs. A write log requires significantly less SSD capacity to speed up all the writes than a cache does, making much more efficient use of available SSD capacity. In Virsto’s architecture, these logs must be on external shared storage to support functions like failover, so the SSD capacity must be located in the SAN, not in a host-based SSD card.

Second, on the read cache side, Virsto provides storage tiering, which supports the option to create an SSD-based tier 0 where shared binaries, such as the golden masters used in VDI environments, can be placed to ensure extremely high read performance. Taken together, Virsto believes these features extract a great deal of value from a very small amount of SSD capacity. In contrast, when SSD is simply used as a cache, the capacity required is usually a larger percentage of the overall primary storage capacity. In today’s virtual environments, it’s not uncommon to have tens or hundreds of terabytes as the primary data store, which means a lot of SSD can be required to get the desired performance speedups. While SSD costs may be coming down, SSDs are still easily at least 10x more expensive than spinning disks.

The third element of Virsto’s logging architecture is the configuration of a managed virtualized pool of capacity on primary storage called vSpace. This provides a single pool of block-based storage, the ability to assign QoS at the VM level, and the ability to manage up to four tiers of storage within the vSpace. The pool of storage capacity can span multiple arrays, including heterogeneous arrays (i.e., not just one storage product family from one vendor, but different arrays from different vendors as well). Note that IT organizations may continue to use familiar software management services, such as remote mirroring or replication capabilities. They may also retain their snapshotting and thin provisioning capabilities if they choose, but Virsto offers its own snapshotting and thin provisioning capabilities that it believes provide significant performance and scalability advantages over more conventional technologies. IT would have to choose which set of snapshotting software to use, as it would make no sense to use both sets.

Speaking of the snapshotting capabilities, Virsto can generate innumerable snapshots very quickly. This is the fourth, and perhaps most innovative, piece of the Virsto architecture. With thick VMDKs, it is common for each VM to take 20 to 30 minutes to be provisioned as all of the data is written over to the new VM. With Virsto, through the use of its writable clones, each vDisk becomes a 4-KB marker that is created in seconds. Virsto believes its snapshot architecture avoids the significant degradation that some alternative architectures suffer as many snapshots accumulate over time. Note that these pooling and tiering capabilities also lead to faster provisioning and prevent over-provisioning, so management of storage generally improves.
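The writable-clone idea can be sketched as a copy-on-write structure. Again, this is a hedged illustration with invented names, not Virsto's code: a clone starts as a tiny marker that merely references its parent's data, and blocks are duplicated only when the clone itself writes, which is why provisioning takes seconds rather than tens of minutes.

```python
# Hedged sketch of a writable clone (copy-on-write); names are illustrative.
# A clone is created as a small marker referencing its parent image;
# data diverges only when the clone writes its own blocks.

class Volume:
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}
        self.parent = parent   # a writable clone references a parent image

    def clone(self):
        # Creating a clone records only a marker; no data is copied.
        return Volume(parent=self)

    def read(self, block):
        # Read the clone's own copy if it has one, else fall through
        # to the parent (e.g., a shared golden master).
        if block in self.blocks:
            return self.blocks[block]
        if self.parent is not None:
            return self.parent.read(block)
        return None

    def write(self, block, data):
        # Divergent data lives only in the clone's (initially empty) map.
        self.blocks[block] = data


# e.g., a VDI golden master shared by thousands of desktops:
golden = Volume(blocks={0: b"os image", 1: b"shared app"})
desktop = golden.clone()         # near-instant: a marker, not a full copy
desktop.write(1, b"user change")
print(desktop.read(0))           # reads through to the master: b'os image'
print(desktop.read(1))           # the clone's own copy: b'user change'
print(golden.read(1))            # master is untouched: b'shared app'
```

This also hints at why poorly designed snapshot chains degrade over time: every read that misses the clone walks up the parent chain, so keeping that chain short or flat matters.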

Altogether then, Virsto strives to improve storage performance, capacity utilization, provisioning speeds, and management. In fact, Virsto claims (your mileage may vary) up to 10x improvement in end-users’ perceived performance (over the alternative), 90% reduction in the capacity utilized (versus using the standard approach), and up to 99% reduction in the provisioning times associated with high performance storage. Note that the Virsto storage hypervisor does this with no disruption or changes in managing a server virtualization infrastructure.

VDI as an Illustration of Where Virsto Plays
Let’s illustrate the use of Virsto with a key use case: virtual desktop infrastructure (VDI). To many IT administrators, implementing a VDI would be a dream come true. The theory is that business users need to access applications and use data, but that all this can be done over a network from a centralized IT facility (such as a data center) to thin clients (user computing devices that do not run large applications or store much data) rather than thick devices, such as desktops or laptops that store both applications and data. Many IT organizations are in love with the idea because of the perceived administrative efficiencies (such as updating one copy of an OS rather than having to update an OS on thousands of devices at the user endpoint) as well as tighter security (sensitive data stored centrally is easier to secure than data on a user device).

However, though the concept of VDI is simple enough, the reality too often becomes a nightmare. There are a number of stumbling blocks, but let’s just look at two of them. The first is cost, and a major component of that cost is storage. My good friend John Webster of the Evaluator Group stated in a February 14, 2012 blog post that the typical virtual desktop running Windows applications is moving almost 1 MB of data per second with an access rate of 10 I/Os per second (averaging 100 KB per I/O). In addition, data block sizes can vary widely and accesses are very random. To support that performance, John recommends using SSDs at the array level, the ability to write thousands of writable snapshots, thin provisioning, and automated provisioning assistance. Of course, standard storage environments can do some of this, but not all of it easily.

The second stumbling block is the consumerization of IT, where users want the ability to use more than one device, including tablet computers and smartphones, with the need to support not only applications and data that are common to all devices, but also applications and data that may be specific to each. Now, all this can be done with VDI in what is called the persistent approach, which typically has meant keeping a copy of everything for each user, as opposed to the non-persistent approach, which makes a single copy of everything that is in common. Although the non-persistent approach is appropriate for some classes of users (such as an overseas help desk), it is simply not going to fly with knowledge workers who demand “any device, anytime, anywhere” along with rapidly changing needs. But the persistent approach is more costly.

Still, IT organizations have not given up the dream of VDI. A study commissioned by Virsto showed that 67% of respondents plan to engage in a VDI project within the next 12 months and that 54% have already started a pilot or deployed a virtual desktop project. On the downside, the survey showed that 46% of VDI projects are stalled due to unacceptable user performance and project cost overruns.

Now, as a rhetorical question, why do you think that Virsto sponsored the survey? Right: to demonstrate the magnitude of a problem for which they have a solution! Think of what Virsto brings to the table in a highly virtualized (thousands of virtual desktops) environment. Yes, the ability to use space wisely using Virsto vDisks and vPool (keeping the costs down) and the ability to get the necessary performance (a vLog which transforms random I/Os into more performance-friendly sequential I/Os), which overcomes any potential user response time performance objections. In other words, the Virsto Storage Hypervisor takes the pain out of one of IT’s thorniest current problems.

Mesabi Musings
Server virtualization is generally seen as a boon, but the catch (which probably has slowed the adoption of server virtualization in some cases) is that these projects often result in new problems with storage capacity utilization and performance due to the physical server trying to use its limited resources to handle a much heavier randomized I/O workload. The solution to the problem has been a choice between unpalatable alternatives: overprovisioning storage or accepting less performance than necessary. Rather than accept either of these alternatives, some IT organizations have slowed down their implementations of server virtualization.

To combat these challenges, Virsto has developed a storage hypervisor that works closely with server hypervisors at the physical host level and on down to primary storage, thus eliminating the need for that unpalatable choice. The planned result is performance that meets users’ expectations, use of storage capacity in a way that better meets budget expectations, and improved management administration benefits, such as easier provisioning of storage. The Virsto approach does this in a manner that is non-disruptive to both the server virtualization infrastructure and the storage infrastructure. Introducing one technology (server virtualization) may lead to unexpected problems, but what one technology can create, another technology can solve, and in this case that is what Virsto appears to have achieved.

At the time of publication, Virsto is not a client of David Hill and the Mesabi Group.
