David Hill

Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization

On the whole, server virtualization has been a blessing: consolidating physical machines into virtual machines leads to better system and CPU utilization and, with it, improved IT cost economics. But servers are only part of broader interlocked “systems” of hardware components (including networks and storage) and software (including operating systems and applications) that must work in harmony together. And any change in one component (in this case, servers) may cause complications in the others.

Second, on the read cache side, Virsto provides storage tiering, which supports the option to create an SSD-based tier 0 where shared binaries, such as the golden masters used in VDI environments, can be placed to ensure extremely high read performance. Taken together, Virsto feels that these features make highly effective use of a very small amount of SSD capacity. In contrast, when SSD is simply used as a cache, the capacity required is usually a larger percentage of the overall primary storage capacity. In today’s virtual environments, it’s not uncommon to have tens or hundreds of terabytes in the primary data store, which means a lot of SSD may be required to achieve the desired performance speedups. While SSD costs may be coming down, SSDs are still easily at least 10x more expensive per gigabyte than spinning disks.
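A back-of-the-envelope comparison makes the economics concrete. All of the figures below (primary capacity, cache sizing rule, golden-master sizes, disk prices) are illustrative assumptions, not Virsto specifications; only the “at least 10x” SSD-versus-disk price ratio comes from the discussion above.

```python
# Illustrative sizing sketch: SSD bought as a general read cache (a fraction
# of primary capacity) vs. SSD bought as a small tier 0 that holds only the
# shared golden masters. All figures are assumptions for illustration.

TB = 1024  # GB per TB

primary_capacity_gb = 100 * TB   # assumed primary data store: 100 TB
cache_fraction = 0.05            # assumed cache sizing rule: 5% of primary
golden_master_gb = 10 * 40       # assumed: ten 40 GB golden master images

ssd_as_cache_gb = primary_capacity_gb * cache_fraction
ssd_as_tier0_gb = golden_master_gb

hdd_cost_per_gb = 0.05                   # assumed $/GB for spinning disk
ssd_cost_per_gb = hdd_cost_per_gb * 10   # the "at least 10x" figure above

print(f"SSD as cache:  {ssd_as_cache_gb:,.0f} GB "
      f"-> ${ssd_as_cache_gb * ssd_cost_per_gb:,.0f}")
print(f"SSD as tier 0: {ssd_as_tier0_gb:,.0f} GB "
      f"-> ${ssd_as_tier0_gb * ssd_cost_per_gb:,.0f}")
```

Under these assumptions the dedicated tier 0 needs roughly a tenth of the SSD capacity that a percentage-of-primary cache would, which is the point the paragraph is making.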

The third element of Virsto’s logging architecture is the configuration of a managed virtualized pool of capacity on primary storage called vSpace. This provides a single pool of block-based storage, the ability to assign QoS at the VM level, and the ability to manage up to four tiers of storage within the vSpace. The pool of storage capacity can span multiple arrays, including heterogeneous arrays (i.e., not just one storage product family from one vendor but different arrays from different vendors, as well). Note that IT organizations may continue to use familiar software management services, such as remote mirroring or replication capabilities. They may also retain their snapshotting and thin provisioning capabilities if they choose, but Virsto also offers snapshotting and thin provisioning capabilities that it believes provide significant performance and scalability advantages over more conventional technologies. IT would have to choose which set of snapshotting software to use, but it would make no sense to use both sets.
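The structure this paragraph implies can be sketched as a small data model: a pool spanning arrays from multiple vendors, divided into up to four tiers, with QoS assigned per VM. This is a hypothetical illustration of the concepts (the class names, QoS labels, and array names are invented), not Virsto’s actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    """One storage tier within the pool (e.g., an SSD tier 0)."""
    name: str
    capacity_gb: int
    arrays: list  # backing arrays, possibly from different vendors

@dataclass
class VSpace:
    """Hypothetical sketch of a pooled, tiered capacity object."""
    name: str
    tiers: list = field(default_factory=list)   # up to four tiers
    vm_qos: dict = field(default_factory=dict)  # per-VM QoS level

    def add_tier(self, tier):
        if len(self.tiers) >= 4:
            raise ValueError("a vSpace manages at most four tiers")
        self.tiers.append(tier)

    def set_qos(self, vm_name, level):
        self.vm_qos[vm_name] = level  # QoS is assigned at the VM level

pool = VSpace("vspace1")
pool.add_tier(Tier("tier0-ssd", 400, ["vendorA-array1"]))
pool.add_tier(Tier("tier1-sas", 50_000, ["vendorA-array2", "vendorB-array1"]))
pool.set_qos("vdi-desktop-042", "gold")
```

The key design point is that the VM, not the array, is the unit of service-level management, while the tiers hide which vendor’s hardware actually holds the blocks.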

Speaking of the snapshotting capabilities, Virsto can generate a virtually unlimited number of snapshots very quickly. This is the fourth, and perhaps most innovative, piece of the Virsto architecture. With thick VMDKs, it is common for each VM to take 20-30 minutes to provision as all of the data is copied to the new VM. With Virsto, through the use of its writable clones, each vDisk becomes a 4 KB marker that is created in seconds. Virsto feels its snapshot architecture prevents the significant degradation that some alternative architectures suffer as many snapshots accumulate over time. Note that these pooling and tiering capabilities also lead to faster provisioning and prevent over-provisioning, so storage management generally improves.
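The general technique behind writable clones is copy-on-write: cloning records only a reference to the parent (the analogue of the small per-vDisk marker), and blocks are copied only when the clone writes to them. The sketch below shows that general technique in miniature; it is not Virsto’s implementation.

```python
class Disk:
    """Minimal copy-on-write disk: unwritten blocks are read from the parent."""

    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}  # block# -> data
        self.parent = parent

    def clone(self):
        # O(1): no data is copied at clone time -- just a parent reference,
        # which is why provisioning takes seconds instead of minutes.
        return Disk(parent=self)

    def read(self, n):
        if n in self.blocks:
            return self.blocks[n]
        return self.parent.read(n) if self.parent else None

    def write(self, n, data):
        self.blocks[n] = data  # copy-on-write: only touched blocks diverge

golden = Disk({0: "boot", 1: "os"})   # shared golden master
vm1 = golden.clone()                  # near-instant "provisioning"
vm1.write(1, "os+patch")
print(vm1.read(1), golden.read(1))    # clone diverges; master is untouched
```

The trade-off the paragraph alludes to is read performance: naive copy-on-write chains can degrade as snapshots stack up, since a read may have to walk many parents, which is the accumulation problem Virsto claims its log-structured design avoids.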

Altogether then, Virsto strives to improve storage performance, capacity utilization, provisioning speeds, and management. In fact, Virsto claims (your mileage may vary) up to 10x improvement in end-users’ perceived performance (over the alternative), 90% reduction in the capacity utilized (versus using the standard approach), and up to 99% reduction in the provisioning times associated with high performance storage. Note that the Virsto storage hypervisor does this with no disruption or changes in managing a server virtualization infrastructure.

VDI as an Illustration of Where Virsto Plays
Let’s illustrate the use of Virsto with a key use case — virtual desktop infrastructures (VDI). To many IT administrators, implementing a VDI would be a dream come true. The theory is that business users need to access applications and use data, but that all this can be done over a network from a centralized IT facility (such as a data center) to thin clients (user computing devices that neither run large applications nor store much data) rather than thick devices, such as desktops or laptops, which store both applications and data. Many IT organizations are in love with the idea because of the perceived administrative efficiencies (such as updating one copy of an OS rather than having to update an OS on thousands of devices at the user endpoint) as well as tighter security (sensitive data stored centrally is easier to secure than data on a user device).

However, though the concept of VDI is simple enough, the reality too often becomes a nightmare. There are a number of stumbling blocks, but let’s just look at two of them. The first is cost, and a major component of that cost is storage. My good friend John Webster of the Evaluator Group stated in a February 14, 2012 blog post that the typical virtual desktop running Windows applications is moving almost 1 MB of data per second with an access rate of 10 I/Os per second (averaging 100 KB per I/O). In addition, data block sizes can vary rapidly and accesses are very random. To support that performance, John recommends using SSDs at the array level, the ability to write thousands of writable snapshots, thin provisioning, and automated provisioning assistance. Of course, standard storage environments can do some of this, but not all of it easily.
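The per-desktop figures quoted above compound quickly at scale. A quick aggregation (the desktop count is an assumed deployment size, not from the source) shows why storage is such a large part of the cost:

```python
# Aggregate the per-desktop workload quoted above: 10 I/Os per second at
# ~100 KB per I/O (about 1 MB/s per desktop). Desktop count is an assumption.

iops_per_desktop = 10
io_size_kb = 100
desktops = 1000  # assumed deployment size

total_iops = desktops * iops_per_desktop
total_mb_per_s = desktops * iops_per_desktop * io_size_kb / 1024

print(f"{desktops} desktops -> {total_iops:,} IOPS, ~{total_mb_per_s:,.0f} MB/s")
```

Ten thousand mostly random IOPS sustained all day is well beyond what a modest spinning-disk array delivers comfortably, which is why the recommendations above lean on SSDs and writable snapshots.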

The second stumbling block is the consumerization of IT: users want the ability to use more than one device, including tablet computers and smartphones, which requires supporting not only applications and data common to all devices but also applications and data specific to each. Now, all this can be done with VDI in what is called the persistent approach, but that typically means keeping a copy of everything for each user, whereas the non-persistent approach keeps a single copy of everything that is in common. Although the non-persistent approach is appropriate for some classes of users (such as an overseas help desk), it is simply not going to fly with knowledge workers who demand “any device, anytime, anywhere” along with rapidly changing needs. But the persistent approach is more costly.

David Hill is principal of Mesabi Group LLC, which focuses on helping organizations make complex IT infrastructure decisions simpler and easier to understand. He is the author of the book "Data Protection: Governance, Risk Management, and Compliance."