David Hill
Commentary

Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization

On the whole, server virtualization has been a blessing: physical-to-virtual machine consolidation leads to better system and CPU utilization and, in turn, improved IT cost economics. But servers are only part of broader interlocked “systems” of hardware components (including networks and storage) and software (including operating systems and applications) that must work in harmony together. And any change in one component (in this case, servers) may cause complications in the others.

Second, on the read cache side, Virsto provides storage tiering, which supports the option to create an SSD-based tier 0 where shared binaries, such as the golden masters used in VDI environments, can be placed to ensure extremely high read performance. Taken together, Virsto feels that these features make highly efficient use of a very small amount of SSD capacity. In contrast, when SSD is simply used as a cache, the capacity required is usually a larger percentage of the overall primary storage capacity. In today’s virtual environments, it is not uncommon to have tens or hundreds of terabytes in the primary data store, which means a lot of SSD may be required to get the performance speedups you desire. While SSD costs may be coming down, SSDs are still easily at least 10x more expensive than spinning disks.
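A back-of-envelope comparison illustrates the economics described above. All of the numbers here (primary capacity, cache sizing, prices) are illustrative assumptions, not vendor-published figures; only the roughly 10x SSD-versus-disk price ratio comes from the text:

```python
# Hypothetical sizing comparison: a small dedicated SSD tier 0 for
# shared binaries versus SSD sized as a general cache over primary
# storage. All figures below are assumed for illustration.

PRIMARY_TB = 100              # assumed primary data store, in TB
CACHE_FRACTION = 0.10         # assumed cache sized at 10% of primary
TIER0_TB = 2                  # assumed tier 0 holding golden masters only
SSD_COST_PER_TB = 5000.0      # assumed SSD price, ~10x spinning disk
HDD_COST_PER_TB = 500.0       # assumed spinning-disk price

cache_ssd_tb = PRIMARY_TB * CACHE_FRACTION      # 10 TB of SSD as cache
cache_cost = cache_ssd_tb * SSD_COST_PER_TB     # cost of the cache approach
tier0_cost = TIER0_TB * SSD_COST_PER_TB         # cost of the tier-0 approach

print(f"SSD as cache: {cache_ssd_tb:.0f} TB -> ${cache_cost:,.0f}")
print(f"SSD tier 0:   {TIER0_TB:.0f} TB -> ${tier0_cost:,.0f}")
print(f"Tier 0 uses {tier0_cost / cache_cost:.0%} of the cache spend")
```

Under these assumptions the dedicated tier 0 needs a fifth of the SSD spend of the cache approach, which is the general shape of the argument, even though the exact ratio depends entirely on the workload.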

The third element of Virsto’s logging architecture is the configuration of a managed virtualized pool of capacity on primary storage called vSpace. This provides a single pool of block-based storage, the ability to assign QoS at the VM level, and the ability to manage up to four tiers of storage within the vSpace. The pool of storage capacity can span multiple arrays, including heterogeneous arrays (i.e., not just one storage product family from one vendor but different arrays from different vendors, as well). Note that IT organizations may continue to use familiar software management services, such as remote mirroring or replication capabilities. They may also retain their snapshotting and thin provisioning capabilities if they choose, but Virsto also offers snapshotting and thin provisioning capabilities that it feels provide significant performance and scalability advantages over more conventional technologies. IT would have to choose which set of snapshotting software to use; it would make no sense to use both.
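The relationships described above (one pool, up to four tiers spanning heterogeneous arrays, QoS attached per VM) can be sketched as a data structure. This is a minimal illustrative model, not Virsto's actual API; all class and field names here are assumptions:

```python
# Hypothetical sketch of a vSpace-style pool: one block-storage pool
# spanning heterogeneous arrays, up to four tiers, QoS assigned per VM.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str            # e.g. "tier0-ssd", "tier1-sas"
    capacity_gb: int
    arrays: list         # backing arrays, possibly from different vendors

@dataclass
class VmQos:
    min_iops: int
    max_iops: int

@dataclass
class VSpace:
    tiers: list = field(default_factory=list)
    qos: dict = field(default_factory=dict)   # VM name -> VmQos

    def add_tier(self, tier):
        if len(self.tiers) >= 4:              # pool manages up to four tiers
            raise ValueError("vSpace supports at most four tiers")
        self.tiers.append(tier)

    def set_qos(self, vm, qos):
        self.qos[vm] = qos                    # QoS is per VM, not per LUN

    def total_capacity_gb(self):
        return sum(t.capacity_gb for t in self.tiers)

pool = VSpace()
pool.add_tier(Tier("tier0-ssd", 2_000, ["vendorA-array1"]))
pool.add_tier(Tier("tier1-sas", 50_000, ["vendorA-array2", "vendorB-array1"]))
pool.set_qos("vdi-vm-017", VmQos(min_iops=500, max_iops=2000))
print(pool.total_capacity_gb())
```

The key design point the sketch captures is that capacity and QoS are decoupled: tiers aggregate arrays from any vendor, while service levels attach to VMs rather than to the underlying hardware.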

Speaking of the snapshotting capabilities, Virsto can generate innumerable snapshots very quickly. This is the fourth, and perhaps most innovative, piece of the Virsto architecture. With thick VMDKs, it is common for each VM to take 20 to 30 minutes to provision, as all of the data is written out to the new VM. With Virsto’s writable clones, each vDisk becomes a 4 KB marker that is created in seconds. Virsto feels its snapshot architecture prevents the significant degradation that some alternative architectures suffer as many snapshots accumulate over time. Note that these pooling and tiering capabilities also lead to faster provisioning and prevent over-provisioning, so management of storage generally improves.
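The reason a writable clone can be created in seconds while a thick VMDK takes tens of minutes is that the clone is only a small metadata marker: unchanged blocks are read from the shared golden master, and only new writes consume space. A minimal copy-on-write sketch (illustrative class names, not Virsto's implementation):

```python
# Hypothetical sketch of a writable clone: creation copies no data,
# reads fall through to the shared master, writes go to a private delta.

class GoldenMaster:
    def __init__(self, blocks):
        self.blocks = blocks                  # block number -> data

class WritableClone:
    def __init__(self, master):
        self.master = master                  # shared, read-only
        self.delta = {}                       # only this clone's writes

    def read(self, block):
        # Prefer the clone's own write; otherwise share the master's block.
        return self.delta.get(block, self.master.blocks.get(block))

    def write(self, block, data):
        self.delta[block] = data              # master is never modified

master = GoldenMaster({0: b"boot", 1: b"os", 2: b"apps"})
clone = WritableClone(master)                 # created instantly: no data copied
clone.write(2, b"user-apps")

print(clone.read(0))     # shared with the master
print(clone.read(2))     # clone-private write
print(master.blocks[2])  # master untouched
```

Provisioning a VM this way costs one small marker regardless of how large the golden master is, which is why hundreds of desktops can be cloned in the time one thick copy would take.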

Altogether, then, Virsto strives to improve storage performance, capacity utilization, provisioning speed, and management. In fact, Virsto claims (your mileage may vary) up to a 10x improvement in end users’ perceived performance over the alternative, a 90% reduction in capacity utilized versus the standard approach, and up to a 99% reduction in the provisioning times associated with high-performance storage. Note that the Virsto storage hypervisor does this with no disruption or changes to managing a server virtualization infrastructure.

VDI as an Illustration of Where Virsto Plays
Let’s illustrate the use of Virsto with a key use case — virtual desktop infrastructure (VDI). To many IT administrators, implementing a VDI would be a dream come true. The theory is that business users need to access applications and use data, but that all this can be done over a network from a centralized IT facility (such as a data center) to thin clients (user computing devices that neither run large applications nor store much data) rather than thick devices, such as desktops or laptops, which store both applications and data. Many IT organizations are in love with the idea because of the perceived administrative efficiencies (such as updating one copy of an OS rather than having to update an OS on thousands of devices at the user endpoint) as well as tighter security (sensitive data stored centrally is easier to secure than data on a user device).

However, though the concept of VDI is simple enough, the reality too often becomes a nightmare. There are a number of stumbling blocks, but let’s just look at two of them. The first is cost, and a major component of that cost is storage. My good friend John Webster of the Evaluator Group stated in a February 14, 2012 blog post that the typical virtual desktop running Windows applications moves almost 1 MB of data per second at an access rate of 10 I/Os per second (averaging 100 KB per I/O). In addition, data block sizes vary widely and accesses are highly random. To support that performance, John recommends SSDs at the array level, the ability to write thousands of writable snapshots, thin provisioning, and automated provisioning assistance. Of course, standard storage environments can do some of this, but not all of it easily.
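The per-desktop numbers cited above multiply quickly at scale. The arithmetic below uses only the figures from the text (10 I/Os per second at an average of 100 KB per I/O); the 1,000-desktop count is an assumed example:

```python
# Checking the per-desktop figures and scaling them up. The desktop
# count is an assumed example; the per-desktop rates come from the text.

IOPS_PER_DESKTOP = 10
AVG_IO_KB = 100
DESKTOPS = 1000               # assumed deployment size

mb_per_sec_per_desktop = IOPS_PER_DESKTOP * AVG_IO_KB / 1000   # ~1 MB/s
total_iops = IOPS_PER_DESKTOP * DESKTOPS
total_mb_per_sec = mb_per_sec_per_desktop * DESKTOPS

print(f"{mb_per_sec_per_desktop:.1f} MB/s per desktop")
print(f"{total_iops:,} IOPS and {total_mb_per_sec:,.0f} MB/s across "
      f"{DESKTOPS:,} desktops")
```

Ten thousand small, random IOPS is well beyond what a modest spinning-disk array delivers comfortably, which is why array-level SSD figures in the recommendations.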

The second stumbling block is the consumerization of IT, where users want the ability to use more than one device, including tablet computers and smartphones, which requires supporting not only applications and data common to all devices, but also applications and data specific to each. Now, all this can be done with VDI using what is called the persistent approach, but that approach has typically meant keeping a copy of everything for each user, in contrast to the non-persistent approach, which keeps a single copy of everything that is in common. Although the non-persistent approach is appropriate for some classes of users (such as an overseas help desk), it is simply not going to fly with knowledge workers who demand “any device, anytime, anywhere” along with rapidly changing needs. But the persistent approach is more costly.

Comments
ALISTG, User Rank: Apprentice, 6/11/2012 12:55:38 PM
re: Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization
Some good info in here, but this could really have benefited from being a bit more objective. E.g., how does this approach compare to other solutions out there (such as Atlantis ILIO)?