David Hill

Network Computing Blogger



Virsto’s Storage Hypervisor Enables Storage to Play Nicely with Server Virtualization

Now, three of the main proponents of a storage hypervisor are DataCore, IBM and Virsto, and, as might be expected, each has a different perception of what a storage hypervisor is. Today we will examine the storage hypervisor through Virsto's lens.

Virsto installs software in each physical host as a virtual storage appliance, with one master VM that provides a central management point for installing and configuring the Virsto datastores. To the guest VMs, Virsto datastores look like NFS datastores, although they are in fact running on block-based storage. In addition, Virsto presents a new virtual hard disk option for server and virtualization administrators called a Virsto vDisk, which looks to VMware’s vSphere exactly like a native VMDK (Virtual Machine Disk Format). Virsto vDisks are storage targets for guest VMs just like native VMDKs, and the guest VMs don’t even know they are using Virsto storage — they just see the benefits.

In VMware-speak (with equivalent terms in the Microsoft and Citrix worlds), a Virsto vDisk appears to vSphere as if it were an EagerZeroedThick (EZT) (hey, I didn't name it) VMDK, the highest-performance storage object in a VMware environment. But that performance comes at a price, including long provisioning times and inefficient consumption of storage capacity. Each native EZT VMDK allocates a fixed amount of storage, say 30 to 40 GB, per VM, and that capacity is reserved whether or not the VM ever uses it. EZT is nonetheless the most common choice for provisioning production VMs, since it meets performance requirements with the fewest disk spindles. The space-efficient alternative is another native VMDK option from VMware, the Linked Clone, but Linked Clones lack the performance characteristics of an EZT VMDK.
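To make that capacity tradeoff concrete, here's a back-of-the-envelope comparison in Python. The 30-to-40 GB per-VM figure comes from the paragraph above; the VM count, written-data size and base-image size are assumptions made up for illustration:

```python
# Hypothetical numbers only, to illustrate the capacity tradeoff
# (these are not VMware or Virsto figures).
VM_SIZE_GB = 40          # logical size of each VM's disk
NUM_VMS = 10
WRITTEN_GB_PER_VM = 5    # data each VM has actually written
BASE_IMAGE_GB = 10       # shared golden image for linked clones

# EagerZeroedThick: the full logical size is reserved up front per VM.
ezt_gb = NUM_VMS * VM_SIZE_GB

# Linked clones: one shared base image plus a per-VM delta of written data.
linked_gb = BASE_IMAGE_GB + NUM_VMS * WRITTEN_GB_PER_VM

print(f"EZT reserves      {ezt_gb} GB")     # 400 GB
print(f"Linked clones use {linked_gb} GB")  # 60 GB
```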

In essence, a Virsto vDisk provides the performance of an EZT VMDK along with the rapid provisioning and space efficiency of a Linked Clone. The Virsto vDisk appears to pre-allocate a disk just as a native EZT VMDK would, but the disk is allocated only logically to the server platform, rather than physically partitioning primary storage into fixed allocations. One of the big operational benefits of Virsto is that it ends the prevailing practice of carving up a single LUN for each VM. Instead, the blocks from each VM are intelligently placed on the physical disk capacity that has been set up as one single Virsto datastore, and capacity is consumed only when data is actually written. This approach is, of course, an instance of the increasingly popular storage virtualization technique called thin provisioning, and done properly, it removes the critical problem of over-provisioning storage.
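Here's a minimal sketch of the thin-provisioning idea in Python. The ThinDisk class and its methods are hypothetical, invented for illustration; this is not Virsto's implementation or API:

```python
# A toy model of thin provisioning: the disk reports a fixed logical size,
# but physical blocks are consumed only when data is actually written.
BLOCK_SIZE = 512

class ThinDisk:
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks  # the size the guest VM sees
        self.blocks: dict[int, bytes] = {}    # physical blocks, allocated lazily

    def write(self, block_no: int, data: bytes) -> None:
        if not 0 <= block_no < self.logical_blocks:
            raise IndexError("write beyond end of logical disk")
        self.blocks[block_no] = data          # physical capacity consumed here

    def read(self, block_no: int) -> bytes:
        # Never-written blocks read back as zeros, like a pre-zeroed EZT disk.
        return self.blocks.get(block_no, b"\x00" * BLOCK_SIZE)

    def physical_gb_used(self) -> float:
        return len(self.blocks) * BLOCK_SIZE / 2**30

# A "40 GB" disk that has consumed almost no physical capacity:
disk = ThinDisk(logical_blocks=40 * 2**30 // BLOCK_SIZE)
disk.write(0, b"boot sector".ljust(BLOCK_SIZE, b"\x00"))
print(disk.physical_gb_used())  # a single 512-byte block, not 40 GB
```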

Virsto relies on a single dedicated Virsto vLog configured for each physical host: each host that runs Virsto software writes to a 10GB LUN that has been set up as its vLog. How does this logging architecture accelerate writes? The Virsto vDisks deliver the write I/Os from any VM to the appropriate vLog. The vLog then commits the write back to the VM, telling it that it can proceed immediately to its next operation, even though the I/O has not actually been written to primary storage and acknowledged. Instead, the vLog asynchronously de-stages logged writes to primary storage every 10 seconds, which allows Virsto's data map and optimization algorithms to lay down the blocks from a single VM intelligently, avoiding fragmentation and ensuring high sequential read performance.
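The write path is easier to see in code. Below is a simplified, hypothetical model of the behavior just described: writes append to a log and are acknowledged immediately, while a background thread de-stages to primary storage on a timer. Only the 10-second interval comes from Virsto's description; the class, names and data structures are invented:

```python
import threading
import time

# A toy vLog: acknowledge writes as soon as they hit the log, then de-stage
# them to primary storage asynchronously every 10 seconds.
class WriteLog:
    def __init__(self, primary: dict, destage_interval: float = 10.0):
        self.primary = primary                    # stands in for primary storage
        self.log: list[tuple[int, bytes]] = []    # append-only, sequential
        self.lock = threading.Lock()
        self.interval = destage_interval
        threading.Thread(target=self._destage_loop, daemon=True).start()

    def write(self, block_no: int, data: bytes) -> None:
        with self.lock:
            self.log.append((block_no, data))     # cheap sequential append
        # Returning here is the "commit back to the VM": the guest proceeds
        # without waiting for the block to reach primary storage.

    def _destage_loop(self) -> None:
        while True:
            time.sleep(self.interval)
            with self.lock:
                pending, self.log = self.log, []
            latest = dict(pending)                # last write to a block wins
            for block_no in sorted(latest):       # lay blocks down in order
                self.primary[block_no] = latest[block_no]

primary: dict[int, bytes] = {}
vlog = WriteLog(primary)
vlog.write(12345, b"data")  # returns immediately; de-staged within ~10 s
```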

In other words, the Virsto vLog takes a highly random I/O stream from the physical host and turns it into an almost 100% sequential stream, committing writes to the log one after another without suffering the rotational and seek-time latency that random writes to disk would normally incur. Writing sequential rather than random I/Os to spinning disk makes far better use of each spindle and, consequently, delivers better performance.
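A quick back-of-the-envelope calculation shows why this matters so much on spinning disk. The drive figures below are typical assumed values, not measurements:

```python
# Why random writes are expensive on a spinning disk (assumed 7,200 RPM figures).
rpm = 7200
avg_seek_ms = 8.0                      # assumed average seek time
avg_rotation_ms = 60_000 / rpm / 2     # half a revolution, ~4.2 ms

random_io_ms = avg_seek_ms + avg_rotation_ms
random_iops = 1000 / random_io_ms
print(f"~{random_iops:.0f} random IOPS per spindle")  # roughly 80

# A sequential log append pays neither cost on each write, so the same
# spindle can absorb far more write traffic when the stream is sequential.
```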

With Virsto, customers may not need to add SSD to meet performance requirements at all. But if they do want to use SSD, Virsto makes much more efficient use of it than conventional approaches that use it as a write cache. (Other solutions focus on read caching, which is important, but many use cases need write acceleration as well.) First, customers can choose to place the Virsto vLogs on SSD. The fact that Virsto sequentializes the write I/O is critical here, since random writes can seriously degrade the performance of SSDs. Moreover, a write log requires significantly less SSD capacity to speed up all writes than a cache does, making much more efficient use of the available SSD. In Virsto's architecture, the logs must reside on external shared storage to support functions like failover, so the SSD capacity must be located in the SAN, not on a host-based SSD card.
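To see why a log can be so much smaller than a cache, consider this rough sizing sketch. All the numbers are assumptions for illustration, not vendor figures:

```python
# A write log only has to buffer the writes that arrive between de-stage
# cycles, while a cache must hold enough hot data to get hits.
write_rate_mb_s = 200        # assumed sustained write load on one host
destage_interval_s = 10      # the vLog drains every 10 seconds
log_gb_needed = write_rate_mb_s * destage_interval_s / 1024
print(f"Log must buffer ~{log_gb_needed:.1f} GB")  # ~2 GB, fits a 10 GB vLog

hot_working_set_gb = 500     # assumed hot data a cache would need to hold
print(f"A cache sized for hits needs ~{hot_working_set_gb} GB of SSD")
```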

