Special Coverage Series

Network Computing


Commentary

Howard Marks, Network Computing Blogger

VMware Gets Real About Virtual Storage

VMware is making good on its talk about software-defined storage with three products: vFlash, VSAN and Virsto. But as always, there are pros and cons.

VMware has talked about software-defined storage and a virtual SAN for a long time, but the tangible software it delivered either still relied on shared storage or--in the case of its Virtual Storage Appliance (VSA)--wasn't ready for prime time.

This year's VMworld in San Francisco featured real software to back up the software-defined storage story. In the near term, VMware users will have three significant software-defined storage products: vSphere Flash Read Cache for server-side caching, VSAN to create hybrid storage pools from server attached devices, and good old Virsto for better snapshots and write acceleration.


The vSphere Flash Read Cache, frequently known as vFlash after the more ambitious development project whence it sprang, is built into the ESXi 5.5 kernel, though VMware may choose to enable it only for users with Enterprise or Enterprise Plus licenses. While it does run at the hypervisor level and is well integrated into vCenter administration, vFlash is primitive.

Yes, the guest VMs don't need an agent, but administrators must manually configure cache settings for each guest, including the fixed amount of flash each guest should use as cache. At least half a dozen third-party caching products can allocate cache dynamically, so VMs get more cache space when they need it and give up space that's holding stale data left over from, say, last Thursday night's run.
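To see why dynamic allocation matters, here's a toy sketch of the idea behind those third-party products: a single LRU cache shared across all VMs, so a busy guest naturally grows its share while an idle one gives blocks back. The class and method names are mine, purely for illustration; this is not any vendor's actual design.

```python
from collections import OrderedDict

class GlobalLruCache:
    """Toy model of dynamic cache allocation: one LRU shared by all
    VMs. A hot VM naturally occupies more of the flash device; a cold
    VM's stale blocks get evicted. Illustrative only."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # (vm, lba) -> data, oldest first

    def read(self, vm, lba, backing):
        key = (vm, lba)
        if key in self.blocks:               # cache hit
            self.blocks.move_to_end(key)     # mark most recently used
            return self.blocks[key]
        data = backing(lba)                  # miss: fetch from the datastore
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict coldest block, any VM
        return data

    def share(self, vm):
        """Fraction of the cache currently held by one VM."""
        return sum(1 for (v, _) in self.blocks if v == vm) / self.capacity
```

Contrast this with a fixed per-guest carve-up, as in vFlash: an idle VM's slice just sits reserved and empty, while here a busy VM can consume nearly the whole device until another guest warms up.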

I hope vFlash will show IT that server-side caching is a good idea. I also hope free evaluation software from third parties will entice IT to choose from among these better products. VMware says vFlash will be out later this year.

[Get an overview of server-side caching and the latest moves from startups in "Server-Side Caching Gets Smarter Via Startup PernixData."]

VSAN, the virtual SAN (but without the lowercase v), turns storage across a cluster of servers (up to eight in a cluster) into a shared, hybrid storage pool. Each server donating storage in a VSAN cluster must have at least one SSD and 1 to 36 spinning disks. The SSD is used as a write-through cache with synchronous replication, protecting data across multiple servers. Unlike VMware's VSA, no hardware RAID controller is required.
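The write path described above--land the write on flash, replicate it synchronously, acknowledge only then--can be sketched in a few lines. This is a toy model under my own assumptions (host names, two replicas, a simple destage step), not VMware's code.

```python
class ToyVsanHost:
    """Toy model of one server contributing storage to the pool."""
    def __init__(self, name):
        self.name = name
        self.ssd_log = []   # writes staged on the SSD first
        self.disk = {}      # eventual home on the spinning disks

    def stage_write(self, lba, data):
        self.ssd_log.append((lba, data))    # land on flash first

    def destage(self):
        for lba, data in self.ssd_log:      # later, move to spinning disk
            self.disk[lba] = data
        self.ssd_log.clear()

def vsan_write(hosts, lba, data, replicas=2):
    """Toy write path: the write is acknowledged only after it sits on
    the SSD of `replicas` different hosts, so a single server failure
    cannot lose acknowledged data. Illustrative only."""
    targets = hosts[:replicas]
    for h in targets:                       # synchronous replication
        h.stage_write(lba, data)
    return [h.name for h in targets]        # ack once all replicas hold it
```

The point of the sketch: durability comes from holding the write on multiple servers' flash before acknowledging, which is why every contributing host needs at least one SSD.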

VSAN works much like the storage engines in Nutanix's and SimpliVity's hyper-converged platforms and in EMC's ScaleIO, but there are differences.

One key difference is that those products are implemented as virtual storage appliances that run as virtual machines. By contrast, VSAN is a hypervisor kernel module. As a kernel module, VSAN interacts with the hypervisor not via NFS or iSCSI as the VSAs are forced to, but at the virtual machine level. This provides a shorter, cleaner path between the storage engine and the devices it manages, and reduces the number of CPU context switches needed to process a storage I/O request. VSAN is shifting from private to public beta, but most likely won't be ready for full production until early 2014.

The third leg of VMware's software-defined storage stool is Virsto, which VMware acquired in February. VMware has pretty much left Virsto alone, so it remains the same flawed gem it always was. VMware's vSphere can take advantage of Virsto's metadata-based snapshot technology when duplicating VMs or composing VDI images through the VAAI cloning feature.

The flaw is that Virsto snapshots can't completely replace vSphere's log-based snapshots, as there is no "take snapshot" primitive in VAAI. So when you back up your VMs, the vStorage API for Data Protection will still use the performance- and space-sapping log-based snapshot as the source for the backup.
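Why the distinction matters: a metadata snapshot freezes a block map and redirects new writes to fresh blocks, while a log-based snapshot opens a redo log that every subsequent read may have to walk. The toy classes below model both read paths in spirit; the names and structures are mine, not Virsto's or VMware's actual implementations.

```python
class MetadataSnapshotDisk:
    """Toy metadata-based snapshot: a snapshot is a frozen copy of the
    block map. New writes go to fresh blocks; only the live map moves.
    Reads stay one lookup no matter how many snapshots exist."""
    def __init__(self):
        self.store = {}       # block_id -> data
        self.live_map = {}    # lba -> block_id
        self.next_id = 0

    def write(self, lba, data):
        self.store[self.next_id] = data
        self.live_map[lba] = self.next_id   # redirect, never overwrite
        self.next_id += 1

    def snapshot(self):
        return dict(self.live_map)          # copy metadata only: instant

    def read(self, lba, block_map=None):
        m = block_map if block_map is not None else self.live_map
        return self.store[m[lba]]

class LogSnapshotDisk:
    """Toy log-based snapshot: each snapshot opens a new redo log, and
    a read may have to walk every log back to the base disk."""
    def __init__(self):
        self.base = {}
        self.logs = []        # list of {lba: data}, newest last

    def snapshot(self):
        self.logs.append({})

    def write(self, lba, data):
        (self.logs[-1] if self.logs else self.base)[lba] = data

    def read(self, lba):
        for log in reversed(self.logs):     # cost grows with chain length
            if lba in log:
                return log[lba]
        return self.base[lba]
```

The redo-log walk, plus the log consolidation needed when a snapshot is deleted, is the performance and space cost the backup path still pays as long as VADP has only log-based snapshots to work with.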

Reliable sources have told me that Virsto founders Alex Miroshnichenko and Serge Pashenkov are, rather than working on a new version of Virsto, working on improvements to the VMFS snapshot mechanism. I infer from this that VMware acquired Virsto more for Alex and Serge to fix their snapshots than as a replacement for VMFS.

The one significant storage technology VMware didn't advance at VMworld is Virtual Volumes, or vVols. The vVols technology essentially carves a block storage array into micro-volumes, allowing each .VMDK, or virtual disk, to be managed individually. Most block storage vendors plan to use vVols to provide per-VM storage management features including snapshots and replication. Current SCSI-based SAN protocols, from iSCSI to Fibre Channel, just can't scale to the thousands of volumes required in even a moderately sized virtualization environment.

Also curiously missing was serious discussion of VMware's VSA. While VMware spokespeople were careful to say that the VSA was not being discontinued and would still be marketed to the SMB space, VMware didn't announce any enhancements or even really discuss how customers should choose between VSAN and VSA.

My impression is that VMware plans to sell VSAN at a higher price than the $2,000/server cost of the current VSA, leaving the VSA as a low-cost alternative. No official word on pricing (or even solid rumor) for VSAN has come my way, so that's merely an educated guess.

VMware is making real progress toward the software-defined data center of its dreams, despite the obvious impact on parent company EMC's array business if the world flocks to VSAN. EMC is wise enough to know that its lunch is on the table, and that it's better off if VMware takes big bites than if an upstart eats it all.

[Catch Howard Marks' informative session "SSDs In the Data Center" at Interop New York, from September 30th to October 4th. Register today!]


