Network Computing is part of the Informa Tech Division of Informa PLC


Time For LUN To Die

While we may disagree about whether hyperconverged architectures will take over the data center and whether there's a future for dedicated storage arrays, the storage and virtualization chattering classes agree that the LUN, as a unit of storage abstraction, needs to die. Now that VMware is finally preparing to release its long-awaited Virtual Volumes (VVOLs) technology, we can start managing storage one VM at a time.

Today’s SAN protocols and storage systems require a logical connection (a LUN) between a server and each volume that server accesses, and they limit each server to 255 active connections. That limit seemed quite generous when each physical server had exclusive access to its volumes. And as long as storage administrators were dealing with a handful of physical servers, requiring a few CLI commands or mouse clicks to provision a new volume was reasonable.
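The arithmetic behind that limit explains why shared datastores became the norm. A minimal sketch, using invented cluster sizes (the 255-connection ceiling is from the text; the VM counts are hypothetical):

```python
import math

# Assumption from the article: each host is limited to 255 active
# LUN connections. Cluster sizes below are made-up illustrations.
MAX_LUNS_PER_HOST = 255

# In a shared-storage cluster, every host must see every VM's LUN,
# so per-VM LUNs would require one connection per VM on each host.
vms_in_cluster = 500
luns_needed_per_vm_luns = vms_in_cluster
print(luns_needed_per_vm_luns > MAX_LUNS_PER_HOST)  # True: over the limit

# Shared datastores sidestep the problem: pack, say, 25 VMs per datastore.
vms_per_datastore = 25
datastores = math.ceil(vms_in_cluster / vms_per_datastore)
print(datastores)  # 20 LUNs -- comfortably under the 255-connection cap
```

With per-VM LUNs the cluster blows past the connection limit at a few hundred VMs; with shared datastores the LUN count stays small, which is exactly the trade-off the next paragraph describes.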

In a hypervisor environment, the only practical way to manage conventional storage is to create a relatively small number of shared datastores and put multiple VMs in each. That simplifies management and allows virtualization admins to create VMs without waiting for the storage team to provision LUNs.

When, as in most organizations, your storage team allocates a small number of fixed-size LUNs to your hypervisor cluster, you’re left managing several datastores and several pools of free space. Since prudent administrators leave some free space in each datastore, more datastores means more idle capacity. Thin provisioning helps quite a bit, but vSphere isn’t as aggressive as, say, Hyper-V in using the T10 UNMAP command to return unneeded capacity to the free pool, so capacity gets stranded in individual datastores over time.
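A toy model makes the stranded-capacity cost concrete. All the numbers here are invented for illustration; the point is only that per-datastore headroom multiplies:

```python
# Hypothetical cluster: ten 2 TB datastores, each keeping 20% free
# as prudent headroom. None of these figures come from the article.
DATASTORE_GB = 2000
HEADROOM = 0.20          # free-space reserve kept in each datastore
datastore_count = 10

idle_per_store = DATASTORE_GB * HEADROOM       # GB held back per datastore
total_idle = idle_per_store * datastore_count  # GB idle cluster-wide

print(total_idle)        # 4000.0 GB of capacity reserved in total...
print(idle_per_store)    # ...but fragmented into 400.0 GB pools, so no
                         # single datastore can lend its slack to another
```

One big pool with the same 20% headroom would reserve the same 4 TB, but any workload could draw on all of it; ten separate pools strand most of that slack.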

Even worse, since the storage array knows nothing about the data it holds other than a logical block address, any data services the array provides have to be provided for an entire LUN. That significantly reduces the value of these data services. If we want to replicate some VMs and not others, we need to segregate those VMs into replicated and non-replicated datastores.
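The segregation this forces can be sketched as a simple placement rule: because the array applies services per LUN, VMs get grouped by policy rather than placed freely. VM names and policies below are invented for illustration:

```python
# Hedged sketch: when replication is a per-LUN property, every VM with
# the same policy must land on the same kind of datastore.
vms = {
    "web-01": "replicated",
    "web-02": "replicated",
    "dev-scratch": "non-replicated",
    "build-agent": "non-replicated",
}

datastores = {}
for vm, policy in vms.items():
    # Group VMs into a datastore per policy, since the array can't
    # replicate one VM's blocks and skip its neighbor's on the same LUN.
    datastores.setdefault(policy, []).append(vm)

print(datastores)
# {'replicated': ['web-01', 'web-02'],
#  'non-replicated': ['dev-scratch', 'build-agent']}
```

Per-VM granularity, which VVOLs promise, removes the need for this policy-to-datastore mapping entirely.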

Because applications can only be quiesced for a very short period (Windows VSS, for example, times out if the snapshot isn’t taken within 15 seconds), it’s not possible to take an application-consistent array snapshot of the many VMs stored on a single LUN. This forces backup applications to use hypervisor snapshots, with their significant performance impact, as their data sources.

While a few storage systems, such as those from Tintri and Nutanix, have provided per-VM data services, they’ve acquired the context they need to identify a given VM’s data by maintaining their own file systems and delivering storage via NFS. VMware’s VVOLs have long promised a more general solution that works on block as well as file storage. While I’ve wondered whether VVOLs would ever arrive, their release is looking more imminent than ever.

In my next post, I'll look at how VVOLs do their magic and how they change the storage planning equation.