Tintri Tackles VM Storage Challenges
March 24, 2011
Organizations are losing the fight against the rising tide of data, and while IT vendors are reaping the rewards of throwing more storage at the problem, it's not a long-term solution. According to the third annual InformationWeek Analytics State of Enterprise Storage Survey, the amount of actively managed storage continues to expand at around 20 percent per year, with many IT staffs dealing with growth rates exceeding 50 percent and most data centers doubling storage capacity every two to three years. Another recent study, InformationWeek Analytics Storage and File Virtualization Survey, shows that 82 percent of respondents are using or assessing storage virtualization.
That's the missing piece of the puzzle, at least when it comes to handling surging virtual machine (VM) storage, says Tintri, which is launching Tintri VMstore, billed as the world's first production VM-aware storage system. The company was founded by Dr. Kieran Harty, who led all desktop and server research and product development at VMware from 1999 through 2006.
Legacy storage systems, which account for the lion's share of storage being bought today, were architected before virtualization was even a consideration, says Harty. With VMstore, organizations can virtualize as much as 80 percent or more of their IT infrastructures, instead of the typical 20 to 30 percent prevalent today, he says.
Designed to optimize performance and simplify management of VM storage at scale, VMstore features a VM-aware file system that services I/O workloads on a per-VM and per-virtual-disk basis. The hardware appliance starts at $65,000 for a single node with more than 1TB of flash and 16 1TB SATA drives. It is available initially for VMware environments, typically those running 100-plus VMs; support for other hypervisors will be added according to customer demand, says Tintri.
One of the strengths of VMstore is that it's easy for a system administrator to use. There is no need for Fibre Channel storage expertise, nor for all the messy Fibre Channel mapping and zoning, says Terri McClure, analyst at Enterprise Strategy Group. "It was not designed from the storage up by adapting legacy storage architectures to accommodate virtual server environments; it was designed from the virtual server down, to address the unique storage challenges introduced when you virtualize the server layer--performance, 'seeing,' managing and troubleshooting storage in relation to virtual servers, from capacity and troubleshooting drilldowns. [It] creates a virtual storage layer that is as dynamic as today's server environments. No need to figure out virtual-to-physical mapping between physical servers, virtual servers, data stores, LUNs and disks."