There's been a lot of buzz in the storage and virtualization communities about using the storage in, or directly attached to, virtualization hosts as a replacement for the shared storage systems that have been the enterprise standard for almost two decades. As VMware’s VSAN prepares to emerge from its public beta next month, the hype is only building. Will server SANs take over the datacenter, or are we rapidly climbing the Gartner Hype Cycle’s Peak of Inflated Expectations?
To be clear, the products we’re talking about create a common pool of storage from the flash memory and spinning disks on the virtualization hosts themselves. Stu Miniman at Wikibon dubbed this technology Server SAN and I previously called it Storage SoftwareDefinus Virtucalis Scaleoutus.
In many ways, server SANs -- including the Maxta Storage Platform, Sanbolic’s Melio platform, Atlantis ILIO USX and EMC’s ScaleIO, in addition to the aforementioned VSAN -- are software implementations of hyper-converged systems like those from Nutanix, Scale Computing and Simplivity.
The server SAN architecture has some significant advantages, especially for environments such as remote and branch offices, where a pair of servers can provide a complete computing infrastructure. That would save a business $10,000 or more -- the cost of an external shared storage system.
[Find out the real reason disk drives haven't gotten faster than 15K RPM in "The Myth Of The Supersonic Disk Drive."]
Server SANs also have a political advantage. I’ve spoken to several virtualization admins who are excited about server SANs simply because they can use this technology to free themselves from what they see as the tyranny of their organization’s storage administrators. In many organizations, the SAN guys will only provision Tier 1 storage, replicated to another site, for the VMs. The virtualization admins, who understand the workloads better, know that many of their VMs just don’t need storage that fast and that well protected; they would rather give those VMs a more cost-effective home.
The server SAN fanbois are pitching the technology as a simple solution. While that may be true from a management perspective for some products, using direct-attached storage as a common pool creates its own set of complications.
The primary one is that servers are inherently unreliable devices with many single points of failure. This requires server SAN solutions to replicate data across multiple servers, which in turn means that for a server SAN to deliver the kind of performance a dedicated all-flash or even hybrid array does, the servers will need to be connected by at least a 10Gbps network. I remain unconvinced that server SANs are universally more desirable than best-of-breed storage-for-virtualization products such as Tintri.
I’m sure we’ll see server SANs take some piece of the market for use cases such as private clouds and remote offices. The question is how much storage will migrate from traditional SANs to server SANs and similar software-defined storage.
I’ll be talking flash, and of course software-defined storage, at Interop Las Vegas in a half-day workshop on March 31, "Deploying SSDs In the Data Center," and a pair of sessions on April 3, "Using Flash On The Server Side" and "Software-Defined Storage: Reality or BS?"
Disclaimer: While DeepStorage has a friendly relationship with just about everyone in the storage market, Maxta and Simplivity are clients.
Howard Marks is founder and chief scientist at Deepstorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M. and concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage ...