It was just two short years ago that then-VMware CTO Steve Chambers coined the term "hyper-converged" to differentiate the combined compute/storage appliances that Nutanix, SimpliVity, and Scale Computing were introducing from converged systems like VCE's Vblock. Fast forward to today, and everyone from mainstream IT vendors to software-defined storage startups is peddling hyper-convergence as the next big thing, or at least another option on their line card.
As with most new IT architectures, hyper-convergence first emerged from startups, but just as a tsunami starts with an undersea earthquake, the current tsunami of hyper-converged products started with the introduction of VMware's EVO:RAIL platform in August. EVO:RAIL is a standardized reference architecture that defines a high-density, four-server cluster that hardware vendors can build from their respective parts bins.
While you can buy EVO:RAIL systems from Fujitsu, Dell, Supermicro, directly from VMware mothership EMC, or most recently HP and Hitachi Data Systems, the hardware is basically the same. Each of the four nodes has dual Xeon E5-2620 v2 processors, 192 GB of memory, a 400 GB SSD, and three 1.2 TB disk drives.
Vendors might choose different 10 Gbps Ethernet chips and some users might have a strong preference for HP's iLO over Dell's DRAC, but EVO:RAIL systems are essentially interchangeable. Much of that interchangeability comes from VMware's software bundle, which includes not just vSphere and VSAN, but also an automated installation process and simplified management console.
When HP was conspicuously missing from the initial EVO:RAIL announcements, I predicted that rather than run with the pack, it would build its hyper-converged appliances around its StoreVirtual VSA, the latest name for the LeftHand iSCSI technology.
Turns out I was half right, as the HP ConvergedSystem 200-HC StoreVirtual (could HP come up with a longer name?) appliances do just that. The base model uses all SAS drives, since the SQL Server supporting a dozen cash registers in a big-box store might not need more than a few hundred IOPS. The more performance-oriented model adds SSDs and leverages StoreVirtual's sub-LUN tiering. I was also half wrong, as HP decided to hedge its bets and load the VMware software stack to offer an EVO:RAIL appliance as well.
As successful as I expect EVO:RAIL to be, it's not the most interesting thing going on in the hyper-converged world. The boffins at Gridstore, like the boffins at Scale Computing before them, figured out that if their scale-out storage system for Hyper-V ran under Windows on a single Xeon, they could put it on a beefier server and use Hyper-V to run compute workloads as well. The servers in their four-node, 2U appliance are pretty beefy, with 10-core E5-2690 processors.
Despite my opinion that all-flash and hyper-convergence are two great tastes that might not taste great together, vendors are bringing out all-flash hyper-converged systems. Since budgets are binary, people who can afford an all-flash solution will buy it, the same way people who can afford BMWs select the ultimate driving machine over a more prosaic Hyundai or Chevy. Nutanix's NX-9000 puts six 1.6 TB flash drives on each 20-core node for 3.2 TB of usable capacity with three-way mirroring.
Even Gridstore has gotten in on the all-flash act, using 1 TB of high-endurance SSD as a cache in front of 5 TB of more read-oriented SSDs on each node. Gridstore's virtual controller uses a distributed parity data protection model, which has significantly less overhead than the more typical three-way mirroring -- an advantage with expensive flash.
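The overhead difference between the two protection schemes is easy to see with a little arithmetic. Here's a minimal sketch comparing usable capacity under three-way mirroring (the NX-9000's published 9.6 TB raw to 3.2 TB usable) against distributed parity; the 4+1 parity stripe width below is an assumption for illustration, not a published Gridstore specification.

```python
def usable_mirrored(raw_tb: float, copies: int = 3) -> float:
    """Usable capacity when every block is stored `copies` times."""
    return raw_tb / copies

def usable_parity(raw_tb: float, data: int, parity: int) -> float:
    """Usable capacity with distributed parity across a data+parity stripe."""
    return raw_tb * data / (data + parity)

raw = 6 * 1.6  # six 1.6 TB SSDs per node -> 9.6 TB raw

# Three-way mirroring: 9.6 TB raw yields 3.2 TB usable (the NX-9000 figure)
print(round(usable_mirrored(raw), 2))

# Assumed 4+1 distributed parity stripe: the same 9.6 TB raw yields 7.68 TB,
# more than double the usable capacity -- a real advantage with expensive flash
print(round(usable_parity(raw, 4, 1), 2))
```

With pricey flash media, that difference in protection overhead translates directly into cost per usable terabyte, which is why parity-based protection is attractive in all-flash designs.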
It now appears clear that even if hyper-converged systems don't become the dominant architecture for the next decade, they will -- like blade servers before them -- be the right choice for a significant piece of the market. While the folks at Lenovo have been preoccupied with closing the deal for IBM's x86 server division, they'll have to come to the party pretty soon.