One of the biggest trends in the data center during the past two or three years has been the emergence of integrated stacks that combine compute platforms, storage and networking into a complete system. Customers like the idea of an integrated stack because it eliminates finger pointing when something goes wrong and reduces implementation time.
Vendors like them because they get to sell a lot of kits, and loyal customers won't even think about using some other vendor's products. As a result, all the full-line IT infrastructure vendors now have some sort of integrated blade server/storage/networking stack. One could even argue that Dell's acquisition of Force10 and IBM's of Blade Network Technologies were primarily plays to be able to control all the technology in their stacks.
For an old timer like me, there's a certain "everything old is new again" aspect to the whole concept of integrated stacks. Back in the mainframe and minicomputer era, vendors like IBM, DEC and Data General delivered complete systems to their customers, including not only the compute and storage but also peripherals like terminals and printers, and even application software like the first-generation email applications PROFS from IBM and All-In-1 from DEC.
Of course, not all stacks are created equal. The first of the current generation of infrastructure stacks was the Cisco-VMware-EMC VCE joint venture, which started with tall, grande and venti combinations of EMC storage, Cisco UCS servers and network gear all running VMware's vSphere. As other vendors released their own stacks, they've ranged from reference architectures like NetApp's FlexPod or EMC's VSPEX to complete systems that come all cabled up with the hypervisor and management tools preinstalled like VCE's.
As someone who made his living integrating servers, network gear and storage from different vendors, I never found the full-rack and larger pre-engineered systems terribly attractive; I figured I could pick best-of-breed components and make them work anyway. I have, however, found the smaller systems appealing: those that fit in a single blade chassis by including a storage blade like Dell EqualLogic's new PS-M4110. A compact system like that seemed to me a good fit for branch offices.
During the past year, a few startups have introduced a new type of converged IT infrastructure combining storage and compute not just in a preconfigured rack, but also in a single brick that could serve as the basis of a scale-out system. Steve Chambers, who works in VCE's office of the CTO, dubbed these systems hyperconverged in a post on his ViewYonder.com blog, where he divided the integrated stack market into six segments.
Hyperconverged systems are usually built on standard server hardware running a virtual storage appliance (VSA) on each node. The VSA manages the SSDs and/or spinning disks in that node, communicates with the VSAs on the other nodes to form a clustered, distributed file system, and publishes the resulting pool to the hypervisors via iSCSI or NFS. The whole thing is then managed through a vCenter plug-in, so while there's a sophisticated storage system on the back end, the whole shebang can be run by a virtualization or server admin.
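To make the data path concrete, here's a deliberately simplified sketch of the idea: each node's VSA owns its local disks, writes are mirrored across nodes so any node can serve a read, and the cluster presents one logical pool. All names here (`VSA`, `Cluster`, the replica-placement scheme) are hypothetical illustrations, not how Nutanix, SimpliVity or any real product actually implements it.

```python
# Toy model of a hyperconverged storage cluster: one VSA per node,
# writes mirrored to two nodes, reads served from any replica.
# Purely illustrative; real systems differ substantially.

class VSA:
    """Virtual storage appliance: manages one node's local SSDs/disks."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.local_store = {}  # key -> data held on this node

    def put(self, key, data):
        self.local_store[key] = data

    def get(self, key):
        return self.local_store.get(key)


class Cluster:
    """VSAs cooperating to present a single distributed datastore.

    Because every write lands on two nodes, any surviving node can
    satisfy a read -- the property that lets the pool be published to
    hypervisors as one iSCSI target or NFS export.
    """
    REPLICAS = 2

    def __init__(self, nodes):
        self.vsas = [VSA(i) for i in range(nodes)]

    def write(self, key, data):
        # Naive placement: hash the key to pick a primary node,
        # then mirror to the next node in the ring.
        primary = hash(key) % len(self.vsas)
        for r in range(self.REPLICAS):
            self.vsas[(primary + r) % len(self.vsas)].put(key, data)

    def read(self, key):
        # Any VSA holding a replica can serve the read.
        for vsa in self.vsas:
            data = vsa.get(key)
            if data is not None:
                return data
        raise KeyError(key)


cluster = Cluster(nodes=3)
cluster.write("vm-disk-001/block-42", b"guest data")
assert cluster.read("vm-disk-001/block-42") == b"guest data"
# Two of the three nodes hold a copy, so one node can fail.
replicas = sum(1 for v in cluster.vsas
               if "vm-disk-001/block-42" in v.local_store)
print(replicas)  # 2
```

The real engineering, of course, is in everything this sketch omits: consistency during node failures, rebuild traffic, and keeping VM data local to the node where the VM runs.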
While I haven't seen one yet, the next step would be for someone to include an Ethernet switch as part of the package, the way Skyera did on its Skyhawk array.
Several vendors make hyperconverged systems, including Nutanix, SimpliVity, Pivot3 and Scale Computing. I'll be writing more about these systems soon.