Calculating Hyperconvergence Costs

A look at what's inside HCI and the cost considerations.

Jim O'Reilly

June 12, 2017

Hyperconverged infrastructure is a hot area of IT right now. The realization that next-generation storage appliances look a lot like servers triggered a movement to make server storage sharable, combining the benefits of local storage in the server with the ability to share that storage across the whole cluster of nodes.

The key is a software package that presents all of the storage in the cluster as a single pool, with automated addition or subtraction of drives. This pool of storage, sometimes described as a virtual SAN, can be divided up and presented as smaller drive volumes, subject to access controls and sharing rules. The virtual SAN software handles issues such as replication and erasure coding groups and also deals with drive errors and data recovery questions.
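As a rough illustration of how the data-protection scheme chosen by the virtual SAN drives usable capacity, and therefore cost per usable terabyte, here is a minimal Python sketch. The node counts, drive sizes, and prices are hypothetical placeholders, not figures from any particular HCI product.

```python
# Rough sketch: usable capacity and $/usable-TB for a small HCI pool
# under 3-way replication vs. a 4+2 erasure-coding group.
# All node counts, drive sizes, and prices are assumptions for
# illustration only.

def usable_tb_replication(raw_tb: float, copies: int) -> float:
    """Replication stores 'copies' full copies of every byte."""
    return raw_tb / copies

def usable_tb_erasure(raw_tb: float, data: int, parity: int) -> float:
    """A k+m erasure-coding group keeps k data fragments per k+m stored."""
    return raw_tb * data / (data + parity)

nodes = 6
drives_per_node = 4
drive_tb = 3.84                      # assumed SSD capacity
drive_price = 1500.0                 # assumed price per drive (USD)

raw_tb = nodes * drives_per_node * drive_tb
pool_cost = nodes * drives_per_node * drive_price

for label, usable in [
    ("3-way replication", usable_tb_replication(raw_tb, copies=3)),
    ("4+2 erasure coding", usable_tb_erasure(raw_tb, data=4, parity=2)),
]:
    print(f"{label}: {usable:.1f} TB usable, "
          f"${pool_cost / usable:,.0f} per usable TB")
```

The same drives yield noticeably more usable capacity under erasure coding than under replication, at the price of extra compute and rebuild traffic, which is why the protection scheme belongs in any cost comparison.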

A hyperconverged cluster looks like a set of identical nodes, each with a set of drives, a multi-processor server motherboard, DIMMs and LAN connections. This rigidity in configurations is somewhat artificial, and we are already seeing more wriggle room in the rules on what drives can be used and whether all the nodes have to be the same. This is because, fundamentally, HCI nodes are commercial off-the-shelf (COTS) systems.
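Because the nodes are COTS systems, a simple bill-of-materials roll-up is often the easiest way to compare candidate configurations. The line items and prices in this sketch are placeholders, not vendor quotes.

```python
# Minimal bill-of-materials sketch for comparing HCI node configurations.
# Every price below is a placeholder; substitute real quotes.

from dataclasses import dataclass

@dataclass
class LineItem:
    name: str
    unit_price: float
    qty: int

    @property
    def cost(self) -> float:
        return self.unit_price * self.qty

node = [
    LineItem("Dual-CPU server motherboard + chassis", 4000.0, 1),
    LineItem("32 GB DDR4 DIMM", 250.0, 16),
    LineItem("3.84 TB NVMe SSD", 1500.0, 4),
    LineItem("25 GbE RDMA-capable NIC", 400.0, 1),
]

node_cost = sum(item.cost for item in node)
cluster_nodes = 6

print(f"Cost per node:    ${node_cost:,.0f}")
print(f"Cost per cluster: ${node_cost * cluster_nodes:,.0f}")
for item in node:
    print(f"  {item.name}: {item.cost / node_cost:.0%} of node cost")
```

Even with placeholder numbers, breaking the node down this way makes it obvious which components dominate the bill and where relaxing the "identical nodes" rule could save money.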

Now, the fact that we can use COTS parts doesn't mean every drive type is equal in performance or dollars per terabyte. Good cluster performance requires fast drives, making NVMe SSDs the drive of choice. The premiums associated with NVMe drives are dropping, though a drive with 10 gigabytes-per-second throughput is still expensive.

With SSD capacities rising sharply over the next two years, driven by 3D NAND technology, the number of expensive drives per node will drop and some slower bulk SSDs will be added to provide secondary, cold storage.
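To see how adding a bulk-SSD cold tier changes the economics, here is a small sketch that blends a fast NVMe tier with a cheaper capacity tier. The drive counts, capacities, and prices are again assumptions chosen only to illustrate the calculation.

```python
# Blended $/TB for a two-tier HCI node: a small fast NVMe tier for hot
# data plus larger, slower bulk SSDs for cold data.
# Capacities and prices are illustrative assumptions only.

tiers = {
    # name: (drive count, TB per drive, assumed $ per drive)
    "hot NVMe":  (2, 3.84, 1500.0),
    "cold bulk": (4, 7.68,  900.0),
}

total_tb = 0.0
total_cost = 0.0
for name, (count, tb, price) in tiers.items():
    cap, cost = count * tb, count * price
    total_tb += cap
    total_cost += cost
    print(f"{name:>9}: {cap:5.1f} TB raw at ${cost / cap:,.0f}/TB")

print(f"  blended: {total_tb:5.1f} TB raw at ${total_cost / total_tb:,.0f}/TB")
```

Shifting most of the capacity onto the cheaper tier pulls the blended cost per terabyte well below that of an all-NVMe node while keeping a fast tier for active data.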

Networking needs a very low-overhead scheme, making RDMA over Ethernet the solution of choice. 25 GbE ports are taking over from 10 GbE, but a discrete LAN NIC card is still needed for RDMA support. This should change as the large cloud service providers pressure Intel to add RDMA to the chipset.
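One way to sanity-check the network choice is to compare per-node drive throughput against port bandwidth. The sketch below uses 25 GbE ports as above; the drive count and per-drive throughput are illustrative assumptions.

```python
# Rough check: how many 25 GbE ports it would take to carry the traffic
# a node's NVMe drives can generate. Drive count and per-drive
# throughput are illustrative assumptions.

GBIT_TO_GBYTE = 1 / 8            # 1 Gbit/s = 0.125 GB/s

port_speed_gbit = 25             # one 25 GbE port
port_gb_per_s = port_speed_gbit * GBIT_TO_GBYTE   # ~3.1 GB/s per port

drives_per_node = 4
drive_gb_per_s = 3.0             # assumed sustained throughput per NVMe SSD

node_gb_per_s = drives_per_node * drive_gb_per_s
ports_needed = node_gb_per_s / port_gb_per_s

print(f"Bandwidth per 25 GbE port:  {port_gb_per_s:.2f} GB/s")
print(f"Drive throughput per node:  {node_gb_per_s:.1f} GB/s")
print(f"Ports needed to keep pace:  {ports_needed:.1f}")
```

Even a modest NVMe configuration can outrun a single 25 GbE link, which is why a low-overhead transport like RDMA, and often more than one port per node, matters for cluster performance.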

In today's HCI nodes, server motherboards are typically dual- or quad-CPU cards. These are virtualized servers, so memory is a primary constraint on the number of VMs that can run; consequently, DIMMs are a significant cost element in a configuration. The advent of NVDIMMs will help by providing a bulk store that extends effective DIMM capacity by 4X or more.
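Since memory typically gates VM density, a quick back-of-the-envelope calculation like the one below shows how DIMM cost and an NVDIMM-style expansion factor interact. The VM size, DIMM capacity, hypervisor overhead, and prices are placeholders.

```python
# Back-of-the-envelope VM density for a memory-bound HCI node,
# with and without an NVDIMM-style expansion of effective capacity.
# All sizes and prices are illustrative assumptions.

dimm_slots = 16
dimm_gb = 32
dimm_price = 250.0                 # assumed $ per DIMM
vm_memory_gb = 8                   # assumed memory per VM
hypervisor_reserve_gb = 32         # assumed host/hypervisor overhead
nvdimm_expansion = 4               # "4X or more" effective capacity

physical_gb = dimm_slots * dimm_gb
memory_cost = dimm_slots * dimm_price

def vms(effective_gb: float) -> int:
    return int((effective_gb - hypervisor_reserve_gb) // vm_memory_gb)

base_vms = vms(physical_gb)
expanded_vms = vms(physical_gb * nvdimm_expansion)

print(f"Physical memory: {physical_gb} GB (${memory_cost:,.0f})")
print(f"VMs per node (DRAM only):        {base_vms}")
print(f"VMs per node (with ~4X NVDIMM):  {expanded_vms}")
print(f"Memory cost per VM (DRAM only):  ${memory_cost / base_vms:,.0f}")
```

The expanded figure ignores the price of the NVDIMMs themselves, but it makes the point: raising effective memory capacity is the most direct lever on VMs per node, and therefore on cost per VM.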

Let’s dig a bit deeper into HCI to figure out how to pick the right HCI node and what it will cost you.

About the Author

Jim O'Reilly

President

Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian. Jim is currently a consultant focused on storage and cloud computing.
