Hyperconverged systems don't do much when it comes to networking.
Hyperconverged infrastructure (HCI) is often marketed as a "data center in a box." The hype creates the somewhat false pretense that the technology wraps up storage, compute and networking into one neat software-defined package that can be molded and shaped to suit your needs. What many early adopters have discovered is that the networking part of the hyperconvergence equation is out of step with compute and storage.
For most server and storage administrators, HCI is a big leap forward because it marries the two infrastructure components from a management perspective. But for network administrators, hyperconvergence isn't much of a game changer.
The networking component of a hyperconverged system is the least evolved due to how HCI developed. Hyperconverged trailblazers such as Nutanix and SimpliVity did not include the network as part of their hyperconverged platform. Instead, it was simply assumed that the network was already built out in the data center; hyperconverged blocks of compute and storage would naturally plug in to the existing infrastructure.
There are flaws in this assumption. For example, what happens when the underlying network infrastructure ages to the point that it can't efficiently support the hyperconverged system? Clearly, networking needs to be an integral part of the HCI package.
We've begun to see hyperconverged vendors scramble to partner with traditional networking vendors to provide the missing piece for their products, either through a combined OEM offering or through co-authored best-practice design documentation. Additionally, technology companies that operate in both the server and network space, including Cisco and HPE, have jumped into the hyperconverged space to offer a single-vendor solution. Today, however, that simply means they'll sell you supported data center top-of-rack switch gear as part of a hyperconverged package.
On the virtualization side of a hyperconverged system, the management of virtual network interfaces and virtual switches can vary from one vendor product to the next. Often, vendors add their own network-specific software-defined functions to help speed up deployments and prevent human error. For the most part, this means that network administrators are going to end up building network profiles. These profiles are templates that broadly define settings such as quality of service (QoS), load balancing and security policies, which can then be pushed out to virtual switch ports. These capabilities are nice, but they only scratch the surface when it comes to networking.
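Conceptually, a network profile works like a reusable template that is stamped onto many virtual switch ports at once. Here is a minimal, vendor-neutral sketch in Python of that idea; all names and fields are hypothetical illustrations, not any vendor's actual API:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class NetworkProfile:
    """Template bundling per-port network settings (hypothetical fields)."""
    name: str
    qos_priority: int                 # e.g. 0 (best effort) through 7 (highest)
    load_balancing: str               # e.g. "src-mac" or "src-dst-ip"
    allowed_vlans: list[int] = field(default_factory=list)

@dataclass
class VirtualSwitchPort:
    """A virtual switch port that can carry exactly one profile."""
    port_id: str
    profile: NetworkProfile | None = None

def apply_profile(ports: list[VirtualSwitchPort],
                  profile: NetworkProfile) -> None:
    """Push one profile to many ports in a single operation,
    instead of configuring each port by hand."""
    for port in ports:
        port.profile = profile

# Define the template once...
web_tier = NetworkProfile(name="web-tier", qos_priority=5,
                          load_balancing="src-dst-ip",
                          allowed_vlans=[10, 20])

# ...then apply it broadly to a group of virtual switch ports.
ports = [VirtualSwitchPort(f"vport-{i}") for i in range(4)]
apply_profile(ports, web_tier)
```

The value is consistency: every port configured from the same template gets identical QoS, load-balancing and VLAN settings, which is exactly the class of human error these vendor features aim to prevent.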
Unless HCI vendors do a better job of integrating networking, network administrators won't see real change until software-defined networking (SDN) makes it into their data center. Remember that in large enterprise networks, a single hyperconverged block is going to be one of many within a data center. Each block must be networked together with other hyperconverged units that are either sitting next to one another or in physically dispersed public and private data centers in multi-cloud architectures. So, until an end-to-end management solution is put in place, as a network administrator, you're still left managing individual chunks using separately managed data center switches and other network infrastructure components.
SDN in the data center, along with the latest trend known as intent-based networking, can accomplish what today's hyperconverged systems can't: a truly unified data center management experience from a network perspective.