
Running Kubernetes on VMware: Challenges and Solutions

[Image: VMware (Source: Pixabay)]

As many know, VMware server virtualization installs a hypervisor on a physical server, allowing multiple virtual machines (VMs) to run on that server. Each VM can run its own operating system (OS), so a single physical server can host multiple operating systems simultaneously.

All VMs on the same physical server share resources such as network and RAM. In 2019, with an effort known as Project Pacific, VMware extended this model to containerized workloads, adding hypervisor support for running Kubernetes clusters. These workloads can be managed by the infrastructure team in the same way as virtual machines, while DevOps teams can deploy containers alongside VMs.

In this article, we'll discuss how VMware infrastructure supports Kubernetes as part of its vSphere product and the new Tanzu framework, and cover key challenges faced by enterprises when deploying containerized workloads on VMware infrastructure.

VMware Services for Kubernetes

vSphere with Kubernetes

vSphere is VMware’s core server virtualization platform. VMware has made significant efforts to support containerized workloads within vSphere, and today vSphere users can run Kubernetes directly on the hypervisor tier.

When activated on a vSphere cluster, the vSphere with Kubernetes integration lets you run Kubernetes workloads directly on ESXi hosts. Another major advantage of this feature is that it enables you to create upstream Kubernetes clusters in dedicated resource pools. In vSphere with Kubernetes, containers run on a special type of VM called vSphere Pods.

Running Kubernetes at the hypervisor layer lets vSphere administrators and DevOps engineers operate on the same containerized applications, facilitating closer collaboration between the two teams.

Placing the Kubernetes control plane on the hypervisor layer enables the following features in vSphere:

  • vSphere administrators can create namespaces as part of the Supervisor Cluster. These namespaces can be set up with dedicated memory, CPU, and storage, and each namespace can potentially be used by a separate team or project.
  • DevOps engineers can use shared resource pools in a Supervisor namespace to run Kubernetes containers.
  • DevOps engineers can create and manage multiple Kubernetes clusters in a namespace and manage their lifecycle via the Tanzu Kubernetes Grid service.
  • vSphere administrators can manage and monitor Tanzu Kubernetes Clusters and vSphere Pods using the same tools as regular VMs.
  • vSphere administrators can view and monitor vSphere Pods and Tanzu Kubernetes Clusters running in different namespaces, where they are in the environment, and how they use VMware resources.
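Conceptually, a Supervisor namespace's dedicated CPU, memory, and storage limits map to a Kubernetes ResourceQuota applied to that namespace. The sketch below (the namespace name and limit values are hypothetical examples, not taken from any real environment) builds such a manifest as a plain Python dict:

```python
# Sketch: build a ResourceQuota-style manifest for a per-team namespace.
# Namespace name and resource values are hypothetical examples.
def make_namespace_quota(namespace, cpu, memory, storage):
    """Return a dict shaped like a Kubernetes ResourceQuota manifest."""
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{namespace}-quota", "namespace": namespace},
        "spec": {
            "hard": {
                "requests.cpu": cpu,
                "requests.memory": memory,
                "requests.storage": storage,
            }
        },
    }

quota = make_namespace_quota("team-payments", "8", "32Gi", "200Gi")
print(quota["metadata"]["name"])  # team-payments-quota
```

In a real deployment the vSphere administrator sets these limits through the vSphere UI or API rather than hand-writing manifests, but the resulting constraints on DevOps workloads behave the same way.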

Tanzu

Tanzu is a VMware platform with products and services that enable businesses to build, run, and manage Kubernetes environments from a single point of control, addressing many of the challenges of running Kubernetes at enterprise scale.

Capabilities:

  • Enterprise Kubernetes management - enables management of thousands of Kubernetes clusters and hundreds of users across the organization.
  • Kubernetes for SDDC - enables you to run regular VMs alongside Kubernetes containers, tightly integrated with vSphere, NSX, and vSAN.
  • Kubernetes for public cloud - enables you to build a custom Kubernetes setup in the cloud, with expert guidance from VMware, using carefully selected open source technologies.

Benefits:

  • Build cloud-native applications - develop applications using a modern cloud-native model and deploy existing legacy applications side-by-side using the same platform.
  • Run Kubernetes anywhere - deploy a common Kubernetes framework with cluster lifecycle management capabilities in your data center, public cloud, or at the edge.
  • Centrally manage all clusters - connect all Kubernetes clusters to a single point of control for access, backup, utilization management, and security policies.

Project Pacific and Project Monterey

Project Pacific, announced in 2019, was a highly publicized VMware effort to integrate Kubernetes with its vSphere product. Starting with vSphere 7, Project Pacific's architecture and principles are embedded into VMware's core products.

In September 2020, VMware announced a new technology preview, Project Monterey, which takes the VMware-Kubernetes integration one step further.

The main premise behind Project Monterey is that modern applications rely not only on CPUs but also on specialized hardware such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs), which are hard to support under the current approach to VMware-Kubernetes integration.

VMware launched a re-architecture of its VMware Cloud Foundation (VCF) stack, in which it aims to support these new types of hardware, as well as high-performance networking using SmartNIC technology. This will allow Kubernetes workloads running on VMware to benefit from new types of hardware with practically identical performance to bare metal servers.

Challenges and Solutions of Running Kubernetes on VMware

VMware has made tremendous progress in its efforts to integrate and standardize Kubernetes and containerized workloads. While ease of use has improved by an order of magnitude, there are still many challenges involved in running Kubernetes on VMware.

Operational Complexity

Running open-source software on bare metal is a complex task. One problem is the lead time for ordering physical hardware: in the United States, the average wait is 86 days.

Solution: VMware vSphere provides a virtual hardware compatibility list, and you can gain flexibility by using vMotion to move workloads between hosts.

Serving High Loads

If you need to serve higher application loads than normal, VMs can slow down. This can degrade a node's performance or cause it to fail completely. If this happens to control plane nodes, the cluster can lose quorum, making the whole cluster unusable.

Solution: VMware recommends leaving spare capacity of at least 5% of the CPU on each physical node.
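The 5% headroom recommendation can be expressed as a simple capacity check. Below is a minimal sketch (the host sizes and MHz figures are made-up examples) that reports how much CPU remains schedulable on a node before the reserved headroom is consumed:

```python
HEADROOM = 0.05  # keep at least 5% of CPU free on each physical node

def schedulable_cpu(total_mhz, committed_mhz):
    """CPU (MHz) still available before eating into the reserved headroom."""
    usable = total_mhz * (1 - HEADROOM)
    return max(0.0, usable - committed_mhz)

def is_overcommitted(total_mhz, committed_mhz):
    """True if committed CPU has already consumed the 5% headroom."""
    return committed_mhz > total_mhz * (1 - HEADROOM)

# Hypothetical host: two 10-core 2.4 GHz sockets = 48,000 MHz total.
print(schedulable_cpu(48000, 40000))   # roughly 5600 MHz left to schedule
print(is_overcommitted(48000, 46000))  # True: headroom has been consumed
```

A real admission check would also account for memory and for vSphere HA failover reserves, but the principle is the same: never plan to run a node at 100% of its physical CPU.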

Loss of Performance

Container platforms generally speed up the development process. However, a common complaint about running Kubernetes on VMs is that Pods run slowly. Some users attribute this to the extra layer of monitoring the hypervisor adds; another cause is that running Kubernetes on VMware requires allocating resources at multiple levels, once for the VM and again for the container.

Solution: You can use VM-level reservations to dedicate physical resources to a VM rather than sharing them with other VMs on the same host. Another option is a resource pool reservation, which makes it possible to shift load from a node suffering performance issues to other active nodes in the pool.
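The idea of shifting load off a degraded pool member can be sketched as a greedy rebalance. The node names, load figures, and capacity below are hypothetical, and a real DRS-style scheduler weighs many more signals than raw load:

```python
def rebalance(nodes, degraded, capacity):
    """Move the degraded node's load onto healthy pool members with spare room.

    nodes: dict mapping node name -> current load; capacity: max load per node.
    Returns the new load map, or None if the pool cannot absorb the load.
    Greedy sketch only: fills the least-loaded nodes first.
    """
    loads = dict(nodes)
    pending = loads.pop(degraded)
    for node in sorted(loads, key=loads.get):  # least-loaded first
        moved = min(capacity - loads[node], pending)
        loads[node] += moved
        pending -= moved
        if pending == 0:
            return loads
    return None  # not enough spare capacity in the pool

pool = {"node-a": 60, "node-b": 40, "node-c": 70}
print(rebalance(pool, "node-c", capacity=100))  # {'node-a': 70, 'node-b': 100}
```

Note the failure case: if the remaining nodes lack spare capacity, the load cannot be absorbed, which is exactly why the headroom guidance in the previous section matters.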

Upgrading Existing Applications

One of the biggest challenges is how to make the most of containers and microservices within existing applications and projects. This is known as application modernization.

Solution: From vSphere 7 onwards, you can run containerized workloads alongside legacy applications in traditional VMs. This gives you the flexibility to transition an application portfolio to cloud-native technology gradually.

Conclusion

VMware has made great efforts to make it easy to deploy Kubernetes on its infrastructure, but we’re not quite there yet. Kubernetes on VMware is still more complex and less performant than plain Kubernetes installed on your own bare metal servers or directly on VM instances from public cloud providers.

The lure of running Kubernetes alongside legacy applications, with common management and monitoring, is great for many enterprises. But technical teams will sigh when they learn of the obstacles they need to overcome. Will VMware defeat its notorious complexity and monolithic architecture and become lean enough to support Kubernetes the way it was meant to be? Time will tell.