Zero trust is much more than controlling authentication and authorization of users and endpoints. Securing the communication between distributed workloads inside the private cloud must also be addressed. Today, there are three leading strategies enterprise IT departments use to gain control over lateral network movement within the private cloud. Let's look at each architecture type so you understand its pros, cons, and caveats.
Traditional network perimeter firewall appliances
Taking a page from internet and WAN perimeter security philosophies, many organizations have positioned layer 4-7 firewalls at the private cloud perimeter. The firewalls create logically separated secure "zones" within which inter-workload communication can safely occur. Traffic moving between zones is subject to firewall permit/deny policies and is often passed through a gauntlet of supplemental network security tools, including intrusion detection/prevention sensors and network sandbox appliances.
While this model is easy for network operations teams to conceptualize, it suffers from a few notable shortcomings. For instance, creating and maintaining secure zones requires a number of manual processes. As the need for more granular control over private cloud communications grows, managing these zones and their associated policies can become overwhelming.
Perhaps the bigger issue is that traffic crossing zone boundaries must be redirected to a centralized set of firewalls and associated network security tools. This hairpinning of traffic can quickly create bottlenecks that cause performance problems for applications and their users. When such situations inevitably occur, administrators often resort to securing only the most critical workload communications while allowing the rest to flow without these important controls in place. In the end, a full zero trust framework within perimeter-secured private clouds may prove difficult to achieve.
Distributed network security agents installed at the OS level
To eliminate the growth, scalability, and data security issues inherent in perimeter-based strategies within the private cloud, some organizations are looking to distributed security solutions as a remedy. A number of popular enterprise cybersecurity vendors offer software agents that install directly on the server operating system (OS) while being centrally managed and monitored. Distributing network security enforcement to the OS level eliminates the potential for network bottlenecks and inefficient traffic paths. It also provides more granular control over inter-server communication while reducing management complexity.
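The key idea is that policy is defined once, centrally, and then enforced locally on each host, so no traffic needs to hairpin through a chokepoint. The sketch below models that split; the `Rule` and `HostAgent` names and the workload tags are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_tag: str   # label of the workload initiating the connection
    dst_tag: str   # label of the destination workload
    port: int
    action: str    # "permit" or "deny"

# Central policy, authored once and pushed identically to every agent.
CENTRAL_POLICY = [
    Rule("frontend", "orders-api", 443, "permit"),
    Rule("orders-api", "orders-db", 5432, "permit"),
]

class HostAgent:
    """Enforces the centrally managed policy on one host's OS --
    the decision is made locally, with no traffic redirection."""

    def __init__(self, host_tag: str, policy: list[Rule]):
        self.host_tag = host_tag
        self.policy = policy

    def allow_inbound(self, src_tag: str, port: int) -> bool:
        for r in self.policy:
            if (r.src_tag, r.dst_tag, r.port) == (src_tag, self.host_tag, port):
                return r.action == "permit"
        return False  # default deny
```

Because the agent runs inside the same OS it protects, compromising that OS can also compromise the agent, which is the weakness raised below.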
Beyond the scalability and management benefits, this workload security model works especially well in environments that still maintain large numbers of legacy servers hosted on bare-metal hardware. Placing agents on the server OS provides a level of control and visibility that far surpasses the perimeter security model. Be aware, however, that if a bad actor gains access to a server at the OS level, the security agent could be disabled, nullifying much of the control over lateral movement between systems.
Distributed network security agents installed at the hypervisor level
By now, most private cloud environments have migrated away from bare-metal servers and onto virtualized platforms, the two largest players being Microsoft and VMware. In these environments, the highest level of workload security that fully adheres to zero trust principles can be achieved with distributed network security agents installed at the hypervisor level.
Vivek Bhandari, Sr. Director of Product Marketing at VMware, puts it this way: "The beauty of placing distributed network security directly on the hypervisor is that security is put directly in line with the data plane. This makes for the most efficient – and arguably most secure placement for protecting workload communications. In virtualized private cloud environments, not only are servers virtualized but large portions of network infrastructure are virtualized as well. Thus, it makes sense to place security at the hypervisor to deliver granular control and visibility without the fear of the OS being compromised."
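The point about being "in line with the data plane" can be sketched abstractly: every frame a VM sends or receives crosses the hypervisor's virtual switch, so a filter attached to the VM's virtual NIC sees all traffic and lives outside the guest OS, where a compromised guest cannot disable it. The class and method names below are hypothetical, not any hypervisor vendor's actual API.

```python
class VirtualSwitch:
    """Toy model of hypervisor-level enforcement: permit filters attach
    to each VM's virtual NIC, configured only by the management plane."""

    def __init__(self):
        # vnic_id -> set of (source vnic_id, port) pairs permitted inbound
        self._filters: dict[str, set[tuple[str, int]]] = {}

    def attach_filter(self, vnic_id: str, permitted: set[tuple[str, int]]) -> None:
        """Called by the management plane; the guest OS has no access here,
        so compromising the guest does not remove its filter."""
        self._filters[vnic_id] = permitted

    def forward(self, src_vnic: str, dst_vnic: str, port: int) -> bool:
        """Every frame crosses the virtual switch, so the check is inline
        with the data plane; default deny if no matching permit exists."""
        return (src_vnic, port) in self._filters.get(dst_vnic, set())
```

The design choice worth noting is the separation of planes: policy changes flow through `attach_filter` from the management plane, while the guest only ever experiences the result of `forward`.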
Achieving zero trust inside the private cloud
While we continue to see a macro shift of applications and services to public cloud providers, the reality is that most business-critical workloads are still deployed within private clouds. Careful thought must therefore be given to how security teams approach secure workload access in these environments. The three architectures described here illustrate how zero trust principles can be achieved depending on your needs for scalability, manageability, and overall control.