Service Mesh and Proxy in Cloud-Native Applications

The heart of any service mesh is the proxy. The proxy serves as the data plane and determines the features, functionality, and performance of your mesh.

Jason Morgan

April 29, 2021


Digital transformation and cloud adoption are rapidly fueling the adoption of cloud-native technology, with containerized microservices and Kubernetes at its core. However, a microservices-based architecture brings a whole new set of challenges. App components no longer talk to each other through fast, secure function calls; instead, they communicate over an inherently unreliable and insecure network. Visibility into that network communication is therefore imperative: it lets you identify potential issues and address them before they result in significant business disruption. A service mesh addresses many of the most pressing risks organizations face amidst these transformation and modernization efforts.

That said, the service mesh space is dynamic, with lots of projects and vendors building new and interesting solutions. While this vibrant and crowded space has multiple service mesh offerings, they all have one thing in common: they work by having a proxy intercept and transform application traffic.

What exactly is the role of the proxy in a service mesh? Let’s talk about service meshes and how the proxy fits in.

What is a service mesh?

There are many definitions of a service mesh. I usually prefer—and I may be biased here—the "Meshifesto," but here's my take on it:

A service mesh is a tool for controlling the interactions between applications. Service meshes work by inserting proxies next to individual applications and intercepting all traffic to and from that application instance. Those proxies make up the data plane and receive command and control signals, policies, and instructions from a separate control plane.

This definition refers to two basic components of a service mesh: the data plane and the control plane.

The data plane: This is where your application traffic lives. The proxies intercept all communication between applications and do something with it. That "something" is driven by two things: the capabilities of your proxy and the instructions the proxies get from the control plane.

The control plane: The definition is in the name here. The control plane manages the interactions between the proxies by providing policy and information to them. The control plane also provides operators with an interface into the mesh and hosts whatever API it exposes. Additionally, it often hosts any built-in monitoring and visualization tools the mesh provides.
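To make the split concrete, here is a minimal sketch of the two planes in Python. None of these names come from any real mesh's API; the point is only the shape of the relationship: the control plane is the single source of truth for policy and pushes it to every registered data-plane proxy.

```python
# Illustrative sketch only: the control plane holds mesh-wide policy
# and pushes it to registered proxies; each proxy applies the latest
# policy it has received. Class and field names are hypothetical.

class Proxy:
    """Data-plane proxy: stores the policy the control plane last pushed."""
    def __init__(self, app_name):
        self.app_name = app_name
        self.policy = {}

    def receive_policy(self, policy):
        self.policy = dict(policy)


class ControlPlane:
    """Control plane: single source of truth for policy across the mesh."""
    def __init__(self):
        self.proxies = []
        self.policy = {"mtls": True, "retries": 2}

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.receive_policy(self.policy)  # new proxies get current policy

    def update_policy(self, **changes):
        self.policy.update(changes)
        for proxy in self.proxies:         # push, don't wait to be asked
            proxy.receive_policy(self.policy)


cp = ControlPlane()
web = Proxy("web")
cp.register(web)
cp.update_policy(retries=5)
print(web.policy["retries"])  # -> 5
```

The key design point the sketch captures: operators talk only to the control plane, and the control plane fans configuration out to every proxy, so no individual proxy is ever configured by hand.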

What does the service mesh give you?

With all that said, it's essential to understand why people want to use a service mesh. What do you get from a mesh? Different meshes provide different features and functionality, but most fall into the following buckets:

  • Security features: encrypting traffic between applications, ensuring identity, or handling higher-level concerns like policy and authentication and authorization.

  • Reliability features: making interactions between apps more reliable, accounting for network and application failures, or improving on built-in Kubernetes constructs.

  • Observability features: providing insight into what your apps are doing, making metrics about their interactions easily available, and providing maps into inter-app communications.
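As one concrete instance of the reliability bucket, a mesh proxy can transparently retry a failed call a bounded number of times so the application never sees the transient failure. This is an illustrative sketch, not any particular mesh's behavior; `flaky_call` and the retry budget are invented for the example.

```python
# Hedged sketch of one reliability feature a proxy can add
# transparently: bounded retries around an unreliable network call.

def with_retries(call, max_attempts=3):
    """Invoke `call`; on ConnectionError, retry up to max_attempts tries."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return call()
        except ConnectionError as err:
            last_error = err
    raise last_error


attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:              # fail twice, then succeed
        raise ConnectionError("upstream connection reset")
    return "200 OK"


result = with_retries(flaky_call)
print(result)  # -> 200 OK (after two transparent retries)
```

In a real mesh the retry budget lives in control-plane policy, not in code, which is precisely what makes the feature transparent to the application.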

How does it do that?

The proxy intercepts communication between the applications in the mesh. The way a service mesh works in containerized environments like Kubernetes is that every single instance of an application has its own proxy, and the two work together like two peas in a pod. The proxy can handle everything from encrypting the traffic to automatically helping your application handle unexpected failures or latency.
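The sidecar idea above can be sketched in a few lines: every call bound for the app passes through a proxy that can observe and transform it first. The handler, the metrics counter, and the mTLS stand-in below are all illustrative, not a real proxy's API.

```python
# Minimal sketch of sidecar interception: the proxy sits in front of
# one app instance, counts traffic (observability), and marks the
# request as encrypted (a stand-in for mTLS). Names are hypothetical.

def app_handler(request):
    """The application itself, unaware of the mesh."""
    return {"status": 200, "body": f"hello, {request['user']}"}


class SidecarProxy:
    """Intercepts all traffic to a single app instance."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.requests_seen = 0                   # per-proxy metrics

    def handle(self, request):
        self.requests_seen += 1
        request = dict(request, encrypted=True)  # stand-in for mTLS
        return self.upstream(request)


proxy = SidecarProxy(app_handler)
resp = proxy.handle({"user": "alice"})
print(resp["status"], proxy.requests_seen)  # -> 200 1
```

Because the proxy wraps the app rather than living inside it, the app code needs no changes to gain encryption, metrics, or failure handling.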

So, the proxy is important?

Yes! The heart of any service mesh is the proxy. The proxy serves as the data plane and determines the features, functionality, and performance of your mesh. And no matter which service mesh you use, it will involve a lot of proxies: If you have a production environment with 100 unique applications, each running three instances to remain highly available, adding a service mesh means running at least 300 proxies.

The proxy also shapes the control plane for a mesh. The control plane provides mesh administrators (usually your platform team) with a user interface and is responsible for command and control of the data plane. A data plane with a lot of features and configuration options allows a control plane to offer lots of features, expose functionality, and, potentially, add complexity.

To stretch an analogy a bit, if the service mesh were a car, then the proxy would be the engine, at least if your engine were a scalable, fully distributed system of independent but cooperating components. As with your car, what's under the hood can have a huge impact on the operator's experience. Is it fuel-efficient or a gas guzzler? Is it easy for a beginner to hop in and drive, or do you need to learn to use a stick shift? Do you need to tune the engine before driving, or can you push a button, start it, and go?

Jason Morgan is Technical Evangelist for Linkerd at Buoyant.

Part two of this article will explore why the proxy matters and what the best proxy is.

About the Author(s)

Jason Morgan

Jason Morgan is Technical Evangelist for Linkerd at Buoyant and co-chair of the CNCF Business Value Subcommittee. Passionate about helping others on their cloud native journey, Jason educates engineers on Linkerd, the original service mesh. You might have encountered his articles in The New Stack, where he breaks complex technology concepts down for a broader audience. Before joining Buoyant, Jason worked at Pivotal and VMware Tanzu.

