Service Mesh Mania

Service meshes will be an important component of your containerized environments whether on-premises or in the cloud.

Lori MacVittie

December 21, 2018

In the wake of KubeCon came a flood of announcements and accolades that clearly indicate containers have succeeded cloud as the most talked about and interesting technology today.

Among them were announcements surrounding service meshes. From Istio coming to the Google Kubernetes Engine (GKE) to the open beta for Aspen Mesh, signs of a maturing approach to operating microservices at scale were everywhere.

Questions remain, however, about service meshes in general. What are they? Why do you need one? Doesn’t Kubernetes scale containers itself? What other value does a service mesh add? 

So, this month I thought we’d dig a little deeper into service meshes and why you’re likely to want one incorporated into your Kubernetes environment.

What is a service mesh and why do you need one?

Let’s start with the easy ones – what is it and why do you need one? A service mesh is a system of interconnected sidecar proxies that:

  1. Enable you to scale microservices using application layer (layer 7/HTTP) values, such as URI, hostname, and other HTTP header attributes. This capability is important when routing and scaling APIs that are backed by microservices.

  2. Provide a way to enable tracing without significant developer effort. Tracing is important for troubleshooting in the highly distributed and volatile world of containers. Tracing instruments HTTP headers with information that helps identify the path a request took through the environment and where it may have gone wrong (a minimal sketch of header propagation follows this list).

  3. Offer a way to aggregate logging across highly distributed systems. Individual microservices can disappear in an instant and, with them, their valuable log data. A service mesh can act as a centralized logging option that keeps important log entries safe.
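To make the tracing point concrete, here is a minimal sketch of the header propagation involved. In practice the mesh's sidecar proxies inject trace headers and report the spans; the application typically only has to copy those headers onto its outbound calls. The header names are common examples rather than a fixed standard across meshes, and the inventory-svc URL is a hypothetical placeholder.

```go
package main

import (
	"log"
	"net/http"
)

// traceHeaders are the kinds of headers a mesh sidecar injects so a
// request can be followed across services; the exact set varies by
// mesh and tracing backend.
var traceHeaders = []string{"x-request-id", "traceparent", "b3"}

func handler(w http.ResponseWriter, r *http.Request) {
	// Build an outbound call to a downstream service (hypothetical URL)
	// and copy the trace headers from the inbound request onto it so the
	// whole call chain shares one trace.
	out, err := http.NewRequest("GET", "http://inventory-svc:8080/stock", nil)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	for _, h := range traceHeaders {
		if v := r.Header.Get(h); v != "" {
			out.Header.Set(h, v)
		}
	}

	resp, err := http.DefaultClient.Do(out)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	log.Printf("request %s reached downstream with status %d",
		r.Header.Get("x-request-id"), resp.StatusCode)
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/api/order", handler)
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```

Because the sidecars do the heavy lifting of generating and reporting spans, this small bit of forwarding is usually all the developer effort tracing requires.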

Doesn’t Kubernetes scale containers itself?

Why yes, yes it does. But Kubernetes' default scaling operates at the transport layer – layer 4, which for HTTP-based applications usually means TCP. That restricts scaling decisions to IP addresses and ports. All the application layer goodness, like URI path, server name, and the interesting information in HTTP headers, lives at layer 7, out of sight of layer 4 scaling. The default scaling methods are great when you're just load balancing across instances of the same microservice. But if you're trying to route /api/product and /api/profile to different microservices, you need something that operates at the application layer. That's one of the capabilities a service mesh brings with it.
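As an illustration of the difference, here is a minimal sketch of the layer 7 decision itself: inspect the request path and forward to a different backend. In a real mesh this logic lives in the sidecar proxies and is configured declaratively by the control plane rather than written as application code; the backend names product-svc and profile-svc are hypothetical placeholders.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo builds a reverse proxy for a backend service.
// The service hostnames used below are hypothetical placeholders.
func proxyTo(rawURL string) *httputil.ReverseProxy {
	target, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	// Layer 4 load balancing only sees IP addresses and ports; the
	// routing below depends on the URI path, which is layer 7 data.
	mux := http.NewServeMux()
	mux.Handle("/api/product/", proxyTo("http://product-svc:8080"))
	mux.Handle("/api/profile/", proxyTo("http://profile-svc:8080"))

	log.Fatal(http.ListenAndServe(":8000", mux))
}
```

The same decision could just as easily key off the Host header or any other HTTP header value, which is exactly the application layer goodness hidden from layer 4.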

What other value does a service mesh add?

A service mesh sees everything that happens inside a container cluster, and an enterprise-grade service mesh can also see what's happening in clusters that live elsewhere, such as in the cloud. That means a service mesh is able to monitor more than just the health and liveness of a given pod or node. It can provide visibility into a variety of metrics such as latency, error rates, and mTLS status. Supporting the need for operability, it can also alert immediately on configured thresholds so that action can be taken to minimize mean time to repair (MTTR).

One of the reasons Kubernetes is inarguably winning the container market is its focus on enabling an ecosystem. While it provides basic capabilities in a number of important application service categories, including scaling and observability, the system enables and encourages others to extend it. Service meshes are a response to the need for application-aware scaling services as well as greater visibility into the frenetic communication that occurs in containerized applications, particularly when they are built on a microservices architecture.

Service meshes will be an important component of your containerized environments whether on-premises or in the cloud.

About the Author

Lori MacVittie

Principal Technical Evangelist, Office of the CTO at F5 Networks

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5's entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she authored articles on a variety of topics aimed at IT professionals. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She also serves on the Board of Regents for the DevOps Institute and CloudNOW, and has been named one of the top influential women in DevOps.
