Lately, the term “service mesh” has been cropping up in discussions about containerized and cloud-native applications. It came up in one expert's list of networking predictions for 2018, and the topic was mentioned at some of the recent Interop ITX sessions.
What is a service mesh and why would you need one? And what do IT infrastructure pros need to know about this new trend?
To find out, Network Computing spoke with Martin Taillefer, a senior staff engineer at Google and founder of the Istio project, an open source effort focused on service mesh. He highlighted several points about this new technology.
1. Service mesh is directly related to microservices architecture.
In recent years, enterprise developers, particularly those utilizing DevOps approaches, have begun building applications based on microservices architecture. In a nutshell, that involves breaking large applications into small independent pieces.
According to Taillefer, this microservices approach delivers a number of benefits in terms of scalability, management, and independence. "But unfortunately, it introduces a large number of secondary problems, such as independent failure modes, difficulty debugging, difficulty observing the behavior of the system, difficulty imposing quotas, and so forth," he said.
That’s where service mesh comes in.
“What service mesh is about is to introduce a common framework and a common set of rules and policies that you can apply to a fleet of microservices so that you can think of a fleet of microservices as a unit and still overcome the difficulties introduced by splitting things up into pieces,” said Taillefer.
Essentially, the service mesh is an abstraction layer that offers visibility into the interactions among all those microservices. It also provides traffic management and security capabilities, allowing IT to apply policies to their microservices traffic.
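To make the traffic-management idea concrete, here is a minimal sketch of the kind of routing policy a mesh like Istio can apply, using Istio's VirtualService resource. The service name `reviews` and the version subsets are hypothetical placeholders, not drawn from the article; this illustrates a canary-style traffic split, one of many policies a mesh can enforce uniformly across a fleet of microservices.

```yaml
# Hypothetical Istio VirtualService: send 90% of traffic to v1
# of a "reviews" service and 10% to v2 (a canary rollout),
# without changing any application code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews            # in-mesh service name (placeholder)
  http:
    - route:
        - destination:
            host: reviews
            subset: v1   # subsets are defined in a DestinationRule
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Because the mesh's sidecar proxies intercept all service-to-service traffic, a policy like this takes effect fleet-wide as soon as it is applied, which is precisely the "treat the fleet as a unit" capability Taillefer describes.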
2. Organizations have several open source options for service mesh.
Taillefer’s Istio project is one of the best-known options for service mesh. The project began around 20 months ago. “With our experience at Google of running the world’s largest fleet of microservices, we realized that we understood the problems faced, we understood what was coming,” explained Taillefer. “In many respects in this field, at Google, we can see the future. We’ve seen all the problems, we’ve tried many solutions, and after many years, we’ve arrived at something we think works.”
Inspired by the success of the Kubernetes project, the team is taking what they learned inside Google and applying it outside the company. The project now also counts Lyft, IBM, and Red Hat among its sponsors.
But Istio isn’t the only player in the space. Other options include the Cloud Native Computing Foundation’s Linkerd, Buoyant’s Conduit, and Lyft’s Envoy. All four projects are open source, and while they compete in some respects, they complement one another in others.
3. Few companies are using service mesh in production today, but that number will likely grow.
Service mesh technology is still very new. While pioneers in the space like Google and Lyft are using the technology, it hasn’t yet caught on widely.
“I think there are a few companies that are using a little bit of Istio in production … but it’s not large-scale deployments,” acknowledged Taillefer.
However, that situation could change as soon as this summer. “We have a large number of customers who are awaiting our 1.0 release before officially rolling out this stuff in production,” he said. “We’re hoping the 1.0 release is going to be sometime in July.”