Microservices: The Infrastructure Impact
Time for a pop quiz. Solve the equation 14 = X – 7. Ready? Go.
Okay, time's up. You figured that out using what today's elementary schools call "mental math." You probably added 7 to both sides and came up with X = 21.
Pretty simple, really, but consider what you just did. To solve the equation, you took away from one side and added to the other. More complex algebraic equations require regrouping of variables and the like. Never fear, I wouldn't ask you to do that. Instead, the key lesson for today is this: if you take away from one side of an equation, you must add to the other. That lesson is relevant as we look at the two sides of the app equation, namely production and development. Or, if you want to simplify that, deployment and development.
There is a great deal of focus on simplifying development processes to increase the frequency and speed with which organizations can get to market. Whether that market is internal (productivity apps, for example) or external (revenue-generating apps) matters less than the need to simplify as a means of reducing the friction that slows down development.
Microservices architectures are one of the ways in which simplification of app development is occurring. The premise is that breaking down monolithic apps into bite-sized services improves quality and speed. After all, if a team is responsible for only one piece of an application (a microservice), it's better able to focus on continuous development and delivery of that service. APIs provide the collaborative glue between the services that ultimately make up today's modern, often cloud-native apps.
Doing so, however, has unintended consequences for operations and the production environment. The number of network and application services required to deliver the app to its intended audience increases with each decomposition. What that means is that while we're relieving friction through simplification in app dev, we're shifting that friction into production. Scaling an app becomes much more difficult. It's no longer a matter of simply cloning an app instance and adding it to a load balancer. It now requires multiple scaling domains -- one per microservice -- along with the ability to understand demand and usage at a much higher level.
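To make the shift concrete, here is a minimal sketch of the difference between scaling a monolith and scaling a decomposed app. All of the names (`ScalingDomain`, the service names, the capacity and demand figures) are illustrative assumptions, not any real orchestrator's API; the point is only that one scaling decision becomes one decision per microservice, each with its own capacity profile.

```python
# Illustrative sketch only -- names and numbers are hypothetical.
from dataclasses import dataclass
from math import ceil

@dataclass
class ScalingDomain:
    """One independently scaled service behind its own load balancer."""
    name: str
    requests_per_instance: int  # capacity of a single instance

    def instances_needed(self, demand_rps: int) -> int:
        # Enough instances to absorb the demand, never fewer than one.
        return max(1, ceil(demand_rps / self.requests_per_instance))

# Monolith: one scaling domain, one decision.
monolith = ScalingDomain("shop", requests_per_instance=500)
print(monolith.instances_needed(2000))  # 4 clones behind one load balancer

# Microservices: a separate scaling decision for every service,
# each seeing the same overall demand through its own capacity lens.
services = [
    ScalingDomain("catalog", 800),
    ScalingDomain("cart", 400),
    ScalingDomain("checkout", 200),
    ScalingDomain("auth", 1000),
    ScalingDomain("recommendations", 300),
]
plan = {s.name: s.instances_needed(2000) for s in services}
print(plan)
```

One capacity question for the monolith becomes five coupled ones for the decomposed app, which is exactly the complexity that lands on the production side of the equation.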
The resulting architectural complexity that arises from simplification is challenging. As instances of apps move through their lifecycles, a significant number of events must occur -- events that trigger processes that today must be orchestrated, lest operations fail to keep up with demand. It's simply no longer humanly possible to manually scale the five microservices that make up an app up and down, at least not in any financially responsible way. Scaling has to be automated and orchestrated using cloud or cloud-like tools and technology.
That means yet another layer of technology that needs to be mastered and managed -- another layer of abstraction that lies between you and the apps you're ultimately responsible for delivering. This is not the management side of the cloud, the piece that provisions and configures. Rather, it's the orchestration side of the house, responsible for real-time operations: automatically scaling, reacting to failure, and rerouting and adjusting how requests and responses flow through services. It also handles automatic service chaining, software-defined networking (SDN), and real-time adjustments of capacity and flow.
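The event-driven loop described above can be sketched in a few lines. This is a toy model, not a real orchestrator: the event names, the `Orchestrator` class, and the endpoint addresses are all assumptions made up for illustration. It shows the shape of the idea, namely that a lifecycle event (here, an instance failure) automatically triggers a reaction (pulling the endpoint out of the load-balancing rotation) with no human in the loop.

```python
# Toy event-driven orchestration loop -- all names are hypothetical.
from collections import defaultdict

class Orchestrator:
    def __init__(self):
        self.healthy = defaultdict(list)  # service name -> healthy endpoints
        self.handlers = {}                # event name -> reaction

    def on(self, event):
        """Register a handler for a lifecycle event."""
        def register(fn):
            self.handlers[event] = fn
            return fn
        return register

    def emit(self, event, **details):
        # In a real system this would be driven by health checks
        # and telemetry, not called by hand.
        self.handlers[event](self, **details)

orch = Orchestrator()
orch.healthy["checkout"] = ["10.0.0.1", "10.0.0.2"]

@orch.on("instance_failed")
def reroute(o, service, endpoint):
    # React to failure: route only to instances that are still healthy.
    o.healthy[service].remove(endpoint)

orch.emit("instance_failed", service="checkout", endpoint="10.0.0.1")
print(orch.healthy["checkout"])  # ["10.0.0.2"]
```

Multiply this by every scaling, failure, and routing event across every microservice, and it becomes clear why the reaction has to be automated rather than performed by an operator.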
These capabilities are necessities, largely driven by the simplification of the dev side of the equation and the resulting shift of complexity from one side to the other. We are nowhere near seeing the full impact of application architecture simplification on production operations. The "bottleneck" associated with the network was merely a symptom of a growing problem, one that will only be solved by automation and orchestration.
Even if you’re not fully embracing cloud computing, it's inevitable that you will be embracing cloud-like constructs if you’re going to succeed in the future. That's because automation and orchestration, enabled by APIs, are the only way to manage the growing complexity in operations caused by the simplification of apps.