
Where DevOps Meets The Network

There's a belief -- or maybe it's more of a perception -- that high-speed networks pretty much eliminate performance issues. After all, if the packets are traveling faster, performance gets better, right? The rabbit is going to beat the turtle.

Only we all know he doesn't, right? 

That's because he stopped. Many times. The reason he stopped is irrelevant to this discussion. The reality is he took breaks and naps and pretty much destroyed his "high speed" performance with lots and lots of stops along the way.

And that’s where this analogy becomes an overlay on the network (pun intended). Because even with the incredible network speeds we have today -- and the increases still coming -- we still have latency.

Latency is, in its simplest definition, the amount of time it takes for data to get from one place to another. For some, this means the time it takes for a client to get a response from its app. But for those architecting the app and the network, it's also about the time between hops in the network. And not just the network as in routers and switches, but the network as in the services that deliver the apps, like security, load balancing, and caching.
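To make those two views of latency concrete, here's a minimal Python sketch (not from the article; "example.com" is a placeholder host) that separates them: the raw TCP connect time for the network path versus the full request time, which includes every service hop on the way to a response.

```python
# A minimal sketch separating the two latencies discussed above:
# TCP connect time (the network path) vs. full request time (the path
# plus every intermediary service between client and app).
# "example.com" is a placeholder host, not from the article.
import socket
import time
import urllib.request

host = "example.com"

# Time just the TCP connect -- roughly the raw network-path latency.
start = time.perf_counter()
sock = socket.create_connection((host, 443), timeout=5)
connect_ms = (time.perf_counter() - start) * 1000
sock.close()

# Time a full HTTPS request -- network latency plus every intermediary
# (security, load balancing, caching) between client and app.
start = time.perf_counter()
urllib.request.urlopen(f"https://{host}/", timeout=5).read()
response_ms = (time.perf_counter() - start) * 1000

print(f"TCP connect: {connect_ms:.1f} ms, full response: {response_ms:.1f} ms")
```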

In a traditional architecture, there is X latency caused by network propagation, transmission, and processing by intermediaries. We could do the math, but this isn't graduate school and, to be honest, I hated those calculations. Suffice it to say, every "bump" in the wire (hop) introduces latency, period. And every protocol introduces its own latency on top of the base latency of IP communications. All that latency adds up to the total response time of an application, which we try valiantly to keep under five seconds because users are demanding.
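If you did want a little of that math, the back-of-the-envelope version is just addition: every hop contributes its slice to the response time. Here's a hedged sketch; all the numbers below are invented for illustration, not measurements.

```python
# Back-of-the-envelope only: every "bump" in the wire adds its slice to
# the total response time. All numbers are invented for illustration.
hops_ms = {
    "client -> edge (propagation)": 20.0,
    "firewall": 0.5,
    "load balancer": 0.8,
    "cache (miss)": 0.3,
    "app server processing": 60.0,
    "edge -> client (propagation)": 20.0,
}
total_ms = sum(hops_ms.values())
print(f"response time: {total_ms:.1f} ms of a ~5000 ms budget")
```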

To make this more difficult, emerging architectures like microservices increase that latency by introducing even more stops along the way. A single app may suddenly be composed of 10 services, each with its own set of network services. Ten times the services, ten times the latency.
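Scaling the same arithmetic up, and assuming (purely for illustration) that each microservice drags along its own firewall/load balancer/cache chain and that the services are called serially:

```python
# Illustrative arithmetic only: ten services, each fronted by its own
# chain of network services, called serially. Numbers are invented.
per_chain_ms = 0.5 + 0.8 + 0.3   # firewall + load balancer + cache
service_count = 10
overhead_ms = service_count * per_chain_ms
print(f"1 service chain: {per_chain_ms:.1f} ms; "
      f"{service_count} chains: {overhead_ms:.1f} ms")
```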

This is where DevOps meets the network: where the notion of application performance management must include those outside app dev and even ops. Ops in the traditional sense, meaning compute or application infrastructure, cannot alone address this key aspect of application performance. It's not just about the app or even its services; it's also a topological (and therefore network architecture) issue.

Sure, ops can tweak TCP and leverage multiplexing or adopt HTTP/2 or SPDY to improve round-trip times and reduce performance-killing latency, but that cannot and does not impact the latency inherent in the network architecture. Too many hops and too much hairpinning (or tromboning, depending on your preference) are going to impact performance irrespective of the efforts of ops and dev.
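As a concrete (and hedged) illustration of both the tweak and its limits, here's a small Python sketch: reusing one TCP connection (keep-alive) across requests trims handshake round trips, but the path through the network stays exactly as long. Again, "example.com" is a placeholder host.

```python
# One ops-level tweak from the paragraph above, sketched with a
# placeholder host: reuse a single TCP connection across requests.
# The first request pays the TCP/TLS handshake; the rest skip it.
# Note what this does NOT do: remove a single hop from the path.
import http.client
import time

conn = http.client.HTTPSConnection("example.com", timeout=5)
for i in range(3):
    start = time.perf_counter()
    conn.request("GET", "/")
    conn.getresponse().read()
    print(f"request {i + 1}: {(time.perf_counter() - start) * 1000:.1f} ms")
conn.close()
```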


That's why the network (as in network ops) has to get involved with ops and dev. It's imperative that network architects and operators understand the architecture of the application and its requirements for network and application services as a way to better orchestrate the flow of traffic from one end to the other. It isn't enough for network ops to just rapidly provision and configure those services; they need to be optimally provisioned and configured in a way that addresses the performance needs of the application.

This is why DevOps is often discussed in terms of its cultural impact on the organization: it's not enough to automate and orchestrate individual silos of functionality across the application delivery spectrum. The automation and orchestration should support not just the need to work faster, but also the need to work smarter. This requires collaboration with ops and dev to ensure that the stops along the way are minimized and organized in the most optimal, performance-supporting manner possible.

And that’s a cultural change, one that needs to occur for the business to meet and (one hopes) exceed expectations in the application economy.