• 06/03/2015
    9:00 AM
Where DevOps Meets The Network

Network architects and operators need to work with application developers and operators to achieve optimal application performance.

There's a belief -- or maybe it's more of a perception -- that high-speed networks pretty much eliminate performance issues. After all, if the packets are traveling faster, performance gets better, right? The rabbit is going to beat the turtle.

Only we all know he doesn't, right? 

That's because he stopped. Many times. The reason he stopped is irrelevant to this discussion. The reality is he took breaks and naps and pretty much destroyed his "high speed" performance with lots and lots of stops along the way.

And that’s where this analogy becomes an overlay on the network (pun intended). Because even with the incredible network speeds we have today -- and the increases still coming -- we still have latency.

Latency is, in its simplest definition, the amount of time it takes for data to get from one place to another. For some, this means the time it takes for a client to get a response from its respective app. But for those architecting the app and the network, it's also about the time between hops in the network. And not just in the network as in between routers and switches, but in the network, meaning between the services that deliver the apps like security, load balancing, and caching.

In a traditional architecture, there is X latency caused by network propagation, transmission, and processing by intermediaries. We could do the math, but this isn't graduate school and, to be honest, I hated those calculations. Suffice it to say every "bump" in the wire (hop) introduces latency, period. And every protocol introduces its own amount of latency on top of the base latency from IP communications. All that latency adds up to the total response time of an application, which we try valiantly to keep under five seconds because users are demanding.
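To make the "latency adds up" point concrete, here is a back-of-the-envelope sketch. The hop names, per-hop numbers, and round-trip count below are illustrative assumptions, not measurements from any real network:

```python
# One-way latency contributed by each "bump in the wire", in milliseconds.
# All values are hypothetical, for illustration only.
hops_ms = {
    "client -> edge router": 10,
    "edge router -> firewall": 2,
    "firewall -> load balancer": 2,
    "load balancer -> cache": 1,
    "cache -> app server": 3,
}

def response_time_ms(hops, round_trips=1):
    """Total latency: every hop is traversed there and back,
    once per round trip the protocol requires."""
    one_way = sum(hops.values())
    return 2 * one_way * round_trips

# Suppose a single request needs 4 round trips (handshakes plus the
# request itself) -- 18 ms of hops quickly becomes much more.
total = response_time_ms(hops_ms, round_trips=4)
print(f"{total} ms of a 5000 ms budget")  # 144 ms
```

Even with generous numbers, each extra hop or round trip multiplies through the whole path, which is why topology matters as much as wire speed.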

To make this more difficult, emerging architectures like microservices increase that latency by introducing even more stops along the way. A single app may suddenly be made up of 10 services, each with its own set of network services. Ten times the services, ten times the latency.
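The multiplier is easy to model. The sketch below assumes the worst case -- every service call happens serially, each with the same per-call latency -- and the per-call figure is a made-up number for illustration:

```python
# Hypothetical latency of one service call, including its own set of
# network-service hops (load balancer, firewall, etc.).
PER_CALL_MS = 18

def serial_latency_ms(calls, per_call_ms=PER_CALL_MS):
    """Worst case: every call waits for the previous one (no fan-out,
    no parallelism). Latency scales linearly with the call count."""
    return calls * per_call_ms

monolith = serial_latency_ms(1)        # one app, one call path
microservices = serial_latency_ms(10)  # the same app split into 10 services
print(monolith, microservices)         # 18 180
```

Real deployments can parallelize some calls, but any serial chain in the call graph still pays the full per-hop toll at every link.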

This is where DevOps meets the network: where the notion of application performance management must include those outside app dev and even ops. Ops in the traditional sense, meaning compute or application infrastructure, cannot alone address this key aspect of application performance. It's not just about the app or even its services, it's also a topological (and therefore network architecture) issue.

Sure, ops can tweak TCP and leverage multiplexing or adopt HTTP/2 or SPDY to improve round-trip times and reduce performance-killing latency, but that cannot and does not impact the latency inherent in the network architecture. Too many hops and too much hairpinning (or tromboning, depending on your preference) are going to impact performance irrespective of the efforts of ops and dev.
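To see why connection reuse and multiplexing help with round-trip times -- while leaving the architecture's hop count untouched -- here is a deliberately simplified model. The RTT value is an assumption, and the model ignores TLS setup, TCP slow start, and server think time:

```python
RTT_MS = 20  # assumed client-to-server round-trip time, for illustration

def total_rtts(requests, reuse_connection):
    """Round trips for N requests: one per request, plus TCP
    handshake(s). A reused (or HTTP/2-multiplexed) connection pays
    the handshake once; opening a fresh connection per request pays
    it every time."""
    handshakes = 1 if reuse_connection else requests
    return requests + handshakes

for reuse in (False, True):
    rtts = total_rtts(10, reuse)
    print(f"reuse={reuse}: {rtts} RTTs -> {rtts * RTT_MS} ms")
# reuse=False: 20 RTTs -> 400 ms
# reuse=True:  11 RTTs -> 220 ms
```

Note what the model shows: reuse cuts the handshake tax, but the 20 ms RTT itself -- a product of topology and hairpinning -- is untouched. That part only network architecture can fix.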

Figure 1:

That's why the network (as in network ops) has to get involved with ops and dev. It's imperative that the network architects and operators understand the architecture of the application and its requirements for network and application services as a way to better orchestrate the flow of traffic from one end to another. It isn't enough for network ops to just rapidly provision and configure those services, they need to be optimally provisioned and configured in a way that addresses the performance needs of the application.

This is why DevOps is often discussed in terms of its cultural impact on the organization: it's not enough to automate and orchestrate individual silos of functionality across the application delivery spectrum. The automation and orchestration should not just be supportive of the need to work faster, but also of the need to work smarter. This requires collaboration with ops and dev to ensure that the stops along the way are minimized and organized in the most optimal, performance-supporting manner possible.

And that’s a cultural change, and one that needs to occur in order for the business to meet and (one hopes) exceed expectations in the application economy. 


Re: Where DevOps Meets The Network

We hear a lot of talk about the evolution of the network these days. You do a good job setting the scene here, Lori, and explaining where and why the network is going to need to rise to meet the demands of next-gen applications. We hear about even huge giants like Netflix making use of a microservices approach, and they obviously have teams upon teams dedicated to eliminating every last bottleneck. Not everyone is Netflix, but users everywhere are bound to be just as impatient. For what it's worth, the number of devices is set to explode in the coming years, and the amount of data consumed per device is too - and where's it coming from? These same apps.

The DevOps-like approach makes a lot of sense when you break it down; just as you said, DevOps is not about automating one or two processes, it's about baking everything you need into your release cycle from the ground up, and making it work quickly and smoothly. So, yes, if you integrate network optimization needs, performance monitoring, and problem resolution into your processes at the ground level, it might just become painless for your network team... instead of them having to put out fires as they arise. It's better for everyone. People (read: culture) are part of that: having networking people at the very first meeting, so they know what's needed before it's needed - every time, not just sometimes.

Re: Where DevOps Meets The Network

Nice observations, zerox. Seems like that cultural change will be a challenge since enterprise silos are so entrenched. I imagine the change would need to be driven from the C-level.