The load on IT is growing. Applications double every four years. Operational costs double every eight years. And data volumes double every 18 months. That's according to a March 2014 IDC Directions report. If you consider the increasing rate at which disposable applications (like mobile apps supporting one-off events such as the World Cup or the Final Four -- go Wisconsin!) are becoming the norm, and the potential explosion in the number of "apps" on the network thanks to microservice architectures, those numbers might be considered conservative. Very conservative, in fact.
Either way, the reality is that the load on IT -- and increasingly on the network -- is growing. Quickly. That means that IT has to figure out how to shoulder that burden. In particular, the network has to figure out how to shoulder its share of the increasing burden, because by every account, the network is what's (still) in the way of achieving operational excellence. In 2014, EMA Research pinpointed "slow, manual processes to reconfigure infrastructure to accommodate change" as a key pain point for 39% of its respondents.
In addition, a recent Avaya survey focused on SDN expectations noted:
Eighty-three percent say they have issues to do with service configuration, and this includes: dealing with the complexity of configuring services and applications; manual configuration of servers and switches; making virtual machine moves between data centers; poor integration of physical and virtual systems; requirement for maintenance windows; limited or complicated access control (including BYOD); and complexity of Spanning Tree and multiple network protocols.
Let's face it, scaling the modern data center -- and specifically its network -- is challenging. A large part of the reason is that we've spent years deploying point products to solve one problem or another. The result is complex, fragile system topologies composed of hundreds of individual boxes, each with its own unique set of CLI, GUI, API and management methods, and each of which must be balanced against its own upgrade, patch and maintenance schedules.
What a mess.
And now we're proposing even more variance and fragility by adding in applications based on microservice architectures and microsegmented networks and application services and -- well, it's no wonder that our backs are breaking under the weight.
Because we haven't learned to lift with our knees. The reality is that it's not the load that breaks you down; it's how you carry it. That's one of the things that software-defined approaches to scaling operations -- the approaches associated with DevOps -- teach you: how to carry the load safely.
Sometimes that means sharing the burden by collaborating with the folks on the other side of the wall. Sometimes it means approaching the app deployment pipeline with an eye toward the processes that govern the pipeline flow and ferreting out inefficiencies along the way. And sometimes it means adopting automation to eliminate repetitious (and, admit it, tedious) tasks, distributing the weight more evenly and allowing you to take on a heavier burden.
A DevOps approach isn't just for developers or cloud, or for application and web server infrastructure; it's for network infrastructure, too. Whether that infrastructure is physical or virtual, there's a great deal of operational efficiency that can be gained from applying the principles associated with DevOps to the network.
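To make that last point concrete, here's a minimal sketch of what templated network configuration can look like in practice: define one source of truth for per-device values and render each box's config from a shared template, instead of hand-typing CLI on every switch. The device names, VLAN values, and template syntax here are hypothetical, for illustration only; real shops would typically use a tool like Ansible or a templating engine like Jinja2 against their actual platform's CLI.

```python
# Sketch: replace repetitive per-switch CLI work with templated config
# generation. Hostnames, VLANs, and interface names are made up.
from string import Template

SWITCH_TEMPLATE = Template(
    "hostname $hostname\n"
    "vlan $vlan_id\n"
    " name $vlan_name\n"
    "interface $uplink\n"
    " switchport trunk allowed vlan $vlan_id\n"
)

def render_config(hostname, vlan_id, vlan_name, uplink):
    """Render one device's config from the shared template plus its values."""
    return SWITCH_TEMPLATE.substitute(
        hostname=hostname, vlan_id=vlan_id, vlan_name=vlan_name, uplink=uplink
    )

# One inventory for the fleet: adding a device means adding a dict entry,
# not a maintenance window spent typing the same commands again.
inventory = [
    {"hostname": "tor-01", "vlan_id": 100, "vlan_name": "web", "uplink": "Ethernet1/49"},
    {"hostname": "tor-02", "vlan_id": 100, "vlan_name": "web", "uplink": "Ethernet1/49"},
]

configs = {d["hostname"]: render_config(**d) for d in inventory}
for name, cfg in configs.items():
    print(f"--- {name} ---")
    print(cfg)
```

The point isn't the template itself; it's that the change process becomes a code review and a re-run, not a per-box login, which is exactly the kind of repeatable, auditable workflow DevOps brings to the network.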
We'll be exploring (and demonstrating) some of those principles in more detail at our 3-hour workshop Achieving Operational Excellence Through DevOps later this month at Interop Las Vegas. Join us on April 27 to learn more.