"Old-school tools demanded you owned all the pieces of infrastructure in order to measure and monitor it," Matt Stevens, CTO at AppNeta in Boston, explains. "At the end of the day, there is no single network. It's a combination of multiple networks and the single biggest challenge is how to understand end-to-end performance across networks that I don't own and yet still have to rely on."
Bob Laliberte, senior analyst at Enterprise Strategy Group (ESG), agrees that end-to-end visibility is very important, especially in highly dynamic virtualized environments. In many cases, workloads shift based on CPU and memory constraints, with no regard for the impact on the network.
"If you only have one to three VMs per machine this is probably not an issue, but as organizations increase VM density to five to 10, as most have now, and then 25 or more over the next couple of years, this could be a significant problem," he says. "Organizations need to have complete visibility up to the VMs or, ideally, applications, to better understand where they can be moved in order to guarantee or optimize their performance."
Emphasis on application performance monitoring (APM) is on the rise. Once viewed as something of a luxury, it's fast becoming critical for organizations trying to locate the sources of bottlenecks. And as more companies migrate to the cloud, top considerations include the ability to monitor, track and report on both real-time and historical network traffic data.
"The reason why network performance is generally failing is because it's been too down in the weeds, too network-centric, frankly," says Stevens. "It needs to be more end-user focused and application-centric. At the end of the day, we build networks to support applications, not the other way around. Too many network tools are designed in and of themselves to look at the network as if it was a standalone element, and it really isn't."
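Stevens' argument for application-centric measurement can be sketched in a few lines: rather than polling device-level counters, time the calls an end user actually makes and aggregate them into user-facing latency metrics. This is a minimal illustration of the idea, not any vendor's implementation; the function names and percentile choice are illustrative assumptions.

```python
import time
import statistics

def timed_call(fn, *args):
    """Measure the wall-clock latency of one application-level call,
    as the end user would experience it."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

def summarize(latencies_ms):
    """Aggregate per-request latencies into user-facing metrics
    (median, 95th percentile, worst case)."""
    ordered = sorted(latencies_ms)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "median_ms": statistics.median(ordered),
        "p95_ms": ordered[p95_index],
        "max_ms": ordered[-1],
    }
```

In practice `fn` would be a real application transaction (an HTTP request, a database query), and the percentile summary is what gets compared against a service-level target, instead of interface-utilization graphs that say nothing about what the user saw.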
Has that led to an increase in complaints from end users? Probably, according to Laliberte. But that's also likely a reflection of how much more dependent the compute infrastructure has become on the network, he says.
"In order to take advantage of all the great server virtualization mobility features, there needs to be a networked storage environment. Organizations are consolidating data centers and therefore hosting more applications centrally, which remote and branch offices access over the WAN; the consumerization of IT has led to a proliferation of mobile devices entering the workplace and straining campus networks," he says. "Even SaaS and public cloud offerings rely on the network to deliver a solid experience. So when you look at all these factors, it wouldn't surprise me that complaints may be up."
Does that mean network monitoring is failing? That all depends on what organizations have deployed, if anything at all.
"In the past, the environments were much more static and monitoring tools were used in a much more reactive or fire-fighting mode," he says. "With organizations moving at a much faster pace and being more virtualized, network monitoring tools need to adapt to the new paradigm. They are becoming more application-aware and more real-time, and they provide much greater visibility and deeper levels of granularity. In order to take advantage of these capabilities, though, organizations need to make the up-front investment to purchase, deploy and train staff on these solutions."
Next: How Enterprises Can Address Network Performance Monitoring