03:59 PM
Mike Fratto | Commentary
Application Performance Monitoring Coming of Age at Cisco Live

Cisco's focusing on delivering apps when they're needed--which will lead to better application performance monitoring, according to our blogger.

One thing that became clear at Cisco Live is that the IT industry is starting to focus on delivering applications. Yes, that's always been the goal--it's what IT does--but there's a bigger emphasis on delivering applications, and I think the next wave of innovation is going to focus on better application performance monitoring (APM), resulting in better application delivery.

That was certainly the motivation behind some of Cisco's announcements, like:

  • Cloud Connectors, which split processing locally and in a cloud service;
  • AppNav, which brings elasticity to application optimization;
  • onePK, which offers programmatic integration with Cisco's products; and
  • the UCS-E blade, which can be used to host numerous network services in branches.

Collectively, these new products show that Cisco is taking more steps down the path that, arguably, can be traced back to the start of its Borderless Networks program--which focuses on delivering apps where they're needed.

Of course, Cisco isn't alone in delivering apps. Akamai and Riverbed created a service that tackles application performance woes. Riverbed's Steelhead appliances are placed in front of SaaS providers like Salesforce and Microsoft Office 365 on Akamai's network and in branch offices, and the SaaS applications are optimized over the WAN using application-specific features and data deduplication. The result is more responsive services.

But there's more going on. VCE CTO Trey Layton talked about how the company scopes Vblocks and, through testing and deployment, assesses the capacity required for various workloads. He also discussed how VCE then provides a Vblock that will meet the estimated demands. That's capacity planning 101.

But workloads rarely remain static--they change over time. Therefore, the next phase for VCE is to capture that expert analysis and provide recommendations for when to scale the resources dedicated to an application up or down, as well as deep analysis into what the bottlenecks are and how best to manually or automatically address them. If your application is resource-starved, it doesn't matter how optimized your network is.

How do you measure performance? Wait until your users call IT support because this or that application is slow? Application performance monitoring has historically been viewed as something of a luxury. The systems are expensive to acquire and maintain. Depending on what kind of APM you acquire--agent-based, network probe, flow-based, application-based or a combination--each approach adds operational burden in the care and feeding of the APM system. And in many cases, the results weren't all that useful in pinpointing the sources of bottlenecks.

At least one Cisco executive, David Ward, VP and CTO of the service provider technology group, articulated why the company wants to expand the definition of software-defined networking. Cisco is looking beyond mere command and control to an SDN definition that includes deep integration so the wealth of data that's collected on switches and routers can be extracted and analyzed elsewhere.

"Getting the data [collected in performance counters] off the switch may be more important than setting a configuration," he told a panel on SDN. "There are numerous stats available on the CLI that aren't transferred via SNMP, which is primarily port counters. SNMP is polled and your [data] resolution is limited by the polling interval. We have a goldmine of data that we can get out in a standard manner. Once you get data out and have programmatic control, you have a feedback loop. If you can get the data and analyze the activity, then you can create and enforce a SLA via orchestrated changes to the network. It is what goes on in the mobile network today. We want to move to [those methods to enterprise, service provider and carrier] wireline, provider edge and elsewhere."
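Ward's point about polling resolution is easy to demonstrate: a counter polled every N seconds can only yield an average rate over that window, so any burst shorter than the interval is smoothed away. A minimal sketch with simulated counter values (not a real SNMP session; the traffic numbers are invented for illustration):

```python
# Simulated per-second byte counts on a port: mostly idle traffic,
# with a 5-second burst at t=30..34. A real deployment would read the
# SNMP ifInOctets counter; here we integrate synthetic traffic instead.
per_second = [1_000] * 60
for t in range(30, 35):
    per_second[t] = 100_000  # short burst

def counter_at(t):
    """Cumulative octet counter, as SNMP would expose it at time t."""
    return sum(per_second[:t])

def polled_rate(interval):
    """Average rate derived from two polls `interval` seconds apart."""
    return (counter_at(interval) - counter_at(0)) / interval

# Polling every 60 seconds smears the burst across the whole minute...
print(polled_rate(60))   # 9250.0 bytes/sec average
# ...while per-second data (what streaming telemetry could deliver)
# still shows the 100,000 bytes/sec peak.
print(max(per_second))   # 100000
```

The burst is an order of magnitude larger than what the polled average reports, which is exactly why getting higher-resolution data off the switch matters for any feedback loop that enforces an SLA.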

If you've been an application performance monitoring proponent, I imagine you've just executed a face palm. Of course application performance is important: You've been trying to get people to listen for years. Sometimes it just takes an unrelated turn of events to get your message heard. In a recent private cloud poll, the No. 1 most important feature in selecting and building a private cloud was APM--and that's likely because if IT is going to get the most value from private clouds by ensuring that applications perform at their best, it needs to collect and analyze more data than just help desk tickets.

The analysis side of APM--the products and services that collect and report on application performance--has improved considerably from even a few years ago. I spent some time getting a demo of ExtraHop, and I was impressed with what the company could report on and how it reported its findings. Since the company uses network sensors for packet analysis, it can see entire transactions from L2 to L7. Yes, you need multiple SPAN ports or network taps to capture the traffic, but once you do, following transactions--even multi-tier transactions--lets you correlate a web request with its associated database query.
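The multi-tier correlation described above can be approximated from wire data alone: a SQL query issued by the app server inside an HTTP request's time window is a strong candidate for having been caused by that request. A toy sketch of the idea (the record layout and field names are illustrative, not ExtraHop's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Txn:
    client: str      # source host of the transaction
    server: str      # destination host
    start: float     # seconds
    end: float
    detail: str      # URI or SQL text

# Hypothetical wire-data records: HTTP requests hitting the app tier,
# and SQL queries the app tier issues to the database tier.
http = [
    Txn("client1", "app1", 0.00, 0.25, "GET /orders/42"),
    Txn("client2", "app1", 0.30, 0.40, "GET /health"),
]
sql = [
    Txn("app1", "db1", 0.05, 0.20, "SELECT * FROM orders WHERE id=42"),
]

def correlate(http_txns, sql_txns):
    """Pair each SQL query with the HTTP requests whose time window
    contains it and whose server is the host that issued the query."""
    pairs = []
    for q in sql_txns:
        for r in http_txns:
            if r.server == q.client and r.start <= q.start and q.end <= r.end:
                pairs.append((r.detail, q.detail))
    return pairs

print(correlate(http, sql))
# [('GET /orders/42', 'SELECT * FROM orders WHERE id=42')]
```

Real systems disambiguate further using the specific TCP connection and request ordering, but time-window containment captures the basic mechanics of tying a front-end request to its back-end query.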

That gets you started, but you may still need to see inside the server or hypervisor to determine whether there are virtual or physical hardware issues. Just as APM vendors moved to a combined agent and agentless architecture to gain increased visibility into server performance, at the private-cloud level, APM systems will need to collect and analyze data from the network, servers, hypervisors, storage, OSes, applications and clients to make sense of performance and orchestrate responses.

The challenge is going to be collecting and reporting data without adding undue administrative burden to IT. That was, and still is, one of the problems with application performance monitoring, and more data will only exacerbate the collection and analysis problem.

Mike Fratto is a principal analyst at Current Analysis, covering the Enterprise Networking and Data Center Technology markets. Prior to that, Mike was with UBM Tech for 15 years, and served as editor of Network Computing. He was also lead analyst for InformationWeek Analytics ...