Blueprint For Application Performance Management

Managing performance of Web services-based applications requires new tools. What's needed is a holistic approach to application performance management.

January 26, 2008


Changes in the methods for building and deploying applications have rendered impotent many of the techniques historically used to manage application delivery infrastructures. Gone are the days when managing a database server's transaction performance equated to performance management. With the advent of Web services and now Web 2.0 technologies like mashups, today's applications are too complex to manage with last-generation tools and methodologies. New IT governance standards gaining acceptance among technology leaders also require that IT resources be managed more cohesively and proactively.

What's needed is a holistic approach to application performance management, one that employs systems that work across application layers as well as across distributed enterprises. Key to getting application performance right is understanding the bigger-picture needs of the organization as defined through practices such as ITIL and COBIT (see "Standards For IT Governance," Dec. 10, p. 35). These governance processes help IT understand what's important. In this Blueprint, we'll explore how to keep those important services up and running.

The need for APM aligns closely with macro trends confronting IT. Application decomposition may well enable organizations to leverage information stored in previously inaccessible silos, but the real-time Web services required to make that data available demand their own management techniques to function properly. What's more, some data and systems may be located outside IT's immediate purview.

So-called Webification of existing enterprise applications often brings to light the need for new management systems and mind-sets. NetForecast, an APM consulting firm, has found that on average, resources from six servers are required to compose a mashup on a user's desktop, says NetForecast president Peter Sevcik. Depending on how important that mashup is, each service, as well as the collection of services, requires monitoring and management.

Yet managing those Web services by piecing together data from conventional point management products won't cut it. Polling individual devices for SNMP alerts can't provide sufficient information to control real-time process flows that are by their nature ephemeral. In short, guaranteeing the performance of tomorrow's distributed Web services applications won't be possible without monitoring and managing the entire application flow.

APM has other drivers, too. To extract additional value out of IT investments and improve customer experience, executives are looking at managing IT end to end through governance and process specifications, such as COBIT and ITIL. While these specifications are excellent for pulling together IT business process, they require tools to implement the ideas set out in them. APM closely aligns with ITIL because it postulates a unified system for analyzing application performance problems, notes Dennis Drogseth, VP at IT consulting and analyst firm Enterprise Management Associates.

In fact, APM aligns neatly with at least four of the 14 ITIL service operation activities, Sevcik says, ticking off Incident Management, Availability Management, Capacity Management, and Service Level Management. In short, APM can be viewed as the tool by which ITIL gets implemented in the network (see diagram below).

diagram: The APM Architecture


The APM architecture is built on three elements that enable testing and incident investigation capabilities: data collectors, analysis engines, and reporting stations. These elements come together to build a set of tools that proactively monitor systems and resolve application problems. In some cases, problems are diagnosed through active synthetic transaction monitors, while others may require passive agent or agentless monitoring.
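To make the three-tier division concrete, here is a minimal sketch, not drawn from any particular APM product, of how collector, analysis, and reporting roles might fit together. All names (Measurement, collect, analyze, report) and the 300 ms threshold are illustrative assumptions, not part of the architecture described above.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    source: str   # which collector produced the sample (hypothetical label)
    metric: str   # e.g. "response_ms"
    value: float

def collect():
    """Data-collector tier: stands in for samples arriving from
    agents, probes, or synthetic monitors."""
    return [Measurement("web-agent", "response_ms", 120.0),
            Measurement("db-probe", "response_ms", 480.0)]

def analyze(samples, threshold_ms=300.0):
    """Analysis-engine tier: flag measurements that breach an
    assumed service-level threshold."""
    return [m for m in samples if m.value > threshold_ms]

def report(incidents):
    """Reporting-station tier: render findings for operators."""
    return [f"ALERT {m.source}: {m.metric}={m.value}" for m in incidents]

print(report(analyze(collect())))
```

The point of the separation is that each tier can scale independently: many collectors feed fewer analysis engines, which feed a handful of reporting stations.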

Synthetic transaction monitors measure application performance by simulating user activity with predefined transactions. They can identify many user-perceived performance problems, but often can't determine where the actual problem lies. What's more, they require unique programming for each application monitored. Perhaps their most important use is reporting user-experience data, which can be trended over long periods and across application revisions. Such data can be extremely useful for reporting on IT's service-level agreements.

Alternatively, or in addition to synthetic transaction monitors, IT can capture application performance data passively by deploying software agents and hardware probes. While these provide a more detailed picture of the underlying application operation, they also can incur significant deployment and installation costs, and demand more day-to-day attention. Such systems are likely to observe and record the events that actually cause undesired application performance, but finding those events and correlating them back to an observed performance issue is an evolving science.
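The synthetic-monitoring idea can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the scripted transaction (login_and_search), the baseline of five samples, and the three-times-baseline slowness rule are assumptions chosen for the example, not a vendor's method.

```python
import time
import statistics

def run_synthetic_transaction(transaction):
    """Execute one scripted transaction and time it, recording
    success or failure -- the core of a synthetic monitor."""
    start = time.perf_counter()
    ok = True
    try:
        transaction()
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

def login_and_search():
    # Placeholder for a scripted user journey (log in, run a search).
    # A real monitor would drive the application's actual interface.
    time.sleep(0.01)

# Build a small response-time baseline, then test the latest run
# against it; trending these samples over time is what makes the
# data useful for SLA reporting.
samples = [run_synthetic_transaction(login_and_search)[1] for _ in range(5)]
baseline = statistics.mean(samples)
ok, latest = run_synthetic_transaction(login_and_search)
is_slow = latest > 3 * baseline
```

Note what the sketch cannot do: it reports that the user experience degraded, but not which back-end tier is responsible, which is exactly the limitation described above.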

Hardware probes attach at key network junctures, such as Internet access points or switch monitoring ports, and are normally passive. They also connect to core switches and collect NetFlow statistics to gain a more complete view of the IP infrastructure. As such, these probes can gather a lot of data. To prevent that data from inundating the network--particularly WAN links--analysis engines must be deployed throughout the infrastructure. These systems aggregate and process the data from the various probes and, depending on the size of the organization, consolidate data from a number of sites.
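The aggregation step is essentially a rollup: per-flow records stay local, and only summaries cross the WAN. As a rough illustration, assuming flow records have already been decoded into simple tuples (the site names and byte counts below are invented):

```python
from collections import defaultdict

# Hypothetical decoded flow records: (probe_site, src_ip, dst_ip, bytes).
# Real NetFlow export carries many more fields per flow.
flows = [
    ("nyc", "10.0.0.5", "10.1.0.9", 1200),
    ("nyc", "10.0.0.5", "10.1.0.9", 800),
    ("lon", "10.2.0.7", "10.1.0.9", 450),
]

def aggregate_by_site(records):
    """Roll per-flow byte counts up to one total per site, so the
    analysis engine ships only the summary upstream rather than
    flooding WAN links with raw probe data."""
    totals = defaultdict(int)
    for site, _src, _dst, nbytes in records:
        totals[site] += nbytes
    return dict(totals)

print(aggregate_by_site(flows))  # {'nyc': 2000, 'lon': 450}
```

In a large organization this rollup would happen in stages, with regional analysis engines consolidating several sites before forwarding to a central reporting station.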

diagram: Practical Application Of ITIL: How application performance management aligns with at least four ITIL concepts

