The Application Performance RFI
Over the past few weeks, we have gotten to know Jim, a veteran IT manager who needs to deploy an application performance management (APM) solution. His objective is to establish key performance indicators for the application, including performance metrics and service level agreements (SLAs). His billing system also includes a service catalog and fairly detailed security and reporting modules. The web-based application runs on a distributed backend database and serves just under 2,000 users. Jim knew there was value in APM, but it was hard to quantify, so he created a detailed requirements document and built the business case. With the CIO, it was time to present his case to the CFO.
The CFO had some tough questions; however, with an available budget, detailed requirements and a quantifiable business case, the CFO approved the plan. Critical to the decision was ensuring proper project oversight and independent verification and validation to make the project successful. Because the level of effort was provided only as an estimate, the CIO and Jim will still need to present the final vendor costs to the CFO, but they are not too concerned at the moment.
Now, it is finally time to examine the vendors. Since a detailed requirements document already existed, it was easy to add some columns to it and create a vendor evaluation matrix. Jim created a small request for information (RFI) and sent it to the 12 vendors he had identified.
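A vendor evaluation matrix like Jim's typically reduces to a weighted-scoring exercise: each requirement gets an importance weight, each RFI response gets a score per requirement, and the weighted totals rank the vendors. The sketch below is illustrative only; the requirement names, weights and vendor scores are assumptions, not Jim's actual matrix.

```python
# Hypothetical weighted-scoring sketch for a vendor evaluation matrix.
# All requirement names, weights and scores are made up for illustration.

requirements = {  # requirement -> importance weight
    "transaction response-time metrics": 5,
    "SLA reporting": 4,
    "low deployment overhead": 3,
}

# Each vendor's RFI response scored 0-3 against each requirement.
vendor_scores = {
    "Vendor A": {"transaction response-time metrics": 3,
                 "SLA reporting": 2,
                 "low deployment overhead": 1},
    "Vendor B": {"transaction response-time metrics": 2,
                 "SLA reporting": 3,
                 "low deployment overhead": 3},
}

def weighted_total(scores):
    """Sum each requirement score multiplied by its weight."""
    return sum(requirements[req] * score for req, score in scores.items())

ranked = sorted(vendor_scores,
                key=lambda v: weighted_total(vendor_scores[v]),
                reverse=True)
for vendor in ranked:
    print(vendor, weighted_total(vendor_scores[vendor]))
```

The same spreadsheet-style math works with any number of vendors and requirements; the key design choice is agreeing on the weights before the scores come in, so the ranking is not tuned toward a favorite.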
Jim is most concerned about the response time of his billing application's transactions. One of the major decision points was how to collect the data. Jim was leaning toward active monitoring APM tools that generate synthetic transactions to simulate the user experience and report on that performance level. Jim liked the fact that synthetic transactions would avoid the need to deploy agents to detect performance problems. Because long-term support was a concern, Jim felt that these products are less intrusive than agent-based tools.
However, Jim also realized that application changes would be needed in the billing app to fully monitor it with synthetic transactions. He would need to create synthetic users and accounts appropriate for each transaction being monitored, which would have development impacts on some of the other applications integrated with the billing system.
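At its core, the active approach described above is a scripted transaction run on a schedule, timed, and compared against the SLA target. The minimal sketch below shows that pattern; the billing URL and the 2-second SLA are placeholder assumptions, not details of Jim's actual system.

```python
# Minimal sketch of active (synthetic-transaction) monitoring.
# The scripted step and SLA threshold below are assumptions for illustration.
import time
import urllib.request

def run_synthetic_transaction(transaction, sla_seconds):
    """Execute one scripted transaction, time it, and check the SLA.

    Returns (elapsed_seconds, within_sla).
    """
    start = time.perf_counter()
    transaction()  # the scripted user action, e.g. a login or lookup
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= sla_seconds

def billing_lookup():
    # Placeholder synthetic step; a real tool would script a full
    # login-and-lookup flow against the billing application.
    urllib.request.urlopen("http://example.com/billing/lookup", timeout=10)

# A scheduler would invoke this periodically, e.g.:
# elapsed, within_sla = run_synthetic_transaction(billing_lookup, sla_seconds=2.0)
```

Commercial tools add scheduling, multi-step scripting and alerting on top, but the measurement itself is this simple, which is also why it needs the synthetic user accounts Jim identified.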
Another approach Jim considered was passive tools that collect actual data about the applications from network packets. These appliances would reside at multiple locations in Jim's enterprise. This is appealing, as these tools monitor application traffic on the network and avoid placing any additional load on the billing application. Jim could also decide to extend monitoring to other applications in the future. Because network packet capture technologies track and measure end-user response time without an application- or server-specific agent, this approach also has low overhead for maintaining the system.
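Once a passive appliance has matched requests to responses in the captured traffic, computing end-user response time is simple timestamp arithmetic. The sketch below assumes packets have already been parsed into (transaction, request time, response time) records; the record format and the 1-second SLA threshold are assumptions, since real appliances work from raw pcap data.

```python
# Minimal sketch of the passive approach: derive response times from
# packet-capture timestamps rather than from an in-application agent.
# The pre-parsed record format below is an assumption for illustration.
from statistics import mean

# (transaction_id, request_timestamp, response_timestamp) in seconds
captured = [
    ("txn-1", 100.000, 100.420),
    ("txn-2", 101.250, 101.310),
    ("txn-3", 102.000, 103.950),
]

response_times = [resp - req for _, req, resp in captured]
print(f"avg: {mean(response_times):.3f}s, worst: {max(response_times):.3f}s")

# Flag transactions that breach a hypothetical 1-second SLA target.
breaches = [txn for txn, req, resp in captured if resp - req > 1.0]
```

Because the measurement comes from wire timestamps, it reflects what users actually experienced, with no code changes or synthetic accounts in the billing application, which is exactly the trade-off that makes the passive option attractive to Jim.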