Almost a month ago, we got to know an IT manager, Jim, who was selecting an application performance management (APM) product. After releasing his RFI to 14 vendors, he received six responses. Jim structured the RFI around three major sections: data collection, visualization and reporting. For data collection, Jim wanted different options for monitoring the components of his applications, including components that might reside in a cloud environment. Data collection also needed to support an application topology map covering every component of the system. He also wanted to be able to integrate network management data into the APM system, so that key measures such as packet loss and latency could be layered on top of the application-specific metrics.
In regard to visualization, the CIO and business management team wanted specific key performance indicators (KPIs), so Jim needed the APM application to provide these KPIs natively within the tool. Jim knew that other dashboard tools and business intelligence tools were available, but with a limited staff he felt that fewer tools would be better. As for reporting, Jim wanted several out-of-the-box reports that could be deployed on day one. He also wanted to be able to develop custom reports without having to learn a database query or reporting language.
Jim selected three vendors for a pilot project. While he liked many of the solutions that provided deep application discovery and mapping, he also realized those solutions were a bit too much for the organization to support over time. Because the original driver was to obtain performance metrics (not identify the root cause of a problem), Jim decided to focus on synthetic transaction tools. Synthetic transaction tools simulate the user experience and report on the resulting performance; they avoid the need to deploy agents to detect performance problems. Jim felt that these products were less intrusive than agents and would still work as he moved into the cloud environment.
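To make the synthetic transaction idea concrete, here is a minimal sketch of what such a tool does at its core: issue a scripted request against an application endpoint on a schedule and record whether it succeeded and how long it took. The URL shown is a placeholder, not anything from Jim's environment, and a real product would script multi-step flows (log in, search, check out) rather than a single request.

```python
import time
import urllib.request

def synthetic_check(url, timeout=10):
    """Issue one scripted request and report success plus elapsed time,
    mimicking the per-probe record a synthetic transaction monitor keeps."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
            ok = 200 <= status < 400
    except OSError:
        # Covers connection failures, timeouts, and HTTP errors (HTTPError
        # and URLError are both subclasses of OSError).
        status, ok = None, False
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return {"url": url, "ok": ok, "status": status,
            "latency_ms": round(elapsed_ms, 1)}

if __name__ == "__main__":
    # "app.example.com" is a hypothetical endpoint; in practice this would
    # point at a user-facing URL whose response time approximates what a
    # real user experiences.
    print(synthetic_check("https://app.example.com/health"))
```

Because the probe runs from outside the application, nothing has to be installed on the monitored servers, which is exactly the property that made this approach attractive for Jim's cloud plans.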
While all three vendors performed well in the pilot, one was extremely easy to use and was priced right for Jim's budget. It offered the reports Jim wanted right away and had an acceptable user interface. That vendor also offered agents and some root-cause analysis capability if Jim needed it in the future. The product lacked many of the bells and whistles of other vendors, but by sacrificing these features, Jim also avoided complex integration challenges. Overall, he felt this vendor was the best choice given the maturity level of the IT organization. The vendor also offered a hosted version that could be deployed without any on-premises hardware or software. However, Jim was concerned about the security of this option, and neither he nor the CIO wanted to open the firewall to this kind of monitoring.
After the product was procured, Jim had a fully functional APM product. The process was generally smooth, which Jim attributes to the upfront work he did around business and technology requirements. This allowed him to remain focused on what he really needed and balance competing features against price and complexity. Without this upfront work, Jim thinks he would have leaned toward a product that offered more features but would have been too complex to maintain over time. We'll check in with Jim later on to see if he's still happy and whether he would have done anything differently.