Last year was a tremendous challenge for the world, and its tremors will be felt for a long time to come. In business, it has become clear that operating digitally is an advantage: companies that embraced digital transformation early have fared better than their peers.
As a result, more companies than ever are now rushing to accelerate their digital transformation efforts: moving more workloads to the public cloud, adopting SaaS applications, and embracing modern distributed application architectures.
But this transformation brings new challenges to IT organizations. In particular, more and more of the applications and services delivered to customers and employees are built on pieces that IT teams neither own nor control. Yet those teams remain accountable for the experience delivered: performance, uptime, and overall reliability. In a recent study, Gartner predicted that by 2023, 60% of digital business initiatives will require IT to report on users' digital experience, up from less than 15% today.
Enter Digital Experience Monitoring (DEM).
What is DEM?
Digital experience monitoring is a form of proactive, black-box monitoring. Rather than relying on metrics, events, flows, or alerts captured from devices, DEM simulates an end user's experience of the infrastructure, application, or service, such as someone watching a streaming movie or ordering a pizza online.
There are essentially two layers to DEM: network and application. The former measures how the network that supports an application or service is performing. The latter adds application-level transactions, often scripted (e.g., logging in or adding an item to a digital shopping cart), to simulate how a human (or machine) perceives the performance of an application or service.
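The application layer described above can be pictured as a small script that walks through a user journey step by step and times each step. This is a minimal sketch, not any vendor's implementation; the step names and timings are illustrative, and in a real tool each step would drive an HTTP request or a headless browser.

```python
import time

def timed_step(name, func):
    """Run one scripted step of a synthetic transaction and time it."""
    start = time.perf_counter()
    ok = True
    try:
        func()
    except Exception:
        ok = False
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return {"step": name, "ok": ok, "latency_ms": elapsed_ms}

def run_transaction(steps):
    """Execute scripted steps in order; stop at the first failure,
    mirroring how a real user journey would break off."""
    results = []
    for name, func in steps:
        result = timed_step(name, func)
        results.append(result)
        if not result["ok"]:
            break
    return results

if __name__ == "__main__":
    # Stand-in steps; a real check would perform a login request,
    # then add an item to the cart, and record each step's latency.
    steps = [
        ("login", lambda: time.sleep(0.01)),
        ("add_to_cart", lambda: time.sleep(0.01)),
    ]
    for r in run_transaction(steps):
        print(r)
```

Because the journey stops at the first failed step, the per-step results also tell you *where* the experience broke, not just that it did.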
How to get started with DEM?
As with most undertakings, the journey starts with a first step. For Network Operations (NetOps) teams in particular, the network is a great place to begin the journey toward two-layer DEM. Of course, much of the network infrastructure of interest lies beyond the direct ownership and control of the NetOps team, which means that traditional approaches such as SNMP or NetFlow, which NetOps teams have relied on for decades, can't be used. To gain the required visibility, NetOps teams must adopt and deploy a network synthetics measurement tool alongside their existing network observability solutions.
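At its simplest, a network-layer synthetic measurement is an active probe toward a target you don't control. The sketch below, which assumes nothing beyond the Python standard library, times a TCP connection setup as a rough proxy for reachability and network latency; real tools also use ICMP, UDP, and path tracing.

```python
import socket
import time

def tcp_connect_latency_ms(host, port, timeout=2.0):
    """Measure TCP connection setup time to host:port.

    Returns the latency in milliseconds, or None if the target
    is unreachable within the timeout. This is a coarse stand-in
    for the richer probes a network synthetics product would run.
    """
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None
```

Running such a probe on a schedule from several vantage points, and alerting on failures or latency shifts, is the essence of the network layer of DEM.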
What to watch out for
Finding the right tooling for the job is never an easy task, but here are a few guiding principles that will help NetOps teams navigate the DEM hype:
First, make sure your choice follows the principle of the "Three E's." Network synthetics tools must be easy to try, easy to use, and easy to maintain. For example, if it takes more than a few minutes to stand up a trial of the product and get first measurement results, you can bet that your initial production deployment will be complex, consuming time and resources that you and your team really don't have.
Second, always think about day two right from the start. If a tool requires a lot of manual test adjustments and maintenance as your environment changes (which it will!), your insights into the digital experience will come at a high cost. For example, if your business wants to measure only the applications that generate network traffic above a certain threshold, or only the TopN, you want the tool to turn synthetic tests on and off automatically, without operator involvement.
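The TopN reconciliation described above can be sketched as a small decision function: given observed traffic, compute the set of tests that *should* be running and diff it against the tests that *are* running. The threshold, app names, and TopN value below are illustrative assumptions, not from any product.

```python
def reconcile_tests(traffic_by_app, active_tests, top_n=10, min_mbps=50.0):
    """Decide which synthetic tests to enable or disable automatically.

    Apps belong in the test set when they rank in the TopN by traffic
    and exceed a minimum threshold; everything else is switched off.
    Returns (to_enable, to_disable) as sets of app names.
    """
    eligible = sorted(
        (app for app, mbps in traffic_by_app.items() if mbps >= min_mbps),
        key=lambda app: traffic_by_app[app],
        reverse=True,
    )[:top_n]
    desired = set(eligible)
    to_enable = desired - active_tests
    to_disable = active_tests - desired
    return to_enable, to_disable
```

Run on every traffic refresh, this keeps the test inventory tracking the environment with no operator involvement, which is exactly the day-two property to look for.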
Third, you want to ensure that your network synthetics tool supports the test frequencies required by your business. Of course, there are use cases where one- or five-minute test intervals are sufficient. But there are many others where that is not the case. Consider certain IoT scenarios where seconds of downtime or millisecond changes in latency truly matter. In these scenarios, monitoring infrequently is the same as flying blind: the ability to perform sub-minute, or continuous, testing matters for your critical applications and use cases.
Finally, watch out for your wallet. Many providers out there are like robber barons, charging you exorbitant prices, even for relatively infrequent testing. Do your homework and map out the number of monthly tests you will require, keeping in mind that this number can and will grow quickly. For example, testing your network between 21 sites (three regions each for three public cloud providers, ten branch offices, and two data centers) at 1-minute test intervals will amount to about 18 million tests each month. Add a few applications and SaaS services, or do more frequent testing, and you will be in the hundreds of millions of monthly tests very quickly. Right-size your monitoring by testing only what matters and as frequently as you must, and no more.
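The back-of-the-envelope math behind the 18 million figure is worth doing yourself for your own site count and test interval. Assuming a full mesh where every site tests every other site (the reading that reproduces the article's number):

```python
# Reproduce the ~18 million tests/month figure from the text.
sites = 3 * 3 + 10 + 2                 # 9 cloud regions + 10 branches + 2 DCs = 21
directed_pairs = sites * (sites - 1)   # full mesh: each site probes every other
minutes_per_month = 60 * 24 * 30       # at a 1-minute test interval
tests_per_month = directed_pairs * minutes_per_month
print(tests_per_month)                 # 18,144,000
```

Note the quadratic growth in site count: doubling the number of sites roughly quadruples the monthly test volume, which is why costs climb so quickly.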
When organizations successfully complement their traditional network observability stack with DEM tooling, they gain insight into the performance and availability of networks and services beyond their corporate firewalls. Network operations teams become more proactive in handling potential customer experience issues, such as higher-than-usual latency in a critical application. By understanding that a shift is needed and identifying the right tools for the job, network operations teams can visualize and analyze the end-user experience and understand how to protect it effectively.
Christoph Pfister is Chief Product Officer at Kentik.