
Can You See Me Now? What You Need for Zero-Lag Video Calls

(Image: Video calls. Source: Unsplash)

Since the world moved online, we’ve become increasingly dependent on Zoom, WebEx, BlueJeans, Google Meet, and others for everything from business meetings to happy hours to weddings. And with that increased usage, millions of us are wondering: when will we have perfect call quality? When will we arrive at the virtual utopia of zero lag, high-quality audio and video, responsive collaboration tools, and more? If we can stream footage from the surface of Mars, surely we can have a better experience on our own planet, right?

Technologists have long searched for the holy grail of zero latency; since a truly zero-latency solution is physically impossible, the practical goal is a solution so fast that any lag is imperceptible to humans. User-perceived latency is the sum of the latency introduced by every component along the way: data transmission, storage access, and the delivery network. It should therefore surprise no one that improving latency is a broad exercise involving better home/branch Wi-Fi routers, increased access network capacity, a geo-located high-speed content edge, high-speed transit networks, and high-capacity scale-out data centers. Each of these components is evolving independently; below, we examine the fundamental contributors to latency in each and how fixing them can meaningfully improve the user experience.
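As a rough, back-of-the-envelope illustration of that sum (every number below is hypothetical, not a measurement), the user-perceived latency budget might break down like this:

```python
# Hypothetical latency budget for one round trip of interactive video.
# All values are illustrative; user-perceived latency is simply the sum
# of the delays contributed by every hop in the chain.
latency_ms = {
    "home_wifi":      5,   # client device to home/branch router
    "access_network": 15,  # router to ISP (cable/fiber/wireless)
    "transit":        20,  # ISP to cloud provider across the Internet fabric
    "edge_or_dc":     10,  # load balancing, server processing, storage access
    "return_path":    30,  # the same chain traversed in reverse
}

total = sum(latency_ms.values())
print(f"End-to-end latency: {total} ms")  # 80 ms in this sketch

# Interactive audio/video is generally considered comfortable somewhere
# below roughly 150 ms one way; every component competes for that budget.
for hop, ms in latency_ms.items():
    print(f"  {hop:15s} {ms:3d} ms ({ms / total:.0%} of budget)")
```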

1) Better user network devices: The home/branch router is often the most primitive piece of equipment in the entire chain of networked infrastructure. These devices frequently lack the features needed to reduce latency: prioritizing traffic, absorbing bursts, and leveraging high-speed network data paths. Remote users should ensure they have modern routers that support Wi-Fi 6 or mesh networking and, more importantly, that give latency-sensitive traffic priority over other content. Ideally, this happens without any manual configuration, based instead on observed communication patterns; an interactive session, for example, has a very different traffic pattern than a file or movie download. A toy classifier along these lines is sketched below.
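As a minimal sketch of that idea, the classifier below guesses whether a flow is latency-sensitive from two observable statistics. The thresholds and the FlowStats fields are illustrative assumptions, not anything a real router vendor ships:

```python
from dataclasses import dataclass

# A toy heuristic, assuming the router can observe per-flow packet sizes
# and inter-arrival times. Interactive flows (video calls) tend to send
# small-to-medium packets at a steady cadence; bulk transfers (downloads)
# send maximum-size packets back to back.

@dataclass
class FlowStats:
    avg_packet_bytes: float
    avg_gap_ms: float       # mean gap between consecutive packets

def classify(flow: FlowStats) -> str:
    """Guess whether a flow is latency-sensitive, with no user config."""
    if flow.avg_packet_bytes < 1200 and flow.avg_gap_ms < 40:
        return "interactive"   # e.g., audio/video paced at 20-30 ms
    return "bulk"              # e.g., file/movie download

video_call = FlowStats(avg_packet_bytes=900, avg_gap_ms=20)
download   = FlowStats(avg_packet_bytes=1500, avg_gap_ms=1)

print(classify(video_call))  # interactive -> high-priority queue
print(classify(download))    # bulk -> best-effort queue
```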

2) Improved access networks: Most broadband connections (e.g., fiber, cable, fixed wireless, or mobile) need higher throughput, lower latency, and reduced jitter (i.e., latency variation) to deliver a superior user experience. That will require service providers to deploy higher-bandwidth, lower-latency transmission than they have so far. Latency is introduced by the various network functions (security, metering, routing, and data access) applied to all traffic entering the provider’s network and the wider internet. Access networks are also often overloaded: they were provisioned on the assumption that users would never demand peak capacity simultaneously, and now that we all work, study, socialize, and play from home, those oversubscription assumptions no longer hold.
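To make "jitter" concrete, here is a small sketch that estimates it from round-trip-time samples using the smoothed estimator defined in RFC 3550 (RTP); applying that formula to RTT deltas rather than RTP transit times, and all sample values below, are assumptions for illustration:

```python
# Jitter (latency variation) estimated with the RFC 3550 smoothing rule:
# J += (|D| - J) / 16 for each new difference D between adjacent samples.

def jitter_estimate(rtt_samples_ms):
    """Return a running jitter estimate over RTT samples (milliseconds)."""
    jitter = 0.0
    for prev, cur in zip(rtt_samples_ms, rtt_samples_ms[1:]):
        delta = abs(cur - prev)
        jitter += (delta - jitter) / 16.0
    return jitter

# Two hypothetical links with similar average latency...
stable_link    = [20, 21, 20, 19, 20, 21, 20, 20]
congested_link = [5, 40, 12, 55, 8, 35, 10, 45]

print(f"stable:    {jitter_estimate(stable_link):.2f} ms")
print(f"congested: {jitter_estimate(congested_link):.2f} ms")
# The congested link's much higher jitter is what makes a call stutter,
# even when its average latency looks acceptable.
```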

3) Transit networks: This infrastructure is the complicated set of internet, network, and service provider equipment that makes up the Internet fabric, acting as the bridge between individual company networks. Because this equipment does not perform the feature-rich services that access and edge networks do, it is rarely the bottleneck in the chain.
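A back-of-the-envelope way to see this is to attribute latency to segments from traceroute-style cumulative RTTs. The hop labels and numbers below are invented for illustration; a real analysis would map hop IPs to networks via ASN lookups:

```python
# Attribute latency to network segments from cumulative per-hop RTTs
# (as reported hop by hop in a traceroute-style measurement).

hops = [  # (segment, cumulative RTT in ms) - illustrative numbers
    ("home router",        2),
    ("access network",    14),
    ("transit backbone",  22),
    ("content edge / DC", 28),
]

prev = 0
for name, rtt in hops:
    print(f"{name:18s} +{rtt - prev:2d} ms (cumulative {rtt} ms)")
    prev = rtt
# In this sketch the transit segment adds relatively little: consistent
# with the point above, the middle of the Internet is rarely the bottleneck.
```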

4) Geo-located, high-speed content edge: Improving latency is one thing; improving latency at scale is the real challenge. A content edge helps servers and end-user devices respond to requests for frequently accessed content as quickly as possible. Because the edge is a cache and distribution layer sitting in front of the content and data, it reduces upstream traffic by storing the content and applications that end-users request most often. It can also be location-aware: a content delivery network is a mesh of networked devices with access to replicated data, parallelizing requests to improve both scale and latency.
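As a minimal sketch of the edge's caching role, here is a tiny least-recently-used cache. EdgeCache and fetch_from_origin are hypothetical names, and real CDNs layer far more on top (TTLs, invalidation, geo-routing):

```python
from collections import OrderedDict

# Frequently requested objects are served from a nearby edge node instead
# of a distant origin. fetch_from_origin() stands in for the slow path.

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> cached bytes, in recency order

    def get(self, key, fetch_from_origin):
        if key in self.store:
            self.store.move_to_end(key)      # mark as recently used
            return self.store[key]           # fast path: nearby edge node
        value = fetch_from_origin(key)       # slow path: origin round trip
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return value

cache = EdgeCache(capacity=1000)
logo = cache.get("/static/logo.png", lambda k: b"...bytes from origin...")
logo = cache.get("/static/logo.png", lambda k: b"unused")  # served from edge
```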

5) High-capacity data centers: Most cloud services, including video conferencing, carry user traffic to the cloud service’s data centers. As traffic arrives, it must be routed and load-balanced to the appropriate servers, which process incoming requests, fetch data, and respond to users in the shortest possible time. Data centers must be designed meticulously: distributing user requests evenly, scaling out to avoid latency variation as usage grows, caching to improve response times, layering fast storage, and performing all of these functions securely, with high throughput, the lowest possible latency, and predictable network, storage, and compute behavior. Users depend on the private and public cloud infrastructure serving these functions.
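One common technique for that routing/load-balancing step is consistent hashing, which keeps request distribution stable as servers are added or removed. The sketch below is a simplified illustration, not any particular cloud's implementation:

```python
import bisect
import hashlib

# Consistent hashing spreads requests across servers so that when the
# fleet scales out (or a server fails), only a small fraction of users
# are remapped, rather than reshuffling everyone.

class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (hash, server)
        for server in servers:
            for i in range(vnodes):          # virtual nodes even out load
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def route(self, request_key):
        """Pick the first server clockwise from the request's hash."""
        h = self._hash(request_key)
        i = bisect.bisect(self.ring, (h, ""))
        return self.ring[i % len(self.ring)][1]

ring = ConsistentHashRing(["server-a", "server-b", "server-c"])
print(ring.route("user-1234"))  # the same user lands on the same server
```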

This might seem like a major-league investment just to make video calls more pleasant, but it is what users care about: it improves the experience of remote work and provides a meaningful, viable alternative to the very expensive brick-and-mortar office. High lag is not just an annoyance; it slows down meetings, degrades the user experience, hurts productivity, and increases costs. Workers are discouraged from collaborating when meetings become frustrating and disruptive rather than a natural transition from working face-to-face. A high-capacity, low-latency infrastructure of services (access, network, storage, distribution) is the promise of a highly connected world: a safe and productive one.