Network Computing is part of the Informa Tech Division of Informa PLC


7 Common Network Latency Culprits

  • Latency is the time delay between when an action is initiated and when it completes. For enterprise devices communicating over a data network, latency can be caused by any number of factors that may or may not be network-related. Yet no matter how latency forms, the result is the same: application performance is degraded or the application becomes unusable. Modern applications and architectures, including those in the cloud, are increasingly sensitive to latency because they depend on real-time communication between client and server. Any slowdown or disruption in service will be abundantly clear to end users. That's why it's so important to be able to <a href="https://www.networkcomputing.com/networking/troubleshooting-network-latency-6-tools/242797888">troubleshoot and identify</a> the most common network latency culprits quickly.

    There are plenty of non-network causes of application latency. A misconfigured or faulty DNS server, for example, can severely degrade application performance; while not a true network latency issue, it certainly looks like one. Another common non-network problem is a backend database that is poorly optimized or over-utilized. This, too, will look and feel as though something is wrong with the network when the fault actually lies in a completely different part of the enterprise architecture. Finally, latency can seem like the network's fault when the real problem is an end-user device with too little memory or too few CPU cycles to execute application instructions in a reasonable timeframe.
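
    One quick way to rule DNS in or out as the culprit is to time name resolution separately from the rest of a request. A minimal sketch using Python's standard library (the function name and threshold are illustrative, not from any particular tool):

```python
import socket
import time

def time_dns_lookup(hostname, port=80):
    """Return wall-clock seconds spent resolving a hostname to addresses."""
    start = time.monotonic()
    socket.getaddrinfo(hostname, port)  # performs the DNS lookup
    return time.monotonic() - start

# If resolution alone takes hundreds of milliseconds, the slowdown is
# likely a DNS problem rather than true network latency.
elapsed = time_dns_lookup("localhost")
```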

    That said, our list of seven common network latency culprits focuses squarely on the components that move data from point A to point B. This includes the physical cabling and every network device between the source and destination IP address: routers, switches, and Wi-Fi access points, as well as in-line devices such as application load balancers and security appliances like firewalls and Intrusion Prevention Systems (IPS). In short, anything a typical network administrator is responsible for keeping data flowing through. It's here that seven common latency offenders make regular appearances. If you can quickly identify and remediate them, applications are far more likely to operate as intended.

  • Number of hops or distance between source and destination

    By far the biggest and most difficult-to-remediate culprit of network latency is the number of hops between communicating devices, or the physical distance between them. Because networks carry data as electrical signals over copper cabling, light over fiber optics, or radio waves over the air, longer distances simply take more time to traverse. Hopping through Layer 2 and Layer 3 switches and routers also slows communication: when a switch performs a MAC address table lookup, or a router consults its routing table to determine a packet's next destination, those operations take time to complete. Thus, the more device hops between communicating devices, the more lookups, and the more time it takes to deliver data.
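
    The distance component can be estimated back-of-the-envelope: light in fiber travels at roughly two-thirds the speed of light in a vacuum, about 200,000 km/s. A sketch (the per-hop lookup cost is an assumed illustrative figure, not a measured value):

```python
# Light in fiber propagates at roughly 200,000 km per second (~2/3 of c).
FIBER_KM_PER_SEC = 200_000

def propagation_delay_ms(distance_km):
    """One-way delay from distance alone, ignoring devices and queuing."""
    return distance_km / FIBER_KM_PER_SEC * 1000

def path_latency_ms(distance_km, hops, per_hop_lookup_ms=0.05):
    """Propagation delay plus an assumed table-lookup cost per device hop."""
    return propagation_delay_ms(distance_km) + hops * per_hop_lookup_ms

# Roughly 5,600 km of cable (e.g., a transatlantic route) contributes
# about 28 ms one way before any device or queuing delay is added.
base = propagation_delay_ms(5600)
with_hops = path_latency_ms(5600, hops=12)
```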

    (Image: Pixabay)

  • Bottlenecks

    It’s common for some network links to have the capacity to transport more data than others. On most networks, the access layer provides 1 Gbps Ethernet to each desktop or end device. Uplinks can also be 1 Gbps Ethernet, or aggregated Gigabit links forming 2, 4, or 8 Gbps uplinks. Beyond that, larger networks use 10, 40, and 100 Gbps Ethernet for uplinks and backbones, and connectivity to remote sites may consist of private WAN circuits or VPN tunnels across the Internet. At any one point on a network, the amount of data sent or received can exceed the maximum throughput available. This is known as a bottleneck. When bottlenecks occur and there is no congestion management in place, packets are either dropped or queued in a buffer, which delays transmission. With TCP, dropped segments must be detected and retransmitted by the sender. This retransmission and buffering process can cause significant latency.
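
    The mismatch is easy to quantify with serialization delay, the time it takes to clock a payload onto the wire at a given link rate. A minimal sketch with illustrative numbers:

```python
def serialization_delay_ms(payload_bytes, link_bps):
    """Milliseconds to transmit a payload at a given link rate."""
    return payload_bytes * 8 / link_bps * 1000

# A 1 MB burst drains off a 10 Gbps backbone in ~0.8 ms, but the same
# burst needs ~8 ms on a 1 Gbps access link. The tenfold difference is
# what piles up in buffers (or gets dropped) at the bottleneck.
payload = 1_000_000  # 1 MB
fast_ms = serialization_delay_ms(payload, 10_000_000_000)  # 10 Gbps
slow_ms = serialization_delay_ms(payload, 1_000_000_000)   # 1 Gbps
```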

    (Image: Pixabay)

  • QoS that's non-existent or ineffective

    As mentioned in the previous slide, bottlenecks are a common source of network latency. If properly configured, quality of service (QoS) can prioritize critical data so it is far less likely to be buffered or dropped by interfaces experiencing congestion. However, if QoS policies are misconfigured, or not configured at all, the interface treats all data the same during congestion and drops both critical and non-critical packets.
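
    Conceptually, QoS classification and scheduling work like a priority queue: packets marked as latency-sensitive jump ahead of bulk traffic when the interface is congested. A toy strict-priority scheduler (class names and priority values are made up for illustration; real QoS is configured on network hardware, not in application code):

```python
import heapq

class PriorityScheduler:
    """Toy strict-priority scheduler: lower priority number transmits first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue("bulk-backup", priority=3)
sched.enqueue("voip-frame", priority=0)
sched.enqueue("web-page", priority=1)
# The VoIP frame is transmitted first even though it arrived second.
```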

    (Image: Pixabay)

  • Network device CPU/Memory spikes

    We all know that running out of memory or CPU resources on a desktop computer can cause significant processing delays. The same can happen to network components, except that a router or switch suffering from high CPU or low memory can impact a large number of users at once. Enterprise networks that transport tens or hundreds of Gbps must ensure that the networking hardware's CPU and memory can sustain that volume of simultaneous transmission.

    (Image: Pixabay)

  • Suboptimal routing

    To remain operational when network components or links fail, enterprise infrastructures are designed to be fully redundant. However, misconfigurations in dynamic routing or spanning tree protocols can create suboptimal paths to a destination. This suboptimal path selection increases the time it takes for data to arrive compared with the optimal path.
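
    Routing protocols such as OSPF pick paths by lowest total link cost, so a single wrong metric can steer traffic onto a longer route. A minimal Dijkstra sketch over a hypothetical three-node topology (node names and costs are invented for illustration):

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra over a dict-of-dicts adjacency map; returns total link cost."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")

# A misconfigured metric of 100 on the direct A-C link (instead of, say,
# 1) makes the two-hop path through B look cheaper, so traffic detours.
graph = {
    "A": {"B": 10, "C": 100},
    "B": {"A": 10, "C": 10},
    "C": {"A": 100, "B": 10},
}
chosen_cost = shortest_path_cost(graph, "A", "C")  # takes A -> B -> C
```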

    (Image: Pixabay)

  • Wired vs. Wi-Fi

    Latency on wired connections is far more consistent than on Wi-Fi links. With Wi-Fi, latency can increase or decrease based on the distance between the transmitting and receiving devices, any physical obstructions between the two, and how congested the wireless network is. Thus, high latency is a far more common occurrence on Wi-Fi than on devices connected via more stable wired links.
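
    The difference shows up less in average latency than in its variation, i.e., jitter. One common way to express jitter is the standard deviation of round-trip-time samples; the RTT values below are made up to illustrate the pattern:

```python
import statistics

def jitter_ms(rtt_samples_ms):
    """Jitter expressed as the standard deviation of RTT samples."""
    return statistics.stdev(rtt_samples_ms)

# Illustrative (not measured) samples: wired RTTs cluster tightly, while
# Wi-Fi RTTs swing with distance, obstructions, and channel contention.
wired_rtts = [1.1, 1.2, 1.1, 1.3, 1.2]
wifi_rtts = [3.0, 9.5, 4.2, 15.8, 6.1]
```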

    (Image: Pixabay)

  • Problematic in-line devices

    Lastly, network latency can be the fault of other in-line infrastructure components. "In-line" means that data flows must pass through an intermediary device other than a router or switch to reach their destination. Common examples include network load balancers, firewalls, and intrusion prevention systems (IPS). Misconfigurations or malfunctions on these devices can cause considerable latency problems on a network.

    (Image: Pixabay)