Packet Loss Vs. Latency: Analyzing the Impact
I do a lot of network performance troubleshooting, and in most cases the root cause turns out to be packet loss or excessive latency. In this video, I explain how to use Wireshark to analyze both problems.
Packet loss is simply the failure of a packet to arrive at its destination. It can be caused by a variety of factors, such as corrupted frames, RF interference, duplex mismatches, dirty fiber connectors, oversubscribed links, and routing issues.
Packet loss causes performance problems because TCP-based protocols must wait and retransmit lost segments. The key word here is "wait," since waiting means you are no longer transmitting. For example, a 500-millisecond delay on a 10 Mbps link means you lost the opportunity to transmit 5 Mb of data within that 500 ms window. If your application is UDP-based, all bets are off and the application decides what to do. I've seen UDP-based applications react to packet loss by terminating the connection, resending data, or corrupting data. With a VoIP application, you'll hear echo and distorted audio.
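The arithmetic behind that example can be sketched in a few lines. This is a minimal illustration; the link speed and delay values are the ones from the example above, not measurements from any capture.

```python
def lost_opportunity_bits(link_bps: float, delay_s: float) -> float:
    """Bits that could have been transmitted during the delay."""
    return link_bps * delay_s

link_bps = 10_000_000   # 10 Mbps link
delay_s = 0.5           # 500 ms retransmission wait

bits = lost_opportunity_bits(link_bps, delay_s)
print(f"{bits / 1_000_000:.1f} Mb of transmit opportunity lost")  # 5.0 Mb
```

The same math scales to any link: double the link speed or double the wait, and you double the data you never got to send.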
Latency is simply another word for delay, and to be precise I should say excessive latency, since some latency is always present. Excessive latency is an issue for a few reasons. First, like packet loss, it robs the sender of the opportunity to transmit data. A good rule of thumb: for a window-limited TCP transfer, throughput falls roughly in proportion to round-trip time, so 10% more latency typically means about 10% less performance.
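The reason latency caps throughput is that a window-limited TCP sender can have at most one receive window of data in flight per round trip, so throughput is bounded by window / RTT. A minimal sketch of that bound, using an assumed classic 64 KB receive window (no window scaling) and illustrative RTT values:

```python
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on window-limited TCP throughput: window / RTT."""
    return window_bytes * 8 / rtt_s

window = 65_535  # classic 64 KB receive window, no window scaling
for rtt_ms in (10, 50, 100):
    bps = max_throughput_bps(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> at most {bps / 1_000_000:.1f} Mbps")
```

Note the inverse relationship: ten times the RTT means one tenth the ceiling, regardless of how fast the link itself is.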
When the latency stems from the network design or the physical distance between devices, the delta times in a capture will be fairly consistent. If a delay is excessive enough, the sender may conclude the packet was never received and retransmit it, making the problem look like packet loss.
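That "sender gives up and retransmits" behavior is governed by TCP's retransmission timeout (RTO), which is derived from smoothed RTT measurements as described in RFC 6298. The sketch below shows the standard SRTT/RTTVAR update; the RTT samples are illustrative assumptions, with a latency spike at the end:

```python
def rto_from_samples(rtt_samples_s, alpha=0.125, beta=0.25):
    """Compute an RTO from a series of RTT samples per RFC 6298."""
    srtt = rttvar = None
    for r in rtt_samples_s:
        if srtt is None:
            # First measurement initializes the estimators.
            srtt, rttvar = r, r / 2
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
            srtt = (1 - alpha) * srtt + alpha * r
    # RFC 6298 floors the RTO at 1 second.
    return max(1.0, srtt + 4 * rttvar)

samples = [0.080, 0.085, 0.090, 0.300]  # steady RTTs, then a spike
print(f"RTO = {rto_from_samples(samples):.3f} s")
```

Any packet unacknowledged for longer than the computed RTO is retransmitted, which is exactly why a badly delayed packet and a lost packet look the same in a trace.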