The Importance Of Correcting Packet Loss In VDI

David Greenfield

March 30, 2011

Recently, on the LinkedIn WAN optimization professionals group, I participated in a conversation about whether virtual desktop infrastructure (VDI) is ready for the WAN. Face it: delivering responsive VDI over the WAN is going to be a challenge. One of the interesting points that came up was the importance of correcting for packet loss when considering WAN optimizers.

Riverbed argued heavily that special features aimed at fixing packet loss should not be a significant factor when considering a WAN optimizer. Packet loss is "mostly a technical distraction that sounds scary and is handled by TCP for the vast majority of people the vast majority of the time," says Steve Smoot, Riverbed's VP of technical operations.

At first glance, I dismissed Mr. Smoot's comments as mere vendor positioning. After all, I spent a decade tracking carrier services, during which countless network engineers complained to me about congestion problems on the Net. What's more, Riverbed built its brand on compensating for packet loss (and other networking problems) by aggressively retransmitting TCP. No wonder Mr. Smoot would be opposed to requiring additional packet loss techniques.

The problem, though, is that TCP retransmission won't address the challenges faced by today's real-time applications. Many such applications--VoIP, video and, to some extent, VDI (VMware's, most notably)--use UDP, not TCP. Improving TCP flows on a link will have some impact on other flows, but can't address problems specific to the UDP flows. The point was made in the discussion that today's carrier networks are stable enough to obviate the need for specific packet loss compensation for UDP. Are vendors downplaying the packet loss problem for their own gain, or do they have a legitimate point?
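
A toy model makes the distinction concrete. The sketch below (Python, with assumed loss and timing numbers, not any vendor's implementation) shows why retransmission is cold comfort for real-time traffic: TCP converts loss into delay, while UDP converts it into gaps, and on a 20ms voice frame clock, a retransmitted packet arrives too late to play out anyway.

```python
import random

random.seed(42)

LOSS_RATE = 0.05        # assumed 5 percent random loss, for illustration
RTT_MS = 80             # assumed round-trip time; each retransmission costs ~1 RTT
FRAME_INTERVAL_MS = 20  # a typical voice stream sends one packet every 20 ms
PACKETS = 1000

late = 0   # TCP packets that arrived, but at least one RTT behind schedule
lost = 0   # UDP packets that never arrived at all

for _ in range(PACKETS):
    # TCP-like delivery: retransmit until the packet gets through.
    retries = 0
    while random.random() < LOSS_RATE:
        retries += 1
    if retries * RTT_MS > FRAME_INTERVAL_MS:
        late += 1

    # UDP delivery: one shot; a drop is simply a gap in the stream.
    if random.random() < LOSS_RATE:
        lost += 1

print(f"TCP: 0 packets lost, but {late} of {PACKETS} arrived too late for real time")
print(f"UDP: {lost} of {PACKETS} packets lost outright")
```

Either way, the receiver has to cope: retransmitted data usually misses its playout deadline, which is exactly why real-time applications run over UDP and lean on concealment or error correction rather than retransmission.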

To find out, I went back to my sources at some of the major pharmaceutical and manufacturing firms to hear about their experiences on their own global WANs. I also checked the Internet Health Report several times, a pretty good measure of various network statistics over the past 24 hours.

In all my conversations, I heard a fairly consistent story. My sources hadn't noted any significant packet loss on their private networks. As for the public Internet, NTT aside, packet loss within a given network had been very low--sub-1 percent. NTT has shown packet loss at times close to 5 percent, likely because of the events in Japan. (It's a wonder, actually, that packet loss wasn't higher.)

The bigger challenge came at the exchanges, where providers buffer incoming traffic for insertion into their own networks. Packet loss at these exchanges can be significantly higher, reaching above 1 percent and peaking over 5 percent. As of this writing, Cogent-to-AT&T, for example, shows a packet loss rate of 1.32 percent.
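
You don't have to take published exchange statistics on faith, either. A quick way to spot-check loss across a given path is to script around ping; here is a minimal sketch (assuming a Unix-style ping whose summary line reads "X% packet loss"):

```python
import re
import subprocess

def packet_loss(host: str, count: int = 100) -> float:
    """Approximate packet loss (percent) to host, parsed from ping's summary."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=False,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

if __name__ == "__main__":
    # Hypothetical target; point it at a host across the exchange you care about.
    print(f"{packet_loss('example.com'):.1f} percent loss")
```

Sampling at different times of day matters more than sample size here; exchange congestion comes and goes, and a single measurement can miss the peaks entirely.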

To put this in context, though, many voice codecs today include error correction and jitter compensation technologies, so they work on even lossier networks. "Any decent system will tolerate a 5 percent loss with no issues," says Cullen Jennings, a distinguished engineer at Cisco Systems who's been heavily involved in Cisco's VoIP and video communication standardization effort.
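
The error correction Jennings alludes to is often simple forward error correction (FEC). Here's a minimal sketch of one common flavor, not tied to any particular codec: append an XOR parity packet to each small group of voice frames, so any single loss within the group can be rebuilt at the receiver without a retransmission.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group: list) -> bytes:
    """Parity packet: XOR of every packet in the group."""
    return reduce(xor_bytes, group)

def recover(received: dict, parity: bytes, group_size: int) -> bytes:
    """Rebuild the single missing packet in a group from the parity."""
    missing = [i for i in range(group_size) if i not in received]
    assert len(missing) == 1, "XOR parity can repair at most one loss per group"
    return reduce(xor_bytes, received.values(), parity)

# Usage: four 20-byte voice frames, one parity packet, one frame lost in transit.
frames = [bytes([i] * 20) for i in range(4)]
parity = make_parity(frames)
arrived = {0: frames[0], 1: frames[1], 3: frames[3]}   # frame 2 was dropped
assert recover(arrived, parity, 4) == frames[2]
```

The cost is one extra packet per group of four; the payoff is that a single drop triggers no retransmission and causes no audible gap.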

At first glance, these numbers would appear to support Mr. Smoot's conclusion that special packet loss compensation technology should not be a factor when selecting a WAN optimizer. If organizations can keep their offices on a single backbone, average packet loss would not seem to be an issue.

But enterprises often aren't able to put all of their offices on a single provider, especially when those offices are located around the world. IT managers need the freedom to choose the local provider that's right for them--not just for technical reasons, but also for fiscal and operational ones. "Packet loss rates differ from region to region," notes David Schwartz, CTO of XConnect, a neutral provider of Interconnect 2.0 services. "For instance, loss rates are probably lower in Japan and Korea than in the U.S."

XConnect's network touches the Internet at dozens of points around the globe, giving Schwartz unique insight into real-world traffic conditions. And while packet loss rates might be better in more industrialized countries, infrastructure demands are also higher, heightening the effect of packet loss.

"With the ever-increasing set of applications engaging in real-time audio and video communications, we can expect network based packet loss to be a critical factor in the future growth of this industry." What's more, packet loss statistics like these, and particularly the ones cited by carriers, can be enormously suspect. Many service providers only cite statistics for their carefully managed backbones, not for the last mile, where loss is likely to be greatest.

Even when packet loss statistics are measured, they're averaged over a month or even a year, diluting the true impact of a single event on a user's experience. In other words, they measure average loss without accounting for the damaging effects of peak loss. "The fine print on the 5 percent packet loss is that if the loss is randomly distributed, 5 percent is no big deal. But, typically, loss is not randomly distributed. Often you lose a clump of packets. So if you lose three packets in a row, voice can sound like crap even though it is less than 5 percent packet loss," says Jennings.
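
That fine print is easy to demonstrate. The sketch below compares purely random loss against a simple two-state (Gilbert-style) burst model, with parameters chosen so both streams average out near 5 percent; the difference shows up in the longest run of consecutive drops.

```python
import random

random.seed(1)
N = 10_000

def max_burst(losses: list) -> int:
    """Length of the longest run of consecutive losses."""
    best = run = 0
    for lost in losses:
        run = run + 1 if lost else 0
        best = max(best, run)
    return best

# Purely random (Bernoulli) loss at 5 percent.
random_loss = [random.random() < 0.05 for _ in range(N)]

# Bursty loss: a two-state Gilbert-style model. The transition probabilities
# are illustrative, picked so the long-run average also lands near 5 percent.
bursty_loss, bad = [], False
for _ in range(N):
    if bad:
        bad = random.random() < 0.75       # tend to stay in the lossy state
    else:
        bad = random.random() < 0.0125     # rarely enter it
    bursty_loss.append(bad)

for name, losses in (("random", random_loss), ("bursty", bursty_loss)):
    rate = 100 * sum(losses) / N
    print(f"{name}: {rate:.1f}% average loss, "
          f"longest burst = {max_burst(losses)} packets")
```

Both streams report a similar average, but the bursty one concentrates its drops into runs many packets long, exactly the clump that a codec's concealment can't paper over.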

Dave Taht agrees. "You want to avoid bursty packet loss. Losing five packets in a row results in a major dropout. Most codecs can compensate for three OK, and two fairly well, but it is codec-dependent." Taht is the CEO of Teklibre, a research and development organization advocating for Internet development within the Third World. Taht developed one of the first embedded, Linux-based wireless routers.

There's also the problem of variance in packet delay. "If most of your packets are taking 60 to 75ms to reach the other end, and suddenly one packet takes 140ms, that packet arrives so late that it is close to the same as lost from a voice-quality point of view," Taht says. And that's just voice; video, we know, is far more sensitive. "We've seen video degradation at .25 percent packet loss," says Damon Ennis, VP of product management at Silver Peak Systems.
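
That is the jitter-buffer problem in miniature. A minimal sketch (all delay numbers assumed, echoing the quote above): the receiver plays each frame at a fixed deadline, and any packet that shows up after its slot counts as lost, even though the network never dropped it.

```python
import random

random.seed(7)

FRAMES = 1000
PLAYOUT_DELAY_MS = 100  # assumed: play each frame 100 ms after it was sent

late = 0
for _ in range(FRAMES):
    # Assumed one-way delay: usually 60-75 ms, but 2 percent of packets
    # straggle in at around 140 ms.
    delay = random.uniform(60, 75) if random.random() < 0.98 else 140.0
    if delay > PLAYOUT_DELAY_MS:
        late += 1  # missed its playout slot: as good as lost to the codec

print(f"{late} of {FRAMES} frames were effectively lost "
      f"({100 * late / FRAMES:.1f} percent), with zero actual packet loss")
```

A deeper buffer catches more of the stragglers, but only by adding end-to-end delay, which is its own enemy in interactive voice and video.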

Ultimately, the raison d'être of corporate applications isn't that they look great, which they might, or even that they offer the most bells and whistles, which they often don't. No, it's that, to the extent possible, enterprise applications work--every day, any day. They're dependable, even under adverse conditions. Whether an application runs on TCP or UDP matters not one whit to the CEO looking to place a call or run a video conference. He or she only cares whether the service will perform and perform well.

All of this means that WAN optimizers are more than simple throttles sitting at the WAN edge. They are the mechanisms by which network engineers ensure--to the extent that it's possible--that IP applications can live up to the appellation "enterprise-grade." It follows, then, that buyers of those WAN optimizers should pay close attention to how those devices correct for underlying network problems, such as packet loss and jitter, across all of their applications.
