It seems that every few years there's yet another prognosticator warning that the Internet is about to collapse. Once it was the stellar growth in bandwidth demand driven by the phenomenal increase in Internet-connected devices. At other times, it was the lack of Net neutrality (see this video). At still other times, it was sinister attacks on BGP or the fact that we've run out of IPv4 addresses.
So, here we are again, and yet another problem has been unearthed that could dismantle the Net. It's BufferBloat, and while the hype might be hot, the challenge is very real. So much so that it's got some of the biggest guns in Internet research today thinking about the issue. And it's got me thinking about an often-overlooked buying criterion that WAN optimization buyers need to be watching.
The chief preacher of the BufferBloat gospel is Jim Gettys, a researcher at Alcatel-Lucent Bell Labs. Gettys presented his findings to the IETF last month and described how he identified the problem. He first noticed the issue at home, where he was seeing extremely high latency despite being on a high-speed Internet connection.
After extensive research, he found that packets were being stored in various devices in his path--in transmit queues often used for traffic classification, in ring buffers within device drivers, and in some cases in the end nodes themselves. (He points, for example, to the Marvell hardware in the One Laptop Per Child system, which was found to hold four packets.) As packets were queued, latency mounted. To put that in perspective, if 256 packets were buffered on a 10Mbps line, that would equate to roughly 3 million bits (assuming 1,500-byte packets), or about a third of a second of delay.
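The arithmetic behind that figure is worth making explicit. A minimal sketch (the 256-packet buffer and 10Mbps line come from the example above; the 1,500-byte packet size is an assumption, standard Ethernet MTU):

```python
def queuing_delay(packets, packet_bytes=1500, link_bps=10e6):
    """Seconds of delay a full buffer adds in front of a link.

    Every buffered packet must be transmitted before a newly
    arriving packet, so the delay is simply the buffered bits
    divided by the link rate.
    """
    buffered_bits = packets * packet_bytes * 8
    return buffered_bits / link_bps

# 256 packets x 1,500 bytes x 8 bits = 3,072,000 bits on the wire
delay = queuing_delay(256)
print(f"{delay:.3f} s")  # → 0.307 s
```

Note that the delay scales with buffer size but inversely with link speed: the same 256-packet buffer on a 1Gbps link would add only ~3 ms, which is why oversized buffers hurt most on slower residential links.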
But delay is only part of the story. As Gettys showed, the added latency disrupts higher-layer protocols. TCP's Round Trip Time (RTT) estimator, for example, relies on timely delay and loss feedback to judge how much data it can safely have in flight; with bloated buffers inflating the RTT, TCP ends up trying to send too much data, so buffers fill further, increasing latency still more. You can read a detailed account of Gettys' efforts here.
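To see how bloated buffers poison that estimator, consider TCP's standard smoothed-RTT calculation (the exponentially weighted average defined in RFC 6298). A sketch with illustrative numbers of my own choosing, not from the article:

```python
def update_rtt(srtt, rttvar, sample, alpha=1/8, beta=1/4):
    """One RFC 6298 update step for TCP's RTT estimator.

    srtt   -- smoothed round-trip time estimate (seconds)
    rttvar -- RTT variance estimate (seconds)
    sample -- latest measured RTT (seconds)
    """
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    return srtt, rttvar

# Hypothetical scenario: a 20 ms path whose RTT jumps to 320 ms
# once upstream buffers fill with a third of a second of packets.
srtt, rttvar = 0.020, 0.010
for _ in range(20):
    srtt, rttvar = update_rtt(srtt, rttvar, 0.320)

rto = srtt + 4 * rttvar  # retransmission timeout grows with the bloat
```

After a couple of dozen bloated samples, the estimator converges on the inflated 320 ms figure, so TCP's loss-recovery timers stretch out accordingly: the sender reacts to congestion sluggishly, keeps pushing data, and the buffers stay full. That feedback loop is the heart of the BufferBloat problem.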