Larry Roberts Says 'Net Broken, Proposes Flow Management
July 7, 2009
Lawrence Roberts, the illustrious engineer who led the development of ARPAnet, is back with a provocatively titled IEEE Spectrum article, "The Internet Is Broken". The piece makes the case for his long-standing proposal to ease Internet congestion with a bandwidth-management technique called flow management.
Roberts has a big chunk of self-interest here - you could also say he's putting his money where his mouth is - because flow management is the raison d'etre for Anagran Inc., the two-year-old startup of which he's CEO, president, and chairman. And Roberts's views merit attention because of his place in Internet history.
So what exactly is this "flow management" idea he's pushing? Before we get to that, let's look at the problem he purports to solve. This is all complex stuff, and I'm being extremely reductionist, so I commend you to the full article when you're done with this post. Roberts has several complaints with the current state of affairs, all more or less pointing toward the succinct thought that traditional routers suck. (How's that for reductionist?)
First, he says that traffic is exploding, mostly in the form of videostreams from videoconferencing, Skype, and YouTube. He adds that P2P networks are sucking up a lot of network bandwidth. ("P2P participants may constitute only 5 percent of the users in some networks, while consuming 75 percent of the bandwidth.")
Then he laments the fact that traditional IP packet routers can't guarantee good QoS for video streams. (Since those streams are becoming more important, because of videoconferencing and Unified Communications, this is a really big deal.)
As a result, he writes, "things are already dire for many Internet service providers and network operators. Keeping up with bandwidth demand has required huge outlays of cash to build an infrastructure that remains underutilized." That last clause refers to average vs. peak issues. I'm not even going to get into the tangent on whether bandwidth demand is at this point a terminally serious problem that won't yield to traffic shaping, or an opportunity for ISPs to extract extra monies from their customers.
Anyway, so back to the main thread. For all of the above reasons, along with the fact that traffic has been doubling every year since 1970, Roberts proposes that "flow management can solve this capacity crunch."
What is flow management? Here's where it gets a little funky, since much of Roberts' literature focuses on its effect (or more precisely, how it lets performance degrade more gracefully under conditions of network stress) rather than on precisely how it works, because the latter explanation is procedural and lengthy.
To the former point, flow management purportedly discards packets more effectively, meaning selectively rather than en masse when crunch time comes. Or, as Roberts explains it in this interesting Internet Evolution post from 2008: "What is really necessary is to detect just the flows that need to slow down, and selectively discard just one packet at the right time, but not more, per TCP cycle. Discarding too many will cause a flow to stall."
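To make that a bit more concrete, here's a toy Python sketch of the kind of selective discard Roberts is describing. The rate estimate, the RTT constant, and the "drop one packet from the heaviest flow" heuristic are my own simplifications for illustration, not Anagran's actual algorithm:

```python
import time
from collections import defaultdict

# Toy sketch of selective, per-flow packet discard (not Anagran's algorithm):
# under congestion, drop a single packet from the heaviest flow at most once
# per "TCP cycle" (roughly one RTT), instead of tail-dropping packets from
# every flow at once.

RTT_ESTIMATE = 0.1          # assumed TCP cycle length, in seconds
CONGESTION_THRESHOLD = 0.9  # fraction of link capacity that triggers drops

class SelectiveDropper:
    def __init__(self, link_capacity_bps):
        self.capacity = link_capacity_bps
        self.flow_rates = defaultdict(float)   # flow id -> bytes/s (smoothed)
        self.last_drop = defaultdict(float)    # flow id -> time of last drop

    def on_packet(self, flow_id, size_bytes, offered_load_bps, now=None):
        """Return True to forward the packet, False to discard it."""
        now = now if now is not None else time.monotonic()
        # crude exponentially weighted per-flow rate estimate
        self.flow_rates[flow_id] = (0.9 * self.flow_rates[flow_id]
                                    + 0.1 * size_bytes / RTT_ESTIMATE)

        if offered_load_bps < CONGESTION_THRESHOLD * self.capacity:
            return True  # no congestion: forward everything

        # congestion: only the heaviest flow is a candidate for a drop
        heaviest = max(self.flow_rates, key=self.flow_rates.get)
        if flow_id == heaviest and now - self.last_drop[flow_id] > RTT_ESTIMATE:
            self.last_drop[flow_id] = now
            return False  # discard exactly one packet this TCP cycle
        return True
```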
As to how it works, the quickest way to digest it is to say that his technique basically relieves the device (call it a router, call it an Anagran IP traffic manager) of examining every packet's destination address. Flow management performs a fancy algorithmic analysis of the first packet, sticks the result in a hash table, and then, by doing quick hash lookups on subsequent packets, determines whether they're part of the same flow. If so, voila, they're routed. Er, excuse me, they're switched directly to the output port. (Again, go to the IEEE article for a nice chart.)
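Here's a toy Python sketch of that first-packet/flow-table idea. The packet fields and the route_lookup() stand-in are mine, purely for illustration; they're not anything from the article or Anagran's implementation:

```python
# Toy sketch of flow classification: hash the first packet's 5-tuple into a
# flow table, then fast-path later packets of the same flow with a single
# lookup instead of a full per-packet routing decision.

flow_table = {}  # 5-tuple -> output port

def route_lookup(dst_ip):
    """Placeholder for a full longest-prefix-match routing decision."""
    return hash(dst_ip) % 8  # pretend the box has 8 output ports

def forward(pkt):
    key = (pkt["src_ip"], pkt["dst_ip"],
           pkt["src_port"], pkt["dst_port"], pkt["proto"])
    port = flow_table.get(key)
    if port is None:
        # first packet of the flow: do the expensive work once
        port = route_lookup(pkt["dst_ip"])
        flow_table[key] = port
    # subsequent packets: switched straight to the remembered output port
    return port
```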
Summary
In conclusion, I found Roberts's article really interesting, but I don't know enough about the fine points of routing (versus flow management) to judge his argument. In the Internet Evolution post, there was an interesting response arguing that all ISPs already use some form of traffic shaping to deal with congestion, so flow management isn't really necessary.
Another big issue pertaining to Roberts' approach is: where does this stuff (aka his appliance) sit on the network, and how does it not in and of itself become a bottleneck?
Lastly, however, it's widely recognized that TCP -- being a generalized protocol -- does have limitations. And any change to the way TCP networks work would be a major one. Maybe it's time for such an upheaval. On the other hand, perhaps the safer route is to refine various intelligent packet-dropping schemes, in search of better guaranteed QoS as the 'Net (all networks, in fact) tilts toward ever higher usage levels for video and UC streams.