Hoping to counter claims that demand for residential bandwidth is growing rapidly, Karl Bode uncritically accepts some dubious analysis from file-sharer Robb Topolski:
“Complete congestion is a technical fantasy which only exists in the minds of people who do not understand TCP congestion control and how Additive Increase/Multiplicative Decrease (AIMD) in TCP congestion avoidance works,” he says. “AIMD allows a linear growth of bandwidth utilization until loss occurs, at which time an exponential reduction takes place. This slow-start, fast-fallback ensures congestion cannot cause gridlock.”
Topolski’s grasp of protocols is consistent with his grasp of arithmetic. Cutting a flow rate in half is not “an exponential decrease,” it’s a multiplicative one. Data links can become saturated by Internet traffic in two ways that aren’t mitigated by TCP congestion control: by increasing the number of TCP streams beyond the manageable limit for the data link, such that requests for bandwidth collide with each other, and by using UDP, which is completely uninvolved in the TCP sliding-window scheme. P2P subverts congestion control by opening excessive numbers of streams, and that’s why TCP RSTs manage it so effectively.
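To see why per-flow backoff doesn’t protect a link from a crowd of flows, here’s a minimal round-based sketch of AIMD dynamics. It’s a toy model, not real TCP: the capacity figure, the one-unit-per-round increase, and the loss rule are all simplifying assumptions chosen for illustration.

```python
# Toy AIMD sketch (idealized, round-based; not a model of real TCP).
# Each flow adds one unit of rate per round (additive increase) and
# halves its rate when the link overflows (multiplicative decrease).

CAPACITY = 100.0  # assumed link capacity, arbitrary units

def simulate(num_flows: int, rounds: int = 10_000) -> float:
    """Return the fraction of rounds in which the link is saturated."""
    rates = [1.0] * num_flows
    saturated = 0
    for _ in range(rounds):
        total = sum(rates)
        if total >= CAPACITY:                     # congestion: loss signaled
            saturated += 1
            rates = [r / 2 for r in rates]        # multiplicative decrease
        else:
            rates = [r + 1.0 for r in rates]      # additive increase
    return saturated / rounds

for n in (1, 10, 100):
    print(f"{n:3d} flows -> link saturated {simulate(n):.0%} of the time")
```

A single backing-off flow leaves the link idle most of the time; pile on enough flows and the aggregate hammers the link into saturation round after round, exactly because each flow’s individual halving barely dents the total.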
Larry Roberts, the designer of the ARPANET, explains it like this:
…P2P expands to fill any capacity. In fact, as I have been testing and modeling P2P I find it taking up even higher fractions of the capacity as the total capacity expands. This is because each P2P app. can get more capacity and it is designed to take all it can. In the Universities we have measured, the P2P grows to between 95-98% of their Internet usage. It does this by reducing the rate per flow lower and lower, which by virtue of the current network design where all flows get equal capacity, drives the average rate per flow for average users down to their rate. They then win by virtue of having more flows, up to 1000 per user.
So who are you going to believe, file-sharer Topolski or one of the icons of packet switching?
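To make Roberts’ arithmetic concrete, here’s a back-of-the-envelope sketch of per-flow fair sharing. The flow counts are hypothetical, chosen only to mirror his “up to 1000 per user” figure:

```python
# If the network splits capacity equally per *flow*, a user's share of
# the link is proportional to how many flows he opens. Numbers below
# (1000 flows, 49 other users with 2 flows each) are illustrative only.

def user_share(my_flows: int, other_flows: int) -> float:
    """Fraction of link capacity a user gets under per-flow fair sharing."""
    return my_flows / (my_flows + other_flows)

p2p = user_share(1000, 49 * 2)          # one P2P user, 1000 flows
web = user_share(2, 1000 + 48 * 2)      # one web user, 2 flows
print(f"P2P user: {p2p:.1%} of capacity")   # ~91%
print(f"Web user: {web:.2%} of capacity")   # ~0.18%
```

That’s the whole trick: per-flow fairness rewards whoever opens the most flows, so the heavy user “wins” without violating any single flow’s congestion control.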
Bode’s other error is to assume that demands for bandwidth are always nicely linear and orderly. The history of the Internet suggests otherwise. There are periods in which a new application reaches a tipping point and demand for bandwidth on particular links increases rapidly. This has happened with P2P, whose demand for upstream ISP bandwidth is roughly a thousand times greater than web browsing’s. It’s an effect like “punctuated equilibrium” in evolution.
In normal times, bandwidth appetites grow steadily, but genuine innovations kick this growth into high gear for short periods of time.
Let’s use historical perspective, boys and girls, not anti-corporate hysteria, to analyze our tech phenomena.
Larry Roberts’ opinion is correct. P2P creates an overlay network layer, using flooding or a DHT (distributed hash table) to connect to other peers. This is not only harmful to the Internet but also costly and dangerous to ISPs, because flooding and DHT lookups do not follow the Internet’s default routing (BGP).
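A quick sketch of why the overlay ignores topology: Kademlia-style DHTs pick peers by XOR distance between hashed identifiers, which is essentially random with respect to network geography. The peer names and the lookup key below are hypothetical, purely for illustration.

```python
# Minimal Kademlia-style peer selection sketch (hypothetical node names).
# The "closest" peer is the one nearest in hashed ID space, not the one
# nearest on the physical network -- so the overlay's next hop may cross
# the globe even when a peer inside the local ISP holds the same data.

import hashlib

def node_id(name: str) -> int:
    """Hash a peer name into a 160-bit DHT identifier."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def closest_peer(target: int, peers: dict) -> str:
    """Pick the peer whose ID has the smallest XOR distance to target."""
    return min(peers, key=lambda p: peers[p] ^ target)

peers = {name: node_id(name) for name in
         ("peer-local-isp", "peer-tokyo", "peer-amsterdam", "peer-sydney")}
key = node_id("some-popular-file-chunk")
print("overlay routes the request to:", closest_peer(key, peers))
```

Because the choice is driven by hash distance alone, the traffic pattern bears no relation to the BGP paths ISPs actually engineer and pay for.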