The Internet’s Big Idea (part 2)

In the first part of this series I tried to explain that pooling communications bandwidth is the central fact of Internet architecture, and that the problem raised by pooling – fairness – hasn’t been resolved in the general sense. The Internet has a mechanism that prevents it from becoming unstable, but that mechanism (TCP backoff) doesn’t ensure that users have equal access to bandwidth. In fact, the share of bandwidth each user gets is roughly proportional to how aggressively he asks for it, which in TCP terms means the number of connections he opens. The greedy flourish, in other words.
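
To make the “greedy flourish” concrete, here’s a toy simulation (my own sketch, not anything from the TCP specifications) of the additive-increase/multiplicative-decrease behavior behind TCP backoff. Two hypothetical users share a link; user A opens one flow, user B opens four. Backoff is fair per flow, not per user, so B ends up with roughly four times the bandwidth:

```python
CAPACITY = 100.0  # shared link capacity, in arbitrary rate units

# Two users sharing the link: A opens one TCP flow, B opens four.
flows = {"A": [1.0], "B": [1.0, 1.0, 1.0, 1.0]}

for _ in range(1000):
    # Additive increase: every flow speeds up a little each round.
    for rates in flows.values():
        for i in range(len(rates)):
            rates[i] += 1.0
    # Congestion: when total demand exceeds capacity, every flow
    # backs off by halving its rate (multiplicative decrease).
    if sum(sum(rates) for rates in flows.values()) > CAPACITY:
        for rates in flows.values():
            for i in range(len(rates)):
                rates[i] /= 2.0

share_a, share_b = sum(flows["A"]), sum(flows["B"])
print(f"A: {share_a:.1f}  B: {share_b:.1f}  B/A ratio: {share_b / share_a:.1f}")
# The ratio comes out at 4.0: bandwidth tracks the number of flows
# opened, not the number of users behind them.
```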

TCP backoff was a reasonable means of allocating bandwidth in the early days because there were social controls acting on the Internet as well as technical ones. The early users were all academics or network engineers who were well aware of the effects that large file transfers (using ftp) had on the network, and who were sufficiently joined to a community of common interest that they didn’t want to abuse the system. There was also no spam on the Internet in those days (the 1980s) for the same reason that there was no bandwidth hogging; everybody knew everybody, there were no anonymous users, and nobody wanted to get a bad reputation. We’ve obviously come a long way since then.

Ftp is a fairly simple protocol. The client opens a connection to a server, locates a file to download, starts the process and waits until it’s done. The download (or upload; the process is the same) runs over a single TCP data connection, with a separate connection carrying control commands. If the network becomes congested, a router drops a packet, TCP backs off to a slower rate of transfer, and eventually speeds up again if network conditions become more favorable. When the network is congested, ftp is slow and that’s just the way it is. Ftp users are encouraged not to run too many downloads at once, and ftp servers place a hard limit on the number of downloads they’ll provide at any given time. When the limit is reached, the server stops accepting new connections. Ftp is an example of a good network citizen.
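
The whole well-mannered pattern fits in a few lines using Python’s standard ftplib; a minimal sketch, with a placeholder host and filename:

```python
# A minimal sketch of the ftp session described above, using Python's
# standard ftplib. The host and filename are placeholders.
from ftplib import FTP

ftp = FTP("ftp.example.org")   # the control connection
ftp.login()                    # anonymous login

with open("archive.tar.gz", "wb") as f:
    # RETR opens a single data connection; the transfer then runs at
    # whatever rate TCP backoff allows under current congestion.
    ftp.retrbinary("RETR archive.tar.gz", f.write)

ftp.quit()
```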

BitTorrent is a product of a different era. The weakness of ftp is its centralization. Clients and servers play very different roles, and the role of the server requires a great deal of bandwidth and processing power. For large-scale distribution, servers have to be capable of handling hundreds of simultaneous connections, driving the bandwidth bill through the roof because wholesale bandwidth is sold by the usage unit, not at a flat rate. BitTorrent (like other P2P applications; in this respect they’re all alike) exploits the fact that broadband consumer accounts are typically flat rate and broadband consumer networks typically have unused bandwidth. And it also exploits the fact that software and movie pirates are willing to trade with each other as long as they can remain anonymous (yes, I know there are legitimate uses of P2P as well, but who are we kidding when we ignore the fact that P2P’s primary uses are illegal?).

If ftp is sedate and reserved, BitTorrent is hyperactive and frenetic. It connects to multiple peers for downloading, and is always looking for faster ones. In terms of network overhead, it’s a much less efficient protocol than ftp, because the ratio of protocol-related chatter to actual file data is much, much higher. But in terms of economic overhead, BitTorrent is sweet, trading pay-per-use wholesale data pipes for flat-rate residential ones. That’s its rationale, that’s what it does best, and that’s why it’s a problem for every ISP in the world.
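
Here’s a toy sketch of that hyperactivity. It simulates the peer-juggling behavior described above, not the real BitTorrent wire protocol; the swarm, piece counts, and per-peer speeds are all made up:

```python
import random

PIECES = 200       # pieces in the file (hypothetical)
MAX_PEERS = 4      # simultaneous connections

# Simulated swarm: each peer delivers pieces at some fixed rate per tick.
swarm = {f"peer{i}": random.uniform(0.2, 2.0) for i in range(20)}

active = random.sample(sorted(swarm), MAX_PEERS)
downloaded, ticks = 0.0, 0

while downloaded < PIECES:
    ticks += 1
    downloaded += sum(swarm[p] for p in active)
    if ticks % 10 == 0:
        # The restless part: periodically drop the slowest peer and
        # gamble on a fresh one in the hope that it's faster.
        slowest = min(active, key=swarm.get)
        active.remove(slowest)
        idle = [p for p in swarm if p not in active]
        active.append(random.choice(idle))

print(f"done in {ticks} ticks, juggling {MAX_PEERS} connections at a time")
```

Every swap costs a round of handshaking and bookkeeping that moves no file data, which is exactly the protocol-chatter overhead ftp never pays.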

ISPs, like the Internet as a whole, depend on users sharing a common infrastructure in a predictable way, and tend to have problems when they don’t. The prediction that held true until BitTorrent came along was that downloads would happen over flat-rate links and uploads over wholesale metered links; hence the residential network should be asymmetrical, allowing more download than upload. This wasn’t theft or deception, it was (and largely still is) a rational appraisal of network traffic. And it was a system that largely regulated itself, because the wholesale links were the economic limit on the traffic that could enter an ISP. Nobody was able to put traffic on the network for free, but lots of people were able to take it off the network for no charge beyond their basic subscription fee.
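
To put rough numbers on that self-regulation (all of them invented, purely to show the direction of the effect, and assuming uploads to off-network peers cross the same metered transit links that downloads arrive on):

```python
# Back-of-the-envelope sketch of the economics above. Every number is
# hypothetical, chosen only to show the direction of the asymmetry.
TRANSIT_COST_PER_GB = 0.10   # hypothetical wholesale price, $/GB
FLAT_RATE_FEE = 40.00        # hypothetical monthly subscription, $

def monthly_margin(download_gb, upload_gb):
    # Revenue is flat; cost scales with the traffic crossing the
    # metered wholesale links in either direction.
    wholesale_traffic = download_gb + upload_gb
    return FLAT_RATE_FEE - wholesale_traffic * TRANSIT_COST_PER_GB

print(monthly_margin(download_gb=50, upload_gb=5))     # web-era user: 34.5
print(monthly_margin(download_gb=200, upload_gb=400))  # P2P seeder: -20.0
```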

So what happens when P2P becomes truly mainstream and uploads are free? I think I’ll take that up in part 3.
