We’ve all heard that the Internet is an end-to-end network. This means it’s different in some major way from the telephone network, and from other types of packet networks we might well be depending on today had the Internet not succeeded, which of course it did thanks to its superior, end-to-end architecture. An end-to-end network is one in which “intelligence” is concentrated at the end points, with the network itself operating as a “magic cloud” that simply delivers packets as the end points dictate. End points control delivery destination and rate, and the network simply passes packets through.
But is this really true? In many important respects the Internet actually gives the end user much less control than the telephone network does. Routing, for example, is not conducted under end-point control. It could be, if we used the technique known as “source routing,” where the transmitter of a packet specifies not just where the packet should go but how to get there, hop by hop. The IBM Token Ring used this technique, and IPv4 has a form of it in the Loose and Strict Source Route options of RFC 791, but it’s never been more than experimental in practice. The phone network actually allows the user much more control over the routing of calls than the Internet does. I can choose any long-distance carrier I want for each call I make by dialing a certain prefix before the phone number. So I can use one carrier for regional long distance, another for national long distance, and different ones for each country I dial. That’s end-user control.
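For the curious, here’s roughly what that experimental mechanism looks like. This is a sketch of the Loose Source Route option from RFC 791 in Python; the hop addresses are made up for illustration, most routers today drop or ignore source-routed packets, and many operating systems won’t send them without special privileges:

```python
# Sketch of the IPv4 Loose Source and Record Route (LSRR) option,
# the "experimental" end-point routing control described above.
import socket
import struct

def lsrr_option(hops):
    """Build an LSRR option per RFC 791: type 0x83, total length,
    a pointer that starts at offset 4, then the list of router
    addresses the *sender* wants the packet to pass through."""
    addrs = b"".join(socket.inet_aton(h) for h in hops)
    return struct.pack("!BBB", 0x83, 3 + len(addrs), 4) + addrs

# Hypothetical hop list: steer the packet through two chosen routers.
opt = lsrr_option(["192.0.2.1", "198.51.100.1"])

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Attach the option to outgoing packets; this may fail with a
# permission error, and intermediate routers are free to ignore it.
s.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS, opt)
```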
If I had that kind of end-to-end control on the Internet, I could select one NSP for bulk data transfers such as BitTorrent that was really cheap, and another NSP for VoIP that delivered packets at really regular intervals.
The Internet puts control of network congestion at the end points, but that doesn’t do anything for the user, to whom it’s all a magic cloud anyway. It does compromise the integrity of the network, however, as the health of thousands of internal links – selected by the network and not by the user – depends on good behavior at all of the end points. We’ve talked about how this works before: when a queue overflows, TCP notices the resulting packet loss and throttles back its send rate, which eventually alleviates the overload. It’s the same logic that’s supposed to operate when the electric grid is overloaded because we’re all air-conditioning like mad: the power company asks us to turn off our air-conditioners and enjoy the heat. Some do, and others don’t. TCP’s good-neighbor policy is just as easily defeated as the power company’s, so the good neighbors have to throttle back twice as hard to make up for those who don’t throttle back at all.
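For those who like to see the logic spelled out, here’s a minimal sketch of that throttling rule – the AIMD (additive increase, multiplicative decrease) scheme, loosely following TCP Reno. It’s a simplified round-by-round model, not a faithful implementation, and nothing in the network compels a sender to run it:

```python
# A round-by-round sketch of TCP's "good neighbor" congestion logic,
# with the window measured in segments. A misbehaving sender can
# simply skip the decrease step -- the weakness described above.
def next_window(cwnd, ssthresh, loss_detected):
    """Return the updated (cwnd, ssthresh) after one round trip."""
    if loss_detected:
        # Multiplicative decrease: halve the window when the network
        # signals overload by dropping a packet.
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2   # slow start: exponential growth per round trip
    else:
        cwnd += 1   # congestion avoidance: additive increase
    return cwnd, ssthresh

# A toy run: grow until a loss in round 5, then back off and recover.
cwnd, ssthresh = 1, 64
for rnd in range(1, 11):
    cwnd, ssthresh = next_window(cwnd, ssthresh, loss_detected=(rnd == 5))
    print(f"round {rnd}: cwnd={cwnd}")
```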
So it’s actually quite easy to argue that the Internet has botched the location of its major control functions. Routing matters a great deal to the user and less to the network, yet it’s entirely under network control, while congestion control is just the opposite: it matters most to the network, yet it’s left to the end points.
This dubious assignment of functions is exactly what net neutrality is meant to protect, and it has real impact on future applications. We use a lot of mobile devices today, a big departure from the way we did things in the 70s, when the Internet was designed and the PC was not even a pipe dream. Mobile devices – laptops and phones – should be reachable wherever they’re attached, but the Internet doesn’t allow this because their physical location is encoded into the addresses they use. Your IP address isn’t just a device address; it’s a network attachment point address. This is not a sound way to do things today, but having the network keep track of where you are is a “smart network” feature, a heresy in the religion of end-to-end.
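A small illustration of the point, using made-up addresses: the routing system cares only about the network prefix of an address, so the same laptop gets a new identity at every attachment point:

```python
# Why an IP address pins you to a place: routers deliver packets by
# network prefix, so moving the device changes its address and breaks
# existing connections. Addresses and prefixes here are illustrative.
from ipaddress import ip_interface

home   = ip_interface("203.0.113.25/24")   # address assigned at home
office = ip_interface("198.51.100.7/24")   # same laptop at the office

for place, iface in [("home", home), ("office", office)]:
    print(f"{place}: host {iface.ip} is reached via prefix {iface.network}")
# The identity the rest of the Internet sees changes with the prefix;
# nothing in the architecture tracks the device itself.
```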
These tradeoffs may have appeared sensible in the 1970s, but they no longer do, and no religion should force us to accept them indefinitely.