Here are my opening remarks from Media Access Project’s Innovation ’08 in Santa Clara this morning. A DVD will be available shortly. This was a lively discussion, with Google and Vuze on the case.
The remarks are cross-posted to CircleID and were Slashdotted. One Slashdot reader said: “Thank you, I finally read a post from someone who gets it. I didn’t think that would ever happen.” That’s not a bad response.
Good morning and welcome. My name is Richard Bennett and I’m a network engineer. I’ve built networking products for 30 years and contributed to a dozen networking standards, including Ethernet and Wi-Fi. I was one of the witnesses at the FCC hearing at Harvard, and I wrote one of the dueling op-eds on net neutrality that ran in the Mercury News the day of the Stanford hearing.
I’m opposed to net neutrality regulations because they foreclose some engineering options that we’re going to need for the Internet to become the one true general-purpose network that links all of us to each other, connects all our devices to all our information, and makes the world a better place. Let me explain.
The neutrality framework doesn’t mesh with technical reality: The Internet is too neutral in some places, and not neutral enough in others.
For one thing, it has an application bias. Tim Wu, the law professor who coined the term network neutrality, admitted this in his original paper: “In a universe of applications, including both latency-sensitive and insensitive applications, it is difficult to regard the IP suite as truly neutral.” The distinction Professor Wu makes between latency-sensitive and latency-insensitive applications isn’t precisely correct: the real distinction is between jitter-sensitive and jitter-insensitive applications, as I explained to the FCC. Jitter is the engineering term for variation in delay. VoIP wants its packets to arrive with small but consistent gaps, while a file transfer only cares about the time between the request for the file and the arrival of the last bit; in between, it doesn’t matter whether the packets are timed by a metronome or arrive in clumps.
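To make the distinction concrete, here’s a rough sketch in Python. The timestamps are invented, purely for illustration: both flows deliver their last bit at the same moment, so a file transfer sees no difference between them, but only one of them is usable for VoIP.

```python
# Illustrative sketch (invented timestamps): why jitter, not total latency,
# is what a VoIP stream cares about.

def jitter(arrival_times):
    """Mean absolute variation in inter-packet gaps, in seconds."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

# Two flows with identical total delivery time: one paced like a metronome,
# one arriving in clumps.
steady  = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]   # even 20 ms gaps
clumped = [0.00, 0.01, 0.05, 0.06, 0.09, 0.10]   # same last-bit time, uneven gaps

print(jitter(steady))    # ~0.0   -> fine for a VoIP playout buffer
print(jitter(clumped))   # ~0.012 -> audible gaps or overruns for VoIP
# A file transfer only cares that both finish at t = 0.10 seconds,
# so the two flows look identical to it.
```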
The IP suite is good for transferring short files, and for doing things that are similar to short file transfers.
It’s less good for handling phone calls, video-conferencing, and moving really large files. And it’s especially bad at doing a lot of different kinds of things at the same time.
The Internet’s congestion avoidance mechanism, an afterthought tacked on in the late ’80s, reduces and increases the rate of TCP streams to match available network resources, but it doesn’t molest UDP at all. So the Internet is not neutral with respect to its two transport protocols.
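For readers who want to see the mechanism, here’s a toy sketch of the additive-increase/multiplicative-decrease loop that TCP congestion avoidance uses. The window size and loss pattern are made up, and real TCP stacks are far more elaborate; the point is simply that TCP backs off when packets are lost, while UDP has no such loop at all.

```python
# Hedged sketch of TCP's additive-increase / multiplicative-decrease (AIMD)
# behavior; the starting window and loss pattern are invented for illustration.

cwnd = 10.0  # TCP congestion window, in segments
for rtt, loss in enumerate([False, False, True, False, False, True, False]):
    if loss:
        cwnd = cwnd / 2   # multiplicative decrease when a packet is lost
    else:
        cwnd += 1         # additive increase, one segment per round trip
    print(f"RTT {rtt}: cwnd = {cwnd:.1f} segments")

# UDP has no equivalent control loop: a UDP sender keeps transmitting at
# whatever rate the application chooses, congested path or not.
udp_rate_mbps = 2.0
```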
The Internet also has a location bias: its traffic system gives preferential treatment to short communication paths. The technical term is the “round-trip time effect.” The shorter your RTT, the faster TCP speeds up and the more traffic you can deliver. That’s why we have content delivery networks like Akamai and the vast Google server farms. Putting the content close to the consumer on a really fast computer gives you an advantage, effectively putting you in the fast lane.
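A rough way to see the RTT effect is the well-known Mathis approximation for steady-state TCP throughput, rate ≈ MSS / (RTT × √p). The segment size and loss rate in this sketch are assumptions I picked for illustration, but the ten-to-one advantage of the short path over the long one is the real point.

```python
# Sketch of the RTT effect using the Mathis et al. approximation for
# steady-state TCP throughput. MSS and loss rate are assumed example values.
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

MSS, LOSS = 1460, 0.0001   # 1460-byte segments, 0.01% packet loss (assumed)

# Nearby CDN or server farm, 10 ms RTT:
print(tcp_throughput_bps(MSS, 0.010, LOSS) / 1e6)   # ~117 Mbps
# Distant server, 100 ms RTT, same loss rate:
print(tcp_throughput_bps(MSS, 0.100, LOSS) / 1e6)   # ~12 Mbps
```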
The Internet is non-neutral with respect to applications and to location, but it’s overly neutral with respect to content, which causes gross inefficiency as we move into the large-scale transfer of HDTV over the Internet. Over-the-air delivery of TV programming moves one copy of each show regardless of the number of people watching, but the Internet transmits one copy per viewer, because the transport system doesn’t know anything about the content. Hit TV shows are viewed by tens of millions of viewers, and their size is increasing as HDTV catches on, so there’s major engineering to do to adapt the Internet to this mission.
Until this is done, HDTV is trouble for the Internet.
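Some back-of-the-envelope arithmetic shows the scale of the one-copy-per-viewer problem. The bitrate and audience size below are assumptions chosen only to illustrate the orders of magnitude involved, not measurements.

```python
# Back-of-the-envelope arithmetic for one-copy-per-viewer HDTV delivery.
# Per-viewer bitrate and audience size are assumptions for illustration.

hdtv_stream_mbps = 8           # assumed HDTV stream bitrate per viewer
viewers = 10_000_000           # "tens of millions" -- call it ten million

unicast_total_gbps = hdtv_stream_mbps * viewers / 1000
broadcast_total_mbps = hdtv_stream_mbps   # over-the-air: one copy, any audience size

print(f"Unicast Internet delivery: {unicast_total_gbps:,.0f} Gbps")  # 80,000 Gbps
print(f"Over-the-air broadcast:    {broadcast_total_mbps} Mbps")
```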
Internet traffic follows a simple model: the more you ask for, the more you get. When you share resources with others, as we all do on packet networks, that can be a problem if you have a neighbor who wants an awful lot. This is what happens with peer-to-peer, the system designed for transferring very large files as a work-around for the Internet’s content inefficiency.
According to Comcast, increased use of BitTorrent and other P2P applications two or three years ago caused the performance of VoIP to tank. They noticed because customers called in and complained that their Vonage and Skype calls were distorted and impaired. Activists accused ISPs of degrading VoIP in order to sell their own phone services, but the problem was actually caused by a sudden increase in traffic. The ISPs’ response was to install equipment that limited the delay P2P could impose on VoIP through various traffic management techniques, such as fair queuing and caps. As the amount of traffic generated by P2P grew, the cable Internet companies escalated their traffic management systems, culminating in the Sandvine system that caused the current dust-up. Cable Internet is more vulnerable than DSL and fiber to the delays caused by P2P because the first mile is shared.
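To illustrate what fair queuing buys you, here’s a minimal round-robin sketch. It’s my toy illustration, not a description of anyone’s actual equipment: each flow gets its own queue and the scheduler services them in turn, so a bulk P2P flow with a deep backlog can’t add delay to a sparse VoIP flow.

```python
# Minimal round-robin fair-queuing sketch (illustrative only).
from collections import deque

queues = {
    "voip": deque(["v1", "v2"]),                   # small, delay-sensitive flow
    "p2p":  deque([f"p{i}" for i in range(50)]),   # deep backlog of bulk packets
}

def fair_dequeue(queues):
    """Yield one packet per non-empty flow per round."""
    while any(queues.values()):
        for flow, q in queues.items():
            if q:
                yield flow, q.popleft()

for flow, pkt in list(fair_dequeue(queues))[:6]:
    print(flow, pkt)   # VoIP packets come out near the front instead of
                       # waiting behind 50 P2P packets in a single queue
```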
This problem is not going to be solved by adding bandwidth to the network, any more than the problem of slow web page loading evaporated on its own in the late ’90s or the Internet meltdown problem disappeared spontaneously in the ’80s. What we need to do is engineer a better interface between P2P and the Internet, such that each can share information with the other to find the best way to copy desired content.
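Here’s a hedged sketch of the kind of cooperation I have in mind, loosely in the spirit of the P4P work discussed below: if the network shares a rough cost for reaching each candidate peer, the P2P client can prefer nearby copies of the same content. The cost map and peer list are invented purely for illustration.

```python
# Hedged sketch of locality-aware peer selection; the cost categories,
# addresses, and values below are invented for illustration.

network_cost = {"same-ISP": 1, "same-region": 5, "cross-ocean": 20}

candidate_peers = [
    {"addr": "10.0.0.7",     "locality": "same-ISP"},
    {"addr": "192.0.2.44",   "locality": "cross-ocean"},
    {"addr": "198.51.100.9", "locality": "same-region"},
]

def choose_peers(peers, costs, want=2):
    """Prefer the peers the network says are cheapest to reach."""
    return sorted(peers, key=lambda p: costs[p["locality"]])[:want]

for peer in choose_peers(candidate_peers, network_cost):
    print(peer["addr"], peer["locality"])   # nearby peers chosen first
```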
Where do we turn when we need enhancements to Internet protocols and the applications that use them? Not to the FCC. The Commission has done a bang-up job in creating the regulations that enabled Wi-Fi and UWB, two of the systems I’ve helped develop, but this help had a distinct character: they removed licensing requirements, set standards for transmit power levels and duty cycles, and backed off. They didn’t get into the protocols, format the Beacon, or dictate the aggregation parameters. Industry did that, in forums like IEEE 802 and the Wi-Fi Alliance.
At present, the P2P problem is being worked on by the DCIA in its P4P Forum and by the IETF in the P2P Infrastructure (P2PI) group. P2PI held a meeting last month at MIT and will most likely meet again in Dublin the last week of July. They have an active e-mail list and are charitably receiving complaints and suggestions about the best way to handle P2P interaction with the Internet’s core protocols. The system is working, and with any luck these efforts will address some of the unsolved problems in network architecture that have emerged in the last 15 to 20 years.
We need to let the system that has governed the Internet for the last 35 years continue to work. The legislation that has been introduced has been described by some of its sponsors as “blunt instruments” designed to “send a message.” The message that it sends to me is that some people believe that the Internet’s core protocols have reached such a refined level of perfection that we don’t need to improve them any more.
I know that’s not true. The Internet has some real problems today, such as address exhaustion, the transition to IPv6, support for mobile devices and popular video content, and the financing of capacity increases. Network neutrality isn’t one of them.