Nagle’s Answer


Slashdot picked up George Ou’s latest piece on the problems with TCP and Peer-to-Peer congestion that I’ve been writing about lo these many months, attracting one interesting comment in a sea of chaff:

As the one who devised much of this congestion control strategy (see my RFC 896 and RFC 970, years before Van Jacobson), I suppose I should say something.

The way this was supposed to work is that TCP needs to be well-behaved because it is to the advantage of the endpoint to be well-behaved. What makes this work is enforcement of fair queuing at the first router entering the network. Fair queuing balances load by IP address, not TCP connection, and “weighted fair queueing” allows quality of service controls to be imposed at the entry router.

The problem now is that the DOCSIS approach to cable modems, at least in its earlier versions, doesn’t impose fair queuing at entry to the network from the subscriber side. So congestion occurs further upstream, near the cable headend, in the “middle” of the network. By then, there are too many flows through the routers to do anything intelligent on a per-flow basis.

We still don’t know how to handle congestion in the middle of an IP network. The best we have is “random early drop”, but that’s a hack. The whole Internet depends on stopping congestion near the entry point of the network. The cable guys didn’t get this right in the upstream direction, and now they’re hurting.

I’d argue for weighted fair queuing and QOS in the cable box. Try hard to push the congestion control out to the first router. DOCSIS 3 is a step in the right direction, if configured properly. But DOCSIS 3 is a huge collection of tuning parameters in search of a policy, and is likely to be grossly misconfigured.

The trick with quality of service is to offer either high-bandwidth or low latency service, but not both together. If you request low latency, your packets go into a per-IP queue with a high priority but a low queue length. Send too much and you lose packets. Send a little, and they get through fast. If you request high bandwidth, you get lower priority but a longer queue length, so you can fill up the pipe and wait for an ACK.

But I have no idea what to do about streaming video on demand, other than heavy buffering. Multicast works for broadcast (non-on-demand) video, but other than for sports fans who want to watch in real time, it doesn’t help much. (I’ve previously suggested, sort of as a joke, that when a stream runs low on buffered content, the player should insert a pre-stored commercial while allowing the stream to catch up. Someone will probably try that.)

John Nagle
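To make the per-IP fair queuing Nagle describes concrete, here's a rough sketch of an entry-router scheduler: each source IP gets its own bounded queue, queues are served round-robin, and a sender who overfills its own queue loses packets without affecting anyone else. This is purely illustrative (the class, limits, and names are mine, not Nagle's or DOCSIS's):

```python
from collections import deque, OrderedDict

class FairQueue:
    """Per-source-IP fair queuing with drop-tail on each flow queue.

    Each source IP gets its own bounded queue; the scheduler serves
    queues round-robin, so one heavy sender cannot starve the others.
    """
    def __init__(self, per_ip_limit=8):
        self.per_ip_limit = per_ip_limit
        self.queues = OrderedDict()   # src_ip -> deque of packets

    def enqueue(self, src_ip, packet):
        q = self.queues.setdefault(src_ip, deque())
        if len(q) >= self.per_ip_limit:
            return False              # this sender's queue is full: drop at the edge
        q.append(packet)
        return True

    def dequeue(self):
        # Round-robin over source IPs: serve the head of the least
        # recently served non-empty queue, then rotate it to the back.
        for src_ip in list(self.queues):
            q = self.queues[src_ip]
            if q:
                self.queues.move_to_end(src_ip)
                return src_ip, q.popleft()
            del self.queues[src_ip]   # discard idle flows
        return None
```

The point of the per-IP limit is exactly what Nagle says: a flood from one subscriber fills only that subscriber's queue and is dropped there, before it ever reaches the middle of the network.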

I actually suggested the technique John proposes directly to Comcast engineering: drop packets before the first hop. They didn't appear to have considered it before, but it actually is the answer. Unfortunately, the cable modem is not an IP device, so it currently has no way to know when and how to do this; that makes it a piece of housekeeping for the DOCSIS 3.0 upgrade.
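The bandwidth-versus-latency trade Nagle outlines can be sketched as a two-class scheduler at that first hop: the low-latency class is served first but gets only a shallow queue, while the bulk class waits behind it with a deep buffer it can fill. Again a sketch under my own assumptions; the class names and queue limits are illustrative, not from DOCSIS or any standard:

```python
from collections import deque

# Two service classes, per Nagle's description: low latency gets high
# scheduling priority but a short queue (bursts drop early); high
# bandwidth gets low priority but a deep queue it can keep full.
CLASSES = {
    "low_latency":    {"priority": 0, "max_queue": 4},    # served first, drops fast
    "high_bandwidth": {"priority": 1, "max_queue": 256},  # served later, deep buffer
}

class TwoClassScheduler:
    def __init__(self):
        self.queues = {name: deque() for name in CLASSES}

    def enqueue(self, cls, packet):
        q = self.queues[cls]
        if len(q) >= CLASSES[cls]["max_queue"]:
            return False  # over this class's limit: drop at the entry router
        q.append(packet)
        return True

    def dequeue(self):
        # Strict priority: drain the low-latency queue before bulk traffic.
        for cls in sorted(CLASSES, key=lambda c: CLASSES[c]["priority"]):
            if self.queues[cls]:
                return cls, self.queues[cls].popleft()
        return None
```

A low-latency sender that sends too much loses packets almost immediately (queue of 4), while a bulk sender can fill the pipe and wait for ACKs, which is exactly the "either high bandwidth or low latency, not both" deal.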
