DOCSIS vs. BitTorrent

A couple of weeks ago, I mentioned an academic paper on cable modem (DOCSIS) – TCP interaction which highlighted a couple of problems. The authors maintain that web browsing doesn’t interact efficiently with DOCSIS, and that DOCSIS is vulnerable to a DoS attack based simply on packet rate rather than data volume, which means the DOCSIS mechanisms that cap traffic by volume offer no protection against it. I said:

In effect, several BT streams in the DOCSIS return path mimic a DoS attack on non-BT users. That’s not cool.

It’s not clear to all of my network-analyzing colleagues that I was correct in drawing a parallel between BitTorrent and the DoS attack, so here’s a little context from the original paper:

Denial of Service Study
The previous analysis showed that downstream TCP transfers are impacted by the DOCSIS MAC layer’s upstream best effort transmission service. In this section we show that it is possible for a hacker to take advantage of this inefficiency by initiating a denial of service attack on CMs that can cause high levels of upstream collisions resulting in serious performance degradation. To accomplish the denial of service attack, a host located outside the network must learn the IP address of a number of CMs that share the same downstream and upstream channels. The attacker simply needs to ping or send a TCP SYN packet to the CMs at a frequency that is on the order of the MAP_TIME setting. The actual frequency, which might range from once per MAP_TIME to once every 5 MAP_TIMEs, is a parameter of the attack.

A couple of things will help clarify. The researchers say it’s only necessary to send TCP SYNs at a frequency on the order of the network’s scheduling period, the MAP_TIME. A TCP SYN is a connection request, the very thing the infamous TCP Reset (RST) tears down. It’s the first step of the three-way handshake that opens a TCP connection (SYN -> SYN/ACK -> ACK), and it’s a very frequent part of BitTorrent interactions during seeding, as leechers connect to seeders to see what sort of rate they can get. The significance is that these are short packets which, at high frequency, create a large demand for upstream transmit opportunities, a scarce commodity in DOCSIS.
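To put rough numbers on it, here’s a back-of-the-envelope calculation; the MAP interval and packet size are illustrative assumptions, not figures from the paper:

    # Back-of-the-envelope: the attack is about packet rate, not volume.
    # The MAP interval and packet size below are assumed for illustration.

    MAP_TIME = 0.002   # 2 ms scheduling interval, a commonly cited value
    SYN_SIZE = 64      # rough on-the-wire size of a minimal TCP SYN, in bytes

    # One probe per MAP interval forces one upstream transmit opportunity
    # per interval from every targeted modem.
    probes_per_sec = 1 / MAP_TIME
    volume = probes_per_sec * SYN_SIZE

    print(f"{probes_per_sec:.0f} upstream transmissions per second per modem,")
    print(f"costing the attacker only {volume / 1000:.0f} kB/s of traffic")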

So a relatively small number of BitTorrent seeds can place a high load on the upstream path with very little data, a load that bandwidth caps can’t control. DOCSIS allows piggybacking of bandwidth requests, which alleviates contention-slot exhaustion for steady streams, but piggybacking is only effective when a lot of data is queued. If several modems are handling a large number of responses to connection requests, other modems that are simply supporting web surfing will starve, because they too have to compete for the limited contention slots just to ACK the data they’re receiving.
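A toy model makes the starvation visible. This is a sketch, not the real DOCSIS MAC: assume a handful of contention slots per MAP interval, “chatty” modems that need a fresh request every interval (lots of small packets, nothing queued to piggyback on), and web-surfing modems that request only occasionally:

    import random

    # Toy model of upstream contention (not the real DOCSIS MAC). Each MAP
    # interval offers SLOTS contention slots; every modem with a pending
    # bandwidth request picks one at random, and any slot chosen by two or
    # more modems is a collision.

    SLOTS = 8            # contention slots per MAP interval (assumed)
    INTERVALS = 10_000

    def web_collision_rate(n_chatty, n_web):
        """Fraction of a web-surfing modem's own requests lost to
        collisions. Chatty modems request every interval; web modems
        request 10% of the time."""
        fails = tries = 0
        for _ in range(INTERVALS):
            active = [True] * n_chatty + \
                     [random.random() < 0.1 for _ in range(n_web)]
            picks = [random.randrange(SLOTS) if a else None for a in active]
            probe = picks[n_chatty]      # the first web modem is our probe
            if probe is None:
                continue
            tries += 1
            if picks.count(probe) > 1:   # another modem chose the same slot
                fails += 1
        return fails / tries

    for chatty in (0, 4, 8, 16):
        print(f"{chatty:2d} chatty modems -> "
              f"{web_collision_rate(chatty, 10):.0%} of a web modem's requests collide")

With no chatty modems the probe collides rarely; with sixteen of them, most of its requests are lost and have to be retried.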

This is a very different scenario from the Internet congestion case that’s addressed by dropping packets and backing off the TCP sliding window. The response rate to connection requests is governed only by the rate at which the connection requests arrive, and dropping packets on established connections doesn’t affect it. And there’s the further complication that this is a first-hop congestion scenario, while Internet congestion is an intermediate-hop scenario. The rule of congestion control is to drop before the congested link, and if that happens to be the first link, the dropping agent is the customer’s computer or the BitTorrent leecher who’s trying to connect to it.

So this can only be addressed by limiting connection requests, which can be done in real time by routers that inspect every incoming TCP packet for the SYN bit and keep track of total connections. The Comcast alternative is to monitor traffic asynchronously and destroy connections after the fact. It’s not as efficient as stateful packet inspection, but the gear that does it is a lot cheaper. Given their Terms of Service, which ban servers on their network, it’s sensible.
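The inline approach amounts to per-source SYN policing. Here’s a minimal sketch as a token bucket; the rate and burst values are invented, and a real implementation would run in the router’s forwarding path rather than in Python:

    import time
    from collections import defaultdict

    RATE = 5.0     # sustained SYNs per second allowed per source (assumed)
    BURST = 20.0   # burst allowance (assumed)

    # One token bucket per source address.
    buckets = defaultdict(lambda: {"tokens": BURST, "t": time.monotonic()})

    def allow_syn(src_ip):
        """Return True if this SYN should be forwarded, False if dropped."""
        b = buckets[src_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["t"]) * RATE)
        b["t"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False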

So the debate comes back to the question of the legality of Comcast’s TOS. The FCC says ISPs can’t limit the applications that customers can run, and BitTorrent is certainly an application. It strikes me as unreasonable to demand that every ISP satisfy every application requirement, and it’s a certain path to the destruction of VoIP if they must. These asymmetrical residential networks aren’t going to do well with lots of VoIP and lots of Torrents, so something has to give if the law is going to insist on this Utopian goal.

I hope that clears things up.

8 thoughts on “DOCSIS vs. BitTorrent”

  1. Thank you for bringing some more technical information into the discussion of BitTorrent on DOCSIS networks—I’ve enjoyed reading your thoughts and the referenced academic papers. Your argument/explanation for/of network operator behavior seems to be:

    (a) It has been shown (in referenced papers) that DOCSIS networks are vulnerable to a DoS attack resulting in potentially severe service degradation to [other] clients of the shared uplink/downlink channels.
    (b) Seeding BitTorrent clients create network traffic analogous to the DoS attack.
    (c) Network operators (e.g., Comcast) can offer a better (more ‘fair’) experience to their customer base if they avoid the DoS by preventing excessive BitTorrent seeding behavior. The cost-effective way for an operator to do this is to destroy BitTorrent connections (e.g., “forged RST” from a Sandvine network appliance).

    While I see the similarity [in (b)] between network traffic resulting from BitTorrent seeders and the DoS attack, I’m still not convinced that the “forged RST” solution solves anything more than the P2P bandwidth consumption issue[1]. Specifically, these “forged RST” packets are injected into the TCP flow after the connection is established[2]; the RST packets do not eliminate uplink traffic or uplink contention. If this is the case[3], I would prefer to see operators address this through application-independent traffic shaping techniques that don’t attempt to classify traffic/applications… I believe it is this “deep classification” behavior, not traffic shaping issues, that most irks network neutrality proponents.


    [1]: I realize that bandwidth consumption by BitTorrent users is also a big problem on DOCSIS networks, as described in the Martin/Westall paper. While BitTorrent is efficient at utilizing uplink bandwidth, this is not a BitTorrent-specific issue and does not, to me, justify destroying TCP connections as a traffic shaping technique.

    [2]: Example TCP dump.

    [3]: Any pointers to data and/or research examining this effect would be very welcome.

  2. The effect of injecting TCP Resets into the data stream is the demotion of the seeder in the BitTorrent tracker’s list of eligible seeders, which reduces the number of connection requests it’s going to see. This is the behavior Comcast wants: a near-total reduction in file servers operating on residential accounts. There isn’t another method that accomplishes this goal as efficiently, as IP lacks native support for traffic shaping in general and bandwidth-hog throttling in particular.

    The fact that the Resets are asynchronous to the data stream allows the manufacturer (Sandvine, in this case) to use cheaper hardware than an in-line packet-dropping solution. Money matters.
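    For the curious, the injection itself is nothing exotic. A rough sketch with scapy follows; the addresses, ports, and sequence numbers are placeholders, and the classification step that decides when to fire is omitted:

        # Out-of-band RST injection, roughly as attributed to Sandvine.
        # Requires scapy and raw-socket privileges; all values here are
        # placeholders for illustration.
        from scapy.all import IP, TCP, send

        def inject_rst(src, dst, sport, dport, seq):
            # The forged packet claims to come from `src`; the receiving
            # stack honors it only if `seq` falls inside its window.
            rst = IP(src=src, dst=dst) / TCP(sport=sport, dport=dport,
                                             flags="R", seq=seq)
            send(rst, verbose=False)

        # A real appliance fires toward both ends of the flow, e.g.:
        # inject_rst(seeder_ip, leecher_ip, 6881, 51413, seq_toward_leecher)
        # inject_rst(leecher_ip, seeder_ip, 51413, 6881, seq_toward_seeder)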

  3. Looking at the BitTorrent specification (unofficial, yet detailed, wiki) and at a couple of tracker implementations, it doesn’t appear that BitTorrent trackers “demote” anything in practice. Yes, TCP resets will [inexpensively] kill “file server” behavior (asymmetric upstream bandwidth), but this does not necessarily reduce the number of connection requests contending for the upstream channel[1]. Do you know otherwise?

    If we’re back to just a bandwidth usage argument[2], wouldn’t the more “neutral” method/solution be a network appliance that identifies bad behavior and clamps the user’s upstream bandwidth at the cable headend[3]? This method is also asynchronous and is as general as the “bad behavior” detection algorithm. The big con is that it requires tighter integration with the cable infrastructure (likely a greater expense depending on infrastructure “openness” [3],[4]). In your initial post, you claim that such a method/solution (bandwidth cap) won’t work; I still don’t buy this argument as connection requests and DOCSIS bandwidth allocation contention shouldn’t drastically change with a TCP reset-based solution.


    [1]: Sandvine apparently doesn’t terminate tracker connections, so a seeder will continue to show up in the tracker’s peerlist.

    [2]: The argument that BitTorrent users hog the upstream pipe degrading the usability of the service for [other] paying customers

    [3]: Comcast seems to have this capability (in reverse); PowerBoost temporarily increases the allowed upstream bandwidth.

    [4]: It could be argued that this is an opportunity for cable infrastructure vendors. More solutions could conceivably drive prices down…

  4. BitTorrent clients choose the servers with the best reported data rates, don’t they? The tracker simply facilitates this. Again, Comcast’s goal is to limit the traffic that gets to the cable headend, not drop it after it’s already got there.

  5. BitTorrent clients do not know what data rate they will achieve with a given peer before trying. The BitTorrent tracker does not store/communicate this information; communication is facilitated simply by maintaining and providing a list of peers. This protocol behavior is why resetting TCP connections won’t stop/slow connection attempts and the associated “request contention” on the cable modem link.

    Correcting some ambiguity in my previous comment, I meant to suggest having the cable headend (CMTS) clamp the user’s upstream bandwidth through its “bandwidth request response” algorithm. If fewer upstream bandwidth requests are approved, the cable modem will limit the traffic that gets to the cable headend, causing less impact on other users; this addresses the “upstream bandwidth contention” issue in an application-independent way. A crude sketch of the idea follows.
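    Every name and number in this sketch is invented; the point is just that the CMTS honors each modem’s requests only up to a per-interval byte budget, so a hog’s queue backs up at its own modem and TCP throttles itself, while other users’ grants are untouched:

        GRANT_BUDGET = 4000   # bytes grantable per modem per MAP interval (assumed)

        def schedule(requests, remaining):
            """requests: {modem_id: bytes requested this interval};
            remaining: {modem_id: budget left}. Returns bytes granted."""
            grants = {}
            for modem, want in requests.items():
                left = remaining.get(modem, GRANT_BUDGET)
                grants[modem] = min(want, left)
                remaining[modem] = left - grants[modem]
            return grants

        # A hog asking for 12000 bytes gets only 4000 this interval:
        print(schedule({"hog": 12000, "web": 900}, {}))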

  6. At some point the TCP RST will stop the leecher from retrying the connection, so it is effective; a number of measurements confirm this, in fact. The leecher decides the seeder is down and moves on. When the tracker is next updated, nobody has connections with the throttled seeder to report, so his popularity declines.

    Your alternative suggestion actually aggravates the contention problem. If the subscriber’s modem doesn’t get a reservation when it asks for it, it will simply retry (following some backoff interval). The requests themselves are subject to collision, and that’s one thing Comcast wants to minimize.

    The cable modem operator doesn’t want to put down application-independent throttles on all upstream bandwidth requests from subscribers because most of them are legitimate. The 20% who consume 80% of the bandwidth are the target.

  7. I don’t disagree that the TCP RST works.

    Do note that the BitTorrent tracker doesn’t monitor which peers are connected to each other, nor does it measure peer popularity. As long as the seeder-tracker connection is established (Sandvine, for example, does not terminate this connection), new leechers that don’t know any better will still attempt to contact the seeder.

    With respect to CMTS upstream bandwidth clamping making the problem worse, I’m not sure it does in the long term. Yes, you will see additional “bandwidth request contention” when the CMTS starts ignoring requests, but TCP (even multiple TCP streams) will adjust by reducing its transmit rate [eventually].

    I also wanted to clarify that I am suggesting an upstream bandwidth throttle at the per-user level, not the system level. The 80% who don’t consume won’t be affected. The 20% who do consume will be affected, but this, to me, is fine; they are the ones sucking up the bandwidth and/or violating the TOS/AUP.

    It probably goes without saying that I think operators need to be very clear about their expectations and what sort of actions they take (e.g., violate the AUP and you are cut-off/throttled).

  8. The tracker doesn’t monitor performance, it monitors who has the file parts. Performance is a distributed problem. Is a leecher going to retry a seeder who sends him Resets, or is he going to try another one? The data indicates that it tries another.

    The problem that we ultimately run into is this: the TCP sliding window mechanism solves Internet congestion at the expense of fairness. Sessions that don’t get their packets randomly dropped have a larger chunk of bandwidth than those that do, and the more data a connection carries, the larger its window grows.

    Good fairness control works in exactly the opposite way: the more data a stream offers, the lower its priority should go. And that’s the root of the problem that carriers have to solve.
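    Here’s a toy AIMD model of that unfairness; the drop rates and window cap are illustrative only:

        import random

        def avg_window(drop_prob, rounds=10_000):
            """Average congestion window of one AIMD flow with random drops."""
            cwnd, total = 1.0, 0.0
            for _ in range(rounds):
                total += cwnd
                if random.random() < drop_prob:
                    cwnd = max(1.0, cwnd / 2)    # multiplicative decrease on a drop
                else:
                    cwnd = min(100.0, cwnd + 1)  # additive increase, capped
            return total / rounds

        print("flow with no drops: avg window", round(avg_window(0.0), 1))
        print("flow with 5% drops: avg window", round(avg_window(0.05), 1))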
