BitTorrent net meltdown delayed

See The Register for my follow-up on the BitTorrent meltdown story:

The internet’s TCP/IP protocol doesn’t work very well. As the internet’s traffic cop, it’s supposed to prevent applications from overloading the network, but it’s at a loss when it comes to managing P2P applications. This deficiency, generally known to network engineers but denied by net neutrality advocates, has been a central issue in the net neutrality debate. BitTorrent Inc has now weighed in on the side of the TCP/IP critics.

The next official release of the uTorrent client – currently in alpha test – replaces TCP with a custom-built transport protocol called uTP, layered over the same UDP protocol used by VoIP and gaming. According to BitTorrent marketing manager Simon Morris, the motivation for this switch (which I incorrectly characterized in The Register earlier this week as merely another attempt to escape traffic shaping) is to better detect and avoid network congestion.

Morris also told the media this week that TCP only reduces its sending rate in response to packet loss, a common but erroneous belief. Like uTP, Microsoft’s Compound TCP begins to slow down when it detects latency increases. Even though TCP is capable of being just as polite as BitTorrent wants uTP to be, the fact that it hides its delay measurements from applications makes it troublesome for P2P clients with many paths to choose from. But it’s sensible to explore alternatives to TCP, as we’ve said on these pages many times, and we’re glad BitTorrent finally agrees.
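
To make the distinction concrete, here’s a minimal sketch of the delay-based approach in Python. It’s modeled on the LEDBAT idea that uTP is reported to use – back off when queuing delay grows, before any packets are lost – but the constants and structure are my illustrative assumptions, not uTP’s actual code:

```python
# Sketch of delay-based congestion control (LEDBAT-style), illustrating
# the idea attributed to uTP. All constants are illustrative assumptions.

TARGET_DELAY = 0.100   # seconds of queuing delay the sender will tolerate
GAIN = 1.0             # how strongly to react to the delay error
MIN_WINDOW = 2         # floor on the congestion window, in packets

class DelayBasedController:
    def __init__(self):
        self.base_delay = float("inf")  # lowest one-way delay seen so far
        self.cwnd = float(MIN_WINDOW)   # congestion window, in packets

    def on_ack(self, one_way_delay):
        # The minimum observed delay approximates the uncongested path
        # delay; anything above it is queuing delay building up.
        self.base_delay = min(self.base_delay, one_way_delay)
        queuing_delay = one_way_delay - self.base_delay

        # Positive error: queue below target, speed up a little.
        # Negative error: queue building, back off BEFORE losses occur --
        # the key difference from purely loss-based TCP behavior.
        error = (TARGET_DELAY - queuing_delay) / TARGET_DELAY
        self.cwnd = max(MIN_WINDOW, self.cwnd + GAIN * error / self.cwnd)
```

The point of the design is that rising delay is an earlier congestion signal than a dropped packet, so a delay-based sender can yield to interactive traffic before queues overflow.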

We strive to be fair and balanced. The nut of the story is that we don’t actually know whether BitTorrent’s new protocol will work any better than TCP, as there’s no hard data on it yet.

Note about UDP

One of the more amusing criticisms of my article on BitTorrent over UDP is that I’m a clueless dork for saying UDP was designed for real-time applications since there was no such thing as VoIP back in the day. This is generally accompanied by the charge that I don’t know the first thing about the Internet, etc. So for the record, here’s a statement of the design goals for UDP by one of the people involved, the lovable David Reed:

A group of us, interested in a mix of real-time telephony, local area networks, distributed operating systems, and communications security, argued for several years for a datagram based network, rather than a virtual circuit based network…[UDP] was a placeholder that enabled all the non-virtual-circuit protocols since then to be invented, including encapsulation, RTP, DNS, …, without having to negotiate for permission either to define a new protocol or to extend TCP by adding “features”.
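
Reed’s description of UDP as a deliberately minimal placeholder is visible in how little code a datagram exchange requires. A sketch (the address and port are placeholders):

```python
import socket

# UDP in a nutshell: no connection setup, no retransmission, no ordering.
# The application gets a bare datagram service and builds whatever it
# needs on top -- the 'placeholder' role Reed describes.

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("127.0.0.1", 9999))   # fire and forget
```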

Any questions?

Reaction to BitTorrent story

My article in The Register yesterday about BitTorrent and UDP got some attention. It was a primary article on Techmeme and was Slashdotted. Here’s the Techmeme link set: Slyck, DSLreports, TorrentFreak, Ars Technica, Icrontic, Joho the Blog, TMCnet, GigaOM, Industry Standard, TechSpot.

While most of the discussion went to questions of motivation – I’m alleged to be a telco shill for criticizing a system the telcos are OK with – some was actually quite substantial. It’s good to get these issues under the microscope.

More links: Canadian Broadcasting Corporation, Slashdot, Tales of the Sausage Factory; a few hundred more at Google.

I talked to a couple of the BitTorrent guys today – chief engineer and nemesis Stanislav Shalunov not among them, unfortunately – and they vehemently denied any intention of evading the Bell Canada traffic-shaping system. The reports from Canada that motivated me to write the piece say the new release does in fact evade Bell Canada’s filters, which will have to be updated as uTorrent 1.9 comes into wider use, or replaced with more capable equipment.

It remains to be seen whether that upgrade will also catch VoIP users and gamers in the throttling net. It’s interesting that the author of the reports from Canada, Karl Bode, is now playing dumb, all the better to be left out of the counter-PR campaign.

Alarming Title: BitTorrent declares war on the Internet

See The Register for my analysis of the latest tweak in BitTorrent:

Gamers, VoIP and video conference users beware. The leading BitTorrent software authors have declared war on you – and any users wanting to wring high performance out of their networks. A key design change in the P2P application promises to make the headaches faced by ISPs so far look like a party game. So what’s happened, and why does it matter?

Upset about Bell Canada’s system for allocating bandwidth fairly among internet users, the developers of the uTorrent P2P application have decided to make the UDP protocol the default transport protocol for file transfers. BitTorrent implementations have long used UDP to exchange tracker information – the addresses of the computers where files could be found – but the new release uses it in preference to TCP for the actual transfer of files. The implications of this change are enormous.

As BitTorrent implementations follow uTorrent’s lead – and they will, since uTorrent is owned by BitTorrent Inc, and is regarded as the canonical implementation – the burden of reducing network load during periods of congestion will shift to the remaining TCP uses, the most important of which are web browsing and video streaming.
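
A deliberately crude simulation of that dynamic, assuming the worst case of a UDP flow with no congestion response at all (which is not what uTP claims to be, but is what an unpoliced UDP transport permits):

```python
# Toy model of 'burden shifting': a loss-based TCP flow halves its rate
# whenever the link congests, while an unresponsive UDP flow keeps
# sending. All numbers are illustrative.

LINK_CAPACITY = 100.0   # units per tick

tcp_rate, udp_rate = 50.0, 60.0
for tick in range(10):
    if tcp_rate + udp_rate > LINK_CAPACITY:   # congestion: drops occur...
        tcp_rate /= 2                         # ...and only TCP backs off
    else:
        tcp_rate += 5                         # TCP's additive increase
    print(f"tick {tick}: tcp={tcp_rate:5.1f}  udp={udp_rate:5.1f}")
```

Run it and the TCP flow oscillates around whatever capacity the unresponsive flow leaves behind – which is the squeeze the article worries web browsing and video streaming will face.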

Several commenters are upset with the article, mostly because ISPs don’t provide them with unlimited bandwidth. There’s not much I can do for those folks.

A few others claim that BitTorrent over UDP has a congestion control algorithm which they feel is in some way equivalent to the TCP algorithm, but this argument is flawed on a couple of levels. For one, many routers have tweaks in their discard logic that prefer UDP over TCP. This is a key problem with the widespread use of UDP for purposes other than those for which it was intended.

UPDATE: The explicit goal of this new tweak is actually to create a more friendly variant of the existing TCP congestion avoidance algorithm, and it was the pirates at Broadband Reports who said otherwise.

What remains to be seen, however, is what the actual effects will be in large-scale deployment, and whether the genie can be forced back into the bottle if they’re undesirable.

UPDATE 2: I’ve added “Alarming Title” to the title. This piece is getting a lot of people excited.

A good synopsis of the Internet

Catherine Rosenberg, a professor with the University of Waterloo’s Department of Electrical and Computer Engineering, has written a great synopsis of the Internet for our cousins to the North:

The founding principle of the Internet is resource sharing and hence to deliver an appropriate end-to-end service, some level of co-ordination and traffic control is needed to ensure network performance does not collapse. This is even more true now as the last few years have seen massive increases in Internet traffic due in large part to the proliferation of “bandwidth hungry” applications such as games, peer-to-peer file transfers and increasingly complex, enriched web pages. Added to this is the “all you can eat” economic model promoted by the ISPs, an approach that entices users to always consume more, and of course the fact that the number of Internet users keeps on increasing.

So what does controlling the traffic mean? It means keeping the traffic entering the network under a certain threshold to avoid performance collapses that would affect everyone. And this is what traffic shaping does, by, for example, limiting the bandwidth available for certain types of applications that are less time sensitive in order to keep more bandwidth available for other applications that are more time sensitive, and used by the greater number of subscribers.

While some would argue that this is done “naturally” with Transmission Control Protocol, the reality is that TCP alone is not enough to avoid congestion and spread the burden of congestion as fairly as possible to all those using the congested area.
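
Rosenberg’s description maps directly onto the classic token-bucket shaper. A minimal sketch, with illustrative names and parameters rather than any vendor’s implementation:

```python
import time

# Minimal token-bucket shaper: a traffic class is held to `rate` bytes/sec
# on average, with bursts of up to `burst` bytes. Packets that exceed the
# allowance are delayed or dropped, leaving headroom for time-sensitive
# traffic -- the mechanism Rosenberg describes.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate              # sustained rate, bytes per second
        self.capacity = burst         # maximum burst size, bytes
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True               # forward the packet
        return False                  # over the class's share: delay or drop

# e.g. cap a less time-sensitive class at 1 Mbit/s with 64 KB bursts:
shaper = TokenBucket(rate=125_000, burst=64_000)
```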

It’s so refreshing to read something like this after slogging through all the nonsense that our law professors have written about the Internet for our net neutrality debate. I highly recommend you read the Whole Thing.

H/T Brett Glass.

Mumbai Massacre

The terror attack on Mumbai is an outrage, of course; it’s India’s 9/11 and 7/7. The terrorists attacked India’s most open city, entering by boat and killing random people at locations carefully chosen for traffic and impact. Indian security forces and heroic hotel service workers put down the terrorists, restoring order in a few days. This was kind of personal for me, since I’ve been through Mumbai (or “Bombay,” as we used to call it) something like 50 times over the years, occasionally staying in the hotels that the terrorist scum attacked.

The press reports are now saying that the terrorist attack squad consisted of a mere 10 people. That’s a pretty small number to kill 200 people over the course of three days, so they must have had some local help. I’m waiting to see the rest of the story unfold.

Twitter played an essential role in increasing the terror and the confusion over the attack, as it served as the amplifier for every bogus rumor in circulation and offered exactly zero help with the fundamentals of the “story”: who, where, and why. Nonetheless, the “citizen media” crowd is crowing about the greatness of Twitter-enabled mobs. Sad. The Economist that came in the mail Friday was more authoritative than Twitter as to what actually happened in Mumbai and why.

The appropriate response to this massacre is to take a trip to Mumbai, and failing that to at least go eat at an Indian restaurant. The latter is symbolic only, but if that’s all you can do, at least do that. The civilized world has to hang together in the face of religious-fanatic barbarity, or surely we’ll hang separately.

And yes, I do believe that the Pakistan ISI had a hand in this attack.

Regulation and the Internet

Here’s a little speech I gave to members of the EU Parliament in Brussels on Oct. 14th. The cousins are contemplating a set of Internet access account regulations that would mandate a minimum QoS level and also ban most forms of stream discrimination. This explains why such rules are a bad (and utterly impractical) idea.

The Internet is a global network, and regulating it properly is a matter of global concern. I’d like to share a view of the technical underpinnings of the question, to better inform the legal and political discussion that follows and to point out some of the pitfalls that lie in wait.

Why manage network traffic?

Network management, or more properly network traffic management, is a central focus of the current controversy. The consumer-friendly statements of policy, such as the Four Freedoms crafted by Senator McCain’s technology adviser Mike Powell, represent lofty goals, but they’re constrained by the all-important exception for network management. In fact, you could easily simplify the Four Freedoms as “you can do anything you want except break the law or break the network.” Network management prevents you from breaking the network, which you principally do by using up network resources.

Every networking technology has to deal with the fact that the demand for resources often exceeds supply. On the circuit-switched PSTN, resources are allocated when a call is set up, and if they aren’t available your call doesn’t get connected. This is a very inefficient technology that allocates bandwidth in fixed amounts, regardless of the consumer’s need or his usage once the call is connected. A modem connected over the PSTN sends and receives at the same time, but people talking generally take turns. This network doesn’t allow you to save up bandwidth and use it later, for example. Telecom regulations are based on the PSTN and its unique properties. In network engineering, we call it an “isochronous network” to distinguish it from technologies like the old Ethernet, which was the model link-layer technology when the DoD protocol suite was designed.

The Internet uses packet switching technology, where users share communications facilities and bandwidth is allocated dynamically. Dynamic bandwidth allocation, wire-sharing, and asynchrony mean that congestion appears and disappears at random, sub-second intervals. Packets don’t always arrive at switching points at the most convenient times, just as cars don’t run on the same rigorous schedules as trains.
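
A toy illustration of that randomness, assuming Bernoulli packet arrivals whose average exactly matches the switch’s service rate – the queue still builds and drains unpredictably:

```python
import random

# Why packet networks see transient congestion: arrivals are random, so a
# queue forms whenever several packets happen to land in the same tick,
# then drains again. Average load here equals capacity; parameters are
# illustrative.

random.seed(1)
SERVICE_RATE = 10                 # packets forwarded per tick
queue = 0
for tick in range(20):
    arrivals = sum(random.random() < 0.5 for _ in range(20))  # mean 10
    queue = max(0, queue + arrivals - SERVICE_RATE)
    print(f"tick {tick:2d}: arrived {arrivals:2d}, queued {queue:2d}")
```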

How fast is Internet traffic growing?

It depends on whose numbers you like. Andrew Odlyzko claims it’s up 50-60% over last year, a slower rate of growth than we’ve seen in recent years. Odlyzko’s method is flawed, however, as he only looks at public data, and there is good reason to believe that more and more traffic is moving off the public Internet and its public exchange points to private peering centers. Nemertes collects at least some data on private exchanges and claims a growth rate somewhere between 50% and 100%.

The rate of growth matters to the ongoing debates about Internet regulation. If Odlyzko is right, the rate of growth is lower than the rate at which Moore’s Law makes digital parts faster and cheaper, so there’s no problem: routine replacement of equipment will keep up with demand (leaving out the analog costs that aren’t reduced by Moore’s Law). If Nemertes is right, user demand outstrips Moore’s Law and additional investment is needed in network infrastructure. Increased investment needs to be covered by government subsidies or by the extraction of additional value from the networks by their owner/operators. Subsidy isn’t going to happen while the economy teeters on the edge of collapse, so the high-growth conclusion argues against regulations designed to preserve the legacy service model. It’s a vital question.
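
A back-of-the-envelope check of that comparison, using the growth figures cited in this post; the verdict depends heavily on which Moore’s Law doubling period you assume (18 versus 24 months), which is part of why the question stays contested:

```python
# Compare cited traffic growth rates with annualized Moore's Law.
# Growth figures are the ones cited in this post; the doubling periods
# are the two commonly assumed values.

for months in (18, 24):
    moore = 2 ** (12 / months) - 1          # annual improvement rate
    print(f"Moore's Law, {months}-month doubling: +{moore:.0%}/yr")
    for label, growth in [("Odlyzko low", 0.50), ("Odlyzko high", 0.60),
                          ("Nemertes high", 1.00)]:
        verdict = "outstrips" if growth > moore else "stays within"
        print(f"  {label}: +{growth:.0%}/yr {verdict} it")
```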

A couple of new data points emerged this week. Switch and Data, operator of PAIX public exchange points in Palo Alto and New York, says its traffic grew 112% last year:

International networks are making the decision to peer in the United States to reduce transit time between countries and accelerate the performance of U.S. and other global websites in their home markets. This is important due to the explosive growth of Web 2.0 with its bandwidth intensive websites for social networking, rich digital content, and business software applications. Exchanging traffic directly between content and end user networks also significantly reduces Internet transit expense which has been a rapidly growing cost for companies as their traffic volumes soar.

At the Switch and Data New York peering center, traffic was up an astonishing 295%.

Combining these numbers with what we know about the Content Delivery Networks that deliver as much as half of the Internet’s traffic, I think we can reasonably conclude that comprehensive measurement of Internet traffic would support the theory that traffic still grows at an increasing rate. One side effect of the increased use of CDNs and private peering is less certainty about the overall state of Internet traffic. Studies confined to public data are less and less useful, as many researchers have been saying for years.

At any rate, there’s considerable uncertainty about this question at the moment, which argues that the Internet needs a Nate Silver to pierce the fog of conflicting polls.

Cross-posted at CircleID.

Army loves video games

This does not surprise me at all: Army to spend $50M on video games

The U.S. Army plans to spend some $50 million over five years on combat video games to train soldiers, according to a report in Stars and Stripes.

Next, the CIA will spend a few million on reruns of 24.