Reaction to BitTorrent story

My article in The Register yesterday about BitTorrent and UDP got some attention. It was a top story on Techmeme and was Slashdotted. Here’s the Techmeme link set: Slyck, DSLreports, TorrentFreak, Ars Technica, Icrontic, Joho the Blog, TMCnet, GigaOM, Industry Standard, TechSpot.

While most of the discussion went to questions of motivation – I’m alleged to be a telco shill for criticizing a system the telcos are OK with – some was actually quite substantial. It’s good to get these issues under the microscope.

More links: Canadian Broadcasting Company, Slashdot, Tales of the Sausage Factory; a few hundred more at Google.

I talked to a couple of the BitTorrent guys today – chief engineer and nemesis Stanislav Shalunov not among them, unfortunately – and they vehemently denied any intention of evading the Bell Canada traffic-shaping system. The reports from Canada that motivated me to write the piece say the new system does in fact evade Bell Canada’s filters, which will have to be updated as uTorrent 1.9 comes into wider use, or replaced with more capable equipment.

It remains to be seen whether that upgrade will also catch VoIP and gamers in the throttling net. It’s interesting that the author of the reports on Canada, Karl Bode, is now playing dumb, all the better to be left out of the counter-PR campaign.

Alarming Title: BitTorrent declares war on the Internet

See The Register for my analysis of the latest tweak in BitTorrent:

Gamers, VoIP and video conference users beware. The leading BitTorrent software authors have declared war on you – and any users wanting to wring high performance out of their networks. A key design change in the P2P application promises to make the headaches faced by ISPs so far look like a party game. So what’s happened, and why does it matter?

Upset about Bell Canada’s system for allocating bandwidth fairly among internet users, the developers of the uTorrent P2P application have decided to make the UDP protocol the default transport protocol for file transfers. BitTorrent implementations have long used UDP to exchange tracker information – the addresses of the computers where files could be found – but the new release uses it in preference to TCP for the actual transfer of files. The implications of this change are enormous.

As BitTorrent implementations follow uTorrent’s lead – and they will, since uTorrent is owned by BitTorrent Inc, and is regarded as the canonical implementation – the burden of reducing network load during periods of congestion will shift to the remaining TCP uses, the most important of which are web browsing and video streaming.
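To make the asymmetry concrete, here’s a minimal sketch of the point at issue – my own illustration, not uTorrent’s code. A TCP-like sender obeys multiplicative decrease and halves its rate when it sees packet loss; a bare UDP sender has no kernel-imposed brake at all, so any backoff has to be built into the application:

```python
def offered_load(protocol: str, rate_mbps: float, loss_event: bool) -> float:
    """How a sender's offered load responds to a packet-loss event.

    A TCP-like sender halves its congestion window on loss (multiplicative
    decrease). A naive UDP sender keeps right on transmitting -- congestion
    control, if any, is entirely up to the application.
    """
    if protocol == "tcp" and loss_event:
        return rate_mbps / 2  # TCP's reaction to congestion
    return rate_mbps          # bare UDP: no built-in reaction

print(offered_load("tcp", 10.0, loss_event=True))  # 5.0
print(offered_load("udp", 10.0, loss_event=True))  # 10.0
```

When TCP flows and unresponsive UDP flows share a congested link, it’s the TCP flows that yield, which is exactly why shifting bulk transfers to UDP shifts the burden onto web browsing and streaming.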

Several commenters are upset with the article, mostly because ISPs don’t provide them with unlimited bandwidth. There’s not much I can do for those folks.

A few others claim that BitTorrent over UDP has a congestion control algorithm they feel is in some way equivalent to TCP’s, but this argument is flawed on a couple of levels. For one, many routers have tweaks in their discard logic that prefer UDP over TCP. This is a key problem with the widespread use of UDP for purposes other than those for which it was intended.
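The discard-logic point can be illustrated with a RED-style drop curve. The thresholds and the UDP bias below are invented for illustration – no particular vendor’s router is being quoted here – but the shape of the idea is standard: drop probability ramps up as the queue fills, and a per-protocol bias shifts the pain:

```python
def drop_probability(queue_fill: float, protocol: str) -> float:
    """Illustrative RED-style drop curve with a per-protocol bias.

    queue_fill is queue occupancy in [0, 1]. Below min_th nothing is
    dropped; above max_th everything is. The 0.5 UDP multiplier is a
    hypothetical bias of the kind the text describes, not a real config.
    """
    min_th, max_th = 0.4, 0.9
    if queue_fill < min_th:
        return 0.0
    if queue_fill >= max_th:
        return 1.0
    p = (queue_fill - min_th) / (max_th - min_th)
    if protocol == "udp":
        p *= 0.5  # hypothetical: drop UDP half as aggressively as TCP
    return p

# At 65% queue occupancy, the biased router punishes TCP harder than UDP.
print(drop_probability(0.65, "tcp") > drop_probability(0.65, "udp"))  # True
```

If gear in the field is biased this way, moving bulk traffic to UDP doesn’t just sidestep TCP’s backoff – it sidesteps part of the router’s congestion response as well.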

UPDATE: The explicit goal of this new tweak is actually to create a more friendly variant of the existing TCP congestion avoidance algorithm, and it was the pirates at Broadband Reports who said otherwise.
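For the curious, the “more friendly” approach works roughly like this: instead of waiting for packet loss, the sender measures queuing delay and backs off as delay approaches a target. The sketch below is my own simplification of the idea (which eventually became BitTorrent’s uTP and the LEDBAT algorithm), not BitTorrent’s actual code; the units and gain are invented:

```python
def delay_based_adjust(cwnd: float, queuing_delay_ms: float,
                       target_ms: float = 100.0, gain: float = 1.0) -> float:
    """Adjust a congestion window from a delay measurement.

    Core idea of delay-based control: when measured queuing delay is below
    the target, grow the window; when it exceeds the target, shrink it.
    This yields to loss-based TCP, which keeps pushing until queues fill.
    """
    off_target = (target_ms - queuing_delay_ms) / target_ms
    return max(1.0, cwnd + gain * off_target)

print(delay_based_adjust(10.0, 50.0))   # 10.5 -- below target, grow
print(delay_based_adjust(10.0, 150.0))  # 9.5  -- above target, shrink
```

Whether this plays out as “friendlier” on real networks depends on deployment details – delay measurement noise, competing loss-based flows, and the router biases noted above.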

What remains to be seen, however, is what the actual effects will be in large-scale deployment, and whether the genie can be forced back in the bottle if they’re undesirable.

UPDATE 2: I’ve added “Alarming Title” to the title. This piece is getting a lot of people excited.

A good synopsis of the Internet

Catherine Rosenberg, a professor with the University of Waterloo’s Department of Electrical and Computer Engineering, has written a great synopsis of the Internet for our cousins to the North:

The founding principle of the Internet is resource sharing and hence to deliver an appropriate end-to-end service, some level of co-ordination and traffic control is needed to ensure network performance does not collapse. This is even more true now as the last few years have seen massive increases in Internet traffic due in large part to the proliferation of “bandwidth hungry” applications such as games, peer-to-peer file transfers and increasingly complex, enriched web pages. Added to this is the “all you can eat” economic model promoted by the ISPs, an approach that entices users to always consume more, and of course the fact that the number of Internet users keeps on increasing.

So what does controlling the traffic mean? It means keeping the traffic entering the network under a certain threshold to avoid performance collapses that would affect everyone. And this is what traffic shaping does, by, for example, limiting the bandwidth available for certain types of applications that are less time sensitive in order to keep more bandwidth available for other applications that are more time sensitive, and used by the greater number of subscribers.

While some would argue that this is done “naturally” with Transmission Control Protocol, the reality is that TCP alone is not enough to avoid congestion and spread the burden of congestion as fairly as possible to all those using the congested area.
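For readers wondering what “keeping the traffic entering the network under a certain threshold” looks like mechanically, shaping gear commonly uses a token bucket per traffic class. This is a generic sketch of the technique, not Bell Canada’s implementation; the rates are arbitrary:

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: a traffic class is held to `rate`
    bytes/sec on average, with bursts of up to `burst` bytes allowed."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the limit: the shaper delays or drops the packet

bucket = TokenBucket(rate=1000.0, burst=500.0)  # 1 KB/s, 500-byte bursts
print(bucket.allow(400))  # True: within the burst allowance
print(bucket.allow(400))  # False: bucket nearly empty until it refills
```

Applied per protocol class, this is exactly the “limiting the bandwidth available for certain types of applications” that Rosenberg describes.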

It’s so refreshing to read something like this after slogging through all the nonsense that our law professors have written about the Internet for our net neutrality debate. I highly recommend you read the Whole Thing.

H/T Brett Glass.

Regulation and the Internet

Here’s a little speech I gave to members of the EU Parliament in Brussels on Oct. 14th. The cousins are contemplating a set of Internet access account regulations that would mandate a minimum QoS level and also ban most forms of stream discrimination. This explains why such rules are a bad (and utterly impractical) idea.

The Internet is a global network, and regulating it properly is a matter of global concern. I’d like to share a view of the technical underpinnings of the question, to better inform the legal and political discussion that follows and to point out some of the pitfalls that lie in wait.

Why manage network traffic?

Network management, or more properly network traffic management, is a central focus of the current controversy. The consumer-friendly statements of policy, such as the Four Freedoms crafted by former FCC chairman Michael Powell, now Senator McCain’s technology adviser, represent lofty goals, but they’re constrained by the all-important exception for network management. In fact, you could easily simplify the Four Freedoms as “you can do anything you want except break the law or break the network.” Network management prevents you from breaking the network, which you principally do by using up network resources.

Every networking technology has to deal with the fact that the demand for resources often exceeds supply. On the circuit-switched PSTN, resources are allocated when a call is set up, and if they aren’t available your call doesn’t get connected. This is a very inefficient technology that allocates bandwidth in fixed amounts, regardless of the consumer’s need or his usage once the call is connected. A modem connected over the PSTN sends and receives at the same time, but people talking generally take turns. This network doesn’t allow you to save up bandwidth and use it later, for example. Telecom regulations are based on the PSTN and its unique properties. In network engineering, we call it an “isochronous network” to distinguish it from technologies like the old Ethernet that was the model link layer technology when the DoD protocol suite was designed.

The Internet uses packet switching technology, where users share communications facilities and bandwidth is allocated dynamically. Dynamic bandwidth allocation, wire-sharing, and asynchrony mean that congestion appears and disappears on random, sub-second intervals. Packets don’t always arrive at switching points at the most convenient times, just as cars don’t run on the same rigorous schedules as trains.

Canadian regulators smarter than Americans

Canada’s Internet users have won a measure of victory over bandwidth hogs. In a ruling from the CRTC, Canada’s FCC, Bell Canada is permitted to continue managing network over-use:

Bell Canada today won a largely clear victory in an anti-throttling lawsuit filed with the Canadian Radio-television and Telecommunications Commission (CRTC). The government body has issued a ruling dismissing claims by Internet providers using part of Bell’s network that accused the carrier of unfairly throttling the connection speeds of their services while also constricting its own. These rivals, represented by the Canadian Association of Internet Providers (CAIP), had accused Bell of trying to hinder competition and violating the basic concepts of net neutrality by discouraging large transfers.

The CRTC’s dismissal is based on the observation that peer-to-peer usage does appear to have a detrimental impact on Bell’s network and so requires at least some level of control to keep service running properly for all users. It also rejects neutrality concerns by claiming that Bell’s throttling system, which uses deep packet inspection to investigate traffic, is adjusting speed and doesn’t restrict the content itself.

Bell hails its successful defense as proof that those running online networks are “in the best position” to judge how their networks are managed.

Canada’s Larry Lessig, a populist/demagogue law professor named Michael Geist, was heartbroken over the decision, and pro-piracy web site Ars Technica shed a few tears as well:

The proceeding was also notable for the frank admissions from other large ISPs like Rogers—they admitted that they throttle traffic on a discriminatory basis, too. It also produced wild allegations from companies like Cisco that “even if more bandwidth were added to the network, P2P file-sharing applications are designed to use up that bandwidth.” Such assertions allow the ISPs to claim that they must be able to throttle specific protocols simply to stay afloat—survival is at stake.

This is (to put it politely) highly debatable.

Actually it’s not debatable, not by sane people anyhow. Residential broadband is as cheap as it is only because ISPs can count on people sharing the wires in a civilized fashion. People who keep their broadband pipes constantly saturated take resources away from their neighbors. There are alternatives, of course. You can buy a T-1 line with a Service Level Agreement that you can saturate with all the traffic you want. In the US, count on paying $400/mo for 1.5 Mb/s upload and download. Want something cheaper? Learn to share.

Canada is widely regarded as a more left wing, business-hostile country than the US. How to account for the fact that the CRTC got this issue right while Bush’s FCC got it wrong in the Comcast case?

Just Another Utility

Critics of Obama FCC transitioner Susan Crawford have correctly pointed out that she’s made some odd-sounding remarks to the effect that Internet access is just another utility, like water, power, or sewer service. If the intent of this remark is to suggest that everybody needs access to the Internet these days, just as much (or nearly as much) as we need electricity and running water, nobody has much of a problem with this observation, other than standard hyperbole objections.

But utilities in the USA tend to be provided by the government, so it’s reasonable to draw the implication from Crawford’s comparison that the government should be providing Internet access. This interpretation is underscored by her frequent complaints about the US’s ranking versus other countries in the broadband speed and price sweepstakes.

If you continually advocate for more aggressive spending to win a supposed international broadband arms race, minimize the effectiveness of private investment, and tout the planet’s fastest and cheapest Internet service as a necessity of life, you’re going to be regarded as either a garden-variety socialist or an impractical dreamer.

Honorably Mentioned

The Sidecut Reports ranking of the Top 10 Net Neutrality Influencers has some interesting honorables:

Honorable Mention: Tim Wu, Columbia Law School; Kyle McSlarrow, NCTA; Eric Schmidt, Google; Chris Libertelli, eBay/Skype; Gigi Sohn, Public Knowledge; Jessica Rosenworcel, Senate Commerce Committee; Jonathan Adelstein, FCC; Phil Weiser, University of Colorado; Richard Bennett, blogger/independent network engineer and self-confessed geek.

Hmmm…I don’t know if this is entirely credible. But you never know.

Thirty Profiles

Dave Burstein of DSL Prime has posted profiles of 30 FCC candidates to his web site, including one transition team member:

Susan Crawford, now teaching at Michigan, also has enormous respect from her peers and would bring international perspective from her role at ICANN setting world Internet policy

The selection of Crawford to join Kevin Werbach on the FCC transition team has already gotten some of my colleagues on the deregulatory side pretty excited, as she has the image of being a fierce advocate of a highly-regulated Internet. And indeed, she has written some strong stuff in favor of the “stupid network” construct that demands all packets be treated as equals inside the network. The critics are missing something that’s very important, however: both Werbach and Crawford are “Internet people” rather than “telecom people,” and that’s a very important thing. While we may not like Crawford’s past willingness to embrace a neutral routing mandate, the more interesting question is how she comes down on a couple of issues that trump neutral routing: network management and multi-service routing.

We all know by now that the network management exception is more powerful than Powell’s “Four Freedoms” where the rubber meets the road, but we lack any clear guidance to ISPs as to how their management practices will be evaluated. Clarification of the rules is as much a benefit to carriers as it is to consumers. The one way to ensure that we all lose is to keep lumbering along in the murk of uncertain authority and secret rules. Internet people are going to ask their candidates the right questions, and anybody who can satisfy both Werbach and Crawford will have to be a good choice. Check Werbach’s web site for his papers. Unfortunately, the most interesting of them is not yet in print: “The Centripetal Network: How the Internet Holds Itself Together, and the Forces Tearing it Apart”, UC Davis Law Review, forthcoming 2008. Perhaps he’ll post a draft.

The question of multi-service routing is also very important. Crawford has written and testified to the effect that the Internet is the first global, digital, multi-service network, which is substantially correct. The Internet is not fully multi-service today, however, and can’t be unless it exposes multiple service levels at the end points for applications to use easily. The generic public Internet offers a single transport service that has to meet the needs of diverse applications, which is not really an achievable goal in the peer-to-peer world.
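The closest thing today’s endpoints offer to multiple service levels is DiffServ marking – an application can set the DSCP bits on its packets, though the public Internet largely ignores or strips them at network boundaries, which is precisely the gap described above. A small sketch (standard socket API, Linux-style behavior assumed):

```python
import socket

# DSCP "Expedited Forwarding" (code point 46), shifted into the ToS byte.
EF_TOS = 46 << 2

def mark_expedited(sock: socket.socket) -> int:
    """Request expedited treatment for a socket's traffic.

    This only marks the packets; every network along the path is free to
    honor, ignore, or re-mark the bits -- which is why marking alone
    doesn't make the Internet a multi-service network.
    """
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(mark_expedited(s) == EF_TOS)
s.close()
```

A real multi-service Internet would need these markings to carry enforceable meaning across network boundaries, not just within a single operator’s domain.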

Missing the point of the Internet

Network neutrality advocates have been preening and cooing since the election as they expect the Obama FCC and the new Democratic Party-dominated Congress to enact new laws and regulations advancing their pet cause. They got some support today from an unexpected quarter when the Cato Institute published a paper by graduate student Tim Lee echoing and supporting their main argument:

An important reason for the Internet’s remarkable growth over the last quarter century is the “end-to-end” principle that networks should confine themselves to transmitting generic packets without worrying about their contents. Not only has this made deployment of internet infrastructure cheap and efficient, but it has created fertile ground for entrepreneurship. On a network that respects the end-to-end principle, prior approval from network owners is not needed to launch new applications, services, or content.

Tim Lee, bless his heart, is wrong about the importance of the Internet’s end-to-end architecture. While the Internet, along with all other computer-based networks, certainly does have such an architecture, it’s not the only architecture or even the most important one in the mix. The most important part of the Internet is its “network-to-network” architecture, because that’s the part that makes it what it is. The Internet is only an internet because network operators have agreed to exchange traffic with each other according to terms that they develop among themselves without government interference. This exchange of traffic is what makes it an interesting place.

Internetwork packet exchange is not as simplistic as network neutrality advocates make it out to be. Network operators do not simply forward packets first-come-first-served to anybody and everybody for the end-to-end layer to sort out; they discriminate in all sorts of ways to provide good service to as many people as possible at a reasonable price. Some network operators offer different tiers of service to different customers, and exchange traffic with other networks accordingly. This is good, but it’s not the “stupid network” that our regulators want to see.

Network neutrality is an attempt to shackle the Internet with regulations that mirror a failed model of network architecture, to give a victory to a failed vision by government fiat that it could not achieve in the market. The government should not be picking winners and losers in the competition among network architectures.

Even if you don’t accept that argument, there’s another reason that the proposed regulations should be rejected: the Internet is a technology, and technologies can always be expected to improve over time as parts to build them become cheaper and faster. Net neutrality is a backward-looking agenda that seeks to freeze the Internet core at a particular level of technology. This can only have the effect of hastening its obsolescence, and make no mistake about it, it will be obsolete some day. Nostalgia has no place in technology regulation.

Indeed, Tim’s argument against net neutrality regulations is weak and non-specific. It’s a good reminder that advocates only make arguments about unintended consequences, slippery slopes, and camel’s noses when they’ve lost the argument.

Any attempt to add new regulations to the Internet should be examined from a bias against regulation. If a case can be made that new regulations will make things better, well and good. But arguments about restoring a once golden status quo should be rejected out of hand as incoherent and reactionary.

AT&T’s Dubious Behavior

You may not have noticed in the crush of events, but AT&T announced a new broadband service option last week, up to 18 Mb/s DSL:

AT&T Inc. (NYSE:T) today announced it will launch AT&T U-verseSM High Speed Internet Max 18 on Nov. 9, offering speeds of up to 18 Mbps downstream. Exclusively available for AT&T U-verse TV customers, Max 18 is the fastest high speed Internet package available from the nation’s leading provider of broadband services.

Apparently this is simply a pricing option for existing U-Verse TV customers that allows them to use more of their pipe for downloading when they aren’t using it for TV. The general data rate of the AT&T pipe is 25 Mb/s without pair bonding, of which 2 – 16 Mb/s is used for TV. Under the old plan, Internet downloads were capped at 12 Mb/s, which generally left enough for two HDTV streams, except when it didn’t, and under those circumstances AT&T borrowed from Internet capacity to make the TV keep looking fairly good. AT&T should be able to offer a 25 Mb/s download tier without changing any hardware, but they don’t.

Generally speaking, we’re all in favor of faster downloads whenever possible, but this announcement is troubling for one very big reason: the only way you can get this service is to buy AT&T’s TV service. This bundling sets the giant of the telcos apart from competitors Verizon, Comcast, and Qwest and raises concerns that should have the consumer groups who’ve promoted the net neutrality agenda hopping mad.

The two aspects of network operation that deserve regulatory scrutiny are disclosure and anti-competitive practices, and this behavior falls squarely in the anti-competitive nexus. The other providers of triple- and quad-play services will gladly sell all tiers of Internet service to anyone in the service areas regardless of which other services they choose to buy. They typically discount Internet service for TV and phone customers, but it’s certainly available without purchasing the other services, and for less than it would cost to buy them as well.

This mandatory bundling is unfortunately consistent with AT&T’s role as the black sheep of net neutrality. It was their CEO’s remarks, after all, that set off the current controversy back in 2005: Ed Whitacre said Google and Vonage weren’t going to “use his pipes for free.” This got Google engaged in a regulatory program and unleashed a massive infusion of cash into the debate over the regulation of Internet access services, not to mention an army of Google-friendly advocates such as Larry Lessig and Tim Wu’s Free Press organization, the muscle behind the Save the Internet blog. And when the FCC overstepped its authority and slapped Comcast on the wrist, AT&T insisted the cable company should accept its fate silently and take one for the team instead of challenging the unlawful order in court. Their gall is breathtaking.

The consumer advocates have been strangely silent about this clearly anti-competitive bundling. Why should I have to buy AT&T’s TV service to get the top tier of their Internet access service? For years I bought Internet access from Comcast and TV from DirecTV, and was very pleased with the result. I would probably still do that if DirecTV had not ended their relationship with TiVo and tried to force their sub-standard DVR on me. And if I choose to do so today, I can buy the highest tier Comcast offers in my neighborhood without signing up for their TV service, and at a fairly reasonable price.

So why is AT&T trying to gouge the consumer, and why is the net neutrality movement silent about it? Consumers Union is all up in arms about cable companies converting analog customers to digital along with the rest of the country in February, a painfully silly campaign that argues for unfair regulation. Why not address a real issue instead?