European Event

There’s nothing like a quick trip to the Old Country to advise regulators on the folly of our New World ways. I’ll be speaking in Brussels on Oct. 14 and in London on the 15th to help our cousins structure their telecom regulations appropriately. These events are coordinated by my friends at the Institute for Policy Innovation, an excellent free market think tank.

Comcast was right, FCC was wrong

A fellow named Paul Korzeniowski has written a very good, concise piece on the Comcast action at the FCC for Forbes, Feds And Internet Service Providers Don’t Mix. He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists who constantly use emotionally charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.

What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship”; it’s rational network management:

Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some of its transmissions.

“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.

Korzeniowski then explains the facts of life:

The reality is that all ISPs are overbooked–they have sold more bandwidth than they can support.

This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.

ISPs actually do have a “busy signal option”: it’s the Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.
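For the curious, here’s a rough sketch of what that kind of “busy signal” looks like on the wire, written in Python with the scapy packet library. The addresses, ports, and sequence number are hypothetical, and this is my illustration of the general technique, not Comcast’s actual implementation:

```python
# Illustrative sketch only: injecting a TCP Reset to tear down a session,
# the packet-network equivalent of the PSTN busy signal. All values are
# hypothetical.
from scapy.all import IP, TCP, send

def send_busy_signal(src_ip, dst_ip, sport, dport, seq):
    """Tear down a TCP session by injecting a RST matching its 4-tuple."""
    rst = IP(src=src_ip, dst=dst_ip) / TCP(
        sport=sport, dport=dport,
        seq=seq,       # must fall within the receiver's window to be honored
        flags="R",     # the Reset bit
    )
    send(rst, verbose=False)

# Hypothetical flow, for illustration
send_busy_signal("203.0.113.10", "198.51.100.20", 6881, 51413, 123456789)
```

The point being that a Reset is an ordinary, well-defined TCP mechanism, not some exotic forgery.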

“Overbooking” bandwidth isn’t a bad thing; in fact, it’s central to the economics of packet switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are on-line and what they’re doing: you each get 4 kHz. That’s the law.

But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (and this is true whether you use DSL or cable; you just share at different points in these technologies), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.

Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link and get a ton of graphics sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands so few (especially in the upstream direction, where your clicks move) that there’s never been a need to control or meter it.
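A toy simulation makes the economics concrete. Here’s a minimal Python sketch comparing a PSTN-style fixed allocation with a shared packet-switched pool; the capacity and demand numbers are invented for illustration:

```python
# Why overbooking works: most users are idle at any instant, so a shared
# pool gives the active ones far more than a fixed slice would.
# All numbers are illustrative, not measurements.
import random

USERS = 50
POOL = 10_000            # total upstream capacity, kb/s (hypothetical)
CIRCUIT = POOL / USERS   # PSTN-style fixed slice per user

random.seed(1)
# At a given instant, roughly one user in five is bursting; the rest are idle.
demands = [random.choice([0, 0, 0, 0, 5_000]) for _ in range(USERS)]
active = [d for d in demands if d > 0]
per_active = POOL / max(len(active), 1)

print(f"Circuit switching: {CIRCUIT:.0f} kb/s per user, busy or idle")
print(f"Packet switching:  {len(active)} active users share the pool, "
      f"about {per_active:.0f} kb/s each")
```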

Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now the abundant broadband upstream is constantly occupied: instead of an interactive application that sends a click now, a click five seconds from now, and a click a minute from now, you’ve got applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.

One approach is to cap upstream traffic:

However, the “all you can eat” model may no longer be viable–a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.

In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.

Capping has its critics, mostly the same people who object to traffic management as well:

For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.

Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach. Humans respond to caps month-by-month, but networks experience congestion and overload millisecond-by-millisecond. So the sensible engineering approach is to manage traffic in pretty much the way Comcast does it today: identify the bandwidth requirements of applications and allocate bandwidth to those that need it the most, as we would with any scarce resource. Grant transmission opportunities (that’s a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
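To make “transmission opportunities” concrete, here’s a minimal Python sketch of priority scheduling, granting interactive traffic the first shot at the wire. The class names and priority values are my assumptions for illustration, not anyone’s deployed configuration:

```python
# Sketch of priority scheduling: interactive traffic (VoIP) gets
# transmission opportunities ahead of bulk transfers.
import heapq
from itertools import count

PRIORITY = {"voip": 0, "web": 1, "bulk": 2}   # lower = served first (assumed)
_order = count()                               # preserves FIFO within a class
queue = []

def enqueue(packet, traffic_class):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_order), packet))

def transmit_next():
    """Pop the highest-priority packet when a transmission slot opens."""
    if queue:
        return heapq.heappop(queue)[2]

enqueue("HDTV file chunk", "bulk")
enqueue("VoIP frame", "voip")
print(transmit_next())   # -> "VoIP frame": the phone call goes first
```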

Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena–it’s not a great business.

In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory–and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.

The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.

Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.

Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic. This is sound network engineering, but the current state of law makes it illegal.
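For concreteness, here’s a rough sketch of what such a per-customer, per-class quota might look like in code; the traffic classes and quota values are mine, chosen purely for illustration:

```python
# Sketch of the hybrid scheme: each customer gets a quota for each class
# of traffic. Classes and quota numbers are assumptions for illustration.
QUOTAS = {"voip": 1.0, "web": 5.0, "bulk": 4.0}   # Mb/s per class (assumed)
usage = {}

def admit(customer, traffic_class, rate):
    """Admit a new flow only while the customer is inside the class quota."""
    used = usage.setdefault(customer, dict.fromkeys(QUOTAS, 0.0))
    if used[traffic_class] + rate > QUOTAS[traffic_class]:
        return False                   # over quota for this class; defer it
    used[traffic_class] += rate
    return True

print(admit("alice", "bulk", 3.5))     # True: within the bulk quota
print(admit("alice", "bulk", 1.0))     # False: would exceed 4.0 Mb/s of bulk
```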

This is not good.

Cross-posted to CircleID.

UPDATE: See Adam Thierer’s comments on this article at Tech Lib.



Google’s Telephony Patent Application not Novel

Google has apparently filed an application for a system that allows bandwidth providers to bid on phone calls:

Google’s patent is called “Flexible Communication Systems and Methods” and the abstract says:

“A method of initiating a telecommunication session for a communication device include submitting to one or more telecommunication carriers a proposal for a telecommunication session, receiving from at least one of the one or more of telecommunication carriers a bid to carry the telecommunications session, and automatically selecting one of the telecommunications carriers from the carriers submitting a bid, and initiating the telecommunication session through the selected telecommunication carrier.”

Read the full patent here
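Stripped of patent-speak, the claimed method is a simple auction. Here’s a minimal Python sketch of it; the carrier interface (request_bid, setup_call) and the select-by-lowest-price rule are my assumptions, since the abstract doesn’t specify them:

```python
# Sketch of the claimed method: propose a session, collect carrier bids,
# automatically select a winner, and initiate the call through it.
# The carrier API and the pricing rule here are hypothetical.
def initiate_session(proposal, carriers):
    bids = []
    for carrier in carriers:
        bid = carrier.request_bid(proposal)    # hypothetical carrier API
        if bid is not None:
            bids.append((bid["price"], carrier))
    if not bids:
        raise RuntimeError("no carrier bid on this session")
    _, winner = min(bids, key=lambda b: b[0])  # automatic selection
    return winner.setup_call(proposal)         # initiate through the winner
```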

The thing I find interesting about this is that I invented a similar technique in 1997, motivated by the desire to get bandwidth-on-demand for video conferences. If this is granted, it certainly won’t survive a court challenge.

I’ll post some details on my invention, which was never patented, shortly.


Why I don’t like One Web Day

Today is OneWebDay, the annual exercise in promoting the World Wide Web and touting its many benefits. Each year the event has a theme, and this year’s is something to do with the American election, which is a fine, if somewhat parochial, issue for a global event.

OWD is the brainchild of law professor Susan Crawford, one of the more passionate advocates of a stupid Internet (their expression) in which ISPs and Internet wholesalers have to treat all packets the same way. While Crawford is sincere, I think the exercise is misguided.

There is more to the Internet than the Web: the Internet is a general-purpose network that needs to carry real-time communications such as VoIP and Video Chat alongside Web traffic, P2P, and other kinds of large file transfer systems.

The call for a monolithic traffic handling and regulatory system comes from the misperception that all forms of traffic look and act like web traffic. This is clearly not the case, as we’ve argued until we’re blue in the face on this blog and in print.

One Web Day privileges web use over these other equally important uses of the Internet, and reinforces the myth that a dumb Internet is essential to the economy, politics, freedom, and the like. In fact, a functional network forms the basis of all human uses, for good and for ill.

Next year I’d like to see a “One Internet Day” that touts the projects that aim to improve the Internet. I’d make a sign and go to a rally for that. But “One Web Day” doesn’t do it for me.


Secret laws are not law

While looking for the essence of Lessig’s “code is law” formulation, I happened on this little gem:

If there is one thing clear about the value we demand of East Coast Code, it is transparency. Secret laws are not law. And if there is one thing clear about the recent panic about privacy, it is that much of the anxiety was about the secrets hidden within closed code. Closed code hides its systems of control; open code can’t. Any encryption or identification system built into open code is transparent to those who can read the code, just as laws are transparent to those who can read Congress’ code – lawyers.

(“East Coast Code” means laws and government regulations.) Kinda makes you wonder why Lessig wasn’t critical of the rabbit-out-of-the-hat regulations the FCC imposed on Comcast.

Oh well.


Comcast files their compliance plan

Today was the deadline for Comcast to tell the FCC how its existing congestion management system works, as well as how its “protocol agnostic” replacement is going to work. To the dismay of some critics, they’ve done just that in a filing that was hand-delivered as well as electronically filed today. It will be posted to the Comcast web site shortly.

The filing corrects some of the false allegations made by critics with respect to privacy, making it very clear that the existing system simply inspects protocol headers (“envelopes”) and not personal data. David Reed in particular got himself worked into a tizzy over the idea that Comcast was deciding which streams to delay based on content, but this is clearly not the case. Inside the IP envelope sits a TCP envelope, and inside that sits a BitTorrent envelope. User data is inside the BitTorrent (or equivalent) envelope, and Comcast doesn’t look at it.
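For the technically inclined, here’s a small Python sketch of what header-only (“envelope”) inspection means: the classifier reads the IP and TCP headers and deliberately never touches the payload. The port-based BitTorrent check is a simplistic stand-in for real protocol classification, not Comcast’s method:

```python
# Sketch of envelope-only inspection: classify by IP/TCP headers alone.
# The payload -- the user's actual data -- is never read.
import struct

def classify(packet: bytes) -> str:
    ihl = (packet[0] & 0x0F) * 4            # IPv4 header length, in bytes
    if packet[9] != 6:                      # protocol field: 6 means TCP
        return "non-TCP"
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    # Everything past the TCP header is user data; it stays unread.
    if 6881 <= sport <= 6889 or 6881 <= dport <= 6889:
        return "BitTorrent (guessed from well-known ports)"
    return "other TCP"
```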

The current system sets a bandwidth quota for P2P, and prevents P2P as a group from exceeding this quota (about 50% of total upstream bandwidth) with new unidirectional upload (AKA file-server-like) streams by tearing down requested new streams with the TCP Reset bit. The system is a bit heavy-handed, but reserving 50% of the network for one class of application seems pretty reasonable, given that no more than 20% of customers use P2P at all.

Nonetheless, the new system will not look at any headers, and will simply be triggered by the volume of traffic each user puts on the network and the overall congestion state of the network segment. If the segment goes over 70% utilization in the upload direction for a fifteen-minute sample period, congestion management will take effect.

In the management state, traffic volume measurement will determine which users are causing the near-congestion, and only those using high amounts of bandwidth will be managed. The way they’re going to be managed is going to raise some eyebrows, but it’s perfectly consistent with the FCC’s order.

High-traffic users – those who’ve used over 70% of their account’s limit for the last fifteen minutes – will have all of their traffic de-prioritized for the next fifteen minutes. While de-prioritized, they still have access to the network, but only after the conforming users have transmitted their packets. So instead of bidding on the first 70% of network bandwidth, they’ll essentially bid on the 30% that remains. This will be a bummer for people who are banging out files as fast as they can, only to have a Skype call come in. Even if they stop BitTorrent, the first fifteen minutes of Skyping are going to be rough. A more pleasant approach would be to let excessive users out of QoS jail with credit for good behavior – if their utilization drops to Skype level, let them out in a few seconds, because it’s clear they’ve turned off their file-sharing program. This may be easier said than done, and it may raise the ire of Kevin Martin, given how irrational he has been in his anti-cable vendetta.
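Reduced to code, the fifteen/fifteen logic is straightforward. Here’s a minimal Python sketch of it as I read the filing; the 70% thresholds come from the plan itself, while the data structures are my own invention:

```python
# Sketch of the fifteen/fifteen scheme: when a segment runs hot, users
# who consumed over 70% of their provisioned rate during the last
# fifteen-minute window get de-prioritized for the next window.
SEGMENT_TRIGGER = 0.70   # upstream utilization that arms management
USER_TRIGGER = 0.70      # share of provisioned rate marking a heavy user

def users_to_deprioritize(segment_utilization, window_stats):
    """window_stats: (user, avg_rate, provisioned_rate) over 15 minutes."""
    if segment_utilization < SEGMENT_TRIGGER:
        return []        # the segment isn't near congestion; manage no one
    return [user for user, avg, provisioned in window_stats
            if avg / provisioned > USER_TRIGGER]

stats = [("light_user", 1.0, 10.0), ("heavy_user", 8.5, 10.0)]
print(users_to_deprioritize(0.75, stats))   # -> ['heavy_user']
```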

The user can prevent this situation from arising, of course, if he wants to. All he has to do is set the upload and download limits in BitTorrent low enough that he doesn’t consume enough bandwidth to land in the “heavy user” classification and he won’t have to put up with bad VoIP quality. I predict that P2P applications and home gateways are going to incorporate controls to enforce “Comcast friendly” operation to prevent de-prioritization. There are other more refined approaches to this problem, of course.

At the end of the day, Comcast’s fifteen/fifteen system provides users with the incentive to control their own bandwidth appetites, which makes it an “end-to-end” solution. The neutralitarians should be happy about that, but it remains to be seen how they’re going to react.

It looks pretty cool to me.

UPDATE: Comcast-hater Nate Anderson tries to explain the system at Ars Technica. He has some of it right, but doesn’t seem to appreciate any of its implications. While the new system will not look at protocol headers (the evil “Deep Packet Inspection” that gets network neophytes and cranks so excited) and won’t use TCP Resets, that doesn’t mean that P2P won’t be throttled; it will be.

That’s simply because P2P contributes most of the load on residential networks. So if you throttle the heaviest users, you’re in effect throttling the heaviest P2P users, because the set of heavy users and the set of heavy P2P users are the same set. So the “disparate impact” will remain even though the “disparate treatment” will end.

But the FCC has to like it, because it conforms to all of Kevin Martin’s rabbit-out-of-the-hat rules. The equipment Comcast has had to purchase for this exercise in aesthetic reform will have utility down the road, but for now it’s simply a tax imposed by out-of-control regulators.

Ars Technica botches another story

Why is it so hard for the tech press to report on the broadband business with some semblance of accuracy? I know some of this stuff is complicated, but if it’s your business to explain technology and business developments to the public, isn’t it reasonable to suppose you’re going to get the facts right most of the time?

Case in point is Matthew Lasar at Ars Technica, the hugely popular tech e-zine that was recently purchased by Conde Nast/Wired for $25 million, healthy bucks for a web site. Lasar is a self-appointed FCC watcher who seems to consistently botch the details on targets of FCC action. The most recent example is a story about a clarification to AT&T’s terms of use for its U-Verse triple-play service. The update advises customers that they may see a temporary reduction in their Internet download speed if they’re using non-Internet U-Verse television or telephone services that consume a lot of bandwidth. Lasar has no idea what this means, so he turns to Gizmodo and Public Knowledge for explanation, and neither of them gets it either. So he accepts a garbled interpretation of some AT&T-speak, filtered through Gizmodo’s misreading, as the gospel truth of the matter:

Ars contacted AT&T and was told by company spokesperson Brad Mays that the firm has no intention of “squeezing” its U-verse customers. “It’s more a matter of the way data comes into and travels around a home,” Mays said. “There are things (use of PCs, video, etc.) that can impact the throughput speed a customer gets. We are not doing anything to degrade the speed, it’s just a fact of the way data travels.”

The AT&T guy is trying to explain to Lasar that U-Verse TV uses the same cable as U-Verse Internet, but U-Verse TV has first call on the bandwidth. The cable’s bandwidth is roughly 25 Mb/s, and HDTV streams are roughly 8 Mb/s. If your household is watching two HDTV shows, 16 of that 25 Mb/s is gone, and the Internet service can only use the remaining 9 Mb/s, a step down from the 10 Mb/s it can get if you’re running one HDTV stream alongside an SDTV stream.
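The arithmetic is simple enough to do in a few lines. Here it is in Python, using the article’s round numbers (25 Mb/s line, roughly 8 Mb/s per HDTV stream); actual U-Verse provisioning varies:

```python
# Worked version of the bandwidth math above, with the article's numbers.
LINE = 25    # Mb/s, approximate U-Verse access-line capacity
HDTV = 8     # Mb/s, approximate per-stream HDTV rate

for streams in (1, 2):
    leftover = LINE - streams * HDTV
    print(f"{streams} HDTV stream(s): {leftover} Mb/s left for Internet data")

# One stream leaves 17 Mb/s, more than a 10 Mb/s Internet tier can use;
# two streams leave 9 Mb/s, which is why downloads temporarily slow down.
```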

This isn’t a very complicated issue, and it shouldn’t be so muddled after multiple calls to AT&T if the writers in question were mildly up-to-speed on IPTV.

Lasar botched another recent story on Comcast’s agreement with the Florida Attorney General to make its monthly bandwidth cap explicit as well, claiming that Comcast had adopted the explicit cap in a vain attempt to avoid a fine:

Ars contacted the Florida AG about this issue, and received the following terse reply: “We believe the change pursuant to our concerns was posted during our investigation.” When asked whether this means that when the AG’s probe began, Comcast didn’t post that 250GB figure, we were told that the aforementioned one sentence response explains everything and to have a nice day.

In fact, the cap was part of its agreement with Florida, as the AG’s office explains on its web site:

Under today’s settlement, reached with Comcast’s full cooperation, the company has agreed not to enforce the excessive use policy without prior clear and conspicuous disclosure of the specific amount of bandwidth usage that would be considered in violation of the policy. The new policy will take effect no later than January 1, 2009.

And everybody who follows the industry knows that. The Comcast cap is also less meaningful than Lasar reports, since Comcast says it will only get tough on customers who exceed the cap and are also in the top 1% of bandwidth consumers, so simply going over 250 GB won’t get you in trouble once everyone is doing it.

The tech press in general and Ars Technica in particular need to upgrade their reporting standards. It’s bad enough when Ars trots out opinion pieces on network neutrality by Nate Anderson thinly disguised as reporting; most sensible readers understand that Anderson is an advocate, and take his “reporting” with the necessary mix of sodium chloride. But Anderson doesn’t consistently get his facts wrong the way Lasar does.

It would be wise for Ars to spend some of the Conde Nast money on some fact-checkers, the better to avoid further embarrassment. We understand that Gizmodo is simply a gadget site that can’t be counted on for deep analysis, and that Public Knowledge is a spin machine, but journalists should be held to a higher standard.


Our Efforts Bearing Fruit

Regular readers are aware of my Op-Ed criticizing Google’s rapacious ways, written for the San Francisco Chronicle and subsequently re-printed in the Washington Times. That doesn’t happen too often, BTW. The Wall St. Journal reports that the Justice Department is paying attention:

WASHINGTON — The Justice Department has quietly hired one of the nation’s best-known litigators, former Walt Disney Co. vice chairman Sanford Litvack, for a possible antitrust challenge to Google Inc.’s growing power in advertising.

Mr. Litvack’s hiring is the strongest signal yet that the U.S. is preparing to take court action against Google and its search-advertising deal with Yahoo Inc. The two companies combined would account for more than 80% of U.S. online-search ads.

Google shares tumbled 5.5%, or $24.30, to $419.95 in 4 p.m. trading on the Nasdaq Stock Market, while Yahoo shares were up 18 cents to $18.26.

For weeks, U.S. lawyers have been deposing witnesses and issuing subpoenas for documents to support a challenge to the deal, lawyers close to the review said. Such efforts don’t always mean a case will be brought, however.

An 80% market share in search ads is not good for democracy, of course, so we applaud the impending suit in advance.


Kevin Martin threatens Comcast

Kevin Martin is upset that Comcast has challenged his authority by filing a lawsuit against the FCC for making up law out of thin air. The Chairman of the FCC expressed his scorn by releasing a statement that makes him sound like one of the dumbest men in America:

“Given Comcast’s past failure to disclose its network management practices to its customers, it is important Comcast respond to the many still-unanswered questions about its new management techniques,” Martin warned in a statement released this afternoon. Most notably, what exactly does Comcast mean when it says it will have a “protocol agnostic” management system in place by the end of the year?

And as for the bandwidth limits that Comcast has now announced: “How will consumers know if they are close to a limit?” Martin asked. “If a consumer exceeds a limit, is his traffic slowed? Is it terminated? Is his service turned off?”

Let’s see if we can help the Chairman:

1. The “end of the year” is December 31, at midnight. In urban areas, people will make noise and drink a lot. It would be good for Kevin Martin to be among them.

2. Comcast has said they’ll write a *very mean letter* to customers over the 250 GB limit and among the top 1% in bandwidth consumption. It was in the papers, but not on the funny page.

3. I won’t define “protocol agnostic,” as that subject was covered, at length, in the order the FCC’s lawyers wrote in the Comcast matter. Martin should have one of them explain it to him.

Where did Bush find this person?

Comcast Appeals

Comcast has appealed the FCC’s crazy order in the DC Circuit today. Here’s the statement:

Although we are seeking review and reversal of the Commission’s network management order in federal court, we intend to comply fully with the requirements established in that order, which essentially codify the voluntary commitments that we have already announced, and to continue to act in accord with the Commission’s Internet Policy Statement. Thus, we intend to make the required filings and disclosures, and we will follow through on our longstanding commitment to transition to protocol-agnostic network congestion management practices by the end of this year. We also remain committed to bringing our customers a superior Internet experience.

We filed this appeal in order to protect our legal rights and to challenge the basis on which the Commission found that Comcast violated federal policy in the absence of pre-existing legally enforceable standards or rules. We continue to recognize that the Commission has jurisdiction over Internet service providers and may regulate them in appropriate circumstances and in accordance with appropriate procedures. However, we are compelled to appeal because we strongly believe that, in this particular case, the Commission’s action was legally inappropriate and its findings were not justified by the record.

It’s a little odd that they have to appeal to resolve the procedural irregularities despite planning to follow the order anyhow. But that’s life.

Media Access Project has already filed appeals in the 2nd, 3rd, and 9th circuits, in an attempt to create a jurisdiction fight that would have to be resolved by the Supremes. MAP wants the court to waive the phase-out period for Comcast’s Sandvine system, but that’s simply a pretext for the jurisdiction fight.

Story in Broadcasting and Cable.