Notable debates in the House of Lords

We’re quite fond of Sir Tim Berners-Lee. As the first web designer, he personally converted the Internet from an odd curiosity of network engineering into a generally useful vehicle for social intercourse, changing the world. That this was a contribution of inestimable value goes without saying. It’s therefore distressing to read that he’s been mumbling nonsense in public fora about Internet management practices.

For all his brilliance, Sir Tim has never really been on top of the whole traffic thing. His invention, HTTP 1.0, did strange things to the Internet’s traffic handling system: his decision to chunk segments into 512-byte pieces tripled the number of packets the Internet had to carry per unit of information transfer, and his decision to open a unique TCP stream for every object (section of text or graphic image) on a web page required each part of each page to load in TCP’s “slow start” mode. Carriers massively expanded the capacity of their pipes in a vain attempt to speed up web pages, as poor performance was designed into Sir Tim’s protocol. Hence the term “world-wide wait” had to be coined to describe the system, and more experienced engineers had to produce HTTP 1.1 to eliminate the tortured delay. This is not to bash His Eminence, but rather to point out that all of us, even the geniuses, have limited knowledge.
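
To make the cost concrete, here’s a rough sketch of the two connection disciplines in Python; the host and paths are placeholders, and response parsing is elided. The point is simply that HTTP 1.0 pays for a TCP handshake and a slow-start ramp on every object, while HTTP 1.1 amortizes both over the whole page:

```python
import socket

HOST, PATHS = "example.com", ["/", "/logo.gif", "/style.css"]  # placeholders

def fetch_http10(host, paths):
    """HTTP 1.0 style: a new TCP connection, and a new slow start, per object."""
    for path in paths:
        s = socket.create_connection((host, 80))   # fresh three-way handshake
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
        while s.recv(4096):                        # server closes when done
            pass
        s.close()

def fetch_http11(host, paths):
    """HTTP 1.1 style: one persistent connection carries every object."""
    s = socket.create_connection((host, 80))       # one handshake, one slow start
    for path in paths:
        s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode())
        # a real client would parse Content-Length here to frame each response
    s.close()
```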

At a House of Lords roundtable last week, Sir Tim took up a new cause by way of complaining about one of the ways that personal information may be obtained on the Internet:

Speaking at a House of Lords event on the 20th anniversary of the invention of the World Wide Web, Berners-Lee said that deep packet inspection was the electronic equivalent of opening people’s mail.

“This is very important to me, as what is at stake is the integrity of the internet as a communications medium,” Berners-Lee said on Wednesday. “Clearly we must not interfere with the internet, and we must not snoop on the internet. If we snoop on clicks and data, we can find out a lot more information about people than if we listen to their conversations.”

Deep packet inspection involves examining both the data and the header of an information packet as it passes a ‘black box’ on a network, in order to reveal the content of the communication.

Like many opponents of the scary-sounding “deep packet inspection,” His Eminence confuses means and ends. There are many ways to obtain personal information on the Internet; the preceding post was about one of them. Given the choice, most of us would gladly surrender some level of information in order to obtain free services or simply better-targeted ads. As long as the Internet is considered a bastion of “free-” (actually, “advertising-supported-“) culture and information, personal information gathering will be the coin of the realm. So it doesn’t much matter whether my privacy is violated by a silly packet-snooping system that I can easily thwart by encrypting my data or by an overly invasive ad placement system; it’s gone either way. If he’s manic about privacy, he should address the practice of information-gathering itself and not simply one means of doing it.

Nonsense is not unknown in the House of Lords, however. One of the most entertaining debates in the history of Western democracy took place in that august body, the infamous UFO debate:

The big day came on 18 January 1979 in the middle of a national rail strike. But the industrial crisis did nothing to dampen interest in UFOs. The debate was one of the best attended ever held in the Lords, with sixty peers and hundreds of onlookers – including several famous UFOlogists – packing the public gallery.

Lord Clancarty opened the three-hour session at 7pm “to call attention to the increasing number of sightings and landings on a world wide scale of UFOs, and to the need for an intra-governmental study of UFOs.” He wound up his speech by asking the Government to reveal publicly what they knew about the phenomenon. And he appealed to the Labour Minister of Defence, Fred Mulley, to give a TV broadcast on the issue in the same way his French counterpart, M. Robert Galley, had done in 1974.

The pro-UFO lobby was supported eloquently by the Earl of Kimberley, a former Liberal spokesman on aerospace, who drew upon a briefing by the Aetherius Society for his UFO facts (see obituary, FT 199:24). Kimberley’s views were evident from an intervention he made when a Tory peer referred to the Jodrell Bank radio telescope’s failure to detect a single UFO: “Does the noble Lord not think it conceivable that Jodrell Bank says there are no UFOs because that is what it has been told to say?”

More than a dozen peers, including two eminent retired scientists, made contributions to the debate. Several reported their own sightings including Lord Gainford who gave a good description of the Cosmos rocket, “a bright white ball” like a comet flying low over the Scottish hills on New Year’s Eve. Others referred to the link between belief in UFOs and religious cults. In his contribution the Bishop of Norwich said he was concerned the UFO mystery “is in danger of producing a 20th century superstition” that sought to undermine the Christian faith.

Perhaps their Lordships will invite His Eminence to observe an actual debate on Internet privacy, now that he’s set the stage with the roundtable. I think it would be absolutely smashing to see 40 of Bertie Wooster’s elderly uncles re-design the Web. Maybe they can add a comprehensive security model to the darned thing.

On a related note, Robb Topolski presented the worthies with a vision of the Web in a parallel universe that sent many scurrying back to their country estates to look after their hedgehogs. Topolski actually spoke about North American gophers, but the general discussion brings to mind the hedgehog’s dilemma of an open, advertising-supported Internet: a system that depends on making the private public is easily exploited.

UPDATE: Incidentally, Topolski’s revisionist history of the Web has been harshly slapped down by Boing-Boing readers, who should be a friendly audience:

Huh? What a bizarre claim. Is he saying that network admins weren’t capable of blocking port 80 when HTTP was getting off its feet?!?

Wha? Even ignoring the fact that network admins at the time _did_ have the tools to block/filter this kind of traffic, this would still have little or nothing to do with endpoint computing power.

Oh, man. This is definitely junk.

Revisionist history in the name of greater freedom is still a lie.

Follow this link to a discussion from 1993 about how to make a Cisco firewall block or permit access to various Internet services by port. HTTP isn’t in the example, but the same rules apply. The power was clearly there.
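
For the curious, the rule in question looks something like the following on a classic IOS router. The list number and interface name are invented for illustration, but first-match filtering by port is exactly the mechanism that 1993 thread describes:

```
! Hypothetical extended access list: block web traffic, pass everything else.
access-list 101 deny tcp any any eq 80
access-list 101 permit ip any any
!
! Apply it inbound on the WAN interface:
interface Serial0
 ip access-group 101 in
```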

Welcome to the NAF, Robb, do your homework next time.


A little bit breathless

The UK has offered some language to the EU regulators on Internet services that would clarify the relationship between users and providers and require full disclosure of management practices by the latter. The measure addresses the prime source of friction between the package of end user freedoms and the network management exception that we currently have in the US, absent a coherent regulatory framework for Internet services.

Most of us would probably say, after reading the whole package, that consumer rights are advanced by it. But most of us aren’t fire-breathing neutrality monsters who can’t be bothered with the practical realities of network operation. The actual document the Brits are circulating is here; pay special attention to the Rationale.

The operative language establishes the principle that there are in fact limits to “running the application of your choice” and “accessing and sharing the information of your choice” on the Internet, which is simply stating some of the facts of life. If you’re not allowed to engage in identity theft in real life, you’re also not allowed to do so on the Internet; if you’re not allowed to violate copyright in real life, you’re also not allowed to do so on the Internet; and so on. Similarly, while you’re allowed to access the legal content and services of your choice, you’re not allowed to access them at rates that exceed the capacity of the Internet or any of its component links at any given moment, nor without the finite delays inherent in moving a packet through a mesh of switches, nor with such frequency as to pose a nuisance to the Internet Community as a whole or to your immediate neighbors. Such is life.

In place of the current text, which touts the freedoms without acknowledging the existing legal and practical limits on them, the amendment would require carriers to disclose service plan limits and actual management practices.

So essentially what you have here is a retreat from a statement that does not accurately describe reasonable expectations of the Internet experience to one that does. You can call it the adoption of a reality-based policy statement over a faith-based one. Who could be upset about this?

Plenty of people, as it turns out. A blog called IPtegrity is hopping mad:

Amendments to the Telecoms Package circulated in Brussels by the UK government seek to cross out users’ rights to access and distribute Internet content and services. And they want to replace it with a ‘principle’ that users can be told not only the conditions for access, but also the conditions for the use of applications and services.

…as is science fiction writer and blogger Cory Doctorow:

The UK government’s reps in the European Union are pushing to gut the right of Internet users to access and contribute to networked services, replacing it with the “right” to abide by EULAs.

…and Slashdot contributor Glyn Moody:

UK Government Wants To Kill Net Neutrality In EU
…The amendments, if carried, would reverse the principle of end-to-end connectivity which has underpinned not only the Internet, but also European telecommunications policy, to date.

The general argument these folks make is that the Internet’s magic end-to-end argument isn’t just a guideline for developers of experimental protocols (as I’ve always thought it was) but an all-powerful axiom that confers immunity from the laws of physics and economics as well as those of human legislative bodies. Seriously.

So what would you rather have, a policy statement that grants more freedoms to you than any carrier can actually provide, or one that honestly and truthfully discloses the actual limits to you? This, my friends, is a fundamental choice: live amongst the clouds railing at the facts or in a real world where up is up and down is down. Sometimes you have to choose.

H/T Hit and Run.

Perils of Content Neutrality

Via Scott Cleland I see that Adam O’Donnell has written a nice piece on the side-effects of net neutrality regulation, “Why I am against pure net neutrality”:

While it may sound like treating all ISP traffic equally is a good idea, mandating strict net neutrality hurts computer security for all of us.

Adam was in the audience at last week’s MAAWG panel on net neutrality, and raised an interesting question about Random Early Discard. The moderator cut us off before we were able to address his point (he was anxious to catch a plane), but the question deserves a response.

RED is a method of packet discard that’s intended to avoid the problems inherent in a packet drop discipline that simply uses tail-drop to prevent buffer overflow in routers. The tail drop mechanism tends to cause cycles in packet delivery rates:

1. A buffer overflows, and a whole set of transmitters throttles back.
2. Link utilization drops to 50%.
3. The transmitters as a group increase rate together, until buffer overflow occurs again.
4. Repeat.

The net result of this cycling behavior is that congested links have their effective capacity reduced to about 70% of link speed. RED is an attempt to reduce transmission rate more selectively in order to push the link toward the upper limit of capacity. RED algorithms have been under study since the late ’80s, and none is completely satisfactory. The IETF response was to draft an Internet Standard for something called ECN that enables the network to signal end systems that congestion is building, but it remains undeployed due to Microsoft’s concerns about home router compatibility. The follow-on to ECN is Bob Briscoe’s Re-ECN, which I’ve written about on these pages and in The Register.
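
For readers who want the flavor of the mechanism, here is a toy rendering of the classic Floyd/Jacobson RED drop decision. The weight and thresholds are invented for illustration, and the real algorithm adds refinements (such as spacing drops by a packet count) that are omitted here:

```python
import random

W_Q, MIN_TH, MAX_TH, MAX_P = 0.002, 5.0, 15.0, 0.1  # illustrative, not tuned

avg = 0.0  # exponentially weighted moving average of queue depth

def red_should_drop(queue_len: int) -> bool:
    """Decide whether to drop an arriving packet, RED-style."""
    global avg
    avg = (1 - W_Q) * avg + W_Q * queue_len   # smooth the instantaneous depth
    if avg < MIN_TH:                          # queue comfortably short: keep
        return False
    if avg >= MAX_TH:                         # persistently long: always drop
        return True
    # In between, drop with a probability that rises linearly toward MAX_P,
    # throttling a few senders at a time instead of the whole herd at once.
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```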

The bottom line is that Internet congestion protocols are an area that needs a lot of additional work, which the proposed Net Neutrality laws would hamper or prevent.

Van Jacobson realizes this, per the remarks he makes in an interview in the ACM Queue magazine this month:

Also, we use buffer memory in such a way that it’s valuable only if it’s empty, because otherwise it doesn’t serve as a buffer. What we do is try to forget what we learned as soon as we possibly can; we have to do that to make our buffer memory empty.

For the Olympics (not the most recent, but the previous one), we got some data from the ISP downstream of NBC. That router was completely congested; it was falling over, dropping packets like crazy. If you looked inside its buffers, it had 4,000 copies of exactly the same data, but you couldn’t tell that it was the same because it was 4,000 different connections. It was a horrible waste of memory, because the conversations were all different but what they were about was the same. You should be able to use that memory so you don’t forget until you absolutely have to—that is, go to an LRU (least recently used) rather than MRU (most recently used) replacement policy. It’s the same memory; you just change the way you replace things in it, and then you’re able to use the content.

It wouldn’t be necessary for carriers to put disks in routers. They could just start using the existing buffer memory in a more efficient way, and any time the data was requested more than once, they would see a bandwidth reduction.
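
A minimal sketch of the change Jacobson describes might look like this: key the buffer by a hash of the content rather than by connection, and evict the least recently used entry. The class name and capacity are invented, and a real router would do this at line rate, but the idea comes through:

```python
import hashlib
from collections import OrderedDict

class ContentBuffer:
    """Buffer memory keyed by what the data is, not which flow carried it."""

    def __init__(self, capacity=4096):
        self.capacity = capacity
        self.store = OrderedDict()              # content hash -> payload, LRU order

    def put(self, payload):
        key = hashlib.sha256(payload).hexdigest()
        if key in self.store:                   # 4,000 identical streams now
            self.store.move_to_end(key)         # share one buffered copy
        else:
            self.store[key] = payload
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used
        return key

    def get(self, key):
        payload = self.store.get(key)
        if payload is not None:
            self.store.move_to_end(key)         # a hit refreshes recency
        return payload
```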

Strict neutralism would prevent this system from being implemented: it involves Deep Packet Inspection, and the fanatics have warned us that DPI is a great evil. So we’re faced with this choice: networks that are cheap and efficient, or networks that are bloated with silly ideology. Take your pick, you only get one.

Neutralism is the new S-word

Scott Cleland has done some interesting quote-diving from the works of the neutralists, offering up his findings in a new study. Without using the term “socialism,” Scott provides the evidence that this is largely an anti-capitalist movement. The real news, however, is that a few highly-profitable capitalist enterprises have found a way to manipulate a rather traditional form of digital utopian socialism for their own ends.

Anyhow, enjoy Scott’s paper and think about the notion of a “digital kibbutz” while you’re doing it. Now that we live in a time when government owns the banking system, “socialism” isn’t automatically a bad word in every context, but we do have to understand that government-managed systems warrant different expectations than privately-managed ones. It’s not as obvious to me as it is to the neutralists that government is more likely to give us universal high-speed connectivity than business is.

UPDATE: See comments for a critique of Scott’s analysis by Brett Glass.

Nice Outings

My talk at the Messaging Anti-Abuse Working Group went very well. It was a huge room, seating probably 500 or so, and over half-full. I talked about how some of the crazier ideas about net neutrality are potentially becoming mainstream thanks to the politics in the nation’s capital and some of the personnel choices made by the Obama Administration. The selection of Susan Crawford for the FCC Transition Team is a cause for alarm. Susan is as nice a person as you’ll ever want to meet, and quite bright and well-intentioned, but her position that ISPs and carriers have no business actively managing packets is poison. I got a healthy round of applause, and several people thanked me for my remarks afterwards. Very few people know how dependent e-mail is on the DNS Blacklists that members of this organization maintain, and that’s a real shame.

Last night I took the short trip up to Mountain View to see Jeff Jarvis’s talk about his book What Would Google Do? The audience, about 25 people, was a lot less impressed with Google than Jeff is, and it occurred to me that Google really is vulnerable on the search front. I can imagine a much more effective search methodology than the one Google employs, but getting the venture capital to build a rival infrastructure isn’t going to happen.

I told Jeff (an old friend of the blog who’s driven a lot of traffic this way over the years) that what he likes about Google isn’t Google so much as the inherent qualities of the Internet. He more or less knows that, but the packaging of open networks, distributed computing, and free expression is easier when you concretize it, and that’s what his book does. I read it as a sequel to Cluetrain.

Speaking at MAAWG in Frisco tomorrow

I’m on a panel tomorrow at the General Meeting of the Messaging Anti-Abuse Working Group, the organization that keeps the Internet from being overrun by spam and malware:

The Messaging Anti-Abuse Working Group is a global organization focusing on preserving electronic messaging from online exploits and abuse with the goal of enhancing user trust and confidence, while ensuring the deliverability of legitimate messages. With a broad base of Internet Service Providers (ISPs) and network operators representing almost one billion mailboxes, key technology providers and senders, MAAWG works to address messaging abuse by focusing on technology, industry collaboration and public policy initiatives.

My panel is on Mail Filtering Transparency: The Impact of Network Neutrality on Combating Abuse:

Network Neutrality (NN) means different things to different people. In 2008, much of the debate was focused on protecting P2P applications from various network management practices. In 2009, the debate is likely to expand to explore the impact of NN concepts on other applications, particularly email. We have already seen the strong reaction by some parties at the IETF to attempts to standardize DNS xBLs, which some claimed were discriminatory and lacking in transparency. We have also heard claims that when ISPs block certain domains and servers, this may be discriminatory and could run afoul of NN concepts. This panel will explore the question of what NN means to email anti-abuse, the increasing scrutiny that anti-abuse policies will be under, the motivations behind the drive for greater transparency regarding such policies, and how all of those things should be balanced against the need to enforce strong anti-abuse techniques.

Dave Crocker is on the panel, and I’m looking forward to meeting him, and I have it on good authority that Paul Vixie will be in attendance as well. The best thing about being an opinionated jerk like I am is the people you get to meet.

This organization is at the crossroads of “run any application you want” and “reasonable network management.” Spam prevention has always been a lightning rod because the very existence of spam highlights so many of the problems the current Internet architecture has. Its central assumption is that people will behave nicely all (or at least most) of the time, and the existence of botnets clearly calls that into question. It probably comes as no surprise that the filtering that spam reduction systems have to do makes net neuts nervous. Stupid networks may be nice in theory, but we live in a world of practice.
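
Since so much hangs on those DNS Blacklists, it’s worth seeing how simply they work. This sketch uses only the standard library; zen.spamhaus.org is a real, documented zone, but every DNSBL follows the same convention, and 127.0.0.2 is the conventional always-listed test address:

```python
import socket

def dnsbl_listed(ip, zone="zen.spamhaus.org"):
    """Reverse the octets, prepend them to the zone, and look for an A record.
    Any answer (conventionally 127.0.0.x) means the address is listed."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:        # NXDOMAIN: not listed
        return False

print(dnsbl_listed("127.0.0.2"))   # True: the canonical test entry
```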

Doubts about Broadband Stimulus

The New York Times has a front page story today on the broadband stimulus bill which features an extensive quote from Brett:

Critics like Mr. Glass say the legislation being developed in Congress is flawed in various ways that could mean much of the money is wasted, or potentially not spent at all — arguably just as bad an outcome given that the most immediate goal of the stimulus measure is to pump new spending into the economy.

An “open access” requirement in the bill might discourage some companies from applying for grants because any investments in broadband infrastructure could benefit competitors who would gain access to the network down the line.

Meeting minimum speed requirements set forth in the House version could force overly costly investments by essentially providing Cadillac service where an economy car would be just as useful. And some worry that government may pay for technology that will be obsolete even before the work is completed.

“Really the devil is in the details,” Mr. Glass said. “Yes, there is $9 billion worth of good that we can do, but the bill doesn’t target the funds toward those needs.”

The bill is still very rough. Some critics cite its preference for grants to large incumbents; others highlight the amorphous “open access” provisions and the arbitrary speed requirements as weaknesses. The only interest groups that appear altogether happy with it are Google’s boosters, such as Ben Scott of Free Press. This is a flip-flop for Free Press, who only last week was urging members to call Congress and ask that the bill be killed.

A particularly odd reaction comes from friend of the blog Jeff Jarvis, who took time out from pitching What Would Google Do?, his love letter to Google, to tear into the article’s sourcing:

I found myself irritated by today’s story in the New York Times that asks whether putting money from the bailout toward broadband would be a waste. The question was its own answer. So was the placement of the story atop page one. The reporter creates generic groups of experts to say what he wants to say (I know the trick; I used to be a reporter): “But experts warn…. Other critics say…. Other supporters said…”

I wish that every time he did that, the words “experts,” “critics,” and “supporters” were hyperlinked to a page that listed three of each.

It’s an obvious case of a story with an agenda: ‘I’m going to set out to poke a hole in this.’

The odd bit is that five people are named and quoted, and the terms “expert” and “critic” clearly refer to these named sources. It’s boring to repeat names over and over, so the writer simply uses these terms to avoid the tedium. It’s clear that Brett and Craig Settles are the critics and experts. Jeff seems not to have read the article carefully and simply goes off on his defensive tirade without any basis.

It’s a given in Google’s world that massive government subsidies for broadband are a good thing because they will inevitably lead to more searches, more ad sales, and more revenue for the Big G. But while that’s clearly the case, it doesn’t automatically follow that what’s good for Google is good for America, so it behooves our policy makers to ensure that the money is spent wisely, without too many gimmicks in favor of one technology over another or too many strings attached that don’t benefit the average citizen.

Raising questions about pending legislation and trying to improve it is as American as baseball, and the article in the Times is a step in the right direction. It may not be what Google would do, but it’s good journalism.

I want to make sure that the broadband money is spent efficiently, so I would bag the open access requirement (nobody knows what it means anyway) and give credit for all improvements in infrastructure that increase speed and reduce latency.

The bill needs to support all technologies that have utility in the Internet access space: wireless, coax, and fiber. It should encourage the laying of new fiber where it’s appropriate, and high-speed wireless in less-populated areas. Eventually, homes and businesses are pretty much all going to have fiber at the doorstep, but that doesn’t need to happen overnight.

Professional Complainers Blast Cox

Cox Cable announced plans to test a new traffic management system intended to improve the Internet experience of most of their customers yesterday, and the reaction from the network neutrality lobby came fast and furious. The system will separate latency-sensitive traffic from bulk data transfers and adjust priorities appropriately, which is the sort of thing that Internet fans should cheer. In its essence, the Internet is a resource contention system that should, in most cases, resolve competing demands for bandwidth in favor of customer perception and experience. When I testified at the FCC’s first hearing on network management practices last February, I spent half my time on this point and all other witnesses agreed with me: applications have diverse needs, and the network should do its best to meet all of them. That’s what we expect from a “multi-purpose network”, after all.
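
For the non-engineers, the kind of separation Cox describes can be as simple as a two-class queue that always drains latency-sensitive packets first. This is my guess at the general shape, not Cox’s actual system, and the class assignments are invented:

```python
from collections import deque

LATENCY_SENSITIVE = {"voip", "gaming"}   # illustrative classification

class TwoClassQueue:
    def __init__(self):
        self.urgent, self.bulk = deque(), deque()

    def enqueue(self, packet):
        q = self.urgent if packet["app"] in LATENCY_SENSITIVE else self.bulk
        q.append(packet)

    def dequeue(self):
        # Strict priority: bulk traffic moves only when no urgent packet waits.
        if self.urgent:
            return self.urgent.popleft()
        return self.bulk.popleft() if self.bulk else None
```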

So now that Cox wants to raise the priority of VoIP and gaming traffic over background file transfers, everybody should be happy. The neutralists have always said in public fora that they support boosting VoIP’s priority over P2P, and Kevin Martin’s press release about the Comcast order said he was OK with special treatment for VoIP. And in fact the failure of the new Comcast system to provide such special treatment is at the root of the FCC’s recent investigation of Comcast, which was praised by the neuts.

So how is it that the very people who complain about Comcast’s failure to boost VoIP priority are now complaining about Cox? Free Press’s general-purpose gadfly Ben Scott is practically jumping up and down pounding the table over it:

Consumer advocates certainly aren’t impressed. “The information provided by Cox gives little indication about how its new practices will impact Internet users, or if they comply with the FCC’s Internet Policy Statement,” says consumer advocacy firm Free Press in a statement. “As a general rule, we’re concerned about any cable or phone company picking winners and losers online.”

“Picking winners and losers” is bad, and failing to pick winners and losers is also bad. The only thread of consistency in the complaints against cable, DSL, and FTTH providers is a lack of consistency.

Make up your mind, Ben Scott: do you want an Internet in which Vuze can step all over Skype or don’t you?

UPDATE: For a little back-and-forth, see Cade Metz’s article on this at The Register, the world’s finest tech site. Cade quotes EFF’s Peter Eckersley to the effect that Cox is “presuming to know what users want.” They are, but it’s not that hard to figure out that VoIP users want good-quality phone calls: a three-year-old knows that much.

Damned if you do, screwed if you don’t

The FCC has finally noticed that reducing the Quality of Service of an Internet access service affects all the applications that use it, including VoIP. They’ve sent a harsh letter to Comcast seeking ammunition with which to pillory the cable giant, in one of Kevin Martin’s parting shots:

Does Comcast give its own Internet phone service special treatment compared to VoIP competitors who use the ISP’s network? That’s basically the question that the Federal Communications Commission posed in a letter sent to the cable giant on Sunday. The agency has asked Comcast to provide “a detailed justification for Comcast’s disparate treatment of its own VoIP service as compared to that offered by other VoIP providers on its network.” The latest knock on the door comes from FCC Wireline Bureau Chief Dana Shaffer and agency General Counsel Matthew Berry.

Readers of this blog will remember that I raised this issue with the “protocol-agnostic” management scheme Comcast adopted in order to comply with the FCC’s over-reaction to the former application-aware scheme, which prevented P2P from over-consuming bandwidth needed by more latency-sensitive applications. My argument is that network management needs to operate in two stages, one that allocates bandwidth fairly among users, and a second that allocates it sensibly among the applications in use by each user. The old Comcast scheme did one part of this, and the new scheme does the other part. I’d like to see both at the same time, but it’s not at all clear that the FCC will allow that. So we’re left with various forms of compromise.
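
To make the two-stage idea concrete, here is a sketch under my own assumptions, not a description of either Comcast scheme. Stage one cycles fairly across users; stage two picks the most latency-sensitive application within the chosen user’s queue:

```python
from collections import deque
from itertools import cycle

PRIORITY = {"voip": 0, "gaming": 1, "web": 2, "p2p": 3}  # lower = sooner

class TwoStageScheduler:
    def __init__(self, users):
        self.queues = {u: deque() for u in users}
        self.turn = cycle(users)                 # stage 1: fair across users

    def enqueue(self, user, packet):
        self.queues[user].append(packet)

    def dequeue(self):
        for _ in range(len(self.queues)):        # next user with traffic
            q = self.queues[next(self.turn)]
            if q:
                # stage 2: within this user, send the neediest app first
                best = min(q, key=lambda p: PRIORITY.get(p["app"], 9))
                q.remove(best)
                return best
        return None                              # all queues empty
```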

The fundamental error that the FCC is making in this instance is misidentifying the “service” it seeks to regulate under its new approach of regulating services (skip to 13:30) rather than technologies.

Comcast sells Internet service, telephone service, and TV service. It doesn’t sell “VoIP service,” so there’s no basis for this complaint. The Commission has made it very difficult for Comcast to even identify applications running over the Internet service, and the Net Neuts have typically insisted it refrain from even trying to do so; recall David Reed’s fanatical envelope-waving exercise at the Harvard hearing last year.

The telephone service that Comcast and the telephone companies sell uses dedicated bandwidth, while the over-the-top VoIP service that Vonage and Skype offer uses shared bandwidth. I certainly hope that native phone service outperforms ad hoc VoIP; I pay good money to ensure that it does.

This action says a lot about what’s wrong with the FCC. Regardless of the regulatory model it brings to broadband, it lacks the technical expertise to apply it correctly. The result is “damned if you do, damned if you don’t” enforcement actions.

This is just plain silly. The only party the FCC has any right to take to task in this matter is itself.

The pirates who congregate at DSL Reports are in a big tizzy over this, naturally.

Keeping the Issue Alive

Friends of Broadband should be pleased with President-elect Obama’s proposed broadband stimulus program, which proposes $6 billion in grants for wireless and other forms of broadband infrastructure. Granted, the package isn’t as large as many had wished; Educause had asked for $32 billion, ITIF wanted $10 billion, and Free Press wanted $40 billion, but this is a good start. Harold Feld puts the size of the grant package in perspective and praises it on his Tales of the Sausage Factory blog.

But there’s no pleasing some people. Free Press has mounted an Action Alert, asking its friends to oppose the stimulus package as it currently stands. The Freeps, who run the “Save the Internet” campaign, want strings attached to the money, insisting it only be given to projects that meet their requirements:

1. Universal: focused on connecting the nearly half of the country stuck on the wrong side of the digital divide.
2. Open: committed to free speech and without corporate gatekeepers, filters or discrimination.
3. Affordable: providing faster speeds at lower prices.
4. Innovative: dedicated to new projects only and available to new competitors, including municipalities and nonprofits.
5. Accountable: open to public scrutiny so we can ensure that our money isn’t being spent to prop up stock prices and support market monopolies.

These goals are not even consistent with each other. Half of America uses broadband today, and half doesn’t. Most of the unconnected half have chosen not to subscribe to services that reach their homes already, opting to remain outside the broadband revolution for their own reasons. So we can’t very well pursue numbers 1 and 4 at the same time. Most of this money will be spent in rural areas that are currently served by Wireless ISPs like Lariat. Rural population isn’t as large as urban population, so going into unserved or underserved areas isn’t going to do much for the digital divide-by-choice that plagues America’s inner cities.

I suspect there’s some self-interest involved here, such that Free Press wants to keep the issue of America’s place in the global ranking of broadband penetration about where it is (between 7th and 15th, depending on whose numbers you like) in order to raise money, have a soapbox, and keep on complaining.

I don’t see any other way to explain this.

UPDATE: Freep has sent letters to the committee chairs with much less incendiary language, but arguing the same line: the Internet is a telecom network and has to be regulated the way that telecom networks have always been regulated. This angle is clearly good if you’re a career telecom regulator, but it’s blind to the technical realities of IP network management. Making an IP network fair and functional requires “discrimination”, and the Freep doesn’t get that. Not even a little bit.

This organization has established an amazing ability to confuse its self-interest with the public interest in the short time that it’s been around. Freep’s first issue, after all, was a series of regulations designed to prevent the rapacious newspaper industry from taking over the television industry. They still push for limits on TV and newspaper cross-ownership, and only got into the Internet-as-telephone fight to advance their initial cause. The number of people who think free societies need to be protected from “powerful newspapers” is vanishingly small, of course, around the same size as the flat-earther demographic.

UPDATE 2: It gets even stranger. Open access provisions are already in the bill, as Matthew Lasar points out on the Ars Technica blog:

As for the net neutrality and open access ideas; well, they’re already in the bill (PDF; see p. 53). NTIA, the executive branch agency tasked with disbursing the broadband money, is required to ensure that all grant recipients operate both wired and wireless services on an “open access basis,” though it’s left up to NTIA to define what this means and how it works.

In addition, anyone taking grant money must “adhere to the principles contained in the Federal Communications Commission’s broadband policy statement,” which lays out four basic neutrality provisions for Internet companies. In other words, although “network neutrality” isn’t mentioned, it’s already in the bill in a basic way. (Note that the FCC policy statement only protects “legal content,” however; it’s not a pure “end-to-end” packet delivery guarantee.)

Here’s a suggestion for the Freep: before issuing your next mouth-breathing Action Alert about a pending bill, read the damn thing. You won’t look like such a bunch of knee-jerk alarmists if you do.
