Has the FCC Created a Stone Too Heavy for It to Lift?

After five years of bickering, the FCC passed an Open Internet Report & Order on a partisan 3-2 vote this week. The order is meant to guarantee that the Internet of the future will be just as free and open as the Internet of the past. Its success depends on how fast the Commission can transform itself from an old-school telecom regulator wired to resist change into an innovation stimulator embracing opportunity. One thing we can be sure about is that the order hasn’t tamped down the hyperbole that’s fueled the fight to control the Internet’s constituent parts for all these years.

Advocates of net neutrality professed deep disappointment that the FCC’s rules weren’t more proscriptive and severe. Free Press called the order “fake net neutrality,” Public Knowledge said it “fell far short,” Media Access Project called it “inadequate and riddled with loopholes,” and New America Foundation accused the FCC of “caving to telecom lobbyists.” These were their official statements to the press; their Tweets were even harsher.

Free marketers were almost as angry: Cato denounced the order as “speech control,” Washington Policy Center said it “fundamentally changes many aspects of the infrastructure of the Internet,” and the Reason Foundation said it will lead to “quagmire after quagmire of technicalities, which as they add up will have a toll on investment, service and development.”

Republican Congressional leaders made no secret of their displeasure with the FCC’s disregard for their will: Rep. Fred Upton (R-Michigan), the incoming Commerce Committee Chairman, called it a “hostile action against innovation that can’t be allowed to stand”; Rep. Greg Walden (R-Oregon), incoming Chairman of the Subcommittee on Communications and Technology, called it a “power grab” and vowed to hold hearings to overturn it; and Sen. Kay Bailey Hutchison (R-Texas), Ranking Member of the Senate Commerce, Science, and Transportation Committee, said the order “threatens the future economic growth of the Internet.” Setting Internet policy is indeed a Congressional prerogative rather than an agency matter, so the longer-term solution must come from the Hill, and sooner would be better than later.

Contrary to this criticism and to snarky blogger claims, not everyone was upset with the FCC’s action, coming as it did after a year-long proceeding on Internet regulation meant to fulfill an Obama campaign pledge to advance net neutrality. The President himself declared the FCC action an important part of his strategy to “advance American innovation, economic growth, and job creation,” and Senator John Kerry (D-Massachusetts) applauded the FCC for reaching consensus.

Technology industry reaction ranged from positive to resigned: Information Technology Industry Council President and CEO Dean Garfield declared the measure “ensures continued innovation and investment in the Internet,” TechNet supported it, and National Cable and Telecommunications Association head Kyle McSlarrow said it could have been much worse. At the Information Technology and Innovation Foundation, we were pleased by the promises of a relatively humble set of rules, less so with the final details; we remain encouraged by the robust process the FCC intends to create for judging complaints, one that puts technical people on the front lines. In the end, the order got the support of the only majority that counts: three FCC commissioners.

Most of us who reacted favorably acknowledged the FCC’s order wasn’t exactly as we would have written it, but accepted it as a pragmatic political compromise that produces more positives than negatives. The hoped-for closing of the raucous debate will have immense benefits on its own, as simply bringing this distracting chapter in the Internet’s story to an end will allow more time for sober discussion about the directions we’d like the Internet to take in its future development. There is no shortage of policy issues that have been cramped by the tendency to view net neutrality as the one great magic wand with the power to solve all the Internet’s problems: The FCC has work to do on freeing up spectrum for mobile networking, the Universal Service Fund needs to be reformed, and the National Broadband Plan needs to be implemented.

If the FCC’s approach proves sound, it might well be exported to other countries, forming the basis of a consistent international approach to the oversight of an international network developed on consistent standards of its own. Such an outcome would have positive consequences for the Internet standards community, which has its own backlog of unfinished business such as scalable routing, congestion management, security, and the domestication of peer-to-peer file sharing and content delivery networks to resolve. This outcome is far from inevitable; last minute rule changes make it less likely than it might have been.

The most important thing the FCC can do in implementing its system of Internet oversight is to elevate process over proscriptive rules. The traditional approach to telecom regulation is to develop a thick sheath of regulations that govern everything from the insignias on the telephone repair person’s uniform to the colors of the insulators on RJ11 cables and apply them in top-down, command-and-control fashion. Many of those on the pro-net neutrality side are steeped in telecom tradition, and they expected such an approach from the FCC for the Internet; theirs are the angry reactions.

But the Internet isn’t a telecom network, and a foot-high stack of regulations certainly would produce the negative consequences for innovation and progress the FCC’s critics have forecast. The appropriate way to address Internet regulation is to follow the model that the Internet has developed for itself, based on a small number of abstract but meaningful principles (each of which is subject to change for good reason) applied by a broad-based community of experts in a collaborative, consultative setting. Internet standards are not devised in an adversarial setting populated by angels and devils locked in mortal combat; they come from a process that values “rough consensus and running code.”

The specifics of the FCC’s order nevertheless give pause to those well-schooled in networking. A few hours before the Commission’s vote, Commissioner Copps persuaded Chairman Genachowski to reverse the Waxman Bill’s presumption regarding the premium transport services that enable Internet TV and video conferencing to enjoy the same level of quality as cable TV. Where the early drafts permitted these services as long as they were offered for sale on a non-discriminatory basis, the final rule arbitrarily presumes them harmful.

The order makes a hash of the relationship between the content accelerators provided by Akamai and others and the presumptively impermissible communication accelerators that ISPs might provide one day in order to enable HD group video conferencing and similar emerging applications. The Commission majority fears that allowing network operators to offer premium transport to leading-edge apps will put the squeeze on generic transport, but fails to consider that such potential downsides of well-accepted technical practices for Quality of Service can be prevented by applying a simple quota limit on the percentage of a pipe that can be sold as “premium.” This fact, which is obvious to skilled protocol engineers, goes unmentioned in the order.
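To make the quota point concrete, here’s a minimal sketch of how such a cap could work. The link capacity, the 30 percent figure, and the function names are my own illustrative assumptions, not anything specified in the order:

```python
# Illustrative only: cap the share of a link that can be sold as
# "premium" so generic traffic always keeps a guaranteed floor.
PIPE_MBPS = 100.0     # assumed link capacity
PREMIUM_CAP = 0.30    # assumed quota: at most 30% of the pipe is premium

def allocate(premium_demand: float, generic_demand: float) -> tuple[float, float]:
    """Grant premium bandwidth up to the quota; generic gets the rest."""
    premium = min(premium_demand, PREMIUM_CAP * PIPE_MBPS)
    generic = min(generic_demand, PIPE_MBPS - premium)
    return premium, generic

# However large premium demand grows, generic traffic keeps 70 Mb/s:
print(allocate(premium_demand=500.0, generic_demand=500.0))  # (30.0, 70.0)
```

No matter how much premium transport is sold under a rule like this, best-effort service retains a known minimum, which is the whole point of the quota.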

The poor reasoning for this rule casts doubt on the FCC’s ability to enforce it effectively without outside expertise. By rejecting Internet standards such as RFC 2475 and IEEE standards such as 802.1Q that don’t conform to the telecom activists’ nostalgic, “all packets are equal” vision of the Internet, the FCC chose to blind itself to one of the central points in Tim Wu’s “Network Neutrality, Broadband Discrimination” paper that started the fight: A neutral Internet favors content applications, as a class, over communication applications and is therefore not truly an open network. The only way to make a network neutral among all applications is to differentiate loss and delay among applications; preferably, this is done by user-controlled means. That’s not always possible, so other means are sometimes necessary as well.
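For readers who haven’t met the DiffServ machinery of RFC 2475, here’s a minimal sketch of what “user-controlled means” can look like in practice: the application marks its own packets with a code point expressing its loss and delay preferences, and each network along the way decides whether to honor the mark. The sketch assumes a Unix-style socket stack; the code points are standard, but nothing here is from the FCC’s order:

```python
import socket

# DiffServ code points (RFC 2474/2475): a mark expresses the app's
# loss/delay preference; honoring it is a per-network policy choice.
DSCP_EF = 46      # Expedited Forwarding: low delay, low loss (e.g., voice)
DSCP_AF11 = 10    # Assured Forwarding 1,1: bulk traffic that tolerates delay

def marked_socket(dscp: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The 6-bit DSCP rides in the upper bits of the legacy TOS byte.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

voice = marked_socket(DSCP_EF)     # ask the network for low latency
backup = marked_socket(DSCP_AF11)  # signal that delay is acceptable
```

This is exactly the sort of differentiation the “all packets are equal” vision rules out, even when it’s the user, not the ISP, doing the asking.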

All in all, the Commission has built a stone too heavy for it to lift all by itself. The rules have just enough flexibility that the outside technical advisory groups that will examine complaints may be able to correct the order’s errors, but to be effective, the advisors need much deeper technical knowledge than the FCC staffers who wrote the order can provide.

It’s difficult to ask the FCC – an institution with its own 75-year tradition in which it has served as the battleground for bitter disputes between monopolists and public interest warriors – to turn on a dime and embrace a new spirit of collaboration, but without such a far-reaching institutional transformation its Internet regulation project will not be successful. Those of us who work with the FCC are required to take a leap of faith to the effect that the Commission is committed to transforming itself from a hidebound analog regulator into a digital age shepherd of innovation. Now that the Open Internet Report & Order has passed, we have no choice but to put our shoulders to the rock to help push it along. There’s no turning back now.

[cross-posted from the Innovation Policy Blog]

How Markey III Hurts the Internet

Take a look at my analysis of Congressman Markey’s latest foray into Internet management on Internet Evolution. It’s the Big Report that will be up for a week or so. Here’s a teaser:

Reading the latest version of Congressman Ed Markey’s (D-MA) Internet Freedom Preservation Act of 2009 is like going to your high school reunion: It forces you to think about issues that once appeared to be vitally important but which have faded into the background with time.

When the first version of this bill appeared, in 2005, the Internet policy community was abuzz with fears that the telcos were poised to make major changes to the Internet. Former SBC/AT&T chairman Ed Whitacre was complaining about Vonage and Google “using his pipes for free,” and former BellSouth CEO Bill Smith was offering to accelerate Internet services for a fee.

Our friends in the public interest lobby warned us that, without immediate Congressional action, the Internet as we knew it would soon be a thing of the past.

In the intervening years, Congress did exactly nothing to shore up the regulatory system, and the Internet appears to be working as well as it ever has: New services are still coming online, the spam is still flowing, and the denial-of-service attacks are still a regular occurrence.

Enjoy.

Nostalgia Blues

San Jose Mercury News columnist Troy Wolverton engaged in a bit of nostalgia in Friday’s paper. He pines for the Golden Age of dial-up Internet access, when Internet users had a plethora of choices:

A decade ago, when dial-up Internet access was the norm, you could choose from dozens of providers. With so many rivals, you could find Internet access at a reasonable price all by itself, without having to buy a bundle of other services with it.

There was competition because regulators forced the local phone giants to allow such services on their networks. But regulators backed away from open-access rules as the broadband era got under way. While local phone and cable companies could permit other companies to use their networks to offer competing services, regulators didn’t require them to do so and cable providers typically didn’t.

Wolverton’s chief complaint is that the DSL service he buys from Earthlink is slow and unreliable. He acknowledges that he could get cheaper service from AT&T and faster service from Comcast, but doesn’t choose to switch because he doesn’t want to “pay through the nose.”

The trouble with nostalgia is that the past never really was as rosy as we tend to remember it, and the present is rarely as bad as it appears through the lens of imagination. Let’s consider the facts.

Back in the dial-up days, there were no more than three first-class ISPs in the Bay Area: Best Internet, Netcom, and Rahul. They charged $25-30/month, on top of the $15-20 we paid for a phone line dedicated to Internet access; we didn’t want our friends to get a busy signal when we were on-line. So we paid roughly $45/month to access the Internet at 40 Kb/s download and 14 Kb/s or so upstream.

Now that the nirvana of dial-up competition (read: several companies selling Twinkies and nobody selling steak) has ended, what can we get for $45/month? One choice in the Bay Area is Comcast, who will gladly provide you with a 15 Mb/s service for a bit less than $45 ($42.95 after the promotion ends) or a 20 Mb/s service for a bit more, $52.95. If this is “paying through the nose,” then what were we doing when we paid the same prices for 400 times less performance back in the Golden Age? And if you don’t want or need this much speed, a number of ISPs offer reasonable DSL-class service that’s 40 times faster than dial-up at roughly half the price.
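For anyone who wants to check the arithmetic, here’s the back-of-the-envelope version using the figures quoted above (the ratios are approximate, of course):

```python
# Price/performance then and now, using the figures quoted above.
dialup_price, dialup_kbps = 45.0, 40.0        # ISP fee plus dedicated phone line
comcast_price, comcast_kbps = 42.95, 15_000.0 # 15 Mb/s tier after promotion

speedup = comcast_kbps / dialup_kbps                   # ~375x: "roughly 400"
per_mbps_then = dialup_price / (dialup_kbps / 1000.0)  # ~$1,125 per Mb/s
per_mbps_now = comcast_price / (comcast_kbps / 1000.0) # ~$2.86 per Mb/s
print(f"{speedup:.0f}x faster; cost per Mb/s down "
      f"{per_mbps_then / per_mbps_now:.0f}-fold")
```

Any way you slice it, the cost of a megabit has fallen by well over two orders of magnitude since the “Golden Age.”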

Wolverton’s column is making the rounds of the Internet mailing lists and blogs where broadband service is discussed, to mixed reviews. Selective memory fails to provide a sound basis for broadband policy, and that’s really all that Wolverton provides.

Are the FCC Workshops Fair?

The FCC has run three days of workshops on the National Broadband Plan now, for the purpose of bringing a diverse set of perspectives on broadband technology and deployment issues to the attention of FCC staff. You can see the workshop agendas here. The collection of speakers is indeed richly varied. As you would expect, the session on eGov featured a number of government people and a larger collection of folks from the non-profit sector, all but one of whom has a distinctly left-of-center orientation. Grass-roots devolution arguments have a leftish and populist flavor, so who better to make the argument than people from left-of-center think tanks?

Similarly, the sessions on technology featured a diverse set of voices, but emphasized speakers with actual technology backgrounds. Despite the technology focus, a good number of non-technologists were included, such as media historian Sascha Meinrath, Dave Burstein, Amazon lobbyist Paul Misener, and veteran telephone regulator Mark Cooper. A number of the technology speakers came from the non-profit or university sector, such as Victor Frost of the National Science Foundation, Henning Schulzrinne of Columbia University and the IETF, and Bill St. Arnaud of Canarie. The ISPs spanned the range from big operators such as Verizon and Comcast down to ISPs with fewer than 2,000 customers.

Given these facts, it’s a bit odd that some of the public interest groups are claiming to have been left out. There aren’t more than a small handful of genuine technologists working for the public interest groups; you can practically count them on one hand without using the thumb, and there’s no question that their point of view was well represented on the first three days of panels. Sascha Meinrath’s comments at the mobile wireless session on European hobbyist networks were quite entertaining, although not particularly serious. Claiming that “hub-and-spoke” networks are less scalable and efficient than wireless meshes is not credible.

The complaint has the feel of “working the refs” in a basketball game, not as much a legitimate complaint as a tactical move to crowd out the technical voices in the panels to come.

I hope the FCC rolls its collective eyes and calls the game as it sees it. Solid policy positions aren’t contradicted by sound technical analysis, they’re reinforced by it. The advocates shouldn’t fear the FCC’s search for good technical data, they should embrace it.

Let a thousand flowers bloom, folks.

Cross-posted at CircleID.

Another Net Neutrality Meltdown

Over the weekend, a swarm of allegations hit the Internet to the effect that AT&T was blocking access to the 4chan web site. This report from Techcrunch was fairly representative:

As if AT&T wasn’t already bad enough. In an act that is sure to spark internet rebellions everywhere, AT&T has apparently declared war on the extremely popular imageboard 4chan.org, blocking some of the site’s most popular message boards, including /r9k/ and the infamous /b/. moot, who started 4chan and continues to run the site, has posted a note to the 4chan status blog indicating that AT&T is in fact filtering/blocking the site for many of its customers (we’re still trying to confirm from AT&T’s side).

4chan, in case you didn’t know, is a picture-sharing site that serves as the on-line home to a lovable band of pranksters who like to launch DOS attacks and other forms of mischief against anyone who peeves them. The infamous “Anonymous” DOS attack on the Scientology cult was organized by 4chan members, which is a feather in their cap from my point of view. So the general reaction to the news that AT&T had black-holed some of 4chan’s servers was essentially “woe is AT&T, they don’t know who they’re messing with.” Poke 4chan, they poke back, and hard.

By Monday afternoon, it was apparent that the story was not all it seemed. The owner of 4chan, a fellow known as “moot,” admitted that AT&T had good reason to take action against 4chan, which was actually launching what amounted to a DOS attack against some AT&T customers without realizing it:

For the past three weeks, 4chan has been under a constant DDoS attack. We were able to filter this specific type of attack in a fashion that was more or less transparent to the end user.

Unfortunately, as an unintended consequence of the method used, some Internet users received errant traffic from one of our network switches. A handful happened to be AT&T customers.

In response, AT&T filtered all traffic to and from our img.4chan.org IPs (which serve /b/ & /r9k/) for their entire network, instead of only the affected customers. AT&T did not contact us prior to implementing the block.

moot didn’t apologize in so many words, but he did more or less admit his site was misbehaving while still calling the AT&T action “a poorly executed, disproportionate response” and suggesting that it was a “blessing in disguise” because it renewed interest in net neutrality and net censorship. Of course, these subjects aren’t far from the radar given the renewed war over Internet regulation sparked by the comments on the FCC’s National Broadband Plan, but thanks for playing.

The 4chan situation joins a growing list of faux net neutrality crises that have turned out to be nothing when investigated for a few minutes:

* Tom Foremski claimed that Cox Cable blocked access to Craig’s List on June 6th, 2006, but it turned out to be a strange interaction between a personal firewall and Craig’s List’s odd TCP settings. Craig’s List ultimately changed their setup, and the software vendor changed theirs as well. Both parties had the power to fix the problem all along.

* Researchers at the U. of Colorado, Boulder claimed on April 9, 2008, that Comcast was blocking their Internet access when in fact it was their own local NAT that was blocking a stream that looked like a DOS attack. These are people who really should know better.

The tendency to scream “censorship” first and ask questions later doesn’t do anyone any good, so before the next storm of protest arises over a network management problem, let’s get the facts straight. There will be web accounts of AT&T “censoring” 4chan for months and years to come, because these rumors never get corrected on the Internet. As long as Google indexes by popularity, and the complaints are more widespread than the corrections, the complaints will remain the “real story.” I’d like to see some blog posts titled “I really screwed this story up,” but that’s not going to happen – all we’re going to see are some ambiguous updates buried at the end of the misleading stories.

UPDATE: It’s worth noting that AT&T wasn’t the only ISP or carrier to block 4chan’s aggressive switch on Sunday. Another network engineer who found it wise to block the site until it had corrected its DDOS counter-attack posted this to the NANOG list:

Date: Sun, Jul 26, 2009 at 11:05 PM
Subject: Re: AT&T. Layer 6-8 needed.

There has been alot of customers on our network who were complaining about ACK scan reports coming from 207.126.64.181. We had no choice but to block that single IP until the attacks let up. It was a decision I made with the gentleman that owns the colo facility currently hosts 4chan. There was no other way around it. I’m sure AT&T is probably blocking it for the same reason. 4chan has been under attack for over 3 weeks, the attacks filling up an entire GigE. If you want to blame anyone, blame the script kiddies who pull this kind of stunt.

Regards,
Shon Elliott
Senior Network Engineer
unWired Broadband, Inc.

Despite the abundance of good reasons for shutting off access to a domain with a misbehaving switch, AT&T continues to face criticism for the action, some of it quite strange. David Reed, a highly vocal net neutrality advocate, went black-helicopters on the story:

I’d be interested in how AT&T managed to block *only* certain parts of 4chan’s web content. Since DNS routing does not depend on the characters after the “/” in a URL in *any* way, the site’s mention that AT&T was blocking only certain sub-“directories” of 4chan’s content suggests that the blocking involved *reading content of end-to-end communications*.

If AT&T admits it was doing this, they should supply to the rest of the world a description of the technology that they were using to focus their blocking. Since AT&T has deployed content-scanning-and-recording boxes for the NSA in its US-based switching fabric, perhaps that is how they do it. However, even if you believe that is legitimate for the US Gov’t to do, the applicability of similar technology to commercial traffic blocking is not clearly in the domain of acceptable Internet traffic management.

What happened, of course, was that a single IP address inside 4chan’s network was blocked. This IP address – 207.126.64.181 – hosts the /b/ and /r9k/ discussion and upload boards at 4chan, and DNS has nothing to do with it. Reed is one of the characters who complains about network management practices before all the relevant bodies, but one wonders if he actually understands how IP traffic is routed on the modern Internet.
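For the record, here’s a minimal sketch of why no content reading is needed to produce exactly what observers saw. The address is the one from the NANOG post; the code is illustrative, not a claim about AT&T’s actual tooling:

```python
import socket

BLACKHOLED = {"207.126.64.181"}  # the img.4chan.org address from the NANOG post

def reachable(host: str) -> bool:
    """The blocking decision sees only the IP a name resolves to;
    the URL path plays no part in it."""
    return socket.gethostbyname(host) not in BLACKHOLED

# The path ("/b/" or "/r9k/") appears only later, inside the HTTP
# request sent over the TCP connection:
#     GET /b/ HTTP/1.1
#     Host: img.4chan.org
# Null-routing the one address therefore takes down every board that
# host serves, with no reading of end-to-end content at all.
```

Since /b/ and /r9k/ happened to live on the blocked address while 4chan’s other boards lived elsewhere, the block looked “sub-directory” selective to users without any deep packet inspection being involved.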

And as I predicted, new blog posts are still going up claiming that AT&T is censoring 4chan. Click through to Technorati to see some of them.

Is Broadband a Civil Right?

Sometimes you have to wonder if people appreciate the significance of what they’re saying. On Huffington Post this morning, I found an account of a panel at the Personal Democracy Forum gathering on the question of who controls the Internet’s optical core. The writer, Steve Rosenbaum, declares that Broadband is a Civil Right:

If the internet is the backbone of free speech and participation, how can it be owned by corporate interests whose primary concern isn’t freedom or self expression or political dissent? Doesn’t it have to be free?

OK, that’s a reasonable point to discuss. Unfortunately, the example that’s supposed to back up this argument is the role that broadband networks have played in the Iranian protests. Does anyone see the problem here? Narrow-band SMS on private networks was a big problem for the government of Iran in the recent protests, but broadband not so much because they could control it easily through a small number of filters.

If broadband infrastructure isn’t owned by private companies, it’s owned by governments; the networks are too big to be owned any other way. So in the overall scheme of things, if I have to choose who’s more likely to let me protest the government from among: A) The Government; or B) Anybody Else, my choice is pretty obviously not the government.

Isn’t this obvious?

Recycling Garbage Abroad

Advocates of network neutrality regulations have been largely unsuccessful in advancing their agenda in the US. The one case in which they claim to have secured a victory was the Vuze vs. Comcast action in the FCC, which was severely tainted by Vuze turning to porn to resuscitate its dying business:

In a bid to increase their revenue, among other things, Vuze has added a catalog of HD adult videos to their BitTorrent client. For a few dollars a month Vuze users can subscribe to the latest hotness. Of course, all torrents on the erotica network are well seeded.

The same FCC commissioners who levied an unlawful fine against CBS for the Janet Jackson wardrobe malfunction ordered Comcast to give free bandwidth to a porn site. (Feeling good about that, Chairman Copps? [ed: OK, that was a cheap shot, but Copps and I know each other.])

Not deterred by this spotty track record, wannabe neutrality regulator Cory Doctorow trots out the well-worn arguments for the overseas audience in a Guardian column that stinks of Dow Chemical’s overseas pesticide dumping:

Take the Telcoms Package now before the EU: among other things, the package paves the way for ISPs and Quangos to block or slow access to websites and services on an arbitrary basis. At the same time, ISPs are instituting and enforcing strict bandwidth limits on their customers, citing shocking statistics about the bandwidth hogs who consume vastly more resources than the average punter.

Between filtering, fiddling connection speeds and capping usage, ISPs are pulling the rug out from under the nations that have sustained them with generous subsidies and regulation.

Doctorow supports his arguments with a series of fanciful metaphors since there aren’t any real abuses for UK subjects to be upset about. Here’s a portion of my reaction in the comments:

Let’s take a closer look at Doctorow’s non-metaphoric claims:

“Between these three factors – (1) reducing the perceived value of the net, (2) reducing the ability of new entrants to disrupt incumbents, and (3) penalizing those who explore new services on the net – we are at risk of scaring people away from the network, of giving competitive advantage to firms in better-regulated nations, of making it harder for people to use the net to weather disasters, to talk to their government and to each other.”

I’ve numbered them for easy reference. So where’s the proof that these things are happening? For (1) we have this:

“ISPs would also like to be able to arbitrarily slow or degrade our network connections depending on what we’re doing and with whom. In the classic “traffic shaping” scenario, a company like Virgin Media strikes a deal with Yahoo…”

How do we know that ISPs want to slow or degrade our access, which would seem to drive us to a different ISP? The metaphoric example is offered as the proof. See the relevance?

For problem (2), Doctorow offers:

“Unless, that is, the cost of entry into the market goes up by four or five orders of magnitude, growing to encompass the cost of a horde of gladhanding negotiators who must first secure the permission of gatekeepers at the telcoms giants…”

The problem with this, of course, is that the barriers to entry for new search and video services are the edge caches Google would like to install in the ISP networks, which do in fact give them a fast lane to the consumer (why else would Google want them?) and raise obstacles to start-ups. But American neutralists say these entry barriers are good because their friend Google wants to erect them, not a telco. Double standard.

And for (3), the evils of metered billing, we have this lovely little thing:

“Before you clicked on this article, you had no way of knowing how many bytes your computer would consume before clicking on it. And now that you’ve clicked on it, chances are that you still don’t know how many bytes you’ve consumed..”

Please. Metered billing systems aren’t going to operate on the differences between web pages. If Doctorow believed what he said about the Pareto Curve, he’d certainly be able to appreciate the difference between reading a thousand web pages vs watching a thousand videos. High bandwidth consumers aren’t doing anything “innovative,” they’re most likely downloading free porn. Who is this guy kidding?

Doctorow’s fiction may be very enjoyable, but his understanding of the Internet and his policy prescriptions are nonsense. Read the book, take a pass on the law.

What’s especially sad is how Doctorow tries to pander to the overseas audience by using a tonne of Brit slang, going on about “punters,” “Quangos,” pounds and pence, and making a tube reference; NN is all about tribal ID, and he gets just that much of it.

What slows down your Wi-Fi?

The Register stumbled upon an eye-opening report commissioned by the UK telecom regulator, Ofcom, on sources of Wi-Fi interference in the UK:

What Mass discovered (pdf) is that while Wi-Fi users blame nearby networks for slowing down their connectivity, in reality the problem is people watching retransmitted TV in the bedroom while listening to their offspring sleeping, and there’s not a lot the regulator can do about it.

Outside central London that is: in the middle of The Smoke there really are too many networks, with resends, beacons and housekeeping filling 90 per cent of the data frames sent over Wi-Fi. This leaves only 10 per cent for users’ data. In fact, the study found that operating overheads for wireless Ethernet were much higher than anticipated, except in Bournemouth for some reason: down on the south coast 44 per cent of frames contain user data.

When 90% of the frames are overhead, the technology itself has a problem, and in this case it’s largely the fact that there’s such a high backward-compatibility burden in Wi-Fi. Older versions of the protocol weren’t designed for obsolescence, so newer systems have to take expensive steps to ensure older systems can see them; otherwise collisions happen, and that’s not good for anybody. Licensed spectrum can deal with the obsolescence problem by replacing older equipment; open spectrum has to bear the costs of compatibility forever. So this is one more example of the fact that “open” is not always better.
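To see how quickly the compatibility tax eats a channel, consider a rough airtime model. The preamble and protection numbers below are illustrative assumptions in the spirit of 802.11b/g mixed mode, not figures from the Ofcom-commissioned report:

```python
# Illustrative airtime budget for one small frame on a mixed-mode network.
PREAMBLE_US = 192.0     # long preamble, sent slowly so 802.11b radios hear it
PROTECTION_US = 106.0   # assumed CTS-to-self exchange at a legacy rate
ACK_US = 50.0           # acknowledgment, which also carries fixed overhead

def useful_fraction(payload_bytes: int, rate_mbps: float) -> float:
    """Share of total airtime spent moving the user's actual bits."""
    payload_us = payload_bytes * 8 / rate_mbps
    return payload_us / (PREAMBLE_US + PROTECTION_US + payload_us + ACK_US)

# A 100-byte frame at 54 Mb/s spends ~15 us on data and ~350 us on
# fixed overhead, so only about 4% of the airtime does useful work:
print(f"{useful_fraction(100, 54.0):.0%}")
```

The fixed costs are paid at legacy rates no matter how fast the data rate climbs, which is why small frames, beacons, and resends can swallow 90 per cent of the channel.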

Interlocking Directorates

The New York Times reports that regulators have an interest in the structure of the Apple and Google boards of directors:

The Federal Trade Commission has begun an inquiry into whether the close ties between the boards of two of technology’s most prominent companies, Apple and Google, amount to a violation of antitrust laws, according to several people briefed on the inquiry.

I doubt this will go very far, as the interlocking directors (Eric Schmidt and former Genentech CEO Arthur Levinson) will simply resign before any enforcement action is imminent, but it does raise some interesting questions about the market for mobile phone operating systems, currently split among Apple, Google, Microsoft, Palm, and a few others. These systems are rife with limitations, each of which could be considered a network neutrality violation when viewed in just the right way.

I imagine Apple itself might wish to give Dr. Schmidt his walking papers before he becomes an anti-trust problem, which he actually isn’t at this point. The FTC’s interest in this obscure situation is probably a signal that the Administration wants to be viewed as an anti-trust hawk without doing anything substantial.

But this is what the law calls an “occasion of sin.” Dear me.