Congrats to Harold Feld

DC wonks are by now aware that Harold Feld has left MAP and joined Public Knowledge as Legal Director. While there’s no doubt that Harold is a card-carrying communist, he’s my favorite pinko so I’m glad to see he’s secured gainful employment. With any luck, he can slap a little sense into the more fanatical members of the PK staff and make them act a little bit more like adults. So congrats, Harold, and good luck to you. A little, anyway.

Speaking of communists, check out this breathtaking exercise in spin at Technology Liberation Front. Tim Lee trots out that tired “GNU/Linux operating system” trope. Nope: GNU and Linux are two different things, created by two different communities under very different assumptions. The FSF tried to create its own OS for many years and failed, but Torvalds pulled it off right away because he’s a brainy and practical dude. Don’t count on fire-breathing ideologues to create your technology for you; there will be so many strings attached you won’t want to use it.

SxSW Wireless Meltdown

There’s nothing like a horde of iPhone users to kill access to AT&T’s wireless network: my AT&T BlackBerry Bold was nearly unusable at eComm because of the large number of iPhones in the room, and the situation at SxSW is roughly the same. The silver lining in Austin this week is that the show’s Wi-Fi network is working well. Part of the trick is the deployment of Cisco 1252 access points with 5 GHz support. Unlike the Bold, iPhones can’t operate on 5 GHz channels, so all that spectrum is free for the taking by Bolds and laptops that can use it. In a concession to MacBook users, who aren’t allowed to select a Wi-Fi band, the show net uses different ESSIDs for 2.4 and 5 GHz operation. It also has a load of reasonable restrictions:

Acceptable Use Policy

The Wireless network at the Convention Center is designed for blogging, e-mail, surfing and other general low bandwidth applications. It is not intended for streaming of any sort.

a) Peer-to-peer traffic such as bittorrent and the like, use a disproportionate amount of bandwidth and are unfair to other attendees. Please refrain from non-conference related peer-to-peer activities to minimize this effect.

b) Please be considerate and share the bandwidth with your fellow attendees. Downloading all of the videos from a video sharing service for example, is being a hog.

c) Please do not actively scan the network. Many of the tools for scanning an address range are too efficient at using as much bandwidth as possible, this will likely be noticed.

Despite this AUP, I can confidently predict that speakers will demand unrestricted use of wireless spectrum.

Slight disconnect, eh?

UPDATE: Om of GigaOm reports that AT&T is addressing the problems in Austin by switching on the 850 MHz band in their downtown Austin towers:

AT&T’s network choked and suddenly everyone was up in arms. And then Ma Bell got in touch with Stacey, who reported that AT&T was boosting its network capacity.

How did they do this? By switching on 850 MHz band on eight cell towers to blanket the downtown Austin area. This was in addition to the existing capacity on the 900 MHz band. AT&T is going to make the same arrangements in San Francisco and New York by end of 2009, AT&T Mobility CEO Ralph de la Vega told Engadget.

Not all AT&T devices support the 850 MHz band, but the Bold does. The larger takeaway, however, is that all wireless systems become victims of their own success: the more people use them, the worse they get. C’est la vie.

Notable debates in the House of Lords

We’re quite fond of Sir Tim Berners-Lee. As the first web designer, he personally converted the Internet from an odd curiosity of network engineering into a generally useful vehicle for social intercourse, changing the world. That this was a contribution of inestimable value goes without saying. It’s therefore distressing to read that he’s been mumbling nonsense in public fora about Internet management practices.

For all his brilliance, Sir Tim has never really been on top of the whole traffic thing. His invention, HTTP 1.0, did strange things to the Internet’s traffic-handling system: his decision to chunk segments into 512-byte pieces tripled the number of packets the Internet had to carry per unit of information transferred, and his decision to open a unique TCP stream for every object (section of text or graphic image) on a web page forced each part of each page to load in TCP’s “slow start” mode. Carriers massively expanded the capacity of their pipes in a vain attempt to speed up web pages, because poor performance was designed into Sir Tim’s protocol. Hence the term “world-wide wait” was coined to describe the system, and more experienced engineers had to produce HTTP 1.1 to eliminate the tortured delays. This is not to bash His Eminence, but to point out that all of us, even the geniuses, have limited knowledge.
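The cost of a connection-per-object design can be sketched with a back-of-the-envelope model (my own illustration, not anything from the HTTP specs: assume page loads are round-trip bound, slow start begins at one segment and doubles the window each round trip, and every fresh connection pays one handshake round trip):

```python
def rtts_slow_start(segments):
    """Round trips to deliver `segments` segments on a window that
    starts at 1 and doubles each round trip (classic slow start)."""
    rtts, cwnd, sent = 0, 1, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

def page_load_rtts(objects, segs_per_object, persistent):
    if persistent:
        # one handshake; the connection's window keeps growing across objects
        return 1 + rtts_slow_start(objects * segs_per_object)
    # HTTP 1.0 style: every object pays a handshake and restarts slow start
    return objects * (1 + rtts_slow_start(segs_per_object))

# a hypothetical page with 10 objects of 8 segments each
http10 = page_load_rtts(10, 8, persistent=False)
http11 = page_load_rtts(10, 8, persistent=True)
```

Under these toy assumptions the per-object scheme needs 50 round trips where a single persistent connection needs 8, which is the flavor of delay that HTTP 1.1’s persistent connections eliminated.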

At a House of Lords roundtable last week, Sir Tim took up a new cause by way of complaining about one of the ways that personal information may be obtained on the Internet:

Speaking at a House of Lords event on the 20th anniversary of the invention of the World Wide Web, Berners-Lee said that deep packet inspection was the electronic equivalent of opening people’s mail.

“This is very important to me, as what is at stake is the integrity of the internet as a communications medium,” Berners-Lee said on Wednesday. “Clearly we must not interfere with the internet, and we must not snoop on the internet. If we snoop on clicks and data, we can find out a lot more information about people than if we listen to their conversations.”

Deep packet inspection involves examining both the data and the header of an information packet as it passes a ‘black box’ on a network, in order to reveal the content of the communication.

Like many opponents of the scary-sounding “deep packet inspection,” His Eminence confuses means and ends. There are many ways to obtain personal information on the Internet; the preceding post was about one of them. Given the choice, most of us would gladly surrender some level of information in order to obtain free services, or simply better-targeted ads. As long as the Internet is considered a bastion of “free-” (actually, “advertising-supported-”) culture and information, personal information gathering will be the coin of the realm. So it doesn’t much matter whether my privacy is violated by a silly packet-snooping system that I can easily thwart by encrypting my data or by an overly invasive ad-placement system; it’s gone either way. If he’s really worried about privacy, he should address the practice of information gathering itself, not just one means of doing it.
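A toy demonstration of the “easily thwart” point (illustration only: the keyword matcher below stands in for a real payload inspector, and the hash-derived XOR stream stands in for real encryption and is emphatically not secure crypto):

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # toy counter-mode keystream derived from SHA-256 (NOT secure crypto)
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

def naive_dpi(packet: bytes) -> bool:
    # keyword matching of the sort a payload inspector applies to cleartext
    return b"bittorrent" in packet.lower()

payload = b"GET /announce?info_hash=... HTTP/1.0  (BitTorrent tracker request)"
sealed = xor_cipher(b"my key", payload)

flagged_clear = naive_dpi(payload)    # the cleartext is flagged
flagged_sealed = naive_dpi(sealed)    # the ciphertext looks like noise
recovered = xor_cipher(b"my key", sealed) == payload
```

The inspector flags the cleartext and sees nothing in the ciphertext, while the intended recipient decrypts it trivially; the privacy question is about the gathering, not the gathering technique.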

Nonsense is not unknown in the House of Lords, however. One of the most entertaining debates in the history of Western democracy took place in that august body, the infamous UFO debate:

The big day came on 18 January 1979 in the middle of a national rail strike. But the industrial crisis did nothing to dampen interest in UFOs. The debate was one of the best attended ever held in the Lords, with sixty peers and hundreds of onlookers – including several famous UFOlogists – packing the public gallery.

Lord Clancarty opened the three hour session at 7pm “to call attention to the increasing number of sightings and landings on a world wide scale of UFOs, and to the need for an intra-governmental study of UFOs.” He wound up his speech by asking the Government reveal publicly what they knew about the phenomenon. And he appealed to the Labour Minister of Defence, Fred Mulley, to give a TV broadcast on the issue in the same way his French counterpart, M. Robert Galley, had done in 1974.

The pro-UFO lobby was supported eloquently by the Earl of Kimberley, a former Liberal spokesman on aerospace, who drew upon a briefing by the Aetherius Society for his UFO facts (see obituary, FT 199:24). Kimberley’s views were evident from an intervention he made when a Tory peer referred to the Jodrell Bank radio telescope’s failure to detect a single UFO: “Does the noble Lord not think it conceivable that Jodrell Bank says there are no UFOs because that is what it has been told to say?”

More than a dozen peers, including two eminent retired scientists, made contributions to the debate. Several reported their own sightings including Lord Gainford who gave a good description of the Cosmos rocket, “a bright white ball” like a comet flying low over the Scottish hills on New Year’s Eve. Others referred to the link between belief in UFOs and religious cults. In his contribution the Bishop of Norwich said he was concerned the UFO mystery “is in danger of producing a 20th century superstition” that sought to undermine the Christian faith.

Perhaps their Lordships will invite His Eminence to observe an actual debate on Internet privacy, now that he’s set the stage with the roundtable. I think it would be absolutely smashing to see 40 of Bertie Wooster’s elderly uncles re-design the Web. Maybe they can add a comprehensive security model to the darned thing.

On a related note, Robb Topolski presented the worthies with a vision of the Web in a parallel universe that sent many scurrying back to their country estates to look after their hedgehogs. Topolski actually spoke about North American gophers, but the general discussion brings to mind the hedgehog’s dilemma of an open, advertising-supported Internet: a system that depends on making the private public is easily exploited.

UPDATE: Incidentally, Topolski’s revisionist history of the Web has been harshly slapped down by Boing Boing readers, who should be a friendly audience:

Huh? What a bizarre claim. Is he saying that network admins weren’t capable of blocking port 80 when HTTP was getting off its feet?!?

Wha? Even ignoring the fact that network admins at the time _did_ have the tools to block/filter this kind of traffic, this would still have little or nothing to do with endpoint computing power.

Oh, man. This is defintely junk.

Revisionist history in the name of greater freedom is still a lie.

Follow this link to a discussion from 1993 about how to make a Cisco firewall block or permit access to various Internet services by port. HTTP isn’t in the example, but the same rules apply. The power was clearly there.
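For readers who have never seen one, a filter of the sort discussed in that thread looks roughly like this in IOS-style syntax (an illustrative sketch, not a tested configuration; the interface name is hypothetical, and `eq www` is shorthand for port 80):

```
! block web traffic at the border, allow everything else
access-list 101 deny   tcp any any eq www
access-list 101 permit ip any any
!
interface Serial0
 ip access-group 101 in
```

Swap the deny and permit lines and the same machinery whitelists instead of blacklists; either way, the capability clearly predates the Web’s ubiquity.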

Welcome to the NAF, Robb, do your homework next time.

Opting-out of Adsense

Regular readers are aware that this blog used to feature Google ads. We never made serious money from Adsense, so it was easy to decide to drop it when the Terms and Conditions of Google’s new behavioral advertising campaign were released. Here’s what Google suggests re: a privacy disclosure:

What should I put in my privacy policy?

Your posted privacy policy should include the following information about Google and the DoubleClick DART cookie:

* Google, as a third party vendor, uses cookies to serve ads on your site.
* Google’s use of the DART cookie enables it to serve ads to your users based on their visit to your sites and other sites on the Internet.
* Users may opt out of the use of the DART cookie by visiting the Google ad and content network privacy policy.

Because publisher sites and laws across countries vary, we’re unable to suggest specific privacy policy language. However, you may wish to review resources such as the Network Advertising Initiative, or NAI, which suggests the following language for data collection of non-personally identifying information:

We use third-party advertising companies to serve ads when you visit our website. These companies may use information (not including your name, address, email address, or telephone number) about your visits to this and other websites in order to provide advertisements about goods and services of interest to you. If you would like more information about this practice and to know your choices about not having this information used by these companies, click here.

You can find additional information in Appendix A of the NAI Self-Regulatory principles for publishers (PDF). Please note that the NAI may change this sample language at any time.

People don’t come to this site to buy stuff, and they shouldn’t have to undergo a vexing decision-making process before visiting this blog, so we’ve dropped Google as an advertiser. Not because Google is Evil, but simply because this is one too many hoops for our readers to jump through. Plus, the commission rate sucks.

So please continue to read Broadband Politics without fear of being reported to Big Brother.

A little bit breathless

The UK has offered some language to the EU regulators on Internet services that would clarify the relationship between users and providers and require full disclosure of management practices by the latter. The measure addresses the prime source of friction between the package of end-user freedoms and the network management exception that we currently have in the US, absent a coherent regulatory framework for Internet services.

Most of us would probably say, after reading the whole package, that consumer rights are advanced by it. But most of us aren’t fire-breathing neutrality monsters who can’t be bothered with the practical realities of network operation. The actual document the Brits are circulating is here; pay special attention to the Rationale.

The operative language establishes the principle that there are in fact limits to “running the application of your choice” and “accessing and sharing the information of your choice” on the Internet, which is simply stating some of the facts of life. If you’re not allowed to engage in identity theft in real life, you’re also not allowed to do so on the Internet; if you’re not allowed to violate copyright in real life, you’re also not allowed to do so on the Internet; and so on. Similarly, while you’re allowed to access the legal content and services of your choice, you’re not allowed to access them at rates that exceed the capacity of the Internet or any of its component links at any given moment, nor without the finite delays inherent in moving a packet through a mesh of switches, nor with such frequency as to pose a nuisance to the Internet Community as a whole or to your immediate neighbors. Such is life.

In the place of the current text which touts the freedoms without acknowledging the existing legal and practical limits on them, the amendment would require the carriers to disclose service plan limits and actual management practices.

So essentially what you have here is the replacement of a statement that does not accurately describe reasonable expectations of the Internet experience with one that does. You can call it the adoption of a reality-based policy statement over a faith-based one. Who could be upset about this?

Plenty of people, as it turns out. A blog called IPtegrity is hopping mad:

Amendments to the Telecoms Package circulated in Brussels by the UK government, seek to cross out users’ rights to access and distribute Internet content and services. And they want to replace it with a ‘principle’ that users can be told not only the conditions for access, but also the conditions for the use of applications and services.

…as is science fiction writer and blogger Cory Doctorow:

The UK government’s reps in the European Union are pushing to gut the right of Internet users to access and contribute to networked services, replacing it with the “right” to abide by EULAs.

…and Slashdot contributor Glyn Moody:

UK Government Wants To Kill Net Neutrality In EU
…The amendments, if carried, would reverse the principle of end-to-end connectivity which has underpinned not only the Internet, but also European telecommunications policy, to date.’

The general argument these folks make is that the Internet’s magic end-to-end argument isn’t just a guideline for developers of experimental protocols (as I’ve always thought it was), but an all-powerful axiom that confers immunity from the laws of physics and economics as well as those of human legislative bodies. Seriously.

So what would you rather have, a policy statement that grants more freedoms to you than any carrier can actually provide, or one that honestly and truthfully discloses the actual limits to you? This, my friends, is a fundamental choice: live amongst the clouds railing at the facts or in a real world where up is up and down is down. Sometimes you have to choose.

H/T Hit and Run.

The Fiber Formula

In part three of Saul Hansell’s series on broadband in the Rest of the World, we learn that taxpayers in the fiber havens are doing all the heavy lifting:

But the biggest question is whether the country needs to actually provide subsidies or tax breaks to the telephone and cable companies to increase the speeds of their existing broadband service, other than in rural areas. Many people served by Verizon and Comcast are likely to have the option to get super-fast service very soon. But people whose cable and phone companies are in more financial trouble, such as Qwest Communications and Charter Communications, may well be in the slow lane to fast surfing. Still, it’s a good bet that all the cable companies will eventually get around to upgrading to the faster Docsis 3 standard and the phone companies will be forced to upgrade their networks to compete.

The lesson from the rest of the world is that if the Obama administration really wants to bring very-high-speed Internet access to most people faster than the leisurely pace of the market, it will most likely have to bring out the taxpayers’ checkbook.

None of this should come as a surprise to our regular readers. Businesses invest in fiber infrastructure on a 20-year basis, and government subsidies can compress the investment timeline to one tenth of that. And Hansell finds that a lot of the foreign spending is driven by nationalist pride rather than more prudent factors. The problem I have with massive government spending on ultra-highspeed fiber projects is the conflicting priorities. I like fast networks, but I know that my tastes and interests aren’t the universal ones. And then there’s the question of utility: mobile networks aren’t as fast as locked-down fiber, but they’re an order of magnitude more useful.

So why don’t we strive to make the US number one in wireless, and leave the fiber race to the smaller nations? The long-term benefits of pervasive, high-speed wireless are much greater than those of heavily subsidized (and therefore heavily regulated) stationary networks.

Spectrum 2.0 panel from eComm

Courtesy of James Duncan Davidson, here’s a snap from the Spectrum 2.0 panel at eComm09.

Maura Corbett, Rick Whitt, Peter Ecclesine, Darrin Mylet, and Richard Bennett at eComm

The general discussion was about the lessons learned from light licensing of wireless spectrum in the US, the success of Wi-Fi and the failure of UWB, and what we can realistically hope to gain from the White Spaces licensing regime. As a person with a foot in both camps – technical and regulatory – I found the panel an interesting study in the contrast between the ways engineers and policy people deal with these issues. In general, hard-core RF engineer Peter Ecclesine and I were the most pessimistic about White Space futures, while the policy folks still see the FCC’s Report and Order as a victory.

In lobbying, you frequently run into circumstances where the bill you’re trying to pass becomes so heavily encumbered with amendments that it’s not worth passing. Rather than get your policy vehicle adopted in a crippled form, it’s better in such circumstances to take it off the table and work with the decision-makers to revive it in a future session without the shackles. While this is a judgment call – sometimes you go ahead and take the victory hoping to fix it later – it’s dangerous to pass crippled bills in a tit-for-tat system because you’re conceding a win in the next round to the other side.

I suggested that the FCC’s order was so badly flawed that the best thing for White Space Liberation would be to have the court void the order and the FCC to start over. This message wasn’t well-received by Rick Whitt, but I had the feeling Peter is on board with it.

The problem with the White Spaces is that the FCC couldn’t make up its mind whether these bands are best used for home networking or for a Third (or is it fourth or fifth?) pipe. The power limits (40 milliwatts to 1 watt) doom it to home networking use only, which simply leads to more fragmentation in the home net market and no additional WAN pipes. That’s not the outcome the champions of open networks wanted, but it’s what they got.

eComm, incidentally, is a terrific conference. The focus is very much on the applications people are developing for mobile phones, and it’s essential for people like me who build networks to see what people want to do with them, especially the things they can’t do very well today. Lee Dryburgh did a fantastic job of organization and selecting speakers, and is to be congratulated for putting on such a stellar meeting of the minds.

Perils of Content Neutrality

Via Scott Cleland I see that Adam O’Donnell has written a nice piece on the side-effects of net neutrality regulation, Why I am against pure net neutrality

While it may sound like treating all ISP traffic equally is a good idea, mandating strict net neutrality hurts computer security for all of us.

Adam was in the audience at last week’s MAAWG panel on net neutrality, and raised an interesting question about Random Early Discard. The moderator cut us off before we were able to address his point (he was anxious to catch a plane) but the question deserves a response.

RED is a method of packet discard intended to avoid the problems inherent in simple tail drop, which discards arriving packets only when the router’s buffer is already full. The tail-drop mechanism tends to cause cycles in packet delivery rates:

1. A buffer overflows, and a whole set of transmitters throttles back.
2. Link utilization drops to 50%.
3. The transmitters as a group increase rate together, until buffer overflow occurs again.
4. Repeat.

The net result of this cycling behavior is that congested links have their effective capacity reduced to about 70% of link speed. RED is an attempt to reduce transmission rates more selectively in order to push the link toward the upper limit of capacity. RED algorithms have been under study since the late ’80s, and none is completely satisfactory. The IETF response was to draft an Internet Standard for something called ECN (Explicit Congestion Notification), which enables the network to signal end systems that congestion is building, but it remains undeployed due to Microsoft’s concerns about home router compatibility. The follow-on to ECN is Bob Briscoe’s Re-ECN, which I’ve written about on these pages and in The Register.
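A minimal sketch of the classic RED discipline (after Floyd and Jacobson’s 1993 formulation; the threshold and weight values below are arbitrary illustrations, not recommended settings):

```python
import random

class RedQueue:
    """Random Early Detection: drop arriving packets probabilistically as
    the *average* queue length rises, instead of waiting for overflow."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight   # EWMA gain for the average queue size
        self.avg = 0.0
        self.queue = []

    def enqueue(self, packet):
        # exponentially weighted moving average of the queue length
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False          # short queue: accept everything
        elif self.avg >= self.max_th:
            drop = True           # persistently long queue: force a drop
        else:
            # drop probability ramps linearly between the two thresholds,
            # so different senders get throttled at different times
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(packet)
        return not drop   # True if the packet was accepted
```

Because drops are spread out probabilistically rather than delivered to every sender at once, the senders don’t all throttle back and ramp up in lockstep, which is exactly the synchronized cycling that tail drop produces.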

The bottom line is that Internet congestion protocols are an area that needs a lot of additional work, which the proposed Net Neutrality laws would hamper or prevent.

Van Jacobson realizes this, per the remarks he makes in an interview in the ACM Queue magazine this month:

Also, we use buffer memory in such a way that it’s valuable only if it’s empty, because otherwise it doesn’t serve as a buffer. What we do is try to forget what we learned as soon as we possibly can; we have to do that to make our buffer memory empty.

For the Olympics (not the most recent, but the previous one), we got some data from the ISP downstream of NBC. That router was completely congested; it was falling over, dropping packets like crazy. If you looked inside its buffers, it had 4,000 copies of exactly the same data, but you couldn’t tell that it was the same because it was 4,000 different connections. It was a horrible waste of memory, because the conversations were all different but what they were about was the same. You should be able to use that memory so you don’t forget until you absolutely have to—that is, go to an LRU (least recently used) rather than MRU (most recently used) replacement policy. It’s the same memory; you just change the way you replace things in it, and then you’re able to use the content.

It wouldn’t be necessary for carriers to put disks in routers. They could just start using the existing buffer memory in a more efficient way, and any time the data was requested more than once, they would see a bandwidth reduction.
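The replacement-policy change Jacobson describes can be sketched as a content store keyed by what the data is rather than which connection carried it (a toy illustration; the names and numbers are mine, not his):

```python
from collections import OrderedDict

class ContentStore:
    """Buffer memory as an LRU content cache: a hit refreshes the entry,
    and eviction removes the least recently used item."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # name -> data, oldest first
        self.hits = self.misses = 0

    def fetch(self, name, fetch_upstream):
        if name in self.store:
            self.store.move_to_end(name)     # LRU: refresh on every use
            self.hits += 1
            return self.store[name]
        self.misses += 1
        data = fetch_upstream(name)          # only misses touch the uplink
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return data

# 4,000 connections asking for the same chunk cost one upstream fetch
store = ContentStore(capacity=100)
upstream_calls = []

def upstream(name):
    upstream_calls.append(name)
    return b"chunk:" + name.encode()

for _ in range(4000):
    store.fetch("olympics/opening-ceremony", upstream)
```

With an MRU-style policy the popular chunk would be among the first things discarded; under LRU it stays resident precisely because it keeps being used, which is Jacobson’s point about serving 4,000 identical streams from one copy.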

Strict neutralism would prevent this system from being implemented: it involves Deep Packet Inspection, and the fanatics have warned us that DPI is a great evil. So we’re faced with this choice: networks that are cheap and efficient, or networks that are bloated with silly ideology. Take your pick, you only get one.

Neutralism is the new S-word

Scott Cleland has done some interesting quote-diving from the works of the neutralists, offering up his findings in a new study. Without using the term “socialism” Scott provides the evidence that this is largely an anti-capitalist movement. The fact that a few highly-profitable capitalist enterprises have found a way to manipulate a rather traditional form of digital utopian socialism for their own ends is the real news, however.

Anyhow, enjoy Scott’s paper and think about the notion of a “digital kibbutz” while you’re doing it. Now that we live in a time when government owns the banking system, “socialism” isn’t automatically a bad word in all contexts, but we do have to understand that we need to apply different expectations to government-managed systems than we do to privately-managed ones. It’s not as obvious to me as it is to the neutralists that government is more likely than business to give us universal high-speed connectivity.

UPDATE: See comments for a critique of Scott’s analysis by Brett Glass.

Nice Outings

My talk at the Messaging Anti-Abuse Working Group went very well. It was a huge room, seating probably 500 or so, and over half full. I talked about how some of the crazier ideas about net neutrality are potentially becoming mainstream thanks to the politics in the nation’s capital and some of the personnel choices made by the Obama Administration. The selection of Susan Crawford for the FCC Transition Team is a cause for alarm: Susan is as nice a person as you’ll ever want to meet, and quite bright and well-intentioned, but her position that ISPs and carriers have no business actively managing packets is poison. I got a healthy round of applause, and several people thanked me for my remarks afterwards. Very few people know how dependent e-mail is on the DNS blacklists that members of this organization maintain, and that’s a real shame.

Last night I took the short trip up to Mountain View to see Jeff Jarvis’s talk about his book What Would Google Do? The audience, about 25 people, was a lot less impressed with Google than Jeff is, and it occurred to me that Google really is vulnerable on the search front. I can imagine a much more effective search methodology than the one Google employs, but getting the venture capital to build a rival infrastructure isn’t going to happen.

I told Jeff (an old friend of the blog who’s driven a lot of traffic this way over the years) that what he likes about Google isn’t really Google so much as the inherent qualities of the Internet. He more or less knows that, but the packaging of open networks, distributed computing, and free expression is easier when you concretize it, and that’s what his book does. I read it as a sequel to Cluetrain.