The Internet’s Big Idea

Because of current events, it strikes me that we need to review the fundamental idea that the Internet was based on, packet switching. This goes back to the dawn of time in the 1960s, before any corporations were involved, and is the pure essence of the thing. Only by understanding the basic idea can we see who’s true to it today and who isn’t.

Packet switching is not a hard notion to grasp, as it involves the spirit of cooperation, a commons, and mutual benefit. Recall that communications networks of the earlier sort allocated bandwidth strictly. On the telephone network you always got the same slice of bandwidth, neither more nor less. On some rare occasions like Mother’s Day you couldn’t make a call right away, but for all practical purposes it was always there and always the same.

This isn’t a very efficient way to allocate bandwidth, however, because much of it goes to waste. When you’re on a call, you consume just as much bandwidth when you’re speaking as when you’re not, and a great deal of bandwidth is idle for most of the day because it’s simply a reserve for peak calling times. So the designers of the early Internet – it was called ARPANET back then – wondered what would happen if they built a network where bandwidth was a common pool that each user would draw from when he needed it, as he needed it, instead of being strictly divided in the old-fashioned way. In this scheme, during periods of low usage, each user would get tons of bandwidth so the network would appear to be really, really fast, and during periods of high demand it would be partitioned fairly, just like the phone network, or so it seemed. So they launched this great experiment to see what had to be done to make a network that would scale up in performance under light load and scale down to fairness under heavy load. The method is called “packet switching” to differentiate it from the “circuit switching” technology in the phone network, and the ARPANET became the Internet in its second incarnation of protocols.
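The statistical-multiplexing bargain described above can be caricatured in a few lines of code. This is a toy model, not a measurement: the link capacity, user count, and activity probabilities are all illustrative assumptions.

```python
import random

# A toy model of the packet-vs-circuit trade-off: a link of capacity
# LINK shared by USERS subscribers, each active with probability p at
# any instant. All numbers are illustrative assumptions.
LINK = 100.0   # total bandwidth units on the shared link
USERS = 50

def circuit_switched_share():
    # Circuit switching: every user gets a fixed slice, busy or idle.
    return LINK / USERS

def packet_switched_share(p, trials=10_000, seed=42):
    # Packet switching: whoever is active at a given instant splits
    # the whole pool; idle users contribute their slack to the commons.
    rng = random.Random(seed)
    shares = []
    for _ in range(trials):
        active = sum(rng.random() < p for _ in range(USERS))
        if active:
            shares.append(LINK / active)
    return sum(shares) / len(shares)

print(circuit_switched_share())        # always 2.0 units, rain or shine
print(packet_switched_share(p=0.1))    # light load: far above the fixed slice
print(packet_switched_share(p=0.9))    # heavy load: back near the fixed slice
```

Under light load the average active user sees many times the circuit-switched allotment; under heavy load the pool degrades toward the same fair slice the phone network would have given, which is exactly the scale-up/scale-down behavior the ARPANET experiment was probing.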

Packet switching is the single most important idea in the design of the Internet, even more than universal connectivity; after all, the phone network gave us the ability to reach out and annoy anyone on the planet long ago. Packet switching as a way to manage bandwidth is the Big Idea.

It always strikes me as odd that there’s so little understanding of the Big Idea at the base of the Internet’s design pyramid among our would-be Internet regulators and ISP critics. They’re always complaining about the deceptiveness of “unlimited access” rates and “all you can eat” deals that don’t guarantee any constant or minimum rate. (Duh, we tried that already.) This is an experiment in another direction, where the deal is that it’s going to be faster at some times than at other times, but overall it’s going to be much better and much cheaper than guaranteed bandwidth. And sure enough, it works: you can even make phone calls over the Internet of exceptional quality anywhere in the world for peanuts. It’s marvelous.

Well, mostly marvelous. Throughout the Internet’s history, even when it was a closed garden for the research world and long before the great unwashed were allowed on it, the “fairness” problem has proved very difficult to resolve, because each user and each application has a different appetite for bandwidth and a different demand for response time. In the early days, the interactive terminal protocol “telnet” was often stymied by the bulk data transfer protocol “ftp”, and today Skype has to work around BitTorrent.

In theory, it shouldn’t be hard to fit the needs of programs that communicate small chunks of data on a tight timeline around programs that move massive amounts of data with no particular deadline for any one chunk. In theory, we should be able to design networks that do that, either by booking reservations for the call or by giving Skype priority over BitTorrent. And in fact we have a number of experimental protocols that will do just that, especially within the confines of a private network in a business, an organization, or a home. They all depend on a prioritizing or scheduling function having a clear idea of which packets belong to which program, and on the programs being willing to settle for less than what they want for various periods of time. And that’s the way things were on the Internet before it went commercial.
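The prioritizing function described above can be sketched as a strict-priority queue: the classifier knows which flow each packet belongs to, and the link always serves the most urgent class first. The class names and packet labels here are illustrative assumptions, not any particular protocol.

```python
import heapq
from itertools import count

# A minimal strict-priority scheduler: latency-sensitive classes are
# served before bulk transfer, and bulk flows simply wait their turn.
# Class names are illustrative assumptions.
PRIORITY = {"voip": 0, "interactive": 1, "bulk": 2}  # lower = served sooner

class Scheduler:
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within a class

    def enqueue(self, flow_class, packet):
        heapq.heappush(self._heap, (PRIORITY[flow_class], next(self._seq), packet))

    def dequeue(self):
        # The link always transmits the highest-priority waiting packet.
        return heapq.heappop(self._heap)[2]

sched = Scheduler()
sched.enqueue("bulk", "ftp-chunk-1")
sched.enqueue("bulk", "ftp-chunk-2")
sched.enqueue("interactive", "keystroke")
print(sched.dequeue())  # "keystroke" jumps ahead of both ftp chunks
```

Note what the sketch presumes, exactly as the paragraph says: the scheduler must be able to tell which class a packet belongs to, and the bulk class must be willing to wait indefinitely while priority traffic flows.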

In the mid-80s, we saw Congestion Collapse (“Internet meltdown”) during periods of heavy ftp usage. The quick fix that was cobbled together required TCP to voluntarily throttle back the amount of data it transmitted when messages weren’t delivered. This overloaded the meaning of a dropped packet, giving it two possible causes: either the packet was hit by noise and corrupted, or a network queue was full and the packet was discarded because there was no more room in the line for it. Error rates were low (there was no WiFi back then), so it was fine to react as if the network was overloaded. And we could count on everybody being polite and graciously accepting slow response time until the overall load went down.
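The voluntary throttling that grew out of that fix is usually described as additive-increase/multiplicative-decrease: grow the send window gently while acknowledgments arrive, and halve it on any drop, treating the drop as a congestion signal rather than line noise. A caricature of the behavior, with simplified units:

```python
def aimd(events, window=1.0, max_window=64.0):
    """Trace a caricatured congestion window over 'ack'/'drop' events.

    One unit of additive increase per ack, a halving per drop; real TCP
    is considerably more subtle, but the shape is the same.
    """
    trace = []
    for ev in events:
        if ev == "ack":
            window = min(window + 1.0, max_window)   # additive increase
        else:  # "drop": assume the network is congested, back off hard
            window = max(window / 2.0, 1.0)          # multiplicative decrease
        trace.append(window)
    return trace

print(aimd(["ack"] * 4 + ["drop"] + ["ack"] * 2))
# [2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5] -- climb, halve on loss, climb again
```

The halving is the “politeness” the paragraph mentions: every sender unilaterally accepts less until the drops stop, with nothing but good behavior enforcing the bargain.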

This could have been a fairly crappy solution as it didn’t distinguish application requirements between our interactive application and our bulk data application, but implementation did what design failed to do: in practice, telnet data came in much shorter packets than ftp data, so when you combine that with the fact that the packet droppers are looking for space in network queues, you obviously get more space out of dropping long packets than short ones. So voila, in one step you’ve got priority enforcement and congestion control.
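The accidental priority described above falls out of measuring queue space in bytes rather than packets. A sketch, with illustrative packet sizes: when the queue is nearly full, a 1500-byte ftp packet no longer fits and is dropped, while a 64-byte telnet packet still squeezes in.

```python
# A queue limited in bytes, not packets: long packets are the first
# to find no room, which is the accidental priority enforcement the
# telnet-vs-ftp story relies on. Sizes are illustrative.
class ByteQueue:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.packets = []

    def offer(self, name, size):
        if self.used + size > self.capacity:
            return False          # no room in the line: packet dropped
        self.packets.append(name)
        self.used += size
        return True

q = ByteQueue(capacity_bytes=2000)
q.offer("ftp-1", 1500)            # accepted, queue now nearly full
print(q.offer("ftp-2", 1500))     # False: the long ftp packet is dropped
print(q.offer("telnet-1", 64))    # True: the short telnet packet gets through
```

The dropped ftp packet triggers the sender’s throttling, while the telnet keystroke sails through, so the bulk flow backs off and the interactive flow doesn’t.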

And it’s all going to be fine until the next generation of protocols comes around and our assumptions once again have to be revised. I’ll go into that tomorrow.

(Updated by removing some stuff about a blog post that inspired me to write this stuff. That material now has its own post, right below.)

Try this article for a little insight into changes afoot inside the Internet.

Excellent analysis

My post-season baseball predictions have been shockingly good. Each of the first three games in the ALCS (AKA “Real World Series”) was won by the team I said had the pitching advantage. We’ll see if this holds up in game four, but I’m feeling pretty good about it on account of Wakefield coming off an injury and being a knuckler and all. Cleveland still has several key hitters MIA, such as Sizemore and Hafner, so those boys do need to step up.

The NLCS was a total snooze-fest, the only excitement coming in Game 4 when the Snakes had to leave their pitcher in to give up 6 runs in one inning because he was their best hitter. Dumping that poser Byrnes was the best thing Billy Beane ever did.

Rabid right-wing “Christian” fundamentalist Jeff Goldstein is pretty excited about the snake-handling Rockies playing the Champion, and I have to admit there’s something cute about a Cleveland-Rox series; as long as Ted Haggard doesn’t throw out the first pitch.

Only on the Internet

From the annals of modern technology:

A Bosnian couple are getting divorced after finding out they had been secretly chatting each other up online under fake names.

Sana Klaric, 27, and husband Adnan, 32, from Zenica, poured out their hearts to each other over their marriage troubles, and both felt they had found their real soul mate…

“To be honest I still find it hard to believe that the person, Sweetie, who wrote such wonderful things to me on the internet, is actually the same woman I married and who has not said a nice word to me for years.”

What can I say?

AeA report: False and Misleading

The crack research team at the American Electronics Association has issued a report on net neutrality that sets a new bar for rank absurdity in public policy discourse. The report simply parrots the most outrageous, counter-factual and ahistorical claims made by the professional protest groups that have assembled to promote this dubious cause, and then jumps through hoops to argue for a new and unprecedented network regulation regime. It’s amazing that the primary trade group for the electronics industry would employ a “research” staff that’s clearly out-of-touch with reality and the concerns of its membership, and I have to believe that heads are going to roll over this abomination. Here’s most of the report with my comments interspersed and some redundancies removed. It makes for a good laugh.

The AeA research team produces regular reports on the most timely and relevant issues to the high-tech industry and to U.S. competitiveness in a global economy. We combine rigorous data with careful analysis to provide industry leaders and policymakers the information they need to assess the issue.

While this is certainly a timely issue, the report actually fails to provide any “rigorous data” or “careful analysis.” It makes a number of unsourced and unsupportable claims, states them hysterically, and leaps to an unwarranted conclusion. Read on and you’ll see.

Network neutrality is a wide ranging concept with many facets and many different groups trying to define what it means. Unfortunately, much of the current debate is being driven by network operators, resulting in a one-sided view, full of misleading information.

It seems to me that the pro-regulation side has done plenty of “driving” of this issue, from its original manufacture to supplying the broad set of false and misleading claims that we’re going to see here. Certainly, network operators and equipment manufacturers should have a voice in any new attempt to regulate their industry. This is a democracy and all voices should be heard, especially those of the experts.

This paper focuses on addressing these misperceptions and on the most contentious part of the debate, the discrimination of Internet traffic on the basis of source or ownership of content.

As exciting as this subject matter may be, it’s off to the side of the network neutrality debate as it’s been framed in the bills proposed by Snowe, Dorgan, Markey, and the rest of the pro-regulation coalition. Their bills ban the sale of enhanced services, such as the low-delay Quality of Service needed by telephony and live TV broadcasts, to residential Internet access customers. These are services that business and education can freely buy today, but which aren’t generally available to consumers. So right off the bat we can see that the AeA’s crack research team means to misframe the issue and deal with a strawman.

When the Internet was first built it was designed to be content neutral; its purpose was to move data from one place to another in a nondiscriminatory fashion regardless of who provided the original content.

When the Internet was first built, it was designed to be a playground for research on network protocols, not to be the final word on public networking. We’ve learned a lot from that research, mainly that the Internet lacked adequate mechanisms for fair access, congestion control, security, authentication, and Quality of Service. But this assertion is at best a red herring; whatever the principles were that guided the Internet at its inception, now that it’s a general purpose network used by a billion people outside the network research community, it should be guided by the needs of its current users, not the relative ignorance of its first wave of designers. And in any event, the Internet’s architecture has always recognized that all packets don’t have equal needs, which is why each Internet packet carries tags reporting its desired Class of Service.
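The Class of Service tags mentioned above live in the second byte of the IPv4 header: originally the Type of Service field of RFC 791, whose top three bits carried a precedence value, later redefined by RFC 2474 as the six-bit Differentiated Services Code Point (DSCP). A sketch of decoding both interpretations of that byte:

```python
# Decode the IPv4 header's second byte under both of its historical
# interpretations: RFC 791 precedence (top 3 bits) and RFC 2474 DSCP
# (top 6 bits). The bottom two bits are outside both fields' scope here.
def decode_tos(tos_byte):
    precedence = tos_byte >> 5   # RFC 791 precedence, 0-7
    dscp = tos_byte >> 2         # RFC 2474 DSCP, 0-63
    return precedence, dscp

# DSCP 46, "Expedited Forwarding," is the code point commonly used for
# telephony; it is 0b101110, stored in the header byte as 0b10111000.
print(decode_tos(0b10111000))  # (5, 46)
```

So the claim stands on the packet format itself: the header has carried a field announcing each packet’s desired treatment since the Internet’s first specification, whether or not any given network honors it.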

Initially, the Federal Communications Commission (FCC) enforced this principle by requiring nondiscriminatory treatment by the telecom carriers, where content was delivered on a “best effort” basis, i.e., by treating all “packets” as relatively equal.

However, this changed in August 2005 when the FCC effectively removed the legal protection of content neutrality for all broadband Internet access providers.

This is total gibberish. “Best effort” delivery simply means that the network does not attempt to re-transmit lost or corrupted packets. The term comes from the design of the now-obsolete coax-cable Ethernets that were built at Xerox PARC. And it certainly has nothing to do with any notion of treating all packets as equal regardless of their requested Class of Service. And for as long as the commercial Internet has existed, packets have been routed differentially depending on source, destination, and paid service agreements between ISPs and NSPs. All routes are not equal, and they’re chosen based on who’s communicating with whom.

The FCC has never regulated the behavior of packet-switching networks. What it has done is regulate wires owned by monopoly telephone companies with respect to the source and destination end-points, which is a very different thing. The former FCC rules on DSL, for example, provided that independent ISPs could rent lines from the phone company at discount prices and connect them to their own equipment. These regulations – called “unbundling” – did not dictate how packets should be handled. And during the time that the DSL regulations were in place, similar services provided by Cable TV were not subject to “unbundling” rules. We found that Cable Internet was faster and cheaper than DSL, so the experiment with different regulations was terminated and DSL was re-regulated under Cable rules. This has nothing to do with preferred content.

Some broadband providers want to be able to offer priority service to those content providers who agree to pay an additional fee beyond what they already pay to access the Internet. Those who can afford to pay the fee would have their content moved to the front of the line.

These carriers claim that the next generation of Internet content (such as videos, voice over IP, real-time gaming, and distance learning) requires higher levels of speed and quality than other content, and as a result, must be prioritized ahead of other Internet traffic. To pay for this increased capacity, the network operators argue that they need additional revenue.

Notice the use of the term “content” here to describe things that are clearly “communication”. This is the essence of the confusion in net neutrality regulation. The old Internet was indeed a system for moving stored “content” from one site to another, whether in the form of e-mail or computer files. But the New Internet is emerging as a system where content has to share wires with real-time communication that can’t sit on the shelf for hours or minutes or even seconds before its delivery. Real-time gaming has a completely different set of communications requirements than BitTorrent downloads, and the network neutrality debate revolves around the question of providing each with the service it requires at a price that’s fair to all. This isn’t an empty carrier claim, it’s technical reality.

Countering this, Internet content providers and consumer groups state that they already pay billions of dollars to access the Internet. They are also concerned that telecom and cable operators, which dominate broadband Internet access with over 92 percent market share, will leverage their potential monopoly power to pick winners and losers in the highly competitive Internet content market. This environment has historically been quite egalitarian.

Yes, Virginia, we all know that Google pays to connect to the Internet, and their carrier agreements probably specify a peak level of bandwidth and little else. Does this mean that they’re automatically entitled to access higher levels of service beyond what they pay for? Perhaps, but that’s certainly not an obvious conclusion. The AeA is trotting out a big strawman here.

And the claim that the Internet is egalitarian is patently false. The more you pay, the more you get and there’s nothing equal about it.

There seems to be the perception that Internet companies (also called Internet content providers) and, to a lesser extent, Internet consumers are not paying their fair share to access the Internet. This perception is just wrong.

Actually, it’s plain right. A small fraction of Internet consumers – like 5% – use most of the bandwidth. As your granny pays the same monthly bill as these boys, there is in fact considerable inequity in the present billing system. Now one way to remedy this problem is to give low-volume users priority in access to network pipes and to give lower priority to high-volume users who pay the same price. This sensible approach is forbidden by heavy-handed network neutrality regulations.

By tiering the Internet based on who pays the most to prioritize their content, the telecom industry is creating a system of haves and have-nots: those that can afford the premium for preferred treatment and those that cannot.
A tiered system for broadband services is already in place, but it is based on the bandwidth purchased by the consumer and content provider, who both are already paying for Internet access. This current system allows consumers equal access to any legal content they choose and gives even the smallest content provider the chance to compete in a robust marketplace.

This system treats all packets equally.

Broadband providers certainly do want to create service tiers, because this will allow them to pay for their investment in fiber-optic networks to the home the way that all infrastructure projects are paid for in America: by selling services. In particular, the carriers want to sell cable TV and voice services, just as Cable TV companies already do. We don’t seem to have any problem with the technical steps the Cable companies have made to sell “triple-play services” over their lines, so why do we have a problem with fiber-optic network providers doing what amounts to the same thing?

The controversial part of the plan is whether they should be allowed to give some actual web sites better service than others, thereby nullifying the capital advantage that companies such as Google, with its 450,000 servers, have over the next pair of guys in a dorm room. Depending on several factors, nullifying Google’s advantage could be a good thing or a bad thing, so I’d rather have this sort of arrangement reviewed by a regulatory agency than committed to statute. The FCC says they already have this authority, and they’ve used it in the past. No new law is needed here.

These types of tiered services already exist in other countries, without resorting to additional fees on content providers. Internet subscribers in Japan can receive 100-megabit service for $25 a month. Sweden is planning for a 1-gigabit (1,000 megabit) service for about $120 a month — this is over 150 times faster than the fastest typical DSL service available in the United States, which currently tops out at around 6 megabits.

This is just plain false. Korea has fiber-to-the-home, and they pay for it by blocking VoIP and selling voice services exclusively. And similar arrangements exist in the UK and other countries. The analysts are either intentionally lying or they’re woefully uninformed.

OK, that’s enough for today, I’ll get to the rest of it as I have time. Suffice it to say, the study’s authors, Matthew Kazmierczak and Josh James, should be fired.

Cisco weighs in on net neutrality

This is historical by now, but I was curious about it so I checked:

“We strongly support the principle of an open Internet,” Cisco CEO John Chambers wrote in a letter to Congressman Joe Barton, who chairs the House Energy and Commerce Committee. “We must, however, balance the fact that innovation inside the network is just as important as innovation in services and devices connected to the Internet. Broadband Internet access service providers should remain free to engage in pro-competitive network management techniques to alleviate congestion, ameliorate capacity constraints and enable new services.”

Chambers makes one very excellent point: most of the talk about Internet innovation in DC is about services attached to the Internet, and not the system of lines and routers itself. The neutrality regulations would stifle innovation in the core structure of the Internet, which will eventually lead to stagnation in the services space, even worse than the stagnation we’ve seen since the Bubble burst. That can’t be good.

Full text of COPE Act

The COPE Act is in Thomas now, and the lies about it fly fast and furious in Nutrialand. See the bill here, and notice this:

‘‘(3) ADJUDICATORY AUTHORITY.—The Commission shall have exclusive authority to adjudicate any complaint alleging a violation of the broadband policy statement and the principles incorporated therein. The Commission shall complete an adjudicatory proceeding under this subsection not later than 90 days after receipt of the complaint. If, upon completion of an adjudicatory proceeding pursuant to this section, the Commission determines that such a violation has occurred, the Commission shall have authority to adopt an order to require the entity subject to the complaint to comply with the broadband policy statement and the principles incorporated therein. Such authority shall be in addition to the authority specified in paragraph (1) to enforce this section under titles IV and V. In addition, the Commission shall have authority to adopt procedures for the adjudication of complaints alleging a violation of the broadband policy statement or principles incorporated herein.

Nutria claim this means the FCC lacks the authority to punish broadband abuse. Right.

The rules they’ll enforce are in Appropriate Framework for Broadband Access to the Internet over Wireline Facilities:

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to access the lawful Internet content of their choice.

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement.

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to connect their choice of legal devices that do not harm the network.

• To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to competition among network providers, application and service providers, and content providers.

That doesn’t seem too complicated.

NRO likes the new network

Here’s a great editorial from National Review Online on the cool new network:

Where the telecoms predict bold innovations, advocates of net neutrality see Orwellian nightmares. They argue that if telecoms are allowed to speed up the delivery of some content, there is nothing to stop them from slowing down or blocking content they don’t like. But such anti-consumer behavior is unlikely in a competitive market. Let’s say George Soros somehow took over Verizon and made troublemaking websites like National Review Online disappear from his network. Competition from other broadband providers would discourage him from thus breaking his customers’ hearts.

Net-neutrality advocates argue that there isn’t enough competition among broadband providers to ensure that service degradation would be punished or that telecoms would charge Internet companies fair prices for faster service. Most U.S. broadband consumers are forced to choose between their local cable and local phone companies, they argue, giving these telecoms a “virtual duopoly” in the broadband market.

Leave aside the FCC’s finding, noted in the Supreme Court’s ruling on this matter, of “robust competition . . . in the broadband market,” including not just cable and DSL, but burgeoning satellite, wireless, and broadband-over-powerline technologies. Ignore also the argument that net-neutrality legislation could actually entrench the bigger players at the expense of new technologies that might otherwise compete by differentiating their services.

They make an important point: there’s a lot of doubt about the business plan for new broadband, but that’s all the more reason to let the market sort it out. The Nutria want to abort it before we ever get a chance to see what it can do. That would be a dreadful mistake.

A fiction and not a useful one at that

Net neutrality is a fiction, and not even a useful one. I’m working on a short essay laying out the salient points, and trying to sort out some of Tim Berners-Lee’s eccentric notions. See Sir Web’s blog for the back-and-forth.

Two against one

It takes two Angels to handle one Athletic.

This is baseball at its finest.

UPDATE: Major League Baseball fined Kendall and suspended him for four games, but the guy throwing the punch, John “Slingblade” Lackey, was let off with only a fine.

The Daily Neut – Part II

Recent developments on the neut front have the New York Times showing a failure to grasp the concept:

“Net neutrality” is a concept that is still unfamiliar to most Americans, but it keeps the Internet democratic. Cable and telephone companies that provide Internet service are talking about creating a two-tiered Internet, in which Web sites that pay them large fees would get priority over everything else. Opponents of these plans are supporting Net-neutrality legislation, which would require all Web sites to be treated equally. Net neutrality recently suffered a setback in the House, but there is growing hope that the Senate will take up the cause.

And Web inventor Tim Berners-Lee flying off into a socialist Neverland:

It is of the utmost importance that, if I connect to the Internet, and you connect to the Internet, that we can then run any Internet application we want, without discrimination as to who we are or what we are doing. We pay for connection to the Net as though it were a cloud which magically delivers our packets. We may pay for a higher or a lower quality of service. We may pay for a service which has the characteristics of being good for video, or quality audio. But we each pay to connect to the Net, but no one can pay for exclusive access to me.

There’s actually nothing magical about how the Internet delivers packets, it’s a machine that follows a strict set of rules. The Net Neutrality advocates are indeed hostile to levels of service that are good for video or good for audio, and nobody is even thinking about a service that blocks access to anybody; in actual fact the COPE Act that was passed by the Energy and Commerce Committee expressly forbids that. So this is simply another strawman argument from somebody who should know better.