Thirty years of networking brings us to this, the crowning achievement: cats on a treadmill.
Regulation and the Internet
Here’s a little speech I gave to members of the EU Parliament in Brussels on Oct. 14th. The cousins are contemplating a set of Internet access account regulations that would mandate a minimum QoS level and also ban most forms of stream discrimination. This explains why such rules are a bad (and utterly impractical) idea.
The Internet is a global network, and regulating it properly is a matter of global concern. I’d like to share a view of the technical underpinnings of the question, to better inform the legal and political discussion that follows and to point out some of the pitfalls that lie in wait.
Why manage network traffic?
Network management, or more properly network traffic management, is a central focus of the current controversy. The consumer-friendly statements of policy, such as the Four Freedoms crafted by former FCC chairman Michael Powell (now a technology adviser to Senator McCain), represent lofty goals, but they’re constrained by the all-important exception for network management. In fact, you could easily simplify the Four Freedoms as “you can do anything you want except break the law or break the network.” Network management prevents you from breaking the network, which you principally do by using up network resources.
Every networking technology has to deal with the fact that the demand for resources often exceeds supply. On the circuit-switched PSTN, resources are allocated when a call is set up, and if they aren’t available your call doesn’t get connected. This is a very inefficient technology that allocates bandwidth in fixed amounts, regardless of the consumer’s need or his usage once the call is connected. A modem connected over the PSTN sends and receives at the same time, but people talking generally take turns. This network doesn’t allow you to save up bandwidth and use it later, for example. Telecom regulations are based on the PSTN and its unique properties. In network engineering, we call it an “isochronous network” to distinguish it from technologies like the old Ethernet, the model link-layer technology when the DoD protocol suite was designed.
The Internet uses packet switching technology, where users share communications facilities and bandwidth is allocated dynamically. Dynamic bandwidth allocation, wire-sharing, and asynchrony mean that congestion appears and disappears on random, sub-second intervals. Packets don’t always arrive at switching points at the most convenient times, just as cars don’t run on the same rigorous schedules as trains.
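A toy simulation illustrates how fleeting this congestion is; the rates below are arbitrary illustrative numbers, not measurements of any real network:

```python
import random

# Toy statistical-multiplexing model (illustrative numbers only): 50
# bursty sources share one link. Each millisecond a source transmits
# with probability 0.1; the link drains 8 packets per millisecond.
# Average load (5 pkt/ms) is well under capacity, yet the queue still
# builds and drains on sub-second timescales.
N, p, capacity = 50, 0.1, 8
queue, congested_ms = 0, 0

for ms in range(10_000):  # ten simulated seconds
    arrivals = sum(random.random() < p for _ in range(N))
    queue = max(0, queue + arrivals - capacity)
    if queue > 0:
        congested_ms += 1

print(f"queue non-empty {congested_ms / 10_000:.1%} of the time "
      f"at {N * p:.0f}/{capacity} pkt/ms average load")
```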
How fast is Internet traffic growing?
It depends on whose numbers you like. Andrew Odlyzko claims it’s up 50-60% over last year, a slower rate of growth than we’ve seen in recent years. Odlyzko’s method is flawed, however, as he only looks at public data, and there is good reason to believe that more and more traffic is moving off the public Internet and its public exchange points to private peering centers. Nemertes collects at least some data on private exchanges and claims a growth rate somewhere between 50% and 100%.
The rate of growth matters to the ongoing debates about Internet regulation. If Odlyzko is right, the rate of growth is lower than the rate at which Moore’s Law makes digital parts faster and cheaper, so routine replacement of equipment will keep up with demand (leaving out the analog costs that aren’t reduced by Moore’s Law). If Nemertes is right, user demand outstrips Moore’s Law and additional investment is needed in network infrastructure. Increased investment needs to be covered by government subsidies or by the extraction of additional value from the networks by their owner/operators. Subsidy isn’t going to happen while the economy teeters on the edge of collapse, so the high-growth conclusion argues against regulations designed to preserve the legacy service model. It’s a vital question.
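Some back-of-the-envelope arithmetic shows what’s at stake. The demand figures are the ones quoted above; the Moore’s Law rate is an assumption on my part, using the common 18-month doubling formulation (roughly 59% per year):

```python
# Compare annual traffic growth against an assumed Moore's Law rate of
# 2**(1/1.5) - 1, about 59%/year (capability doubling every 18 months).
# Demand figures are the Odlyzko and Nemertes numbers quoted above;
# everything else here is illustrative.
moore = 2 ** (1 / 1.5) - 1

for label, demand in [("Odlyzko low", 0.50), ("Odlyzko high", 0.60),
                      ("Nemertes high", 1.00)]:
    # demand relative to capacity after five years at constant spend
    gap = ((1 + demand) / (1 + moore)) ** 5
    print(f"{label:13s}: demand is {gap:.2f}x capacity after 5 years")
```

At the low end demand stays inside what routine upgrades deliver; at the Nemertes high end it outruns them roughly threefold over five years.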
A couple of new data points emerged this week. Switch and Data, operator of PAIX public exchange points in Palo Alto and New York, says its traffic grew 112% last year:
International networks are making the decision to peer in the United States to reduce transit time between countries and accelerate the performance of U.S. and other global websites in their home markets. This is important due to the explosive growth of Web 2.0 with its bandwidth intensive websites for social networking, rich digital content, and business software applications. Exchanging traffic directly between content and end user networks also significantly reduces Internet transit expense which has been a rapidly growing cost for companies as their traffic volumes soar.
At the Switch and Data New York peering center, traffic was up an astonishing 295%.
Combining these numbers with what we know about the Content Delivery Networks that deliver as much as half of the Internet’s traffic, I think we can reasonably conclude that comprehensive measurement of Internet traffic would support the theory that traffic still grows at an increasing rate. One side effect of the increased use of CDNs and private peering is less certainty about the overall state of Internet traffic. Studies confined to public data are less and less useful, as many researchers have been saying for years.
At any rate, there’s considerable uncertainty about this question at the moment, which argues that the Internet needs a Nate Silver to pierce the fog of conflicting polls.
Cross-posted at CircleID.
This does not surprise me at all: Army to spend $50M on video games
The U.S. Army plans to spend some $50 million over five years on combat video games to train soldiers, according to a report in Stars and Stripes.
Next, the CIA will spend a few million on reruns of 24.
Nice Internet overview
The recently-published Nemertes study, Internet Interrupted: Why Architectural Limitations Will Fracture the ‘Net, includes a fine overview of the Internet, explaining public and private peering, content delivery networks, and overlay networks. The study had to cover this ground in order to correct the mistaken picture of Internet traffic that’s been foisted on the regulating public by Andrew Odlyzko’s MINTS study. MINTS only examines data gathered from public peering centers, a part of the Internet where traffic growth is significantly lower than at private peering centers. Nemertes has a controversial model of traffic growth, but for understanding how the Internet is put together, it’s excellent.
Just Another Utility
Critics of Obama FCC transitioner Susan Crawford have correctly pointed out that she’s made some odd-sounding remarks to the effect that Internet access is just another utility, like water, power, or sewer service. If the intent of this remark is to suggest that everybody needs access to the Internet these days, just as much (or nearly as much) as we need electricity and running water, nobody has much of a problem with this observation, other than standard hyperbole objections.
But utilities in the USA tend to be provided by the government, so it’s reasonable to draw the implication from Crawford’s comparison that the government should be providing Internet access. This interpretation is underscored by her frequent complaints about the US’s ranking versus other countries in the broadband speed and price sweepstakes.
If you continually advocate more aggressive spending to win a supposed international broadband arms race, minimize the effectiveness of private investment, and tout the planet’s fastest and cheapest Internet service as a necessity of life, you’re going to be regarded as either a garden-variety socialist or an impractical dreamer.
John Doerr’s CTO recommendation
I see in the BITS Blog that John Doerr is recommending a partner to become Pres. Obama’s CTO:
Barack Obama wanted to know whom Mr. Doerr would recommend for chief technology officer of the United States, a position that Mr. Obama has promised to create. Mr. Doerr’s first choice was Bill Joy, co-founder of Sun Microsystems, in which Mr. Doerr invested early on. Mr. Joy is now a partner at Mr. Doerr’s firm, Kleiner Perkins Caufield & Byers. Mr. Doerr said it would be a sacrifice to lose him to the Obama administration, but that “there is no greater cause.”
I can’t go along with that. Bill Joy is certainly an interesting guy, but he’s too much of a creative mind for a government job like this one. He’s certainly done a lot of important work, what with vi, BSD sockets, and championing NFS and Java (and, as Wes Felter points out, Jini and JXTA), but I think he tends to look a bit too far off into the future and tends to get burned on the practical side of things. I went to a talk he gave in Singapore back in ’86 in which he predicted the imminent demise of Microsoft. A talent for wishful thinking isn’t good in a bureaucrat. See: War, Iraq.
UPDATE: And more recently, Joy wrote this embarrassing piece in Wired about the dangers of technical progress, which even Glenn Reynolds finds alarming. Doerr is a strong supporter of the “net neutrality” regulatory model, having co-penned an op-ed on it with Reed Hastings of Netflix. Protecting your portfolio is an understandable aim, but it’s not a good guide for national (and international) policy.
I hope Julius Genachowski isn’t swayed by the fact that these folks jumped on the Obama bandwagon (after their girl lost the nomination, actually). The vast majority of the tech community supported Obama, and most of us are brighter and more sensible than the prominent figures from law and unsavory advertising who’ve made the most noise in support of Google’s regulatory capture of the FCC. More on that later.
Congratulations, Phillies
A million fans came to the parade in Philly on Friday. I’d say baseball is still the National Pastime. This was a pretty decent World Series, apart from the Philly weather and the inept umpiring. I wanted the Rays to win, but the result’s not exactly heart-breaking either. Comcast had a lot to do with it, apparently, which must rankle Mr. NASCAR, Kevin Martin, whose car has crashed.
European Event
There’s nothing like a quick trip to the Old Country to advise regulators on the folly of our New World ways. I’ll be speaking in Brussels on Oct. 14 and in London on the 15th to help our cousins structure their telecom regulations appropriately. These events are coordinated by my friends at the Institute for Policy Innovation, an excellent free market think tank.
Comcast was right, FCC was wrong
A fellow named Paul Korzeniowski has written a very good, concise piece on the Comcast action at the FCC for Forbes, Feds And Internet Service Providers Don’t Mix. He manages to describe the controversy in clear and unemotional language, which contrasts sharply with the neutralists who constantly use emotionally-charged terms such as “blocking,” “Deep Packet Inspection,” “forgery,” and “monopoly” to describe their discomfort.
What Comcast actually did, and still does today, is simply limit the amount of free upstream bandwidth P2P servers can use to 50% of capacity. This isn’t “blocking” or “censorship,” it’s rational network management:
Cable giant Comcast is at the center of a very important controversy for small businesses. In the summer of 2007, it became clear that the carrier was putting restrictions on how much information selected customers could transmit. BitTorrent, a P2P application-sharing company, had been using lots of bandwidth, so the ISP throttled back some of its transmissions.
“Throttled back some of its transmissions” is correct. Comcast doesn’t throttle back P2P downloads, which you can prove to yourself if you happen to have a Comcast account: download a large file using P2P and notice that it moves faster than it possibly can on any flavor of DSL. My recent tests with Linux have files downloading at 16 Mb/s, the advertised maximum for my account.
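Comcast hasn’t published the details of its mechanism, but the general idea of holding one class of traffic to 50% of upstream capacity can be sketched with a classic token bucket. Everything in this sketch (the link rate, the burst size, the class being limited) is an assumption for illustration:

```python
import time

class TokenBucket:
    """Generic rate limiter: admits traffic at up to rate_bps on
    average, with short bursts up to burst_bytes. Not Comcast's actual
    implementation, which has never been published in detail."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8  # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over the cap: delay or deprioritize, don't drop

# Illustrative numbers: a 1 Mb/s upstream link, with P2P uploads held
# to half of it. Other traffic classes bypass this limiter entirely.
p2p_cap = TokenBucket(rate_bps=1_000_000 * 0.5, burst_bytes=64_000)
```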
Korzeniowski then explains the facts of life:
The reality is that all ISPs are overbooked–they have sold more bandwidth than they can support.
This overbooking has been an issue since the old Public Switched Telephone Network (PSTN) days. In that situation, individuals would receive a busy signal when the network was overloaded. Because the Internet has an antithetical design, ISPs don’t have a busy signal option.
ISPs actually do have a “busy signal option”: it’s the Reset packet that Comcast uses to limit active upstream sessions. But neutrality regulationists call it “forgery” and abhor it.
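A Reset, for what it’s worth, is just an ordinary TCP packet with the RST flag set, which is why a device in the middle of the path can issue one on a session’s behalf. Here’s a rough illustration using the scapy library; the addresses, ports, and sequence number are placeholders, not anything Comcast actually sends:

```python
from scapy.all import IP, TCP, send

# Illustration only: a middlebox-style TCP Reset. A real device copies
# the addresses, ports, and sequence number from the session it wants
# to wind down; the values below are placeholders.
rst = IP(src="198.51.100.10", dst="203.0.113.20") / \
      TCP(sport=6881, dport=51413, flags="R", seq=1105024978)
send(rst, verbose=False)
```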
“Overbooking” bandwidth isn’t a bad thing, and in fact it’s central to the economics of packet switching. The PSTN forces each caller into a bandwidth ghetto where he is allocated a small chunk of bandwidth, 4 kHz, regardless of how much he currently requires. If you’re on the phone and have to set it down to check on your chili, you have 4 kHz. If you’re blasting files over a modem connection, you have 4 kHz. It doesn’t matter how many other callers are on-line and what they’re doing: you each get 4 kHz. That’s the law.
But packet switching, of which the Internet is an example, allows your bandwidth allocation to float depending on what you need to do and what other people are doing. You share network facilities with your neighbors (and this is true whether you use DSL or cable; you just share at different points in these technologies), so you can get a larger chunk of bandwidth when they’re idle than when they’re banging the net hard.
Overbooking allows you to use very large amounts of bandwidth for short periods of time, which is ideal for web surfing: you click on a link, you get a ton of graphics sent to your computer. While you’re reading, your neighbors get to use the bandwidth that would be wasted if you had PSTN connections. It works for everybody, most of the time. It works so well, in fact, that ISPs haven’t bothered to meter actual bandwidth use: the resource is so abundant, and the demands so few (especially in the upstream direction, where your clicks move) that there’s never been a need to control or meter it.
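A toy calculation shows why overbooking is rational. Suppose 100 subscribers share a pipe that can serve 30 of them at full speed at once, and each is actively transmitting 10% of the time; the numbers are invented for illustration, not anyone’s real provisioning ratios:

```python
from math import comb

# Toy overbooking model: 100 subscribers, a pipe that serves 30 at
# full speed simultaneously, each subscriber active 10% of the time.
# Probability that more than 30 are active at once, binomial model:
n, k_max, p = 100, 30, 0.10
p_overload = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(k_max + 1, n + 1))
print(f"chance the pipe is oversubscribed at any instant: {p_overload:.1e}")
```

The odds come out vanishingly small, which is why selling more subscriptions than the pipe could serve all at once works fine for interactive traffic.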
Enter P2P, a method of moving large files across networks that relies on free upstream bandwidth. Now the abundant broadband upstream is constantly occupied, not by an interactive application that sends a click now, a click five seconds from now, and a click a minute from now, but by applications that constantly stream traffic up the wire, to the detriment of the others in the neighborhood. Something has to give.
One approach is to cap upstream traffic:
However, the “all you can eat” model may no longer be viable–a change the government seems to be ignoring. ISPs could use the open salad bar model when users were mainly transmitting small textual data. But with video becoming more common, users increasingly transmit very large high-definition files.
In response, Comcast plans to cap customer usage at 250 GB of data each month. That translates to about 50 million e-mails, 62,500 songs, 125 standard-definition movies, or 25,000 high-resolution digital photos. That amount would seem to meet the needs of most customers, including small and midsize businesses. The only folks affected would be companies such as BitTorrent, that have based their business on the “all you can eat” model, and hackers, who routinely spew out tons of unwanted solicitations and malware.
Capping has its critics, mostly the same people who object to traffic management as well:
For whatever reason, some believe ISPs should not be able to put any restrictions on the volume of information that any user transmits. That’s absurd. Per-bit and per-byte pricing models have long been used for data transmissions. In trying to build and sustain their businesses, carriers constantly balance their attractiveness and viability versus unlimited usage pricing models. By government decree, they no longer have that option. In effect, the FCC has decided to tell ISPs how to run their networks.
Capping frees up bandwidth for sharing by taking free bandwidth off the table for P2P. But it’s not a technically elegant approach. Humans respond to caps month-by-month, but networks experience congestion and overload millisecond-by-millisecond. So the sensible engineering approach is to manage traffic in pretty much the way that Comcast does it today: identify the bandwidth requirements of applications, and allocate bandwidth to those that need it the most, as we would with any scarce resource. That means granting transmission opportunities (a technical term we use in network architecture) to highly interactive applications such as VoIP ahead of non-interactive applications such as HDTV file transfers. This is sound practice, but the FCC has now said it’s illegal. The FCC is anti-consumer.
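Here’s a minimal sketch of that kind of scheduler: strict priority by application class, so a queued VoIP packet always gets the next transmission opportunity ahead of a queued file-transfer packet. The two-level classification and the class names are my simplification, not any ISP’s production design:

```python
import heapq
from itertools import count

# Strict-priority scheduler sketch: lower number = more interactive =
# served first; the counter keeps FIFO order within a class.
PRIORITY = {"voip": 0, "web": 1, "bulk": 2}  # assumed class names
_order = count()
_queue: list[tuple[int, int, bytes]] = []

def enqueue(traffic_class: str, packet: bytes) -> None:
    heapq.heappush(_queue, (PRIORITY[traffic_class], next(_order), packet))

def next_transmission_opportunity() -> bytes | None:
    return heapq.heappop(_queue)[2] if _queue else None

enqueue("bulk", b"hdtv chunk")
enqueue("voip", b"voice frame")
assert next_transmission_opportunity() == b"voice frame"  # VoIP goes first
```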
Net neutrality supporters have pressured the FCC because they believe cable companies are unfairly monopolizing the Internet access marketplace. This conveniently ignores a couple of factors. First, there is no Internet access monopoly. A small or midsize business can get access from cable companies, telcos or wireless suppliers. True, there are not 50 choices, as you might have when buying a new pair of pants, but there is a reason why so few companies compete in the Internet access arena–it’s not a great business.
In fact, net neutrality advocates have turned a blind eye to the history of the dot-com bubble. Internet access start-ups burned through more cash with fewer positive results than any market sector in memory–and perhaps ever. Providing Internet access requires a lot of capital for the network and support infrastructure, and there’s not a lot of money to be made when customers pay about $20 a month for unlimited access.
The alternative to application-sensitive traffic management is a crude user-based system that treats all of each user’s traffic the same. This means, for example, that your VoIP streams get the same service from your ISP as your web clicks and your file transfers. This is insane.
Each Internet user should be able to multitask. We should be allowed to share files with P2P or any other non-standard protocol of our choice at the same time that we’re video-chatting or surfing the web. The heavy-handed FCC ruling that all packets must be treated the same undermines the economics of packet switching and delays the day when the Internet will make the PSTN and the cable TV systems obsolete.
Comcast was right to take the ruling to the courts to get it overturned. ISPs should be allowed to deploy a traffic system that combines elements of the protocol-aware system currently in use at Comcast with the new “protocol-agnostic” system that’s under test, such that each customer has a quota for each class of traffic. This is sound network engineering, but the current state of law makes it illegal.
This is not good.
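To make the proposed hybrid concrete, here’s a hypothetical sketch of per-customer, per-class quotas; the class names and quota sizes are invented for illustration:

```python
from collections import defaultdict

# Hypothetical per-customer, per-class quota scheme. Heavy bulk usage
# exhausts only the bulk allowance; interactive traffic keeps flowing.
QUOTAS_GB = {"interactive": 50.0, "streaming": 150.0, "bulk": 250.0}
usage = defaultdict(float)  # (customer, class) -> GB used this month

def admit(customer: str, traffic_class: str, gigabytes: float) -> bool:
    key = (customer, traffic_class)
    if usage[key] + gigabytes > QUOTAS_GB[traffic_class]:
        return False  # a real system would deprioritize rather than refuse
    usage[key] += gigabytes
    return True

admit("alice", "bulk", 250.0)              # a heavy P2P month
print(admit("alice", "bulk", 1.0))         # False: bulk quota spent
print(admit("alice", "interactive", 1.0))  # True: VoIP and web unaffected
```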
Cross-posted to CircleID.
UPDATE: See Adam Thierer’s comments on this article at Tech Lib.