Lessons from Internet 2

David Isenberg tried to explain his dramatic turn-around on net neutrality recently:

In June of 1997, when I wrote the essay, it seemed reasonable (to a Bell Labs guy steeped in telco tradition) that a stupid network might incorporate low level behaviors analogous to taxis or tropism to automatically adapt to the needs of the data. But research results from Internet2 [.pdf] now show that network upgrades to accommodate even extremely demanding applications, such as Hi-Def video conferences, can be achieved more effectively, cheaply and reliably by simply adding more capacity.

The research results he cites were delivered to Congress by this eminent researcher:

Gary Bachula is the Vice President for External Relations for Internet2. Gary has substantial government and not-for-profit experience, with an extensive history of leadership in technology development. Most recently, Gary served as Acting Under Secretary of Commerce for Technology at the US Department of Commerce where he led the formation of government-industry partnerships around programs such as GPS and the Partnership for a New Generation of Vehicles. As Vice President for the Consortium for International Earth Science Information Network (CIESIN) from 1991 to 1993, Gary managed strategic planning and program development for the organization designated to build a distributed information network as part of NASA’s Mission to Planet Earth. From 1986 to 1990, he chaired the Michigan Governor’s Cabinet Council, and from 1974 to 1986 Gary served as Chief of Staff to U.S. Representative Bob Traxler of Michigan where he advised on appropriations for NASA, EPA, the National Science Foundation and other federal R&D agencies. Gary holds undergraduate and law (J.D.) degrees from Harvard University. A native of Saginaw, Michigan, Bachula served at the Pentagon in the U.S. Army during the Vietnam war.

So now we have a new definition for net neutrality: in the past, networks were designed by engineers, but under net neutrality they’ll be designed by lawyers and lobbyists. Great.

But to be fair, there was an actual study performed by two researchers using the Internet2 Abilene network from 1998 to 2001 that determined QoS wasn't practical to implement with routers of that era, primarily because those routers had to distinguish high-priority from low-priority packets in software. Because they performed most packet-forwarding operations in hardware, falling back to software for classification was a big slowdown:

Some router vendors have elected to include complex QoS functionality in microcode running on their interface cards, rather than in custom ASICs that add to the power consumption and cost of a card. This is a non-starter. Our experience has been that this approach can result in a drop of maximum packet-per-second forwarding rates by 50% or more. Such a CPU cycle shortage hurts all traffic, including Premium, making deployment very hard to justify.

The trend among newer, higher-speed routers seems to be towards less QoS functionality, not more. As circuit costs are responsible for an ever decreasing portion of network capital expenditures, and interface costs are responsible for an ever increasing share of network capital expenditures, the market pressure for dumb, fast, and cheap router interfaces is ever greater.

Are we prepared to pay the price difference for extra QoS features? If so, is there enough of a customer base for feature-rich routers to make their development worthwhile for router vendors?
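To make the software-versus-ASIC point concrete, here's a toy Python sketch (mine, not anything from the study) of the per-packet classification work a router takes on when Premium marking is handled in general-purpose code rather than in forwarding hardware. The DSCP value of 46 is the standard DiffServ Expedited Forwarding marking; everything else here is illustrative:

    PREMIUM_DSCP = 46  # Expedited Forwarding, the standard "premium" DiffServ marking

    def classify(packet: bytes) -> str:
        """Read the IPv4 DSCP bits for every packet, in software.

        A forwarding ASIC does this at line rate; doing it (plus the
        queueing decision) on a general-purpose CPU steals cycles from
        forwarding, which is the 50% slowdown the study describes.
        """
        # DSCP is the top 6 bits of the second byte of the IPv4 header
        # (the old Type-of-Service field).
        dscp = packet[1] >> 2
        return "premium" if dscp == PREMIUM_DSCP else "best-effort"

    # Two minimal IPv4 headers differing only in the ToS byte (0x45 = version/IHL).
    premium_pkt = bytes([0x45, PREMIUM_DSCP << 2]) + bytes(18)
    ordinary_pkt = bytes([0x45, 0x00]) + bytes(18)
    print(classify(premium_pkt))   # premium
    print(classify(ordinary_pkt))  # best-effort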

Contrary to the Internet2 predictions, modern routers do in fact have more QoS functionality in hardware. So if Internet2 were a serious research organization, it would repeat the study with up-to-date systems and abandon the rank speculation. But it won't, of course.

To those of us who've been around a while, the idea that you can learn ultimate lessons about the Internet from academic experiments is laughable. Early experiments with the Internet in academic settings taught us, for example, that there was no need for spam protection, virus removal, or controls on bandwidth hogging. In the academy, social systems control this behavior, but in the real world it's up to the network. We haven't always known that, but we do now.

My question for Isenberg is this: what kind of engineer abandons well-known findings on the say-so of a lobbyist touting one ancient experiment conducted without control or follow-up? We know that an over-built network doesn’t need much in the way of QoS to support moderately demanding applications like VoIP.
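The arithmetic behind that claim is simple. Here's a back-of-the-envelope Python sketch using the standard G.711 figures; the ~87 kbps per-call wire rate assumes 20 ms packetization with RTP/UDP/IP/Ethernet overhead, and the gigabit link is my illustrative choice, not a number from the study:

    # How many plain G.711 VoIP calls fit on an over-built link with no QoS at all?
    CALL_KBPS = 87       # 64 kbps voice payload plus RTP/UDP/IP/Ethernet overhead
    LINK_MBPS = 1000     # an over-provisioned gigabit link (illustrative)

    concurrent_calls = LINK_MBPS * 1000 // CALL_KBPS
    print(concurrent_calls)  # ~11,494 simultaneous calls, no prioritization needed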

The problem with over-provisioning the entire Internet is that applications will emerge to consume the excess bandwidth, putting the user back at square one.

Oops.

UPDATE: To clarify the Internet2 “research” just a bit, bear this in mind: the fears that NN advocates have about web site slowdowns and censorship presume that routers are capable of discriminating for and against all the traffic they pass without slowing down Premium customers. The Internet2 “research” says this is impossible.

So who are you going to believe, Internet2 or Internet2?
