Symmetry, Control, and Progress

A friend asked me what I thought about Doc Searls’ latest essay on the evolution of the Internet and as I happened to be reading it already, I’ve written a few disjointed notes. The short version of my reaction is that it’s sad that everybody with an axe to grind about technology, politics, or business these days seems to think that the Internet has an immutable, Platonic form that’s somehow mystically responsible for all that’s good in the technology business for the past twenty years, and any alteration of this form will screw it up. According to this way of thinking, stuff like Napster that exists solely for the purpose of illegal activity is good (even though new), but DRM (which isn’t really a Net deal anyhow) would be inscrutably bad.

This is sort of a “natural law” argument that’s supposed to persuade business and government to turn a blind eye to abuses of the Net, leaving its regulation to self-appointed do-gooders free of commercial interest. It’s a flawed argument that ignores the fact that the Internet is actually a tool and not a spiritual essence from a higher reality, which like all tools adapts to human needs or is discarded. The strongest proponent of this view is Larry Lessig, whose book “The Future of Ideas” I’ve just read, and the others who argue this line (Searls, Weinberger, Gillmor) take their lead from him. I’ll write a review of Lessig’s book in the next few days, and it’s not going to be pretty. But back to Searls, and the theory of immaculate conception:

The Internet is not simply a network, it’s a means of interconnecting networks. It won out over competing technologies because it was heavily subsidized by the government and simpler than the alternative, the ISO/OSI protocol suite. OSI was a complicated set of international standards devised by committees with membership as diverse as the UN but in some ways even less rational. It contained a myriad of options, many of them unusable, and was hard to understand, let alone implement. In the heyday of OSI, we had a series of “OSI Implementors’ Workshops” to hash out subsets of the protocols to implement for purposes of demonstration, and even that was very painful. Internet protocols weren’t designed by committees, but by individuals paid by ARPA to keep things simple. OSI was intended to take the place of proprietary protocols from IBM, Xerox, and DEC, providing end-to-end applications, whereas the Internet was simply intended to interconnect diverse networks with a basic level of end-to-end capability.

Make a side-by-side comparison of any early Internet protocol with the competing ISO candidate and you see that the Internet offering can be implemented in less memory, fewer CPU cycles, and fewer man-hours of programming effort than the alternative. As if that weren’t enough to ensure victory, the government paid contractors to write reference implementations of Internet protocols and then gave them away for free.

The Internet protocols offered much less functionality than ISO protocols. The Internet’s file transfer protocol, ftp, has a very limited ability to reformat files as they move from one network to another, while the ISO FTAM protocol could resize and reorder binary integers to fit any given machine architecture on the fly, using the same ASN.1 technique employed by the Internet management protocol, SNMP.
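
The trick behind FTAM’s integer handling is easy to see in miniature: ASN.1 puts integers on the wire in a self-describing, big-endian form, so a little-endian minicomputer and a 36-bit mainframe can exchange them without either knowing the other’s word layout. Here’s a minimal sketch of BER-style integer encoding in Python; the helper names are mine, not FTAM or SNMP source code:

```python
# A minimal sketch of the ASN.1/BER idea: integers travel as
# tag + length + big-endian two's-complement bytes, so the wire format
# is independent of either machine's word size or byte order.

def ber_encode_integer(value: int) -> bytes:
    """Encode an integer as BER: tag 0x02, a length octet, then the
    value in big-endian two's-complement form."""
    length = max(1, (value.bit_length() + 8) // 8)  # room for a sign bit
    body = value.to_bytes(length, "big", signed=True)
    return bytes([0x02, len(body)]) + body

def ber_decode_integer(data: bytes) -> int:
    """Decode the integer on any architecture; the format describes itself."""
    assert data[0] == 0x02, "not an INTEGER tag"
    return int.from_bytes(data[2:2 + data[1]], "big", signed=True)

# A little-endian PC and a big-endian mainframe agree on the result:
wire = ber_encode_integer(-1985)
assert ber_decode_integer(wire) == -1985
```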

As a means of interconnecting networks rather than of running businesses, Internet protocols make the fewest assumptions about the capabilities of a given network consistent with moving relatively small amounts of data, most of it e-mail, between networks at relatively low speeds.

As the two most common methods of network interconnect in the early 80s were primitive, shared-cable LANs with relatively high rates of collision, and 56Kbps synchronous modems on leased lines with high error rates, the Internet was designed to be highly robust at its endpoints, the better to recover from network-induced failures common with these technologies. The OSI protocols also offered an end-to-end transport service, TP4, but they didn’t lock applications into a simple datagram service at the network layer; OSI was all about choice.

The End-to-End argument that slimmed the ARPANET’s network protocol down to the minimal IP datagram service upon which the Internet was built grew out of the nature of these interconnects, not out of utopian visions that the Internet, as the savior of mankind, had to embody democratic and egalitarian principles. System guys noticed that the ARPANET dumped garbage in their buffers, and they wanted to take control of the exchange of e-mail and file transfers in order to get good data instead. They also wanted to be able to connect a system to the Net without waiting for BBN to build an adapter for their system’s bus. So the Internet became an extended Ethernet rather than a good citizen on a wide-area network designed to carry voice as well as data.
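
That end-to-end recovery is less mystical than it sounds: the hosts at each end compute and verify a checksum, and the network in between is trusted with nothing. Here’s a sketch of the 16-bit ones’-complement Internet checksum (RFC 1071) that TCP still uses to catch exactly this kind of garbage:

```python
# A sketch of the end-to-end check itself: the ones'-complement Internet
# checksum of RFC 1071, computed by the sending host and verified by the
# receiving host, with no help from the network in between.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# The receiver recomputes the sum over data plus checksum; anything but
# zero means the network dumped garbage in the buffer, and the segment
# is dropped and retransmitted.
segment = b"end-to-end check"
csum = internet_checksum(segment)
assert internet_checksum(segment + csum.to_bytes(2, "big")) == 0
```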

As the Internet has grown, all of its original protocols have been discarded and replaced because they couldn’t handle the complexity of modern networks and the inherent problems of size and bandwidth. OSI was ahead of the curve in 1985, but today it’s about right; the Internet protocols were about right in 1985, and today they’re slowing progress.

The Internet itself now dumps garbage into our systems, and we need to do some more engineering to take it to the next level. One form this garbage takes is spam, another is jitter, and a third is utopian social theory. Spam has just about made e-mail unusable, and the filters intended to combat it, manual or automatic, remove good messages along with bad. Jitter – unpredictable delays between packets – limits our ability to send voice calls over the Internet, and utopian social theory prevents clear thinking about network design. The following piece of theory from Doc Searls’ recent essay Saving the Net illustrates:

The Net’s problem, from telco and cable industries’ perspective, is it was born without a business model. Its standards and protocols imagine no coercive regime to require payment–no metering, no service levels, no charges for levels of bandwidth. Worse, it was designed as an end-to-end system, where all the power to create, distribute and consume are located at the ends of the system and not in the middle. In the words of David Eisenberg [sic] the Internet’s innards purposefully were kept “stupid”. All the intelligence properly belonged at the ends. As a pure end-to-end system, the Net also was made to be symmetrical. It wasn’t supposed to be like TV, with fat content flowing in only one direction.

The Internet was originally deployed on systems owned by government contractors forced to subscribe to a draconian “Acceptable Use Policy” that forbade its use for commercial purposes. This AUP was a concession to the heavy taxpayer subsidy, and a reflection of the fact that the Internet’s main purpose was experimentation with its own network protocols, not any practical, day-to-day purpose.

As soon as the Internet went commercial, it adopted mechanisms for charging subscribers for individual and system use, protocols for enforcing trade agreements between ISPs and NSPs, and a clearing-house – the NAPs – for enforcing these charges and agreements. The earliest dial-up access protocols, SLIP and its successor PPP, let ISPs authenticate customers in order to prevent unauthorized use, and later protocols such as OSP, RSVP, SIP, and DiffServ allow for customer authentication and charging for specific services and sessions.
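
DiffServ, the last item on that list, is concrete enough to show in a few lines: the sender (or its ISP) marks each packet with a service class that carriers can identify, meter, and bill for. On a Unix-like host the marking is a single socket option; the address and port below are illustrative, and DSCP 46 is the standard Expedited Forwarding class used for voice:

```python
# A sketch of DiffServ in practice: set the DSCP bits of the IP header
# so routers can identify (and a carrier can charge for) premium
# treatment of this flow. Address and port are illustrative.
import socket

EF = 46                                  # Expedited Forwarding code point
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)  # DSCP sits above the 2 ECN bits
sock.sendto(b"low-latency voice frame", ("192.0.2.1", 5004))
```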

As we’ve already said, the “End-to-End” architecture was simply a means of error detection and recovery, not a theory of creativity and intelligence. While Isenberg calls the Internet a “stupid” network to differentiate it from a failed experiment at AT&T called “The Intelligent Network”, the Internet today (with Mobile IP and mobility support for e-mail in IMAP) provides all the capability imagined by AT&T for its roving telephone customers and then some (except for phone calls, of course).

And while the campus-to-campus links in the early Internet were all symmetrical 56Kbps links, the access network surrounding the Internet has always been built out of a variety of symmetrical and asymmetrical technologies. Before DSL and DOCSIS (cable Internet), the most popular means of attaching to an Internet access network was the V.90 modem standard, which allows faster download speeds – 56Kb/s – than upload speeds – 33.6Kb/s. The access network is not the Internet, and it’s not even “on the Internet”; it’s a means of connecting to a system or a router that itself is on the Internet, and shouldn’t be confused with anything else. Serious web sites don’t run on dial-up lines or on consumer computers in the home; they run on high-performance systems closer to one of the backbone NSP links that make up the Internet today.

These access networks are generally asymmetrical for technical reasons related to the cost and construction of networks (escaping digital-to-analog conversion is the trick for V.90). In a general sense, these technical reasons relate to the fact that download traffic on a cable TV system inherently moves more efficiently than upload traffic because there is only one sender – the cable headend – and it doesn’t have to share the channel with others: when it has something to send, it sends it out immediately. In order to send upstream, each cable modem has to negotiate, in principle, with all the other cable modems on the link that may themselves want to upload at the same time, and share access with them. Network designers call this the “multiple access problem”, and it’s a technical reality, not a conspiracy to deprive the consumer of his rights. Every solution to this problem that’s ever been devised has more overhead than one sender addressing packets to a number of receivers. So: one sender, multiple receivers = fast; multiple senders and multiple receivers = slow.
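
A toy simulation shows the size of the gap. Model the upstream as slotted contention in the style of ALOHA – a deliberate simplification, since DOCSIS actually uses a request/grant scheme with its own overhead – where a slot is wasted whenever zero or two-plus modems transmit:

```python
# A toy model of the multiple access problem: uncoordinated senders
# sharing one channel (slotted-ALOHA style) versus a single sender.
import random

def throughput(stations: int, offered_load: float, slots: int = 50_000) -> float:
    """Fraction of slots carrying exactly one transmission; a slot with
    two or more simultaneous senders is a collision and carries nothing."""
    p = offered_load / stations              # each station's chance per slot
    good = 0
    for _ in range(slots):
        senders = sum(random.random() < p for _ in range(stations))
        good += (senders == 1)
    return good / slots

print("one sender (downstream):", throughput(1, 1.0))    # ~1.00
print("50 senders (upstream):  ", throughput(50, 1.0))   # ~0.37, the ALOHA ceiling
```

Fifty uncoordinated senders top out near the familiar 1/e ceiling while the lone downstream sender fills every slot; real cable schedulers beat raw ALOHA by spending upstream capacity on reservations and grants, which is precisely the overhead described above.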

Unlike cable, DSL offers symmetric options, albeit at higher prices, for those few customers who need them enough to pay extra. The modem companies who designed V.90 and its predecessors K56Flex and X2 didn’t go asymmetric in order to monopolize TV broadcasting; they weren’t in that game any more than SBC and Comcast are today. Their systems reflect technical realities, not the least of which is the fact that personal computers surfing the web receive much more data than they transmit.

Here’s Searls again:

The Net’s end-to-end nature is so severely anathema to cable and telco companies that they have done everything they can to make the Net as controlled and asymmetrical as possible. They want the Net to be more like television, and to a significant degree, they’ve succeeded. Most DSL and cable broadband customers take it for granted that downstream speeds are faster than upstream speeds, that they can’t operate servers out of their houses and that the only e-mail addresses they can use are ones that end with the name of their telephone or cable company.

The Net’s “end-to-end” error recovery scheme makes it easy to interconnect networks that employ a variety of protocols internally without spending a great deal of money on interconnect, and is therefore a boon to cable and telco companies who don’t wish to spend much money on reliable, jitter-free, high-fidelity service. If you can meet customer expectations with cheap and shoddy quality of service, why spend more?
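
Jitter, incidentally, is a measured quantity rather than a metaphor. Here’s a sketch of the running interarrival jitter estimator an RTP receiver maintains, using the 1/16 gain from RFC 3550; the timestamps are invented for illustration:

```python
# A sketch of RFC 3550's interarrival jitter estimator: a smoothed
# average of how much the packet spacing seen by the receiver differs
# from the spacing the sender used. Timestamps below are invented.

def interarrival_jitter(send_ms, recv_ms):
    jitter = 0.0
    for i in range(1, len(send_ms)):
        # difference in transit time between consecutive packets
        d = abs((recv_ms[i] - recv_ms[i - 1]) - (send_ms[i] - send_ms[i - 1]))
        jitter += (d - jitter) / 16.0        # 1/16 gain per RFC 3550
    return jitter

# Voice frames sent every 20 ms arrive unevenly spaced:
sent = [0, 20, 40, 60, 80]
received = [50, 72, 88, 115, 130]
print(f"estimated jitter: {interarrival_jitter(sent, received):.2f} ms")
```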

The Internet’s greatest drawbacks are direct consequences of its success at eliminating competing protocols and luring people on-line. As the Internet has become successful, its methods of interconnecting networks have hardened into standards, and the richer capabilities of more advanced systems have been largely discarded, which is not a good thing, especially for voice.

As traffic and bandwidth grow, so too does the thirst for new services such as mobility that the old architecture and the old infrastructure couldn’t support. As we move forward, we’re going to see re-engineering of the means that the Internet uses to interconnect diverse networks, and some of these will be accompanied by filthy lucre. With sound engineering and a free market, these changes will move the net forward, even if they don’t make it a safer haven for the theft of music.

I’m happy with that.

edited 3:24AM July 24, 2003.

3 thoughts on “Symmetry, Control, and Progress”

  1. You’re actually agreeing with Lessig on many points.

    … the Internet is actually a tool and not a spiritual essence from a higher reality, …

    This is EXACTLY what Lessig is saying!

    Of course, many of us want a tool optimized for citizen use, as opposed to business use, and you’re arguing over that.
    But that’s a very standard argument, and there’s little point to repeating it.

  2. And here I thought businesses were all about selling stuff to citizens.

    Lessig argues that the Original Internet Architecture™ serves as a constitution to limit any and all efforts to enhance, modify, or extend the Internet, all networks attached to the Internet, and all networks that provide access to the Internet. But he’s not very clear about where and when this Original Architecture was written, and why it’s more important than all the work on network design that preceded its moment in history or followed it.

    Judging by the importance he attaches to end-to-end error detection, flow control, and recovery, I’d guess he’s focused on the 1982 TCP/IPv4 rollout, and not on the ARPA-Internet that preceded it or the IPv6 network that’s rolling out now. Trouble is, of course, IPv4 was largely a reactionary move driven by one bad experience sloppy programmers at MIT had with a bad memory board in one IMP, and not really a rational, comprehensive, and scalable architecture. And to compound the problems, IPv4 was tailored to timesharing systems connected to each other by 56Kb/s modems serving users on glass Teletypes attached at 300 baud, and we live in a very different world today. If we have to make analogies to government, the TCP/IP Internet was the “Articles of Confederation”, not a workable system.

    The Internet doesn’t have a constitution, it’s never had one, and it never will, so we have to find some other way to guide public policy on Internet regulation than to pretend the Easter Bunny exists.
