The Future of Mediocrity

Larry Lessig's book The Future of Ideas is an examination of the Internet's influence on social discourse as well as an analysis of the forces shaping the net in the past and present. The message is both utopian and apocalyptic, and the analysis aspires to be technical, cultural, and legal. It's an ambitious enterprise that would have been tremendously valuable had it been successful. Unfortunately, this is one of the most absurd books ever written. Its fundamental premise — that the Internet can only be regulated according to a mystical appreciation of the values embedded in its original design — is ridiculous, its research is shoddy, and its exposition of these values is deeply confused.

Apart from its gross errors of theory and fact, the book is an amusing and deeply felt diatribe against modern government, industry, and society, written with such earnestness and passion that its shortcomings in humor and insight may almost be forgiven. Unfortunately, Future has developed a cult following that threatens to carry a deeply distorted conception of the Internet's design, purposes, and challenges into the mainstream.

What it’s about

Lessig’s is a kind of open-source millenarianism, an apocalyptic picture of cyberspace that sees salvation much as the Puritan preachers did, as the result not of fate or chance, but of each individual taking personal responsibility for his spiritual–or, in this case, virtual–well-being. To observe this is not to take away from either the power or the truth of what Lessig has to say. Rather, it is to show respect for his message: If Lessig teaches us nothing else, he teaches us that if ideas are to have a future, we must understand their past. — Lessig’s doomsday look at cyberspace, Knowledge@Wharton.

Lessig bounces between a utopian vision of the "original Internet", which he believes to have come into existence with the rollout of TCP/IP on the ARPANet in 1981, and his apocalyptic vision of the Internet of the future, which will be a vehicle for the control of intellectual property, the denial of civil liberties, the withering away of the Commons, and myriad unsavory commercial enterprises.

Expressing sympathy for "open source" ideas, Lessig argues that although artists certainly deserve compensation for their creations, they should not be rewarded with added control. At times, his argument to this effect is compelling. But at other points in The Future of Ideas, Lessig seems to unduly romanticize a "utopian," open Internet, and unfairly dismiss the interests of content providers and the lawyers who seek to defend their rights. — Private Property, the Public Use of Creativity, and the Internet, Findlaw.

In The Future of Ideas, Lessig attempts to explain the technologies the Internet is built on and to clarify what he believes to be the core values informing its architecture. The technical descriptions are largely inaccurate, and these inaccuracies are compounded by Lessig's unfortunate effort to anthropomorphize machine communications systems in order to find eternal verities:

Lessig, now a law professor at Stanford University, believes that the Net was built with the same values as the American Constitution. Embedded into the architecture of the network was the right to privacy and the ability to spread knowledge freely. But recently, says Lessig, those values are being corrupted on multiple fronts. — The Corruption of the Internet

Despite (or perhaps because of) its Utopian origins, the Internet is under attack by uninformed regulators and nefarious commercial interests interested only in its potential for profit:

From his opening rally — "The forces that the original Internet threatened to transform are well on their way to transforming the Internet" — Lessig offers a timely polemic against the sterilization of cyberspace. Created both as a venue for the quick dissemination of information and above all as a fiercely open medium, cyberspace, he argues, now suffers from innumerable and insuperable barriers created by corporate interests to protect their dominance. Maneuvering through a twisted thicket of scientific and legal arcane, his prose and reasoning could not be clearer or more passionate. — Barnes and Noble

The only proper way to regulate the Internet is to understand its foundation principles, reify them, and allow them and them alone to guide regulatory policy. We're to believe the Internet was fully and completely hatched once and for all in an instant in 1981, and that any attempt to modify it in the interests of larger public concerns can only do it fatal damage. The Internet is a sovereign space, like a tribal reservation, that can't be touched by Constitutional law or by statute:

"One big theme of the book is that we need to get people to stop thinking about regulation as if it's only something the government does. We need to start thinking about regulation in the sense that the architecture of the Internet regulates," he says. "The Constitution has yet to catch up with this shift, to develop a way to express Constitutional values in the context of indirect regulation. Code becomes a sovereign power all its own in cyberspace. But the question is: Who authorizes this sovereignty and with what legitimacy?" — Constitutionalist in Cyberspace, Penn Gazette

The Future of Ideas goes beyond the Internet to address larger issues of copyright and public domain:

Many of Lessig's other proposals — limiting imposed contracts, promoting a public domain, removing barriers to innovation — follow sensibly from his analysis. One could imagine a Congress prepared to preserve innovation in the emerging electronic environment beyond the reach of special-interest lobbyists and the financial pressures of modern politics. But that Congress is not the one that has repeatedly told the public the Internet "can't be regulated" to protect public interests, such as privacy or consumer interests, while simultaneously uncovering ever more creative regulation to preserve private interest. The No Electronic Theft Act, the Digital Millennium Copyright Act, the Copyright Term Extension Act and the Uniform Computer Information Transaction Act are just a few of the clever ways that legislators have found to regulate that which cannot be regulated. Lessig is well aware of this history, but rightly argues that it remains the responsibility of public officials and public agencies to consider how best to protect the interests of the, well, public. — Internet liberation theology, Salon.com.

…but in the end, Lessig seeks to ground such policies in his Internet architecture.

A second way in which the Internet is relevant to this problem is that it did provide and could provide opportunities for creativity and innovativeness previously unknown. The world-wide reach of the Internet, digitalization, speed of transfer, the creation of software that will make translation easier (a point that adds to Lessig’s analysis), open code and open access — what possibilities lay before us? To argue that the best way to reach these possibilities is to move ‘the market’ to the Internet is to ignore the weaknesses and failures of the market. Rather, Lessig would like us to find a balance between market and state that would ensure a healthy public domain, a healthy commons, so that creativity and innovation will not shrivel. — The Future of Ideas: The Fate of the Commons in a Connected World, RCCS.

This flows out of Lessig’s understanding of the Internet and of the law:

My first work in constitutional law was in Eastern Europe. There I learned that constitutional law is about trying to set up structures that embed certain values within a political system. Once I started thinking about constitutional law like that, it was a tiny step to see that that's exactly what the architecture of cyberspace does: It's a set of structures embedding a set of values. To the extent that we like those values, we ought to be defending the architecture of cyberspace. To the extent that we're skeptical about those values, we should be asking whether the architecture is justified or not. Either way, the architecture is analogous to the Constitution. — Larry Lessig, Reason magazine interview

Thus, understanding Internet architecture is a vital piece of Lessig's project, because he believes that the architecture alone provides the relevant policy framework for regulating the Internet now and in the future. This approach is very different, of course, from the traditional application of well-accepted notions of rights and property to digital things; it's an approach that very few in legal theory circles are able to critique, and one that may have some value, to the extent that it could clarify issues that would be ambiguous in the traditional understanding of Internet law.

Does Lessig understand the Internet?

We have to ask whether Lessig understands the Internet as a prelude to examining whether his argument for unique sovereignty makes any sense. The following claims from Future show that he doesn’t.

"The World Wide Web was the fantasy of a few MIT computer scientists." p. 7

Most people recognize Tim Berners-Lee as the inventor of the Web, building on previous work on hyperlinked text by Ted Nelson and others going back to Vannevar Bush. Berners-Lee's web site says:

With a background of system design in real-time communications and text processing software development, in 1989 [Tim Berners-Lee] invented the World Wide Web, an internet-based hypermedia initiative for global information sharing while working at CERN, the European Particle Physics Laboratory.

This is not obscure information.

"The [telephone] system would better utilize the wires if the architecture enabled the sharing of the wires." p. 33

Which wires doesn't the telephone system share? The backbone wires are shared by every telephone call in progress, using time-division multiplexing (TDM) to interleave samples from each call onto common circuits. The wires to consumers' houses now carry voice at one frequency and DSL at another, merging at Digital Loop Carriers before reaching the Central Office. The telephone system would be impossible without sharing wires.
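
To make the sharing concrete, here is a minimal sketch (my own illustration, not drawn from the book or from any real carrier equipment; real T-carrier framing adds framing bits, signaling, and fixed 8-bit PCM samples) of how time-division multiplexing interleaves one sample from each active call into a shared frame, so a single backbone circuit carries many conversations at once:

```python
# Minimal TDM illustration: one sample from each active call goes into each
# shared frame, and the receiving end pulls its call's samples back out.

def tdm_interleave(calls):
    """calls: list of equal-length sample sequences, one per active call."""
    frames = []
    for samples in zip(*calls):          # one time slot per call, per frame
        frames.append(list(samples))     # a frame holds one sample from each call
    return frames

def tdm_deinterleave(frames, num_calls):
    """Recover each call's sample stream from the shared frames."""
    return [[frame[i] for frame in frames] for i in range(num_calls)]

if __name__ == "__main__":
    call_a = [0.1, 0.2, 0.3]             # hypothetical voice samples
    call_b = [0.9, 0.8, 0.7]
    frames = tdm_interleave([call_a, call_b])
    assert tdm_deinterleave(frames, 2) == [call_a, call_b]
    print(frames)                        # [[0.1, 0.9], [0.2, 0.8], [0.3, 0.7]]
```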

"And arguably no principle of network architecture has been more important to the success of the Internet than this single principle of network design — e2e." p. 35

Anything is arguable, but this is a very difficult argument to support. The end-to-end principle contrasts with the hop-to-hop principle as a strategy for error detection and recovery. Hop-to-hop has one critical limitation: while it protects against errors affecting packets as they traverse the network, it doesn't protect against errors at the interface between end systems and the network. End-to-end detects and corrects errors regardless of where they occur, but it does so at the price of network responsiveness. In a regime that's strictly end to end, an error that occurs halfway through the hop-to-hop path has to be corrected by a retransmission from an endpoint, which consumes more network resources than a hop-to-hop recovery action. It also consumes more time, compromising time-critical network applications such as voice and video.
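
A rough back-of-the-envelope model (my own, with made-up numbers and a deliberately simplified cost measure) illustrates the resource tradeoff: count the link traversals needed to deliver one packet over an n-hop path with per-link loss probability p under each recovery strategy.

```python
# Simplified cost model: expected link traversals to deliver one packet.

def hop_by_hop_traversals(n_hops, p):
    # Each hop is retried independently until it succeeds;
    # expected attempts per hop = 1 / (1 - p).
    return n_hops / (1 - p)

def end_to_end_traversals(n_hops, p):
    # The whole path must succeed in one attempt, probability (1 - p) ** n_hops.
    # For simplicity, charge a full n_hops traversals per attempt.
    success = (1 - p) ** n_hops
    return n_hops / success

if __name__ == "__main__":
    n, p = 10, 0.05
    print(f"hop-by-hop recovery: {hop_by_hop_traversals(n, p):.1f} traversals")
    print(f"end-to-end recovery: {end_to_end_traversals(n, p):.1f} traversals")
```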

The end-to-end principle was, in the first instance, an over-reaction to a problem MIT had with one of the gateways to the ARPANet, described in the infamous paper End-to-End Arguments in System Design by J.H. Saltzer, D.P. Reed and D.D. Clark:

An interesting example of the pitfalls that one can encounter turned up recently at M.I.T.: One network system involving several local networks connected by gateways used a packet checksum on each hop from one gateway to the next, on the assumption that the primary threat to correct communication was corruption of bits during transmission. Application programmers, aware of this checksum, assumed that the network was providing reliable transmission, without realizing that the transmitted data was unprotected while stored in each gateway. One gateway computer developed a transient error in which while copying data from an input to an output buffer a byte pair was interchanged, with a frequency of about one such interchange in every million bytes passed. Over a period of time many of the source files of an operating system were repeatedly transferred through the defective gateway. Some of these source files were corrupted by byte exchanges, and their owners were forced to the ultimate end-to-end error check: manual comparison with and correction from old listings.

While we can sympathize with this problem, it's fairly obvious that the more efficient solution would have been to add error-checking between the system and the network interface, an alternative the authors rejected because it would have allowed network interface contractor BBN to retain lucrative research grants they wanted for themselves. The end-to-end principle is directly responsible for the complexity of TCP and the inability of Internet routers to respond gracefully to the overload conditions that affect all networks, especially those in which Denial of Service attacks are common.
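
The byte-swap story quoted above can be illustrated with a small sketch (my own, not from the paper): a per-hop checksum is recomputed at each gateway, so it verifies the wires but not the gateway's own buffers, while a checksum computed once by the sender and verified only by the receiver catches the damage.

```python
import zlib

def transmit_over_wire(data: bytes) -> bytes:
    """One hop across the wire; the wire itself is clean in this scenario."""
    return data

def send_hop(data: bytes) -> bytes:
    """Per-hop check: a checksum computed on transmit and verified on receive
    covers only the wire, not what the gateway does to the data afterwards."""
    checksum = zlib.crc32(data)
    received = transmit_over_wire(data)
    assert zlib.crc32(received) == checksum    # passes: the wire was fine
    return received

def faulty_gateway(data: bytes) -> bytes:
    """Simulate the M.I.T. gateway bug: swap a byte pair while copying buffers."""
    buf = bytearray(data)
    buf[0], buf[1] = buf[1], buf[0]
    return bytes(buf)

if __name__ == "__main__":
    original = b"operating system source file"
    e2e_checksum = zlib.crc32(original)        # computed once, by the sender

    data = send_hop(original)                  # hop 1: per-hop check passes
    data = faulty_gateway(data)                # corruption inside the gateway
    data = send_hop(data)                      # hop 2: per-hop check passes again

    print(zlib.crc32(data) == e2e_checksum)    # False: only the end-to-end
                                               # check notices the damage
```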

"Because the design effects a neutral platform — neutral in the sense that the network owner can't discriminate against some packets while favoring others — the network can't discriminate against a new innovator's design." p. 37

The Internet can and does “discriminate” among packets in several ways: by honoring Type of Service requests in the IP header, by honoring the “Urgent Data” flag in the TCP header, by assigning priority in the Network Access Points to packets according to bilateral agreements among Network Service Providers, and by honoring reservations made by Integrated Services. In no sense is the Internet a neutral platform, and in no sense has it ever been blind to applications. The Internet Assigned Numbers Authority assigns well-known-sockets (WKS) to applications in order to discriminate among and between them in the interest of running the Internet efficiently, and has always done so.
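
As one concrete example of the mechanisms listed above, an ordinary application can ask for differentiated treatment by setting the Type of Service/DSCP byte on its socket. The sketch below is illustrative only: whether routers honor the marking depends on network policy, the socket option behaves differently across platforms, and the address and code point are just examples.

```python
import socket

EF_DSCP = 46                 # Expedited Forwarding code point, as an example
tos_byte = EF_DSCP << 2      # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Any datagram sent on this socket now carries the marking in its IP header.
# 192.0.2.1 is a documentation address used purely for illustration.
sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))
sock.close()
```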

"[The Internet] deals with this congestion equally — packets get transported on a first-come, first-served basis. Once packets leave one end, the network relays them on a best-efforts basis. If nodes on the network become overwhelmed, then packets passing across those nodes slow down." p. 46

This is simply false. The Internet employs a number of strategies, such as Random Early Discard (RED), that enable it to smooth traffic flows and recover from congestion as gracefully as its end-to-end architectural mistake allows. RED discards large packets before small packets on the theory that small packets are acknowledgements, the loss of which would cause large amounts of data to be retransmitted. As we've already mentioned, Type of Service and Quality of Service classes give some packets priority with respect to latency and others priority with respect to bandwidth. Given that most Internet links operate at a fixed speed, "slowing down" some packets is physically impossible, although queue placement may cause their transmission to be delayed.
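
For readers unfamiliar with RED, here is a simplified sketch of the core idea (pared down from Floyd and Jacobson's algorithm, with illustrative parameters): the router tracks a weighted average of its queue length and begins dropping arriving packets probabilistically before the queue fills, signalling senders to back off early.

```python
import random

class RedQueue:
    """Simplified Random Early Detection queue (illustrative parameters)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight      # EWMA weight for the average queue size
        self.avg = 0.0
        self.queue = []

    def enqueue(self, packet) -> bool:
        """Return True if the packet was queued, False if it was dropped."""
        # Update the exponentially weighted average queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop_p = 0.0                      # queue is short: never drop
        elif self.avg >= self.max_th:
            drop_p = 1.0                      # queue is long: always drop
        else:                                 # ramp drop probability up to max_p
            drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < drop_p:
            return False
        self.queue.append(packet)
        return True
```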

"The real danger [of QoS] comes from the unintended consequences of these additional features — the ability of the network to then sell the feature that it will discriminate in favor of (and hence also against) certain kinds of content." p. 46

A primitive form of QoS has always been a part of the Internet, so the current proposals around Integrated Services break no new ground; they simply enable the Internet to be useful to a broader range of applications.

"…while these technologies will certainly add QoS to the Internet, if QoS technologies like RSVP technology do so only at a significant cost, then perhaps increased capacity would be a cheaper social cost solution." p. 47

Network designers don't consider "social costs"; they consider actual costs that can be measured and assessed by business people acting rationally. If RSVP — an Integrated Services technology that reserves buffers and link access time slots for time-critical applications — serves a genuine need at a reasonable cost, it will be adopted, and if it doesn't it won't be. The alternative to Integrated Services is Differentiated Services, which relies on the QoS options we've already discussed. As most of the Internet's access links and internal data links already support reservation, reservation will be more and more of a factor in the Internet of the future simply for utility reasons.

"To understand the possibility of free spectrum, consider for a moment the way the old versions of Ethernet worked…the machine requests…to reserve a period of time on the network when it can transmit…The machine would first determine that the network was not being used; if it wasn't, it would then send a request to reserve the network." p. 77

The old versions of Ethernet Lessig refers to are the coaxial cable systems using Ethernet Version 2 (Blue Book Ethernet) and IEEE 802.3 10BASE5. These systems had no reservation mechanism at all; they simply transmitted data when the network was free, and only when it was free.
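
What those coaxial Ethernets actually did is easy to sketch (simplified here, with a hypothetical `medium` object standing in for the cable interface): listen until the cable is idle, transmit, and if a collision is detected, back off a random number of slot times and try again. No reservation appears anywhere.

```python
import random
import time

MAX_ATTEMPTS = 16
SLOT_TIME = 51.2e-6          # seconds, for classic 10 Mb/s Ethernet

def csma_cd_send(frame, medium):
    """`medium` is assumed to expose is_idle() and transmit(frame) -> collided."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not medium.is_idle():         # carrier sense: wait for silence
            time.sleep(SLOT_TIME)
        collided = medium.transmit(frame)   # transmit and watch for a collision
        if not collided:
            return True                     # frame made it onto the wire
        # Binary exponential backoff before retrying.
        k = min(attempt, 10)
        time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME)
    return False                            # give up after 16 attempts
```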

"The network does not have to ask what your application is before it reserves space on a network." p. 78

The Ethernet doesn't reserve space, but network drivers and TCP/IP implementations have always been aware of applications and therefore capable of ordering packets according to internal policies. The Blue Book Ethernet required the protocol carried by each packet to be prominently advertised in its Type field.
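
The Type field is easy to see in practice: it sits in the first fourteen bytes of every frame, so any station or driver can tell what protocol a frame carries without looking deeper. A small parsing sketch, using a hand-built frame for illustration:

```python
import struct

ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}

def parse_ethernet_header(frame: bytes):
    """Return (destination MAC, source MAC, protocol name) from a raw frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst, src, ETHERTYPES.get(ethertype, hex(ethertype))

if __name__ == "__main__":
    # Hand-built frame: zeroed MAC addresses, Type 0x0800 (IPv4), dummy payload.
    frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
    print(parse_ethernet_header(frame)[2])   # "IPv4"
```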

"No system for selling rights to use the Ethernet network is needed. Indeed, many different machines share access to this common resource and coordinate its use without top-down control." p. 78

Ethernets are private networks, internal to organizations, and are in fact managed like any other corporate resource. In the typical case, organizations assign Ethernet links of varying speeds (10, 100, or 1000 megabits per second) to departments and functions according to their requirements, and manage those links with highly ordered, top-down network management systems.

"all the intelligence is in the [TV] broadcaster itself" p. 78

Except for the channel, volume, and power controls, this was true pre-TiVo. It's no longer the case for many of us.

"[On Wi-Fi networks] collisions and mistransmissions are retransmitted, as on the Internet." p. 79

This could hardly be further from the truth. Wi-Fi networks do error detection and recovery hop-to-hop, and with 802.11e, they do comprehensive RSVP-like resource reservation and dynamic bandwidth assignment.
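
The contrast with the Internet's end-to-end recovery is easy to sketch. Below is a simplified stop-and-wait picture of 802.11-style link-layer recovery, with a hypothetical `radio` interface: each frame must be acknowledged on that same wireless hop and is retransmitted locally, without involving the far endpoint.

```python
RETRY_LIMIT = 7      # typical link-layer retry limit; illustrative
ACK_TIMEOUT = 0.01   # seconds; illustrative

def send_with_link_ack(frame, radio):
    """`radio` is assumed to expose transmit(frame) and wait_for_ack(timeout)."""
    for _ in range(RETRY_LIMIT + 1):
        radio.transmit(frame)
        if radio.wait_for_ack(timeout=ACK_TIMEOUT):
            return True          # delivered across this one wireless hop
    return False                 # give up; higher layers may still recover
```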

"Innovations came from the decentralized nature of the Internet, so its architecture is and must remain libertarian." Ch. 8

This is an assertion that Lessig makes repeatedly, with no attempt at substantiation. While the Internet has many advantages, the one most relevant to the innovations Lessig mentions is any-to-any addressability, a feature it shares with the telephone network. After that, the time-shifting made possible by the queuing of e-mail messages ranks highly, as does the digital nature of Internet communications. It's unlikely that "decentralization" has contributed anything more than headaches to network managers.

"Only those functions that must be placed in the network are placed in the network." p. 149

Who determines the boundary between “must have” and “nice to have”?

"Cable technology was developed in the 1960s as a way of giving remote communities access to television. CATV stood for 'community access television'…When it was first built, cable tv was essentially an end-to-end system — there was little intelligence in the network, all the power was in the broadcaster or the TV…" p. 151

Community Antenna Television was devised in the 1940s. This is a trivial nit, but it’s indicative of the shoddy research that went into The Future of Ideas.

"network address technologies (NATs)…are devices for multiplying IP addresses. [They] insert points of control into the network." p. 121

As with the previous item, this is an example of shoddy research: NAT stands for Network Address Translation.
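
For readers who want the short version of what NAT actually does, here is a minimal sketch (illustrative addresses and port numbers; a real NAT also tracks protocol state, timeouts, and port exhaustion): outbound packets from private addresses are rewritten to share one public address, and a translation table maps the replies back.

```python
import itertools

PUBLIC_IP = "203.0.113.7"                     # documentation address, for example only

class Nat:
    def __init__(self):
        self._next_port = itertools.count(40000)
        self.outbound = {}                    # (private_ip, private_port) -> public_port
        self.inbound = {}                     # public_port -> (private_ip, private_port)

    def translate_out(self, src_ip, src_port):
        """Rewrite an outbound packet's source to the shared public address."""
        key = (src_ip, src_port)
        if key not in self.outbound:
            public_port = next(self._next_port)
            self.outbound[key] = public_port
            self.inbound[public_port] = key
        return PUBLIC_IP, self.outbound[key]

    def translate_in(self, public_port):
        """Map a reply back to the private host, or None if no mapping exists."""
        return self.inbound.get(public_port)

if __name__ == "__main__":
    nat = Nat()
    print(nat.translate_out("192.168.1.20", 51515))   # ('203.0.113.7', 40000)
    print(nat.translate_in(40000))                    # ('192.168.1.20', 51515)
```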

We can only conclude that Lessig has a very poor understanding of the Internet, and even less of an appreciation of the technologies upon which it was built.

The implications of these fuzzy ideas will be dealt with in the next and final installment of this review.

6 thoughts on “The Future of Mediocrity”

  1. This rather reminds me of some of the nearly hysterical arguments about Microsoft's Internet Explorer. While some concerns about it were valid, a lot were just a lot of hot air–including the view that somehow, simply by having its own browser, Microsoft would be able to prevent anyone from developing web pages that didn't use Microsoft technologies.

  2. thanks for the review.

    still reading it, but I see some shoddy research that you’ve done, yourself. for one, the piece on NATs is on page 171, not page 121, and it doesn’t say “network address technologies”, it correctly says “network address translations”, followed by a correct layperson’s explanation of NAT.

    Vintage Books edition, Nov 2002 (the edition that is in most bookstores now)

    as for “collisions and mistransmissions”…so tcp/ip doesn’t resolve contention by using collision detection and retransmission ? is tcp/ip work differently on Wi-Fi networks than on wired ones ?

  3. richard, there’s such a Rosanne Rosanna-Dana character to so much you write. For example, you write above:

    > "The World Wide Web was the fantasy of a few MIT computer scientists." p. 7

    > Most people recognize Tim Berners-Lee as the inventor of the Web, building on previous work on hyperlinked text by Ted Nelson and others going back to Vannevar Bush. Berners-Lee's web site says:

    > With a background of system design in real-time communications and text processing software development, in 1989 [Tim Berners-Lee] invented the World Wide Web, an internet-based hypermedia initiative for global information sharing while working at CERN, the European Particle Physics Laboratory.

    >This is not obscure information.

    You use this to suggest the quote you include says MIT computer scientists, and not Berners-Lee, invented the internet. But if you'd practice a bit of what you preach, you'll see that in fact, the quote was saying exactly what you're saying — that the WWW was envisioned by Bush et al., as well as Nelson and others, but as I plainly state (and perhaps you just missed?), it was Berners-Lee who gave us the protocols that enabled the WWW. See, e.g., p37 ("As the inventor of the World Wide Web, Tim Berners-Lee, describes it").

  4. TCP doesn't know anything about collisions, Sty, it only knows that some packets reached their destination and some didn't. Its retransmission strategy is end-to-end, while Wi-Fi's is hop-to-hop.

  5. Looking at page 7, where the "fantasy" quote comes from, there is no context about HTTP and no mention of Berners-Lee, Nelson, or Bush. The general set of remarks says that network designers of the early Internet era weren't aware of the impact the Net would have, and I certainly can't agree with that. We were quite certain that we were building a network that would connect all computers everywhere, just as the Telco net had already connected all phones everywhere, and had in fact laid the very wires we were using.

    It really has unfolded according to plan.

Comments are closed.