The myth of the decentralized future

Attractive illusion gets people more excited, temporarily, than hard truth. The pricey Supernova 2002 conference reached out to geeks hankering to make a difference and won some converts in Palo Alto this week. The conference organizer, Kevin Werbach, is a veteran of Esther Dyson’s consulting organization, sponsor of the highly-regarded (and even pricier) PC Forum.

Werbach’s premise is that decentralized communications technology is revolutionizing the way we relate to each other, empowering the individual, spreading democracy (probably ending world hunger, global warming, and the oppression of indigenous peoples), and generally making the world a better place, at least for us geeks. He expressed his theory about networking in an article titled It’s in the Chips for The Feature:

Intel is particularly excited about WiFi and other unlicensed wireless technologies, because that’s where it sees the strongest resonance with its familiar PC industry. Says Kahn: “If you look historically at our industry, one way of looking at it is as a sequence of battles between chaotic and orderly things. In almost every instance that I can think of, the chaotic thing won.” The PC beat the mainframe and Ethernet beat centralized LAN protocols. Now, WiFi is challenging top-down wireless data technologies such as 3G. If history is any guide, the messy bottom-up approach will win. In addition to building WiFi chips and devices, Intel is lobbying the US government to provide more flexibility and spectrum for unlicensed wireless technologies. (emphasis added)

There’s only one thing wrong with Werbach’s model: it’s complete nonsense. The PC didn’t beat the mainframe; it beat the dumb terminal. There are more mainframes now than ever before, but we call them web servers. The PC isn’t the product of decentralization, it’s the product of miniaturization, which actually packs circuits together in a more centralized form. This allows PCs to do more than terminals, and some of what they do is to request more services from central servers. Corporate computing topology is as it ever was, only more densely so.

Ethernet beat the IBM Token Ring, Datapoint ArcNet, and Corvus Omninet primarily because it was faster, more open, and more cost-efficient per unit of bandwidth. It was actually much more centralized than the alternatives, and in fact it didn’t really take off until it became even more centralized than it was in its original form.

Early Ethernet – the network specified in the Blue Book by DEC, Intel, and Xerox, and then slightly modified by the IEEE 802.3 committee – was a highly decentralized network in which computers were connected to each other by a big, fat, coaxial cable. This topology was supposed to be highly reliable and easily extendable, but it proved to be a nightmare to install, configure, and manage. So Ethernet languished until the 1BASE5 task force of IEEE 802.3 wrote a standard for a variation of Ethernet using twisted pair wiring to connect computers to a centralized piece of electronics called an “active hub”. I know because I’m one of the people who wrote this standard.

Token Ring and ArcNet already used passive hubs, but these devices didn’t have processors and couldn’t perform signal restoration and network management. By centralizing these functions, twisted pair Ethernet lowered overall network deployment costs and made a more robust network, one in which meaningful troubleshooting and bandwidth management could be performed efficiently by skilled network technicians.

When wireless LANs were first developed, in the early 90s, there were those who tried to build completely decentralized systems in which computers sent messages directly to each other without a hub mediating traffic. These systems also turned out to be nightmares to operate, and were replaced by systems in which computers congregate around active hubs again, although they were renamed “access points”. If you use a wireless LAN today, you have an access point (the ad hoc version of 802.11, IBSS, simply allows one computer to serve as an access point for the others).
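
To make the contrast concrete, here’s a toy sketch of the two topologies in Python. It bears no resemblance to the real 802.11 MAC; the class and method names (AccessPoint, Station, relay, send_direct) are mine, invented purely for illustration.

```python
# Toy model of the two wireless topologies described above: every frame
# relayed through a central access point, versus stations talking to
# each other directly. Illustration only, not the 802.11 protocol.

class AccessPoint:
    """Infrastructure mode: the access point mediates all traffic."""
    def __init__(self):
        self.stations = {}

    def associate(self, station):
        self.stations[station.name] = station

    def relay(self, src, dst_name, frame):
        # Because every frame passes through here, this is the one place
        # where logging, policing, or dropping traffic can happen.
        dst = self.stations.get(dst_name)
        if dst is not None:
            dst.receive(src.name, frame)


class Station:
    def __init__(self, name):
        self.name = name

    def receive(self, src_name, frame):
        print(f"{self.name} got {frame!r} from {src_name}")

    def send_direct(self, peer, frame):
        # The fully decentralized scheme: no mediation point at all.
        peer.receive(self.name, frame)


if __name__ == "__main__":
    ap = AccessPoint()
    a, b = Station("a"), Station("b")
    ap.associate(a)
    ap.associate(b)

    ap.relay(a, "b", "hello via the access point")  # one choke point to manage
    a.send_direct(b, "hello directly")              # nothing to manage, or to troubleshoot
```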

These networks are managed top-down, just as 3G networks are. The difference is that 802.11 has minimal capabilities for traffic shaping, priority assignment, security, and management, all the things a large, massively distributed network needs to do. Lacking this intelligence, 802.11 networks aren’t going to scale up as well as the entrepreneurs building out semi-public hotspot networks need them to if they’re going to live up to their claims.
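
When I say 802.11 lacks traffic shaping, I mean it gives an operator no place to apply something like the classic token bucket below. This is a generic sketch of the missing capability, not anything in the 802.11 spec; the parameter names and numbers are mine.

```python
# A minimal token-bucket shaper, the textbook mechanism behind traffic
# shaping and priority assignment. Nothing in basic 802.11 gives an
# operator a hook to apply this per user; the sketch only shows what
# the missing capability looks like.

import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate, in bytes per second
        self.capacity = burst_bytes       # the largest burst we'll tolerate
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def allow(self, frame_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bytes <= self.tokens:
            self.tokens -= frame_bytes
            return True                   # within this user's allocation
        return False                      # over quota: queue or drop it

# Example: hold one station to 1 Mbit/s with a 15 KB burst allowance.
shaper = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)
print(shaper.allow(1500))                 # a full-size frame fits within the burst
```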

802.11 isn’t a sound basis for building a global, wireless network, and it was never intended to be. It’s a lightweight network with minimal overhead intended to fill one floor of an office building at most, and I know this because I was one of the people who laid out the MAC protocol framework on which it’s based. It’s a testament to the ingenuity of RF hardware engineers that it can now go farther and faster than we ever imagined it would, back in the day.

A survey reported by Glenn Fleishman in the New York Times found that 70% of the 802.11 WLANs in New York are completely insecure, not even using the marginal security included in the basic specs. And that’s only the beginning of the problems.

The much larger issue is bandwidth management. Cram more and more networks together, each of them separately managed, and they all have to share the same handful of communications channels, which they do in an extremely inelegant, first-come-first-served fashion.

It’s going to be as if all the private telephones were removed from our homes and offices and we had to line up at a few pay phones to make a call, often waiting behind people who never stop talking. Basically, the system will implode as soon as a certain density of access points is reached.
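
If you want to see the pay-phone effect in numbers, here’s a back-of-the-envelope simulation. It’s a crude slotted model with made-up parameters (three non-overlapping channels, each network trying to talk a quarter of the time), not a real 802.11 simulation, but the trend is the point: the more separately managed networks you cram onto the same channels, the smaller each one’s useful share becomes.

```python
# Crude slotted-contention model of many independently managed networks
# sharing a handful of channels on a first-come-first-served basis.
# All numbers are assumptions chosen for illustration.

import random

def per_network_goodput(n_networks, n_channels=3, p_transmit=0.25, slots=20_000):
    """Average fraction of slots in which a network gets its channel to itself."""
    channels = [i % n_channels for i in range(n_networks)]  # naive channel plan
    useful = 0
    for _ in range(slots):
        talkers = [channels[i] for i in range(n_networks)
                   if random.random() < p_transmit]
        for ch in set(talkers):
            if talkers.count(ch) == 1:   # exactly one talker: the slot is useful
                useful += 1
    return useful / (slots * n_networks)

for n in (3, 6, 12, 24, 48):
    print(f"{n:3d} networks -> {per_network_goodput(n):.3f} useful slots per network")
```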

In order to manage spectrum, and I mean to manage it in such a way that everyone can access it on a fair and reasonable basis, access has to be controlled, bandwidth has to be allocated, hogs have to be disconnected, and broken computers have to be isolated and repaired. This means centralization, and no amount of hand-waving will make it otherwise.
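
Here’s what that kind of central coordination looks like in miniature: admit users, account for their airtime, and cut off the hogs. The class and the numbers are hypothetical, a sketch of the job description rather than any real controller’s API.

```python
# Sketch of a central coordinator that admits stations, tracks airtime,
# and disconnects hogs. Hypothetical; no real controller exposes this API.

class SpectrumManager:
    def __init__(self, airtime_budget_ms=1000, hog_multiple=3.0):
        self.budget = airtime_budget_ms    # shared airtime per accounting period
        self.hog_multiple = hog_multiple   # how far past fair share counts as hogging
        self.usage = {}                    # station id -> airtime used this period

    def admit(self, station_id):
        # Admission control: every user of the commons is known and accountable.
        self.usage[station_id] = 0.0

    def fair_share(self):
        return self.budget / max(1, len(self.usage))

    def record(self, station_id, airtime_ms):
        self.usage[station_id] += airtime_ms

    def enforce(self):
        # Disconnect anyone far over their fair share, then reset the books.
        limit = self.hog_multiple * self.fair_share()
        hogs = [sid for sid, used in self.usage.items() if used > limit]
        for sid in hogs:
            del self.usage[sid]
        for sid in self.usage:
            self.usage[sid] = 0.0
        return hogs

mgr = SpectrumManager()
for sid in ("alice", "bob", "mallory"):
    mgr.admit(sid)
mgr.record("alice", 300)
mgr.record("bob", 250)
mgr.record("mallory", 2500)                # the neighbor who never stops talking
print("disconnected:", mgr.enforce())      # -> ['mallory']
```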

It’s annoying that a whole new generation of snake oil peddlers is trying to pick up where the Dot Com bubble left off and over-hype 802.11 the same way they hyped Internet commerce. I hope that this time around investors will hang on to their wallets and demand real profit potential from business models, that fraud will be punished, and that genuine innovations won’t be crowded out by scams.

UPDATE: In a more recent essay (via Lotus Notes creator Ray Ozzie), Werbach tempers his view of decentralization:

The most decentralized system doesn’t always win. The challenge is to find the equilibrium points–the optimum group sizes, the viable models and the appropriate social compromises.

This is almost there. Certain things lend themselves to decentralization, such as CPU cycles, user interfaces, and access to networks; other things don’t, such as databases of time-critical information, security, and spectrum management. We create decentralized systems where they’re appropriate, and centralized ones where they’re appropriate. This isn’t new, and there’s nothing in the new technologies to suggest otherwise. We may very well need a “new paradigm” to lead networking out of its current slump, but decentralization isn’t it. What it might be is a topic that I’ll discuss shortly.

5 thoughts on “The myth of the decentralized future”

  1. I don’t think it’s fair to characterize Werbach as a snake oil salesman. He’s merely selling his punditry, which is worth every penny you and I are paying for it. I’ll leave it up to Supernova 2002 conference attendees to assess whether or not they got their money’s worth. I believe that folks like Kevin Werbach actually generate notable value in that they are the “early pundits”; some would call them visionaries. Granted, their visions might be hallucinations or just plain wrong, but they are generating ideas, which are the wellspring of products and profitable businesses. It’s the pundits who follow, usually trying to sell products or shares in a stock offering, for whom I share your skepticism.

    Your analysis of the evolution of networks is spot on. But your description of computing models suffers uncharacteristically from the same breezy oversimplification that makes Werbach’s model such nonsense. PCs didn’t replace dumb terminals; they moved computing power out of the data center and onto the desktop and created whole new models of computing. Breaking the hegemony of centralized computing swung the pendulum toward the desktop, and new models like client-server and P2P developed, both of which we saw emerging in the early ’80s. As the pendulum swings back in the other direction, we see a judicious re-centralization of resources. But the corporate computing topology is not like it was before the PC and Mac. Nor is it accurate to portray the modern web server as the evolution of the mainframe.

    Corporate mainframes still do what they’ve always done: concentrate significant computing power for large-scale, multi-user applications, high-volume transaction-based systems, and large-scale manufacturing systems. There are tens of thousands of systems running out there that are pretty much the same thing they were thirty years (or more!) ago, and they certainly weren’t replaced by web servers, which barely do any computing at all. There is so much more computing being done now than before the advent of the PC, and the dissemination of computing power into a variety of devices (both specialized and general) has spawned a variety of computing models.

    Now that I’ve written this, I see you’ve updated this post – clearly demonstrating that you already grok what I’ve written above. You’re right – there’s no magic panacea in decentralization, and no silver bullet in either centralization or decentralization in computing. I don’t believe there’s any grand, unified model of computing that will guide us into the future, either. But I do believe you are right about centralization being the key to managing the spectrum, for “the network” is as you’ve described it. I’m interested to read what you have to say about new networking models.

  2. I didn’t mean to call Werbach a snake-oil peddler so much as the people who’re claiming to build nationwide 802.11 networks. I think Werbach genuinely believes his “decentralization” pitch, but that he doesn’t understand the technology because he’s a lawyer and not an engineer.

    I think we agree that the dynamics of the personal computer, microprocessor, and RAM are quite different from those of shared spectrum, and the management practices and evolutionary path of one don’t provide us with insight about the other. Shared spectrum is a “commons”, not personal property, and it has to be managed accordingly.

    Now what would our tech-topian decentralist comrades say about a decentralized approach to global warming? Somehow, I don’t think that dog would hunt.

  3. Shared spectrum is a commons, I agree, and a crowded commons at that.

    I enjoy your blog a great deal, Richard. I’m not a political conservative, but I enjoy reading one who, like you, is a thoughtful proponent of their views. Your blog makes me think and often amuses me.
