An attractive illusion gets people more excited, temporarily, than a hard truth. The pricey Supernova 2002 conference reached out to geeks hankering to make a difference and won some converts in Palo Alto this week. The conference organizer, Kevin Werbach, is a veteran of Esther Dyson’s consulting organization, sponsor of the highly regarded (and even pricier) PC Forum.
Werbach’s premise is that decentralized communications technology is revolutionizing the way we relate to each other, empowering the individual, spreading democracy (probably ending world hunger, global warming, and the oppression of indigenous peoples), and generally making the world a better place, at least for us geeks. He expressed his theory about networking in an article titled “It’s in the Chips” for The Feature:
Intel is particularly excited about WiFi and other unlicensed wireless technologies, because that’s where it sees the strongest resonance with its familiar PC industry. Says Kahn: “If you look historically at our industry, one way of looking at it is as a sequence of battles between chaotic and orderly things. In almost every instance that I can think of, the chaotic thing won.” The PC beat the mainframe and Ethernet beat centralized LAN protocols. Now, WiFi is challenging top-down wireless data technologies such as 3G. If history is any guide, the messy bottom-up approach will win. In addition to building WiFi chips and devices, Intel is lobbying the US government to provide more flexibility and spectrum for unlicensed wireless technologies. (emphasis added)
There’s only one thing wrong with Werbach’s model: it’s complete nonsense. The PC didn’t beat the mainframe; it beat the dumb terminal. There are more mainframes now than ever before, but we call them web servers. The PC isn’t the product of decentralization, it’s the product of miniaturization, which actually packs circuits together in a more centralized form. This allows PCs to do more than terminals could, and some of what they do is request more services from central servers. Corporate computing topology is as it ever was, only more densely so.
Ethernet beat the IBM Token Ring, Datapoint ArcNet, and Corvus Omninet primarily because it was faster, more open, and more cost-efficient per unit of bandwidth. It was actually much more centralized than the alternatives, and in fact it didn’t really take off until it became even more centralized than it was in its original form.
Early Ethernet – the network specified in the Blue Book by DEC, Intel, and Xerox, and then slightly modified by the IEEE 802.3 committee – was a highly decentralized network in which computers were connected to each other by a big, fat coaxial cable. This topology was supposed to be highly reliable and easily extendable, but it proved to be a nightmare to install, configure, and manage. So Ethernet languished until the 1BASE5 task force of IEEE 802.3 wrote a standard for a variation of Ethernet that used twisted-pair wiring to connect computers to a centralized piece of electronics called an “active hub”. I know because I’m one of the people who wrote this standard.
Token Ring and ArcNet already used passive hubs, but these devices didn’t have processors and couldn’t perform signal restoration and network management. By centralizing these functions, twisted pair Ethernet lowered overall network deployment costs and made a more robust network, one in which meaningful troubleshooting and bandwidth management could be performed efficiently by skilled network technicians.
When wireless LANs were first developed, in the early ’90s, there were those who tried to build completely decentralized systems in which computers sent messages directly to each other without a hub mediating traffic. These systems also turned out to be nightmares to operate, and they were replaced by systems in which computers congregate around active hubs again, although the hubs were renamed “access points”. If you use a wireless LAN today, you have an access point (the ad hoc version of 802.11, IBSS, simply allows one computer to serve as an access point for the others).
These networks are managed top-down, just as 3G networks are. The difference is that 802.11 has minimal capabilities for traffic shaping, priority assignment, security, and management, all the things a large, massively distributed network needs. Lacking this intelligence, 802.11 networks aren’t going to scale the way the entrepreneurs building out semi-public hotspot networks need them to if the claims they’re making are to hold up.
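To make the missing pieces concrete, here’s a toy contrast in Python. It’s my own illustration, not an 802.11 mechanism; the packet classes and names are invented. With the first-come-first-served queueing 802.11 gives you, a time-critical voice packet waits behind whatever bulk transfers arrived ahead of it; a priority queue, the kind of policy a managed network can enforce, serves it first:

    import heapq
    from collections import deque

    # Toy contrast, not an 802.11 mechanism: the same packet arrivals
    # drained first-come-first-served versus by priority class.
    arrivals = ["bulk", "bulk", "voice", "bulk"]

    # FIFO: all an unmanaged, contention-based channel gives you.
    fifo = deque(arrivals)
    print("FIFO order:    ", [fifo.popleft() for _ in arrivals])

    # Strict priority: voice (class 0) jumps ahead of bulk (class 1),
    # the kind of policy a centrally managed network can enforce.
    CLASS = {"voice": 0, "bulk": 1}
    queue = [(CLASS[kind], seq, kind) for seq, kind in enumerate(arrivals)]
    heapq.heapify(queue)
    print("Priority order:", [heapq.heappop(queue)[2] for _ in arrivals])

The point of the toy isn’t the queueing discipline itself; it’s that 802.11 gives a network operator no handle with which to impose one.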
802.11 isn’t a sound basis for building a global, wireless network, and it was never intended to be. It’s a lightweight network with minimal overhead intended to fill one floor of an office building at most, and I know this because I was one of the people who laid out the MAC protocol framework on which it’s based. It’s a testament to the ingenuity of RF hardware engineers that it can now go farther and faster than we ever imagined it would, back in the day.
A survey reported by Glenn Fleishman in the New York Times found that 70% of the 802.11 WLANs in New York are completely insecure, not even using the marginal WEP security included in the basic spec. That’s only the beginning of the problems.
The much larger issue is bandwidth management. Cram more and more networks together, each separately managed, and all of them have to share the same handful of communications channels – in the 2.4 GHz band, 802.11b offers only three non-overlapping channels – which they do in an extremely inelegant, first-come-first-served fashion.
It’s going to be as if all the private telephones are removed from our homes and offices, and we have to line up at a few pay phones to make a call, often waiting behind people who never stop talking. Basically, the system will implode as soon as a certain density of access points is reached.
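To put rough numbers behind the pay-phone analogy, here’s a deliberately optimistic back-of-the-envelope sketch. The 11 Mb/s channel rate and the three-channel limit are the 802.11b figures; the perfect, equal sharing is my simplifying assumption, and reality is worse, because CSMA collisions and backoff burn airtime as density grows:

    # Back-of-the-envelope model: N separately managed access points
    # within radio range of one another, sharing the three
    # non-overlapping 2.4 GHz channels first-come-first-served.
    # Assumes perfect, equal sharing; real CSMA contention is worse.

    RAW_RATE_MBPS = 11.0   # nominal 802.11b channel rate
    CHANNELS = 3           # non-overlapping channels: 1, 6, 11

    def per_network_share(num_access_points):
        """Best-case throughput (Mb/s) available to each network."""
        networks_per_channel = -(-num_access_points // CHANNELS)  # ceiling
        return RAW_RATE_MBPS / networks_per_channel

    for n in (3, 9, 30, 90):
        print(f"{n:3d} access points -> {per_network_share(n):5.2f} Mb/s each, best case")

Even this charitable model has per-network bandwidth falling off a cliff as access points pile up; add collision overhead and the implosion comes sooner.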
In order to manage spectrum, and I mean to manage it in such a way that everyone can access it on a fair and reasonable basis, access has to be controlled, bandwidth has to be allocated, hogs have to be disconnected, and broken computers have to be isolated and repaired. This means centralization, and no amount of hand-waving will make it otherwise.
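For the flavor of what that central arbitration involves, here’s a minimal sketch of a max-min fair allocator. The interface and names are mine and purely illustrative; nothing in the 802.11 spec provides a hook for any of it. It satisfies modest demands in full, caps the hogs at an even split of whatever is left, and isolates stations flagged as broken:

    def max_min_fair(capacity_mbps, demands, broken=frozenset()):
        """Max-min fair division of a shared channel among stations.

        capacity_mbps: total bandwidth the central arbiter hands out.
        demands:       dict of station -> requested Mb/s.
        broken:        stations to isolate; they get nothing until repaired.
        """
        alloc = {station: 0.0 for station in demands}
        pending = {s: d for s, d in demands.items() if s not in broken}
        remaining = capacity_mbps
        # Serve stations in order of increasing demand; anyone asking for
        # more than the current even split is capped at that split.
        for station in sorted(pending, key=pending.get):
            share = remaining / len(pending)
            alloc[station] = min(pending[station], share)
            remaining -= alloc[station]
            del pending[station]
        return alloc

    # A hog asking for 50 Mb/s gets capped; a broken card gets isolated.
    print(max_min_fair(11.0,
                       {"laptop": 1.0, "pda": 0.5, "hog": 50.0, "dead_card": 3.0},
                       broken={"dead_card"}))

The algorithm is trivial; the hard part, and my point, is that someone has to be in a position to run it, which is exactly the centralization the decentralization enthusiasts claim we no longer need.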
It’s annoying that a whole new generation of snake oil peddlers is trying to pick up where the Dot Com bubble left off, over-hyping 802.11 the same way they hyped Internet commerce. I hope that this time around investors will hang on to their wallets and demand profit potential from business models, that fraud will be punished, and that genuine innovations won’t be crowded out by scams.
UPDATE: In a more recent essay (via Lotus Notes creator Ray Ozzie), Werbach tempers his view of decentralization:
The most decentralized system doesn’t always win. The challenge is to find the equilibrium points – the optimum group sizes, the viable models and the appropriate social compromises.
This is almost there. Certain things lend themselves to decentralization, such as CPU cycles, user interfaces, and access to networks; other things don’t, such as databases of time-critical information, security, and spectrum management. We create decentralized systems where they’re appropriate, and centralized ones where they’re appropriate. This isn’t new, and there’s nothing in the new technologies to suggest otherwise. We may very well need a “new paradigm” to lead networking out of its current slump, but decentralization isn’t it. What it might be is a topic that I’ll discuss shortly.