David Weinberger has been thinking about the Internet, and the webheads and greedheads. He’s especially fascinated by a 20-year-old paper on network design:
I’ve been thinking about the end of the Internet. No, not its collapse, but as in the “End-to-End” (E2E) argument, put definitively by J.H. Saltzer, D.P. Reed, and D.D. Clark in their seminal article, “End-to-End Arguments in System Design.” The concept is simple: whenever possible, services should not be built into a network but should be allowed to arise at the network’s ends.
Let me prick this bubble, if I may: the Internet was not designed correctly, especially from the standpoint of real-time services such as streaming audio and video. The fundamental problem is that the end-to-end model only works when timed delivery doesn’t matter: it offers no way to manage the system-to-system, network-to-network, and router-to-router links that must be managed if bandwidth is to be reserved and used efficiently by real-time services. The Internet runs over telephony-based services such as ATM and SONET that provide real-time delivery, quality-of-service selection, and bandwidth-sensitive billing, but the Internet protocols, especially IPv4, mask access to the controls on those links and make real-time at best a matter of faith, prayer, and massively over-built datalinks.
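To make that concrete: under IPv4 the most an application can do is mark its packets with a priority hint and hope. Here’s a minimal Python sketch (the address and port are invented for illustration) using the DiffServ/TOS byte, the modern form of those masked controls; note that nothing in it reserves bandwidth or bounds latency:

```python
import socket

# An IPv4 application can *mark* its traffic as real-time, but it cannot
# reserve bandwidth or latency on the routers in between.
DSCP_EF = 46  # "Expedited Forwarding", the usual code point for real-time flows

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS carries the DSCP value in its upper six bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Delivery is still best-effort: every hop is free to ignore the marking,
# which is exactly the faith-and-prayer situation described above.
sock.sendto(b"one audio frame", ("198.51.100.7", 5004))
```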
If the connection-oriented, end-to-end services provided by TCP had been implemented at the network layer instead of at the transport layer, the Internet would be poised to gracefully carry the next generation of services. But it wasn’t, so it’s not, and IPv6 doesn’t fully remedy the deficiencies. Don’t hold up an engineering exercise done twenty-five or thirty years ago as state-of-the-art, and don’t try to build a model of human morality on it; that’s a losing proposition.
It seems a bit harsh to say that it wasn’t designed correctly, given that when it was created it wasn’t meant to do the things we’re hoping to make it do today.
In a sense, this is like saying that the IBM PC wasn’t designed correctly, given all the design headaches motherboard designers have to put up with to maintain backward compatibility. Well, they never thought the entire world would be running everything based on it when they first put the thing together.
Crikey. How long have we been waiting for IPv6? I touted it at 3Com six (seven?) long years ago. What’s the hold-up?
This is an area where I think Gilder’s influence has harmed the industry. Smart edges are goodness most of the time, but it takes a supremely intelligent network that’s hidden from the user to make them work well. Nothing new there.
I think it’s fair to say the IBM PC wasn’t designed correctly. I think they specifically wanted to make it a crippled system that wouldn’t compete with their “real” computers, but I might be wrong about that. In any case, a better design would have assumed that memory was eventually going to get cheaper and would have, for instance, given lots of room to video RAM instead of requiring it to be accessed through pages. Parenthetically, didn’t BillG say something like “no one’s going to need more than 640K of RAM”?
If you read about the history of the IBM PC, it was specifically a “greenhouse” type project, with a team assembled entirely apart from the corporate offices. Their working model was the Apple II. The goal was to create something cheap and easy to build, made almost entirely of off-the-shelf components: a simple experiment to see if they could make a profit competing with Apple, Atari, Commodore, Osborne, and Radio Shack.
They weren’t thinking about the future. It was an experiment. At the time, 64K was a lot of RAM; building in capacity for ten times that, and reserving the rest for whatever they felt like, certainly seemed reasonable. And it was, in fact, many years before most computers even reached that 640K barrier.
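The arithmetic behind that barrier, for what it’s worth: the 8088’s twenty address lines give a 1 MB address space, and IBM reserved everything above 640K for video memory and ROM. A quick sketch:

```python
# The 640K barrier in two lines: the 8088 addresses 1 MB (20 address lines),
# and the region from 0xA0000 up was reserved for video RAM and adapter/BIOS ROM.
UPPER_AREA_START = 0xA0000
print(UPPER_AREA_START // 1024)  # -> 640, the conventional-memory ceiling
```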
All they were trying to do was create a glorified typewriter on a fast and easy design. It was the industry that turned it into a standard, in part because of their own miscalculation.
Back then, all vendors owned their machines. You basically didn’t make clones. They certainly had no plans for there to be clones, any more than they had plans for clones of the Selectric. That simply wasn’t the industry mentality in the late 1970s.
But because everything was off-the-shelf components, and IBM had the kind of industry credibility that Microsoft has now, everyone wanted to clone it. And because it was a fast-and-easy design, little thought had gone into the future. Why think about the future? You own the hardware, and you can just throw everything out and start with a new design when it’s time to upgrade, right?
That was how the whole microcomputer industry thought back then.
So they had a cheap, easy-to-assemble, all off-the-shelf design that they had no intention of making into any kind of major standard. But it was so easy to duplicate that all it took was someone able to duplicate their BIOS. Once that was done, you had clones, and then…
And there it is. For almost a quarter century now, we’ve been bending over backwards to maintain backwards compatibility with something the original designers never intended to be anything more than an experiment, something they’d tear up, throw away, and start over with if the experiment was successful. Then it spun out of their hands and…
Anyone who knows anything about PC design will tell you that the modern motherboard and Intel chipset are a joke compared to what could be done if we took the pain of tearing everything up and starting over from scratch. But no one will do that.
It is much the same with the Internet. The very idea that you’d send audio and video over the Internet would have been almost laughable when it was first established. For God’s sake, it was a network of teletype machines, and that’s all it was intended to be!
There are always trade-offs, but I usually try to take the time to future-proof my software designs, and, frankly, I look askance at those who don’t (the IBM PC, Sun’s Java, etc.).
I think the problem with the Internet is that the people who mainly designed it were OS guys who didn’t really know very much about networking. I remember going to a meeting of the ACM or the IEEE or somebody back around 1980 where a panel including Bill Joy from Berkeley and some guys from DEC and BBN talked about the problems they’d had implementing TCP/IP on their various operating systems, and it was pretty obvious they didn’t know much or care much about the network itself.
Computer networking people in those days were all in love with packet switching and firmly convinced that everything the phone company had learned about building real-time networks for digital voice was irrelevant to them. So they made this very important tradeoff: a connectionless network layer that simply did their packet switching for them, with all the smarts moved back into the OS. This made it easy for BBN, the network guys, to do their piece.
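For a feel of what “smarts in the OS” means in practice, here is a toy Python sketch (the names, timeout, and framing are mine, purely illustrative): with a connectionless layer underneath, acknowledgment and retransmission have to be rebuilt at the end hosts, which is roughly the job TCP took on:

```python
import socket

# Toy stop-and-wait reliability over a connectionless datagram socket.
# The network just switches packets; loss is detected and repaired at the ends.
def send_reliably(sock, data, dest, seq, timeout=0.5, retries=5):
    packet = seq.to_bytes(4, "big") + data
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)                 # fire a datagram into the cloud
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True                       # far end confirmed this sequence number
        except socket.timeout:
            pass                                  # dropped somewhere; only the ends notice
    return False                                  # give up after repeated losses
```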
Now, if they had consciously built a connection-oriented network layer that provided visibility to the partial T1 at the link layer, the transport layer would have been easier to build, and they wouldn’t have painted themselves into a corner with real-time. When the OSI guys came along, they recognized that TCP/IP was built wrong and tried to correct the error with a network layer that could support connection-oriented as well as connectionless services, and that didn’t suffer from address constipation either. But a Holy War broke out between the TCP guys, who were mainly academics and government contractors, and the OSI guys, who were mainly capitalists, and OSI got sent to the back burner.
Now the IETF has gotten so big and ungainly that it doesn’t do architecture any more, and we’re stuck with a network that can’t support the kinds of uses it needs to have in order to actually converge real-time with packets without just throwing bandwidth at the problem.
The trouble with that is that a chain is only as fast as its slowest link, so to speak.
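In numbers (figures invented for illustration): a path’s end-to-end rate is capped by its narrowest hop, so gold-plating every other link buys nothing:

```python
# End-to-end throughput is bounded by the slowest hop on the path.
link_rates_mbps = [1000, 155, 1.5, 100]  # e.g. LAN, OC-3, a lone T1, far-side LAN
print(min(link_rates_mbps))              # -> 1.5: the T1 sets the ceiling
```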
The IBM PC is a funny story too, and the key element probably is, like y’all say, the fact that the standard open PC architecture at the time was CP/M, based on 8080s with a 64K DRAM maximum, so a 10X improvement seemed massive. Somebody should have told the boys about Moore’s Law, but the CPU designers at Intel may not have understood it, ha ha.
I didn’t know you were at 3Com, Scott. What did you do for them?
I was just your basic peddler, pitching hardware to the feds here in the Central region. I got to visit a lot of cool military installations, watching B-1Bs doing touch-and-gos in Abilene and touring the SAC Museum in Nebraska. Outside of those highlights, it was a daily exercise in pounding my head against the Brick Wall of Cisco.