Reader Dave Johnson argues that QoS isn’t necessary on big packet networks because the carrier can simply provision the network adequately to carry all possible traffic at once:
If an ISP has ample internal and outgoing bandwidth, as in more than enough to provide for the sum total of their customers' allocations, then where does that leave the latency issue? The only way packets are delayed or refused is because there is not enough capacity on the destination wire. If there is enough capacity then *where* is the problem? Customers are by definition limited in the amount of data (under any protocol) that they can send, so all quantities are known.
As idiotic as this sounds, it’s a common Urban Myth, originally expressed in the works of David Isenberg, if not elsewhere. Bandwidth is free, you see, so just hook me up and don’t worry about it.
OK, let’s play along. How much bandwidth does an ISP have to have on its internal network to allow all of its customers to use all the bandwidth in their hookups all the time? Verizon’s FIOS customers have 100 megabit/sec connections, and there are 375,000 of them. So all Verizon needs for “ample” bandwidth inside its own network is a 37.5 terabit/sec (terabit being a million million) switch, and a similar sized connection to the public Internet.
Of course, that kind of bandwidth doesn’t exist.
Will it in the future? Maybe, but by then instead of worrying about 375,000 customers on the Verizon network, we’ll be worrying about 200 million Americans with 100 megabit/sec each. That adds up to 20,000 terabits/sec. I don’t see any switches capable of handling that load on the horizon, of course. This is a ridiculous exercise, and I only do it because the argument from the hyper-regulation side is so lame.
Now let’s assume that ISPs can cap bandwidth for each user to some level of transport per day, week, or month. Does that alter the arithmetic above? Actually, no, because you still have to design for peak load. If everybody wants to download American Idol at the same time, you need to accommodate that, so that’s where we are.
The fastest datalinks we have today are 40 gigabits/sec. So let’s take a bunch of them and bond them together to get a 20,000 terabit/sec pipe. We only need 500,000 of them. Supposing we can build a switch that handles 20 such pipes (not really practical today, because of limitations on bus speeds, but let’s be generous), you need 25,000 of them. But now how do we interconnect these switches to each other? We just wire them into a big mesh, but then we’re playing with probabilities again, betting that no combination of users will over-use the path between one switch and another. So we’ll have to add another level of switches to enable each end-user to reach each end-user through any intermediate switch, and there will be a lot of these. Somebody has to pay for all these switches, because even if they were cheap (and they aren’t), they’re not free.
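For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation as a short Python sketch. The figures are the round numbers used above, not anybody’s vendor specs, and the point is only to show the scale of the problem:

```python
# Back-of-the-envelope arithmetic for the numbers above (round figures only;
# this is a sketch of the scaling problem, not a network design).

USERS            = 200_000_000   # hypothetical future US broadband users
PER_USER_BPS     = 100e6         # 100 megabits/sec each
LINK_BPS         = 40e9          # fastest single datalink: 40 gigabits/sec
PIPES_PER_SWITCH = 20            # generous assumption from the text

total_demand_bps = USERS * PER_USER_BPS
links_needed     = total_demand_bps / LINK_BPS
switches_needed  = links_needed / PIPES_PER_SWITCH

print(f"Aggregate demand:          {total_demand_bps / 1e12:,.0f} terabits/sec")  # 20,000
print(f"40 Gbps links required:    {links_needed:,.0f}")                          # 500,000
print(f"20-pipe switches required: {switches_needed:,.0f}")                       # 25,000
```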
This is why QoS is needed: “more bandwidth” only works up to the economic and physical constraints on bandwidth, both of which are real.
So here’s the lab problem in summary: the fastest pipes we have are 40 gigabits/second. How many of them, and in what topology, do you need in order for 100 million users to transmit 100 megabits/second of traffic with no delay?
That would indeed be an idiotic statement, but I think it mischaracterizes what Dave said. Dave said:
(1) “If” there is enough bandwidth to eliminate any congestion, then you don’t need QoS. (If A, then not B.)
(2) There will be enough bandwidth to eliminate congestion if and only if supply and demand for bandwidth make it so (i.e. whether consumers are willing to pay for it). (A iff C.)
Those statements seem rather uncontroversial. Richard, you infer from these statements that he is arguing that supply and demand will yield sufficient bandwidth (i.e. C). I think that inference is not appropriate because he has argued in other posts that QoS should be offered and charged for (B).
Dave’s Statements:
If A, then not B.
A iff C.
B.
What follows (contrapositives):
If B, then not A.
not A iff not C.
Argument:
B.
If B, then not A.
not A.
not A iff not C.
not C.
Dave, therefore, thinks that supply and demand will not be sufficient to eliminate congestion. It might be for the very reason you point out: there are economic limits to bandwidth, namely how much consumers are willing to pay and how much ISPs are willing to supply given the costs. (The technical limits you mentioned are included in supply, that is, the ISPs’ willingness and ability to provide bandwidth.)
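If it helps, that chain of inferences can be checked mechanically. Here is a quick truth-table sketch in Python (A = there is enough bandwidth to eliminate congestion, B = QoS is needed/offered, C = supply and demand yield enough bandwidth); it only confirms the logic laid out above, nothing more:

```python
# Brute-force truth-table check of the argument: from {A -> not B, A iff C, B}
# it should follow that not C.

from itertools import product

def entails(premises, conclusion):
    """True if every assignment satisfying all premises also satisfies the conclusion."""
    for A, B, C in product([True, False], repeat=3):
        if all(p(A, B, C) for p in premises) and not conclusion(A, B, C):
            return False
    return True

premises = [
    lambda A, B, C: (not A) or (not B),   # (1) If A, then not B
    lambda A, B, C: A == C,               # (2) A iff C
    lambda A, B, C: B,                    # (3) B: QoS should be offered
]

print(entails(premises, lambda A, B, C: not C))   # True: not C follows
```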
Although I think your criticism of Dave is unfounded, I found your exercises and examples instructive. Thanks.
I don’t care if he says “IF”. If what he’s saying is pie in the sky fantasy and all his minions repeat the myth of “just add more bandwidth”, then it’s idiotic.
Ahh! Robert, a trick question!
It is not about (absolute) no delay but rather about no perceived delay.
Most applications (HTTP, FTP, even the piggish BitTorrent) will tolerate huge delays without a problem, and the users don’t really care if a large download takes 7 minutes instead of 5 minutes.
The applications that do care (VoIP and streaming video) are real time and typically low bandwidth (and dropping a lost packet is preferable to retransmitting it).
And of course the answer is QoS, where we pay for the higher levels of QoS (since economics will lead us to a state where price keeps demand in check). As a consumer it is a no-brainer for me to pay my ISP a few bucks a month to ensure high-quality voice service, since I will dump my Telco and its higher-priced ‘dedicated’ phone service in a quick minute. (VoIP uses so little bandwidth that this makes great technological and business sense.) [And forget both Skype and Vonage, two proprietary losers – give me SIP and standards-based VoIP that requires a service provider to stand on its regular performance/price levels.]
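To put a rough number on “so little bandwidth”, here is a sketch assuming the common G.711 codec at 20 ms packetization (other codecs are smaller still; the 2 Mbit link is just an illustrative broadband line):

```python
# Rough VoIP bandwidth estimate, assuming G.711 at 20 ms packetization.
# Standard textbook figures, shown only to give the order of magnitude.

payload_bytes   = 160          # 64 kbps voice * 20 ms
headers_bytes   = 40           # RTP (12) + UDP (8) + IPv4 (20)
packets_per_sec = 50           # one packet every 20 ms

ip_bps = (payload_bytes + headers_bytes) * 8 * packets_per_sec
print(f"One G.711 call at the IP layer: {ip_bps / 1000:.0f} kbps")   # ~80 kbps

link_bps = 2_000_000           # e.g. a 2 Mbit ADSL line
print(f"Share of a 2 Mbit link: {ip_bps / link_bps:.1%}")            # ~4%
```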
Also, multicast-enabled infrastructure will give us the appearance of ‘free’ bandwidth. Perhaps we need to wait for IPv6 to make this a reality.
Multicasting is not tied to IPv6; multicast was added to IPv4 as well. Multicasting on the Internet is a very difficult thing. The most important thing for video right now is caching at the edges where all the users live. I’m talking at the POP level, close to where the DSLAMs live.
Right. This deserves a response. Despite some valid points inherent in Richard’s perspective as I understand it from elsewhere, I wonder how accurate his picture of my picture is (IYSWIM).
Quote [Though I wish I knew how to do that properly in wiki/blog language..? References anyone?]
” OK, let’s play along. How much bandwidth does an ISP have to have on its internal network to allow all of its customers to use all the bandwidth in their hookups all the time? Verizon’s FIOS customers have 100 megabit/sec connections, and there are 375,000 of them. So all Verizon needs for “ample” bandwidth inside its own network is a 37.5 terabit/sec (terabit being a million million) switch, and a similar sized connection to the public Internet.” /Quote
Terabit being 10^12 bits …. Thanks 🙂
First off, in this country (UK) all ADSL connections (the only ones I know anything like enough about) are contended by their very nature. There will be a clause in the ISPs’ ToS about fair usage to prevent any given customer from saturating out other customers, meaning that it is *not* legitimate for a customer to constantly hammer the link at its technical maximum bandwidth. Many ISPs have mechanisms in place to regulate this. AFAIK the ISPs (should and probably do) design their internal infrastructure to deal with the theoretical maximum of all customers using the maximum average bandwidth that they can *legitimately* use. Weight of numbers means that as long as most of the customers follow the ToS, it is a legitimate approximation.
Does that not alter the math a little?
If the suppliers you speak of do not have similar rules then perhaps they should, because I agree that coping with the sum total of the theoretical maxima is obviously not a simply solved problem.
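To put a number on how much contention changes Richard’s arithmetic, here is a small sketch; the 50:1 ratio is purely illustrative (home ADSL products here have commonly been sold at 20:1 or 50:1) and is not Verizon’s actual design figure:

```python
# How a contention ratio changes the provisioning arithmetic.
# The 50:1 ratio is illustrative only; real design figures are not public here.

subscribers = 375_000
access_bps  = 100e6     # 100 Mbit access line each
contention  = 50        # assume each user is sold 1/50th of the line on average

peak_sum_bps    = subscribers * access_bps
provisioned_bps = peak_sum_bps / contention

print(f"Sum of access speeds : {peak_sum_bps / 1e12:.1f} Tbps")     # 37.5 Tbps
print(f"At 50:1 contention   : {provisioned_bps / 1e9:.0f} Gbps")   # 750 Gbps
```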
I’m irritated at the assumption that I think core bandwidth to be free. I know very well that it is not. If ISPs are selling more bandwidth (calculated from that which is within their terms of use) than they are leasing then surely that is a central causative problem that deserves a law in its own right? They are in effect selling that which they do not have. Personally I’d see it as fraud. [By Core Bandwidth I mean intra peer-group and transnational bandwidth]
There is a reasonable leeway on the terms of service, as IMO there should be, and there is certainly adequate headroom on the linkups that I have used; despite the contention it is almost always possible to make sporadic use of the absolute maximum, but just about any ISP would come down heavily on any customer of theirs who attempted to make constant use of that maximum. The sporadic maximum is *not* what you are buying, you are (in theory, RL practice differs somewhat) buying a share of a link with that maximum, contended with ‘N’ other users. If you used that max constantly you would expect to degrade the experience of other customers, and so should expect action to be taken against you.
The other part of my point of view concerns the ISP’s internal networks. Over these (as a temporary extension to any intelligent customer’s own priority based gateway routing) I see no problem at all with there being priority, perhaps individually (customers who choose to pay more) but preferably universally, given to one service over another. The assumption here again is that the ISP has not provided adequately *internally* for the bandwidth they have sold to customers, my personal view is that this will be eventually solved by demand driven evolution of ISPs but that honouring QoS variables is a reasonable stopgap.
The question I see, and the point at which my opinion still tends toward net-neutrality, is about priority being given over the core linkups, those not owned by the ISP.
The central issue is the idea of people being able to buy higher QoS for packets originating from their IP *irrespective* of the protocol being used. For instance a web host being able to buy higher QoS for their http feed all the way between their host and their final client. This is the dragon I fear.
Possibly some sort of pseudo tunnel, an infrastructure wide reservation of some overhead bandwidth for certain time critical services, exclusively for packets that fit a certain filter, could be acceptable. But (and it’s a big but) IMO if such is necessary then those filters should be protocol based rather than purely IP based. Here I think my opinion maybe leans slightly toward Richard’s. It certainly leaves room for part of it as a possibility. I however still can’t see why better regulation of the dataflow along linkups and the necessary follow-on provision would not be a preferable route. [pun ignored]
Personally I think that such an alteration (if really necessary) should be infrastructure-wide, and paid for universally by customers rather than on a choice / client-by-client basis. My reasoning there revolves around the additional overheads involved in rationing what I think would need to be a global change. If it needs doing then do it, but only do that which is necessary; don’t allow the prioritisation to revolve around money paid, make it revolve around genuine unusability without lower latency. I would hate it to be done on an IP-by-IP basis, and it should only be allowed on things that are truly latency critical. That said, it’s still only a stopgap. If we need the bandwidth then why *can’t* we perishing well pay for it? It’s fine for there to be a max bandwidth through a link that is well in excess of the max-transfer/period-of-contract. Averaging over tens of thousands of links deals with that as far as I can tell.
I stand by my defence of the main point under attack. If there is more average bandwidth being sold (remember the ToS) than there is capacity for, why is this? Why are the profits not being reinvested in the provision of more bandwidth or why is not less (average) bandwidth sold? Does *that* not call for legislation?
I ignore the networking problem presented at the end of the castigation; I’m not sure it’s based on RL quantities, for the reason I’ve given, and even if it is, then surely this would be a good focus for an initial piece of legislation. How is it legal to sell what you don’t have? If it ceased to be legal to sell what you cannot provide, would that not go a long way toward an improvement?
I really want clarification on the question of whether Richard supports higher QoS on an IP-by-IP basis or purely on a protocol-by-protocol basis. This makes a big difference to my feelings on the matter (on whether or not they are essentially cast in stone, or whether I see room for rational debate). An additional clarification would be good, on his feelings about whether it should be done on a customer-by-customer basis (with all the additional overheads and the higher potential for spoofing of the various header flags from a point outside the ISP) or whether it should be done universally with the cost shared by all. My apologies for the length of this posting, but I’ve tried to make myself clearer on the things I’m sure of and the things I think are debatable. I welcome correction; I am truly here to learn. From my POV it’d be an absolute waste of the time invested by myself and others otherwise.
I don’t have time to deal with all of that, Doug, so let me hit the high points. You ask: “If there is more average bandwidth being sold (remember the ToS) than there is capacity for, why is this?”
This is the nature of packet-switched communication networks. They’re designed around the concept that users share a common channel by taking turns. Each packet is sent at the highest possible speed, and the bandwidth of the channel is provisioned at a level high enough to handle average load with little or no delay. As load increases beyond the average, so does delay, but the degradation of performance should be graceful, and at no time does the system become unstable. This contrasts with circuit-switched networks like the traditional phone network, where bandwidth is allocated to each active user in chunks of regular size that only he can use. Packet switching adapts better to the requirements of applications that use bandwidth in bursts, and circuit switching to applications that use it in regular-sized streams.
Now that we want to run traditional bursty data apps and phone apps on the same wire at the same time, we have to combine the concepts of traditional packet- and circuit-switched networks into a single scheme, and we do that by assigning higher priority to the phone packets. These packets aren’t in a race against the data packets, as they’re smaller and more regular. They just want access to the shared medium at regular intervals, and their overall bandwidth consumption is low.
It makes no sense to give priority handling to web packets as in general that would slow them down by enforcing a regular interval between them.
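For the curious, the scheme described above can be illustrated with a toy strict-priority scheduler; the class names and packet sizes below are made up for illustration, not taken from any real router:

```python
# Toy strict-priority scheduler: voice packets are always dequeued before
# bulk data packets. Class names and sizes are invented for illustration.

from collections import deque

queues = {"voice": deque(), "data": deque()}   # "voice" has strict priority

def enqueue(klass, packet):
    queues[klass].append(packet)

def dequeue():
    """Send the next packet: voice first if any is waiting, otherwise data."""
    for klass in ("voice", "data"):
        if queues[klass]:
            return klass, queues[klass].popleft()
    return None

# A burst of web traffic arrives, then a single small voice packet.
for i in range(5):
    enqueue("data", f"web-{i} (1500 bytes)")
enqueue("voice", "rtp-0 (200 bytes)")

print(dequeue())   # ('voice', 'rtp-0 (200 bytes)') -- jumps ahead of the web burst
print(dequeue())   # ('data', 'web-0 (1500 bytes)')
```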
OK, Richard, thanks even for the brief reply. (It’s still Dave BTW, not Doug: hope that doesn’t make a difference.)
If, as it seems, you do support prioritisation exclusively for latency critical applications, then we are over halfway to agreement. (Confirmation in so many words and perhaps a tiny elaboration on what implementation you’d envisage for filtering?)
I sincerely hope I can persuade you to put your name behind a desire to encourage it as a global upgrade rather than trying to do it on a piecemeal IP by IP basis because then you’d have convinced me not only that your PoV is valid, but possibly (as in fact I’m beginning to suspect) that it’s completely right.
I do understand what you’re saying, both about the criticality and about the small proportion of the bandwidth we’re talking about. My phrasing called it a ‘pseudo tunnel’, in the form of a small reservation of overhead bandwidth exclusively for VoIP etc. (maybe for multicast routing too?); is that about right?
The only remaining snag I see (if above assumptions are right) is trying to do it by allowing customers to pay extra for use of that ‘tunnel’. This seems to introduce unnecessary detail into the routers’ filtering requirements. That’s before we consider the needless overheads in trying to bill and assign bit-by-bit [pun liked] something that actually needs to be done universally and in having to dynamically adjust the ‘width’ of the reservation (complexity of which beggars belief) in proportion to the number of people who are buying it.
Just to unfortunately protract a side point, is there really no validity in the partial amelioratory tactic of insisting that what is sold is restricted to what can be supplied? Rather than phrasing it the way ISPs here put it (“You’re buying 2Mbit contended with N other users”) could it not be better described as something like “You’re buying 2Gbit contended with N,000 users with a 2Mbit personal ceiling” and left like that, with the appropriate overall bandwidth being ISP-enforced when not inherent in the route? If such honesty were required, would the problem not diminish? (Remember I can fully see a need for QoS prioritisation over many ISPs’ internal networking.) It’d surely cope with a huge bunch of users suddenly deciding they want to fetch file XYZ.
Finally, do you think that rather than being squashed entirely, the net-neutrality folk should just be re-educated into pushing for a bill which insists on prioritisation only being allowed for the truly time-critical services, and perhaps insisting that it should not be done on a pure IP-by-IP basis? That is certainly my current (slightly altered) perspective. I still think there’s a call for outlawing the dragon I mentioned previously: higher internetwork-wide QoS being granted for specific IPs irrespective of protocol.
I thank you for your time. If it’s any encouragement, you’ve gone a long way toward winning me over to your PoV [heh – not that my ‘approval’ is actually worth anything much beyond that I’ll propagate the idea]. I am certainly not in a rush to see your response. Once again, I’m used to Usenet (from which I’m currently sadly divorced) where a reply can easily take a week without reducing its validity in the slightest. I personally prefer it for the way that quoting the points you’re replying to is automatic, making it so much easier (after snipping) to dissect faulty assertions.
Richard wrote
“It makes no sense to give priority handling to web packets as in general that would slow them down by enforcing a regular interval between them.”
Oops, missed that, and hadn’t thought of it either. Thanks. On thinking about it, if you reply to my last mutterings, could you include clarification on whether it is really impossible for higher throughput to be granted on a ‘pay by IP’ basis under the systems that are being planned? It’s already mentioned by me, so it should fit in easily enough.
Dave says: “I sincerely hope I can persuade you to put your name behind a desire to encourage it as a global upgrade rather than trying to do it on a piecemeal IP by IP basis because then you’d have convinced me not only that your PoV is valid, but possibly (as in fact I’m beginning to suspect) that it’s completely right.”
The Internet is privately owned, so I don’t see any legitimate basis for demanding that each and every NSP and ISP upgrade their links to enhanced QoS services. I hope they would, but there may be those who prefer to leave their service offerings as they are now in order to continue using existing equipment. Going from a single service model to a tiered service model involves an upgrade, and everybody’s finances are different.
I’ve described a rational and non-discriminatory QoS service, but it doesn’t close the door on discriminatory application of packet prioritization. It’s still possible to use legitimate engineering practice toward less-legitimate ends, but it’s very hard to detect this without analysis of market distortion. If Comcast wants to make a deal with Yahoo to deliver Yahoo video faster than Google video, they can certainly use some form of queuing to do that. But it’s not obvious to me that this kind of deal should be banned unless we can determine that it hurts consumers in some way. I think that’s the sort of thing that our FTC monopoly cops are supposed to do.
It’s certainly not a case for the FCC and packet inspectors.
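As an aside, the kind of Comcast/Yahoo queuing deal mentioned above could be as simple as weighted scheduling. A hypothetical sketch, with invented names and weights, just to show the mechanism:

```python
# Toy weighted round-robin: an illustration of how a carrier *could* favour
# one video source over another purely with queue weights.

from collections import deque
from itertools import cycle

queues  = {"yahoo_video": deque(f"y{i}" for i in range(6)),
           "google_video": deque(f"g{i}" for i in range(6))}
weights = {"yahoo_video": 3, "google_video": 1}   # 3:1 in Yahoo's favour

schedule = cycle([name for name, w in weights.items() for _ in range(w)])

sent = []
while any(queues.values()):
    name = next(schedule)
    if queues[name]:
        sent.append(queues[name].popleft())

print(sent)   # Yahoo's backlog drains three times faster than Google's
```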
Quotation of Richard
“The Internet is privately owned, so I don’t see any legitimate basis for demanding that each and every NSP and ISP upgrade their links to enhanced QoS services. I hope they would, but there may be those who prefer to leave their service offerings as they are now in order to continue using existing equipment. Going from a single service model to a tiered service model involves an upgrade, and everybody’s finances are different.” /quote
I suspect you missed the difference in tone between ‘demand’ and ‘encourage’ (the latter being the word I used).
As different carriers realise the common sense behind prioritisation for latency critical services, I believe that the demand driven ‘infectiousness’ of such a policy would do the rest.
Further Quote
“I’ve described a rational and non-discriminatory QoS service, but it doesn’t close the door on discriminatory application of packet prioritization. It’s still possible to use legitimate engineering practice toward less-legitimate ends, but it’s very hard to detect this without analysis of market distortion. If Comcast wants to make a deal with Yahoo to deliver Yahoo video faster than Google video, they can certainly use some form of queuing to do that. But it’s not obvious to me that this kind of deal should be banned unless we can determine that it hurts consumers in some way. I think that’s the sort of thing that our FTC monopoly cops are supposed to do.” /quote
I know nothing about your FTC, but I see legislation aimed specifically and exclusively against such prioritisation as a legitimate and preferable demand for the net-neutrality folk to make. I have to admit I don’t understand why you would not see legislation against such a practice as a desirable way to prevent money from becoming the controlling factor over who receives what at which rate. Bye bye level playing field.
I assume the FTC is something like our monopolies commission. I’ve just had a quick look at their page and it hasn’t told me much. If I’m right then it would only have a right to take action if the companies concerned were linked in some way, rather than just taking an ISP up on the offer of a service. Even if I’m wrong, I’m not sure they could argue that it wasn’t ‘Fair Trade’, or at least I could make a convincing-sounding case that it is.
I’m happy to agree to disagree, though I’d very much like to understand your reasoning.
My current POV is that prioritisation of latency critical services should be (perhaps) gently encouraged, and certainly not legislated against, but that legislation against prioritisation on an IP by IP basis is something which could do only good, and is therefore desirable.
If companies are as fair and even-handed as you seem to believe they are, then it would have no effect; but if they are not, then it would prevent poorer people from being ring-fenced into preferring the web services offered by the companies with the most money (the dreaded exponentially growing giants), who can then pay even more for ever more priority over more and more links.
It would certainly make a good business model, do you not think? Say ‘Budget Broadband @ $5 a month’, with the small print mentioning that it’s subsidised by payments from the larger advertising-based companies for higher priority over the network for their packets. That would make a few people an absolute mint and would rapidly become my absolute nightmare.
I’m puzzled by your position, but time is precious, so maybe that’s just how it goes … Thank you for the replies so far.
As a footnote, I stand by my thinking that ISPs (here and in the USA) should be made to sell no more than they can actually provide were everyone to simultaneously use their connection at its maximum permissible average rate. I hope that’s already how it is. If not, then the possible slight increase in charges would be worth it (AFAIAC) for the rationalisation which should have been there from the word go.
OK, I think we can summarize the Neut position as follows: it’s possible to use QoS in legitimate as well as illegitimate ways, so we have to ban it.
That’s the very definition of “over-broad legislation.”
Perhaps we could summarise your position as “demanding absolute net neutrality is an obviously bad idea because there are definite valid uses for QoS guarantees. Therefore opposition to net neutrality should be total”
Sounds like a declaration of war rather than a desire to demonstrate (to their supporters) that some of the fears are so easily proven false that they invalidate the rest of their case, and that some of the advantages are so obvious that they’re in danger of coming across as overly extreme Luddites. (Basically my position.)
Sadly however, it also sounds like you don’t agree with me that one branch of those fears is based on genuine insight into nasty potential future abuses and that a complete freedom to filter and prioritise data, according to no more than whim (or rather money), is a dangerous thing.
Opposing extremes don’t often produce a sensible medium.
[Just to drag politics in, neocon warmongers versus lunatic muslims is as good an example as any]
We can’t ban all potential abuses without shutting down the Internet, Dave. When I see a bill with a reasonable approach to promoting competition, I’ll support it.