It’s rare that I read anything by Vint Cerf these days that doesn’t make me laugh. He’s taken to making outlandish statements that foment PR crises for Google, only to be softened and re-framed a few days later. Writing on the Google Public Policy Blog, he simply buries us in the obvious and contradicts himself:
At least one proposal has surfaced that would charge users by the byte after a certain amount of data has been transmitted during a given period. This is a kind of volume cap, which I do not find to be a very useful practice. Given an arbitrary amount of time, one can transfer arbitrarily large amounts of information. Rather than a volume cap, I suggest the introduction of transmission rate caps, which would allow users to purchase access to the Internet at a given minimum data rate and be free to transfer data at least up to that rate in any way they wish.
And here I thought pricing tiers were standard practice all over the world. But he’s obviously not talking about that so much as providing a Committed Information Rate for low-cost residential Internet access, like the much pricier business accounts have. You can tell who pays the bills in the Cerf household.
He also does the Kevin Martin two-step, applauding ISPs for raising the priority of VoIP:
In my view, Internet traffic should be managed with an eye towards applications and protocols. For example, a broadband provider should be able to prioritize packets that call for low latency (the period of time it takes for a packet to travel from Point A to Point B), but such prioritization should be applied across the board to all low latency traffic, not just particular application providers.
…and then slamming the means by which this is done:
Over the past few months, I have been talking with engineers at Comcast about some of these network management issues. I’ve been pleased so far with the tone and substance of these conversations, which have helped me to better understand the underlying motivation and rationale for the network management decisions facing Comcast, and the unique characteristics of cable broadband architecture. And as we said a few weeks ago, their commitment to a protocol-agnostic approach to network management is a step in the right direction.
So prioritizing is good, but not prioritizing is better? These people need to take some logic courses.
But I’m being too mean. Adam Thierer finds something to like about Cerf’s statesmanship:
But we know that countless more technical disputes will arise in the future at every layer of the Internet — not just with Comcast and BitTorrent. Thus, if we are really going to achieve “a broader dialogue and cooperation across industries,” then what we really need is the equivalent of a multilateral trade negotiating process or forum to achieve sensible resolutions to complex technical difficulties surrounding Internet network management.
I am not prepared to say whether a new, formal organization is needed to accomplish this or if existing institutions and individuals (academics, trade associations, etc.) might be able to work together to make this happen. For example, and I am just thinking out loud here so don’t quote me on this, what if we had the Internet Society working in conjunction with several major industry trade associations and some respected academic institutions to form some sort of collaborative, dialogue-oriented dispute resolution process? Sort of a GATT or WTO for technical Internet dispute resolution.
Certainly that would be preferable to a politicized FCC taking over the show and making all these technical decisions, no? I’d be interested in hearing some input from others.
A relevant organization is not a bad idea.
Vint Cerf is contradicting his own Google Policy people. At the Innovation 2008 forum, both Vuze and Google representatives stated that volume caps would be a preferable way to manage the Internet.
As for Cerf calling for a minimum speed, there’s nothing wrong with selling a minimum speed to consumers, and there’s nothing wrong with selling a burst speed to consumers. I’ve always advocated the disclosure of minimum and maximum speeds. The problem with people like Lessig and the Free Press is that they believe every advertised speed must be the minimum speed, which is just plain wrong.
Our ISP actually does offer a minimum speed. We WANT users to be able to access content at a satisfying rate, and we ask them to call us if that (testable) speed is ever not available to them.
However, if a user ran at that minimum speed 24 hours a day, he or she would cost us more for bandwidth than he or she is paying us per month for service.
Therefore, we also impose a “duty cycle” limitation. In the case of residential Internet, that limit is 10%, because we can then do 5:1 oversale and not have problems with jitter, latency, etc. This is necessary to offer attractive prices.
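The duty-cycle and oversale figures above imply a simple capacity calculation. Here is a back-of-the-envelope sketch of it; the 10% duty cycle and 5:1 oversale ratio come from the comment, while the subscriber count and per-user rate are made-up placeholders, not this ISP’s actual numbers:

```python
# Sketch of the oversubscription arithmetic described above.
# users and min_rate_mbps are illustrative placeholders.

users = 100            # subscribers sharing a link (hypothetical)
min_rate_mbps = 10.0   # advertised minimum speed per user (hypothetical)
duty_cycle = 0.10      # each residential user active at most 10% of the time
oversale = 5           # 5:1 oversubscription ratio

# Capacity provisioned: enough for 1 in 5 users at full rate simultaneously.
capacity_mbps = users * min_rate_mbps / oversale

# Expected aggregate demand if everyone stays within the 10% duty cycle.
expected_demand_mbps = users * min_rate_mbps * duty_cycle

print(f"capacity: {capacity_mbps} Mbps")                # capacity: 200.0 Mbps
print(f"expected demand: {expected_demand_mbps} Mbps")  # expected demand: 100.0 Mbps
```

With these numbers the provisioned capacity is double the expected demand, which is the headroom that keeps jitter and latency under control while still allowing a much lower price than dedicated bandwidth.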
We also offer business service, in which the bandwidth is just capped at a maximum and the user can push it to that limit 24/7. This costs more — naturally — because we have to have that amount of bandwidth available at all times and must pay for it whether the user is consuming it or not.
Neither of these pricing schemes is “evil” — in fact, the ability to pay less for oversold bandwidth is very attractive to consumers. Why ban any of them?
Brett, the minimum speed you offer is still based on a statistical likelihood that those speeds will be delivered. As you pointed out, you’re using a duty cycle on volume, which lets you deliver those speeds most of the time, but it’s not a certainty. The absolute minimum speed is your actual backhaul speed divided by the number of users, though it’s extremely unlikely that all your users will be active at the same time.
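The distinction drawn here can be put in numbers. A minimal sketch, using entirely hypothetical figures for backhaul capacity, subscriber count, and concurrency:

```python
# Worst-case vs. statistical minimum speed, per the argument above.
# All figures are hypothetical placeholders, not any real ISP's numbers.

backhaul_mbps = 200.0   # ISP's actual upstream capacity (hypothetical)
users = 100             # subscribers sharing that backhaul (hypothetical)

# Absolute floor: every subscriber transmitting simultaneously.
absolute_min_mbps = backhaul_mbps / users
print(f"absolute minimum: {absolute_min_mbps} Mbps")  # absolute minimum: 2.0 Mbps

# In practice only a fraction of users are active at once, so the
# delivered "minimum" is a statistical guarantee, not a hard one.
concurrency = 0.20      # assume at most 20% of users active together
likely_min_mbps = backhaul_mbps / (users * concurrency)
print(f"likely minimum: {likely_min_mbps} Mbps")      # likely minimum: 10.0 Mbps
```

The gap between the 2 Mbps hard floor and the 10 Mbps statistical floor is exactly why an advertised minimum can be honest without being an absolute guarantee.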