Google’s Telephony Patent Application not Novel

Google has apparently filed an application for a system that allows bandwidth providers to bid on phone calls:

Google’s patent is called “Flexible Communication Systems and Methods” and the abstract says:

“A method of initiating a telecommunication session for a communication device include submitting to one or more telecommunication carriers a proposal for a telecommunication session, receiving from at least one of the one or more of telecommunication carriers a bid to carry the telecommunications session, and automatically selecting one of the telecommunications carriers from the carriers submitting a bid, and initiating the telecommunication session through the selected telecommunication carrier.”
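The abstract reads like pseudocode already: solicit bids, pick a winner, place the call. Here's a minimal Python sketch of that flow — entirely my own invention for illustration, with hypothetical carrier callables and prices; nothing here comes from the actual filing:

```python
# Hypothetical sketch of the bidding flow the abstract describes.
# Carriers are modeled as callables that take a proposal and return
# a bid dict, or None if they decline to bid.

def select_carrier(bids):
    """Automatically select the lowest bid (the abstract's selection step)."""
    valid = [b for b in bids if b is not None]  # carriers may decline
    if not valid:
        raise ValueError("no carrier submitted a bid")
    return min(valid, key=lambda b: b["price_per_minute"])

def initiate_session(proposal, carriers):
    """Submit a proposal to all carriers, collect bids, and pick a winner."""
    bids = [carrier(proposal) for carrier in carriers]  # submit the proposal
    winner = select_carrier(bids)                       # select from the bids
    return winner["carrier"]                            # initiate via the winner
```

The interesting part, from a 1997 perspective or a 2008 one, is not the loop but the signaling needed to run it in real time before a call connects.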

Read the full patent here

The thing I find interesting about this is that I invented a similar technique in 1997, motivated by the desire to get bandwidth-on-demand for video conferences. If this is granted, it certainly won’t survive a court challenge.

I’ll post some details on my invention, which was never patented, shortly.

Comcast files their compliance plan

Today was the deadline for Comcast to tell the FCC how its existing congestion management system works, as well as how its “protocol agnostic” replacement is going to work. To the dismay of some critics, they’ve done just that in a filing that was hand-delivered as well as electronically filed today. It will be posted to the Comcast web site shortly.

The filing corrects some of the false allegations made by critics with respect to privacy, making it very clear that the existing system simply inspects protocol headers (“envelopes”) and not personal data. David Reed in particular got himself worked into a tizzy over the idea that Comcast was deciding which streams to delay based on content, but this is clearly not the case. Inside the IP envelope sits a TCP envelope, and inside that sits a BitTorrent envelope. User data is inside the BitTorrent (or equivalent) envelope, and Comcast doesn’t look at it.
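The nesting is easy to picture in code. Here's a toy model — obviously not Comcast's actual classifier, and the port-range test is just the simplest possible stand-in for real protocol identification — showing that a classifier only ever walks the headers, never the innermost user data:

```python
# Toy model of protocol "envelopes": each layer is a header plus a
# payload that holds the next layer's envelope. Illustration only.
packet = {
    "ip_header": {"src": "10.0.0.1", "dst": "10.0.0.2", "protocol": "TCP"},
    "payload": {
        "tcp_header": {"src_port": 51413, "dst_port": 6881},
        "payload": {
            "bittorrent_header": {"message": "piece"},
            "payload": b"user data -- never inspected",
        },
    },
}

def classify(pkt):
    """Classify by protocol headers only; the user data is never read."""
    tcp = pkt["payload"]["tcp_header"]
    if 6881 <= tcp["dst_port"] <= 6889:  # the classic BitTorrent port range
        return "p2p"
    return "other"
```

Real classifiers are cleverer than a port check, of course, but the structural point stands: identifying a protocol means reading envelopes, not mail.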

The current system sets a bandwidth quota for P2P, and prevents P2P traffic as a group from exceeding this quota (about 50% of total upstream bandwidth) by tearing down requested new uni-directional upload (i.e., file-server-like) streams with the TCP Reset bit. The system is a bit heavy-handed, but reserving 50% of the network for one class of application seems pretty reasonable, given that no more than 20% of customers use P2P at all.

Nonetheless, the new system will not look at any headers, and will simply be triggered by the volume of traffic each user puts on the network and the overall congestion state of the network segment. If the segment goes over 70% utilization in the upload direction for a fifteen-minute sample period, congestion management will take effect.

In the management state, traffic volume measurement will determine which users are causing the near-congestion, and only those using high amounts of bandwidth will be managed. The way they’re going to be managed is going to raise some eyebrows, but it’s perfectly consistent with the FCC’s order.

High-traffic users – those who’ve used over 70% of their account’s limit for the last fifteen minutes – will have all of their traffic de-prioritized for the next fifteen minutes. While de-prioritized, they still have access to the network, but only after the conforming users have transmitted their packets. So instead of bidding on the first 70% of network bandwidth, they’ll essentially bid on the 30% that remains. This will be a bummer for people who are banging out files as fast as they can only to have a Skype call come in. Even if they stop BitTorrent, the first fifteen minutes of Skyping are going to be rough. A more pleasant approach would be to let excessive users out of QoS jail with credit for good behavior – if their utilization drops to Skype level, let them out in a few seconds, because it’s clear they’ve turned off their file-sharing program. This may be easier said than done, and it may raise the ire of Kevin Martin, given how irrational he has been in his anti-cable vendetta.
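The trigger logic described above is simple enough to sketch. The 70% thresholds and fifteen-minute window come from the filing as summarized here; the function itself is my own hypothetical rendering, not Comcast's code:

```python
# Hedged sketch of the "fifteen/fifteen" management trigger.
# Thresholds per the filing as summarized; everything else is invented.

SEGMENT_THRESHOLD = 0.70  # upstream utilization that triggers management
USER_THRESHOLD = 0.70     # fraction of provisioned rate marking a heavy user

def users_to_deprioritize(segment_utilization, usage_by_user):
    """Return the users whose traffic is de-prioritized for the next window.

    segment_utilization: average upstream utilization over the last
        fifteen minutes, as a fraction (0..1).
    usage_by_user: {user: fraction of provisioned upstream rate used
        over that same window}.
    """
    if segment_utilization <= SEGMENT_THRESHOLD:
        return set()  # segment isn't congested, so nobody is managed
    return {u for u, used in usage_by_user.items() if used > USER_THRESHOLD}
```

The "credit for good behavior" refinement suggested above would amount to re-running this check on a much shorter window once a user lands in the managed set.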

The user can prevent this situation from arising, of course, if he wants to. All he has to do is set the upload and download limits in BitTorrent low enough that he doesn’t consume enough bandwidth to land in the “heavy user” classification and he won’t have to put up with bad VoIP quality. I predict that P2P applications and home gateways are going to incorporate controls to enforce “Comcast friendly” operation to prevent de-prioritization. There are other more refined approaches to this problem, of course.

At the end of the day, Comcast’s fifteen/fifteen system provides users with the incentive to control their own bandwidth appetites, which makes it an “end-to-end” solution. The neutralitarians should be happy about that, but it remains to be seen how they’re going to react.

It looks pretty cool to me.

UPDATE: Comcast-hater Nate Anderson tries to explain the system at Ars Technica. He has some of it right, but doesn’t seem to appreciate any of its implications. While the new system will not look at protocol headers (the evil “Deep Packet Inspection” that gets network neophytes and cranks so excited), and it won’t use TCP Resets, that doesn’t mean that P2P won’t be throttled; it will.

That’s simply because P2P contributes most of the load on residential networks. So if you throttle the heaviest users, you’re in effect throttling the heaviest P2P users, because the set of heavy users and the set of heavy P2P users is the same set. So the “disparate impact” will remain even though the “disparate treatment” will end.

But the FCC has to like it, because it conforms to all of Kevin Martin’s rabbit-out-of-the-hat rules. The equipment Comcast has had to purchase for this exercise in aesthetic reform will have utility down the road, but for now it’s simply a tax imposed by out-of-control regulators.

Ars Technica botches another story

Why is it so hard for the tech press to report on the broadband business with some semblance of accuracy? I know some of this stuff is complicated, but if it’s your business to explain technology and business developments to the public, isn’t it reasonable to suppose you’re going to get the facts right most of the time?

Case in point is Matthew Lasar at Ars Technica, the hugely popular tech e-zine that was recently purchased by Conde Nast/Wired for $25 million, healthy bucks for a web site. Lasar is a self-appointed FCC watcher who seems to consistently botch the details on targets of FCC action. The most recent example is a story about a clarification to AT&T’s terms of use for its U-Verse triple play service. The update advises customers that they may see a temporary reduction in their Internet download speed if they’re using non-Internet U-Verse television or telephone services that consume a lot of bandwidth. Lasar has no idea what this means, so he turns to Gizmodo and Public Knowledge for explanation, and neither of them gets it either. So he accepts a garbled interpretation of some AT&T speak filtered through Gizmodo’s misinterpretation as the gospel truth of the matter:

Ars contacted AT&T and was told by company spokesperson Brad Mays that the firm has no intention of “squeezing” its U-verse customers. “It’s more a matter of the way data comes into and travels around a home,” Mays said. “There are things (use of PCs, video, etc.) that can impact the throughput speed a customer gets. We are not doing anything to degrade the speed, it’s just a fact of the way data travels.”

The AT&T guy is trying to explain to Lasar that U-Verse TV uses the same cable as U-Verse Internet, but U-Verse TV has first call on the bandwidth. The cable’s bandwidth is roughly 25 Mb/s, and HDTV streams are roughly 8 Mb/s. If somebody in your house is watching two HDTV shows, 16 of that 25 is gone, and the Internet service can only use the remaining 9 Mb/s, which is a step down from the 10 Mb/s it can get if you’re running one HDTV stream alongside an SDTV stream.
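The arithmetic is back-of-the-envelope stuff. Here it is spelled out, using the rough figures above — actual U-Verse line rates vary with loop length, so treat these constants as illustrative only:

```python
# U-Verse headroom arithmetic with the ballpark figures from the text.
# Real provisioning varies by loop length; these numbers are illustrative.

LINE_CAPACITY_MBPS = 25  # approximate total capacity of the U-Verse pipe
HD_STREAM_MBPS = 8       # approximate rate of one HDTV stream

def internet_headroom(hd_streams, other_tv_mbps=0):
    """Bandwidth left for Internet after TV takes its prioritized share."""
    tv_load = hd_streams * HD_STREAM_MBPS + other_tv_mbps
    return max(LINE_CAPACITY_MBPS - tv_load, 0)
```

Two HD streams leave 9 Mb/s for Internet; no TV at all leaves the full 25. That's the whole "squeeze" AT&T was clarifying.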

This isn’t a very complicated issue, and it shouldn’t be so muddled after multiple calls to AT&T if the writers in question were mildly up-to-speed on IPTV.

Lasar botched another recent story on Comcast’s agreement with the Florida Attorney General to make its monthly bandwidth cap explicit as well, claiming that Comcast had adopted the explicit cap in a vain attempt to avoid a fine:

Ars contacted the Florida AG about this issue, and received the following terse reply: “We believe the change pursuant to our concerns was posted during our investigation.” When asked whether this means that when the AG’s probe began, Comcast didn’t post that 250GB figure, we were told that the aforementioned one sentence response explains everything and to have a nice day.

In fact, the cap was part of its agreement with Florida, as the AG’s office explains on its web site:

Under today’s settlement, reached with Comcast’s full cooperation, the company has agreed not to enforce the excessive use policy without prior clear and conspicuous disclosure of the specific amount of bandwidth usage that would be considered in violation of the policy. The new policy will take effect no later than January 1, 2009.

And everybody who follows the industry knows that. The Comcast cap is also less meaningful than Lasar reports, since Comcast says they’re only going to get tough on customers in excess of the cap who are also in the top 1% of bandwidth consumers, so simply going over 250 GB won’t get you in trouble at some future date when everyone is doing it.

The tech press in general and Ars Technica in particular needs to upgrade its reporting standards. It’s bad enough when Ars trots out opinion pieces on network neutrality by Nate Anderson thinly disguised as reporting; most sensible readers understand that Anderson is an advocate, and take his “reporting” with the necessary mix of sodium chloride. But Anderson doesn’t consistently get his facts wrong the way Lasar does.

It would be wise for Ars to spend some of the Conde Nast money on some fact-checkers, the better to avoid further embarrassment. We understand that Gizmodo is simply a gadget site that can’t be counted on for deep analysis, and that Public Knowledge is a spin machine, but journalists should be held to a higher standard.

Our Efforts Bearing Fruit

Regular readers are aware of my Op-Ed criticizing Google’s rapacious ways, written for the San Francisco Chronicle and subsequently re-printed in the Washington Times. That doesn’t happen too often, BTW. The Wall St. Journal reports that the Justice Department is paying attention:

WASHINGTON — The Justice Department has quietly hired one of the nation’s best-known litigators, former Walt Disney Co. vice chairman Sanford Litvack, for a possible antitrust challenge to Google Inc.’s growing power in advertising.

Mr. Litvack’s hiring is the strongest signal yet that the U.S. is preparing to take court action against Google and its search-advertising deal with Yahoo Inc. The two companies combined would account for more than 80% of U.S. online-search ads.

Google shares tumbled 5.5%, or $24.30, to $419.95 in 4 p.m. trading on the Nasdaq Stock Market, while Yahoo shares were up 18 cents to $18.26.

For weeks, U.S. lawyers have been deposing witnesses and issuing subpoenas for documents to support a challenge to the deal, lawyers close to the review said. Such efforts don’t always mean a case will be brought, however.

An 80% market share in search ads is not good for democracy, of course, so we applaud the impending suit in advance.

Your broadband service is going to get more expensive

See my article in The Register to understand why your broadband bill is going to rise:

Peer-to-peer file sharing just got a lot more expensive in the US. The FCC has ordered Comcast to refrain from capping P2P traffic, endorsing a volume-based pricing scheme that would “charge the most aggressive users overage fees” instead. BitTorrent, Inc. reacted to the ruling by laying-off 15 per cent of its workforce, while network neutrality buffs declared victory and phone companies quietly celebrated. Former FCC Chairman Bill Kennard says the legal basis of the order is “murky.”

Comcast will probably challenge on grounds that Congress never actually told the regulator to micro-manage the Internet. In the absence of authority to regulate Internet access, the Commission has never had a need to develop rules to distinguish sound from unsound management practice. The order twists itself into a pretzel in a Kafkaesque attempt to justify sanctions in the absence of such rules.
Technically speaking, they’re very confused

The FCC’s technical analysis is puzzling, to say the least.

The order describes an all-powerful IP envelope, seeking to evoke an emotional response to Deep Packet Inspection. The order claims the DPI bugaboo places ISPs on the same moral plane as authoritarian regimes that force under-aged athletes into involuntary servitude. But this is both uninformed and misleading. Network packets actually contain several “envelopes”, one for each protocol layer, nested inside one another like Russian dolls. Network management systems examine all envelopes that are relevant, and always have, because there’s great utility in identifying protocols.

The FCC’s order is especially bad for people who use both P2P and Skype. The comments lack the usual snarkiness, and I don’t know if that’s good or bad.

UPDATE: Right on cue, a price war is breaking out between cable and phone companies, according to the Wall St. Journal. I wonder if the converts are going to be the high-volume users worried about the caps, or the nice, low volume grannies every carrier wants.

FCC finally issues Comcast memo

Kevin Martin and his Democratic Party colleagues at the FCC have issued their Comcast order, available at this link. They find some novel sources of authority and apply some interesting interpretations of the facts. I’ll have some detailed commentary after I’ve read it all and checked the footnotes. It’s an amusing exercise, if you like that sort of thing.

For a good summary of the order, see IP Democracy.

Testing Internet capacity

NBC is streaming the Olympics over the Internet, in multiple resolutions, in what amounts to a massive test of the ability of the Internet fabric to handle load. Nothing on this scale has been done before, although the BBC did stream the last Olympics inside the UK using multicast. So we’re going to learn just how realistic net neutrality really is:

This will be the biggest test today of Internet viewers’ appetite for streaming video of live sporting events – and of the Internet’s ability to handle that.

If the Internet service providers’ networks start getting maxed out, you can probably expect some “rate shaping” or other bandwidth management techniques to come into play, Eksten notes. After all, you still have to get the e-mail through for non-sports fans.

Which means not just technologists like Eksten but network neutrality proponents should spend a lot of time looking at logs and statistical reports from the service providers, after this is all over to see how the streaming affected the Internet’s fabric of networks.

Stay tuned, if you can.

No skin in the game

An experiment in publicly-owned fiber to the home in Utah was on the brink of bankruptcy in April. The project was oversold and underfunded, and found itself at an impasse where it had to go back to the taxpayers for a bailout or liquidate. They built it, but nobody came. A big part of the problem, apparently, is that the project was saddled with a structural separation ideal that forced the public infrastructure to act as a wholesaler with third parties providing retail services. See The case for UTOPIA and iProvo: Double down or cut bait?

From the beginning, UTOPIA and iProvo either chose, or were saddled with, a business model that has proved least successful in fiber rollouts, analysts say.

In 2001, the state Legislature passed the Utah Municipal Cable Television and Public Telecommunications Services Act, which allows cities to construct telecommunication infrastructure but not become the retail service provider for those systems. Instead, they have to use a wholesale model in which they build the digital pipe and then lease the lines to retail service providers such as Mstar.

That leads to underselling of the system and friction between the municipality, which needs to see a return on its multi-million dollar investment, and the service providers, which haven’t risked as much, says Michael Render, president of RVA, a market research company that focuses on private and public fiber systems.

“They don’t have skin in the game,” he said. “The more difficult ones have been the wholesale systems such as iProvo and UTOPIA.”

Projects like this are similar to publicly-financed sports arenas. They’re great if you happen to be a fan of the sport, but not so great if you’re simply a hapless taxpayer footing the bill for somebody else’s entertainment. Not to mention the mismanagement that goes hand-in-glove with free money. In order for muni networking to be successful, it apparently needs to be run as a hard-core vertically-integrated monopoly, and that’s pretty distasteful.

The resolution for iProvo was a sale to a private company:

Leaders of Provo City and Broadweave Networks were harder to find Monday (June 30th) than cheap gas.

They were holed up in city offices hammering out the details of the $40.6 million deal that privatizes the fiber optic network, in turn taking the money-losing venture off the city’s hands. The deadline was Monday and by now, Broadweave owns the system — we think.

“I’m holding my breath hoping that it gets done,” said Councilman George Stewart, who was also awaiting word Monday. There was supposed to be a small ceremony, but nothing had been made public, even to council members, by 5 p.m. Monday. The council approved the sale in June.

The sale was concluded in July.

UPDATE: UTOPIA lumbers along, with only one city, Payson, voting to leave the consortium. Take-up rates have been much lower than anticipated, due to good competitive options, but supporters continue to have high hopes for its ultimate success.

For an up-beat view, see the Free UTOPIA wiki and its links. One bright spot is a citizen advisory board, U-CAN, which brings fresh ideas to the civil servants.

Google is Dead

They don’t know it yet, of course. I’ve just checked the new alternative to Google, Cuil (pronounced “cool”) and found it amazingly accurate. They show me as the number 1 Richard Bennett and the number 1 Bennett. Very sweet, even though I’m only the number 12 Richard; that gives me something to strive for.

UPDATE: Esteemed BITS blogger Saul Hansell interviewed Cuil president Anna Patterson on her “36 hours of fame” and got an explanation of the site’s first-day troubles: “We were overwhelmed with traffic that was not the standard pattern,” Ms. Patterson said. “People were looking for their names a lot.”

Doh.

Upgrading to IPv6

Speaking of Comcast, the cable giant is offering an interesting proposal to the standards community concerning the long overdue transition from IPv4 to IPv6, using NATs and tunnels:

Comcast is upgrading its networks from IPv4, the Internet’s main communications protocol, to the standard known as IPv6. IPv4 uses 32-bit addresses and can support 4.3 billion devices connected directly to the Internet. IPv6 uses 128-bit addresses and supports an unlimited number of devices.

At issue is how Comcast will support new customers when IPv4 addresses run out, which is expected in 2011. Comcast can give these customers IPv6 addresses, but their home computers, printers, gaming systems and other Internet-connected devices are likely to support only IPv4.

Comcast engineers have come up with a solution to this problem, dubbed Dual-Stack Lite, which it says is backwards compatible with IPv4 and can be deployed incrementally.

Comcast outlined Dual-Stack Lite in a draft document published by the Internet Engineering Task Force on July 7. Dual-Stack Lite will be discussed at an IETF meeting in Dublin scheduled for later this month.

It’s a reasonable approach, putting the onus of dual stacks on the carrier NATs and home gateways where it belongs. It’s fortunate the IETF has companies like Comcast to give it guidance.
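For scale, the "unlimited" in the quote really means 2^128 addresses — not literally unlimited, but astronomically large. The numbers check out in a couple of lines:

```python
# The address-space arithmetic behind the quote.
ipv4_addresses = 2 ** 32   # the "4.3 billion" figure for 32-bit addresses
ipv6_addresses = 2 ** 128  # "unlimited" in the quote; merely astronomical

# IPv6 has roughly 7.9e28 times as many addresses as IPv4.
expansion_factor = ipv6_addresses // ipv4_addresses  # 2 ** 96
```

That gulf is exactly why Dual-Stack Lite's strategy — share scarce IPv4 addresses behind carrier-grade NAT while the access network runs IPv6 — makes sense as a transition mechanism.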

H/T CircleID.

UPDATE: Iljitsch van Beijnum has some further illumination on the Ars Technica blog, without using the “C” word; they don’t go for that sort of thing on Ars.