No deal


Google has announced an end to its monopolistic advertising agreement with Yahoo!:

However, after four months of review, including discussions of various possible changes to the agreement, it’s clear that government regulators and some advertisers continue to have concerns about the agreement. Pressing ahead risked not only a protracted legal battle but also damage to relationships with valued partners. That wouldn’t have been in the long-term interests of Google or our users, so we have decided to end the agreement.

This is good. But Google didn’t strike out completely yesterday, as it successfully bent the ear of the FCC toward wasting the white spaces on its hare-brained “Wi-Fi without testosterone” scheme. You win some, you lose some.


Congratulations, Phillies


A million fans came to the parade in Philly on Friday. I’d say baseball is still the National Pastime. This was a pretty decent World Series, apart from the Philly weather and the inept umpiring. I wanted the Rays to win, but the result’s not exactly heart-breaking either. Comcast had a lot to do with it, apparently, which must rankle Mr. NASCAR, Kevin Martin, whose car has crashed.

[Photo: Million Fan March]

The Trouble with White Spaces


Like several other engineers, I’m disturbed by the white spaces debate. The White Space Coalition, and its para-technical boosters, argue something like this: “The NAB is a tiger, therefore the White Spaces must be unlicensed.” And they go on to offer the comparison with Wi-Fi and Bluetooth, arguing as Tom Evslin does on CircleID today that “If we got a lot of innovation from just a little unlicensed spectrum, it’s reasonable to assume that we’ll get a lot more innovation if there’s a lot more [unlicensed] spectrum available.”

According to this argument, Wi-Fi has been an unqualified success in every dimension. People who make this argument haven’t worked with Wi-Fi or Bluetooth systems in a serious way, or they would be aware that there are in fact problems, serious problems, with Wi-Fi deployments.

For one thing, Wi-Fi systems are affected by sources of interference they can’t detect directly, such as FM baby monitors, cordless phones, and wireless security cameras. Running Wi-Fi on the same channel as one of these devices causes extremely high error rates. If 2.4 and 5.x GHz devices were required to emit a universally detectable frame preamble, much of this nonsense could be avoided.

And for another, we have the problem of newer Wi-Fi devices producing frames that aren’t detectable by older gear (especially 802.11 and 802.11b) without a protection overhead that reduces throughput substantially. If we could declare anything older than 802.11a and .11g illegal, we could use the spectrum we have much more efficiently.

And for a third, we don’t have enough adjacent channel spectrum to use the newest version of Wi-Fi, 40 MHz 802.11n, effectively in the 2.4 GHz band. Speed inevitably depends on channel width, and the white spaces offer little dribs and drabs of spectrum all over the place, much of it in non-adjacent frequencies.

But most importantly, Wi-Fi is the victim of its own success. As more people use Wi-Fi, we have to share the limited number of channels across more access points, and they are not required to share channel space with each other in a particularly efficient way. We can certainly expect a lot of collisions, and therefore packet loss, from any uncoordinated channel access scheme like Wi-Fi’s when it’s deployed on a large geographic scale. This is the old “tragedy of the commons” scenario.
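To put a rough number on that, here’s a back-of-envelope sketch in Python (my own toy model, not a simulation of real CSMA/CA behavior): treat each station as transmitting in a given slot with some fixed probability and watch how quickly the odds of stepping on another transmission grow.

    def collision_probability(stations, p_transmit=0.1):
        """Chance that a station's frame overlaps at least one other
        transmission in the same slot -- a slotted-ALOHA-style
        simplification, not actual 802.11 arbitration."""
        return 1 - (1 - p_transmit) ** (stations - 1)

    for n in (2, 5, 10, 25, 50):
        print(f"{n:>3} stations on one channel -> "
              f"{collision_probability(n):.0%} chance a given frame collides")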

The problem of deploying wireless broadband is mainly a tradeoff of propagation, population, and bandwidth. The larger the population your signal covers, the greater the bandwidth needs to be in order to provide good performance. The nice thing about Wi-Fi is its limited propagation, because it permits extensive channel re-use without collisions. If the Wi-Fi signal in your neighbor’s house propagated twice as far, it would have four times as many chances to collide with other users. So high power and great propagation aren’t an unmitigated good.
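The geometry behind that claim is easy to check with a few lines of arithmetic; the neighbor density below is a number I’ve invented purely for illustration.

    import math

    def coverage_area_km2(radius_m):
        """Area reached by a signal with the given useful range."""
        return math.pi * radius_m ** 2 / 1e6

    neighbors_per_sq_km = 400      # hypothetical density of nearby Wi-Fi networks

    for r in (30, 60):             # doubling the useful range...
        area = coverage_area_km2(r)
        print(f"range {r} m -> {area:.4f} km^2 covered, "
              f"~{neighbors_per_sq_km * area:.0f} overlapping neighbors")
        # ...roughly quadruples the coverage area, and with it the
        # number of networks you can collide with.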

The advantage of licensing is that the license holder can apply authoritarian rules that ensure the spectrum is used efficiently. The disadvantage is that the license holder can over-charge for the use of such tightly-managed spectrum, and needs to in order to pay off the cost of his license.

The FCC needs to move into the 21st century and develop some digital rules for the use of unlicensed or lightly-licensed spectrum. The experiment I want to see concerns the development of these modern rules. We don’t need another Wi-Fi; we already know how that worked out.

So let’s not squander the White Spaces opportunity with another knee-jerk response to the spectre of capitalism. I fully believe that people like Evslin, the White Space Coalition, and Susan Crawford are sincere in their belief that unlicensed White Spaces would be a boon to democracy; it’s just that their technical grasp of the subject matter is insufficient for their beliefs to amount to serious policy.

Google open-sources Android


I lost my Blackberry Curve somewhere in England last week, so I ordered an HTC G1 from T-Mobile as a replacement. The Curve doesn’t do 3G, so it’s an obsolete product at this point. And as I’m already a T-Mobile customer (I chose them for the Wi-Fi capability of their Curves), the path of least resistance to 3G goes through the G1. Just yesterday I was explaining to somebody that Android wasn’t really open source, but Google was apparently listening and decided to make a liar of me by open-sourcing Android:

With the availability of Android to the open-source community, consumers will soon start to see more applications like location-based travel tools, games and social networking offerings being made available to them directly; cheaper and faster phones at lower costs; and a better mobile web experience through 3G networks with richer screens. The easy access to the mobile platform will not only allow handset makers to download the code, but to build devices around it. Those not looking to build a device from scratch will be able to take the code and modify it to give their devices more of a unique flavor.

“Now OEMs and ODMs who are interested in building Android-based handsets can do so without our involvement,” Rich Miner, Google’s group manager for mobile platforms, told us earlier today. Some of these equipment makers are going to expand the role of Android beyond handsets.

This is good news, of course. I haven’t enjoyed the fact that T-Mobile sat between me and RIM for Blackberry software upgrades. The first add-on app that I’d like to see for the G1 is something to allow tethering a laptop to 3G via Bluetooth. I could tether the Curve, but as it only supports EDGE, it wasn’t incredibly useful.

In a more perfect world, I’d prefer the Treo Pro over the G1, but it doesn’t work on T-Mobile’s crazy array of AWS and normal frequencies, and is also not subsidized, so the G1 is a better deal. The Blackberry Storm is probably a better overall device than the G1, but it’s exclusive to Verizon so I would have had to pay a $200 early termination fee to get it. These phones are mainly for fun, so paying a fee to leave a carrier I basically like makes it all too serious.

Obama’s CTO short list


According to Business Week, Obama’s CTO will be one of these guys:

Among the candidates who would be considered for the job, say Washington insiders, are Vint Cerf, Google’s (GOOG) “chief internet evangelist,” who is often cited as one of the fathers of the Internet; Microsoft (MSFT) chief executive officer Steve Ballmer; Amazon (AMZN) CEO Jeffrey Bezos; and Ed Felten, a prominent professor of computer science and public affairs at Princeton University.

I can’t see Ballmer taking this job when he’s having so much fun, but I imagine any of the others would bite. Trouble is, they’re mostly business guys rather than tech guys, so it’s not an elite group. I’d have to go with Felten, since he has actual technical knowledge as well as a blog. I’ve debated him about net neutrality, of course.


Europe’s Choice


Andrew Orlowski explains the state of Internet regulation in both the US and Europe in The Register:

For almost twenty years, internet engineers have persuaded regulators not to intervene in this network of networks, and phenomenal growth has been the result. Because data revenues boomed, telecoms companies which had initially regarded packet data networking with hostility, preferred to sit back and enjoy the returns.

But that’s changing fast. Two months ago the US regulator, which scrupulously monitors public radio for profanity, and which spent months investigating a glimpse of Janet Jackson’s nipples, decided it needed to start writing technical mandates. And so off it went.

Unnoticed by almost everyone, so did the EU.

“It’s the revenge of the unemployed Telecomms Regulator”, one seasoned observer in Brussels told us this week. “The internet really put them out of business. Now they’re back.”

The Internet is indeed the most lightly-regulated network going, and it’s the only one in a constant state of improvement. Inappropriate regulation – treating the Internet like a telecom network – is the only way to put an end to that cycle.

A Turgid Tale of Net Neutrality


An article by Glenn Derene on net neutrality in Popular Mechanics is getting a lot of attention this week. It attempts to define net neutrality – always a perilous task – and to contrast the positions of our two presidential candidates on it:

…there’s no accepted definition of network neutrality itself. It is, in fact, more of a networking philosophy than a defined political position. A pure “neutral” network is one that would treat all content that traveled across it equally. No one data packet would be prioritized above another. Image files, audio files, a request from a consumer for a web page—all would be blindly routed from one location to another, and the network would neither know nor care what kind of data was encompassed in each packet. For most but not all kinds of files, that’s how it works now.

When they were created, TCP/IP protocols were not intended to discriminate routinely between packets of data. The idea was to maintain a “best effort” network, one that moved packets from place to place in an effort to maximize overall throughput. But the protocols did allow for discrimination when it was needed. “Even the very first design for IP, back in 1980, had a “type of service” field, intended to provide different levels of traffic priority in a military setting,” says John Wroclawski, the director of the computer networks division at the University of Southern California’s revered Information Sciences Institute.

“The big question is not ‘can you do this technically,'” Wroclawski says. “It’s ‘how do you decide who to favor?'” In today’s multimedia-saturated Internet, streams of time-sensitive voice and video data are routinely prioritized over nonsequential data transfers such as Web pages. If one bit doesn’t follow another in a videoconference, for instance, the stream falls apart. For the most part, even proponents of net neutrality are okay with that level of discrimination.

This passage illustrates the problem with the kind of hardcore neutrality that was bandied about prior to the introduction of bills in the Congress to mandate fair treatment of network traffic, and it misses the point of a non-discriminatory network. There’s nothing wrong with prioritizing packets according to application requirements, and it would be silly not to do so. That’s one of the reasons that the IP header has a TOS field, as the quote indicates. The problem of who sets the TOS (actually DSCP in the current iteration of IP) is also not at all troubling – the application does it. So a proper definition of net neutrality is to treat all packets with the same requirements the same way, regardless of their origin, destination, or the application that generated them. And in fact that’s what the bills required: they didn’t ban QoS, they banned fees for QoS, embracing a flat-rate billing model.
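As an aside on the mechanics, marking packets really is that simple for the application. Here’s a minimal sketch (my own, assuming a Linux host where the IP_TOS socket option is available) of a VoIP-style program marking its own traffic with the Expedited Forwarding code point:

    import socket

    EF_DSCP = 46                    # Expedited Forwarding (RFC 3246)
    tos_byte = EF_DSCP << 2         # DSCP occupies the upper six bits of the old TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

    # Every datagram sent on this socket now carries the EF marking; routers
    # that honor DiffServ can queue it ahead of bulk traffic.
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))   # documentation address, for illustration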

And that ban is a problem, of course. If we’re going to allow carriers to work with users to prioritize packets, which we should, we should also allow them to create service plans for this kind of treatment, and it should be legal for the carriers to sell QoS services to third parties (think VoIP providers) that would take effect when the consumer hasn’t purchased any QoS services. The problem of applications that set all their packets to highest priority is controlled by establishing account quotas for volume-per-minute (or less) for each priority, as sketched below. If you use up your quota for high-priority traffic with BitTorrent, your Skype is going to suck. And you have to deal with that. If your applications don’t signal their priority requirements to the network – and most don’t – you can allow your ISP to classify them for you, as they’ll be happy to do.
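The quota idea isn’t exotic, either. Here’s a toy sketch of per-priority volume accounting (the class names and numbers are invented for illustration): traffic within its allowance keeps its marking, and traffic over it is demoted to best effort rather than dropped.

    import time
    from collections import defaultdict

    class PriorityQuota:
        """Toy per-subscriber accounting: each priority class gets a byte
        allowance per minute; excess traffic is re-marked best effort."""
        def __init__(self, bytes_per_minute):
            self.allowance = dict(bytes_per_minute)
            self.used = defaultdict(int)
            self.window_start = time.monotonic()

        def classify(self, requested_class, size):
            if time.monotonic() - self.window_start >= 60:
                self.used.clear()                 # start a new one-minute window
                self.window_start = time.monotonic()
            if self.used[requested_class] + size <= self.allowance.get(requested_class, 0):
                self.used[requested_class] += size
                return requested_class            # within quota: honor the marking
            return "best-effort"                  # over quota: demote, don't drop

    quota = PriorityQuota({"EF": 1_000_000})      # ~1 MB per minute of high-priority traffic
    print(quota.classify("EF", 20_000))           # "EF" -- a VoIP-sized burst fits
    print(quota.classify("EF", 2_000_000))        # "best-effort" -- a BitTorrent-sized burst doesn't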

The flat-rate billing model that’s insensitive to load is a primary reason for the American controversy over net neutrality. Countries like Australia that have volume-metered pricing simply don’t have this issue, as their ISP networks aren’t magnets for P2P file distribution. Net Neutrality is indeed an American problem. And moreover, there’s no particular need to cap data volume as long as the carrier is free to deprioritize bulk data. The postal service does this with very good effect, after all.

The fundamental dilemma behind the net neutrality controversy is the desire of activists to have it both ways: they want a QoS guarantee on the one hand, but no prioritization on the other. We can certainly do that in network engineering, but not without substantial changes in the network protocols and routers in use today. What we can do quite practically is provide high-confidence QoS for small amounts of data, sufficient for a single VoIP or gaming session over the typical DSL or wireless broadband link, and that should be sufficient for the time being.
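The arithmetic behind “small amounts of data” is worth spelling out. The figures below are ballpark assumptions (G.711 voice at 50 packets per second, a 768 kbps DSL uplink), not measurements.

    payload = 160                    # bytes of G.711 audio per 20 ms packet
    overhead = 20 + 8 + 12           # IPv4 + UDP + RTP headers
    packets_per_second = 50

    voip_bps = (payload + overhead) * packets_per_second * 8
    uplink_bps = 768_000             # a common DSL uplink speed of the era

    print(f"One VoIP call: ~{voip_bps // 1000} kbps "
          f"({100 * voip_bps / uplink_bps:.0f}% of a 768 kbps uplink)")
    # ~80 kbps: prioritizing a single call touches only a small slice of the link.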

If we can’t prioritize, then it follows that the only way to control network congestion is with crude caps and user-based QoS schemes that have unfortunate side-effects. And nobody really wants that, once they understand what it means.

Both candidates are clueless on the issue, so I don’t see it as determinative of which to vote for.


Ultra-cool Computers


My next personal computer is going to be an ultra-portable tablet. I’ve never bought a laptop of my own, since my employers tend to shower me with them, and they’ve had so many drawbacks I couldn’t see any point in shelling out for one of my own. But recent research shows that we’re officially in the Dynabook Era with great gear like the Dell Latitude XT Tablet, the Lenovo X200 Tablet, the Asus R1E, the Fujitsu LifeBook T5010, and the recently-announced HP Elitebook 2730p.

What these babies have in common is light weight, sharp but small screens, long battery life, a wealth of connectivity features, and other goodies like web cams and mikes, GPS locators, touch-sensitive displays, and handwriting recognition. They’re more like Smartphones than traditional PCs, but without all the annoying limitations that make Blackberries better in the demo than in real life. Unlike pure slate computers that lack keyboards, they have swivel-mounted screens that can be twisted and folded to cover the laptop’s clamshell base, so you have a touch-sensitive display for when you need to jot notes or draw, and a regular keyboard for high-volume typing.

Each excels in some areas. The Dell seems to have the clearest screen and the best handwriting recognition since it uses a capacitive touchscreen. It draws a bit more power, since capacitive touch keeps an electric field active across the screen, where the more common electromagnetic digitizer relies on a magnetic stylus to alert the sensor that something’s happening. The stylus-activated system rules out using your finger as a pointing device, which is also unfortunate, and has a thicker overlay on the screen than the Dell. The iPhone uses a capacitive touch system.

Dell also has a nice graphics chip with some dedicated memory which significantly outperforms the shared-memory systems that are commonplace. But Dell’s CPU is at the low end of the scale, and the 1.2 GHz Intel U7600, an ultra-low voltage 65nm dual-core CPU, is as good as it gets. This is apparently a soldered-in part that can’t be upgraded. Dell is also super-expensive.

The Lenovo is too new for much in the way of evaluation, but it has very nice specs and a great pedigree. While the XT Tablet is Dell’s first convertible, the X200 is Lenovo’s third or so, and the details show. If they would only stop white-listing their own wireless cards in the BIOS they’d be at the top of my list. The X200 Tablet uses a more substantial, higher-power Intel CPU, around 1.8 GHz, which makes it considerably faster than* the Dell. They also use Intel’s Centrino graphics, and suffer a bit for it, but that’s a classic engineering tradeoff. Lenovo has an amazing array of connectivity choices, including the UWB system AKA Wireless USB. With an internal Wireless WAN card with GPS, internal Wi-Fi (including 3×3 11n), Bluetooth, and Wireless USB, this system has five kinds of wireless without a visible antenna, which is awfully sharp.

The Fujitsu and Asus convertibles have larger screens – 13.3 in. vs. 12.1 for the Dell and the Lenovo – and add a pound or so of weight. Asus is concentrating on their netbooks these days, and doesn’t seem to be serious about keeping up to date, while the Fujitsu makes some strange choices with noisy fans and heat.

To be avoided are the older HPs using the AMD chipset. AMD can’t keep up with Intel on power efficiency, so convertible systems that use their parts are only portable between one wall socket and another.

None of these little Dynabooks has made me swipe a card yet, but the collections of technology they represent say a lot about the future of networking. With all that wireless, the obligatory Gigabit Ethernet looks like an afterthought.

Which brings me to my point, gentle readers. What’s your experience with Wireless WANs in terms of service – between AT&T, Sprint, and Verizon, who’s got it going on? I get my cell phone service from friendly old T-Mobile, but they’re not a player in the 3G world. I like Verizon’s tiered pricing, as I doubt I’ll use 5GB/mo of random wireless, as close as I tend to be to Wi-Fi hotspots, but it seems like a much nicer fall-back than using my Blackberry Curve as a modem.

For a nice demonstration of the XT’s capacitive touch screen in comparison to the more primitive Lenovo, see Gotta Be Mobile.

*Edited. The X200 non-tablet has a faster processor than the X200 Tablet. The tablet sucks power out of the system, and Lenovo had to de-tune the CPU to provide it.

Skype defense not persuasive


Now that the whole world knows that Skype’s Chinese partner, TOM, has been censoring IMs and building a database of forbidden speakers for the government of China, Skype President Josh Silverman had to respond:

In April 2006, Skype publicly disclosed that TOM operated a text filter that blocked certain words in chat messages, and it also said that if the message is found unsuitable for displaying, it is simply discarded and not displayed or transmitted anywhere. It was our understanding that it was not TOM’s protocol to upload and store chat messages with certain keywords, and we are now inquiring with TOM to find out why the protocol changed.

We also learned yesterday about the existence of a security breach that made it possible for people to gain access to those stored messages on TOM’s servers. We were very concerned to learn about both issues and after we urgently addressed this situation with TOM, they fixed the security breach. In addition, we are currently addressing the wider issue of the uploading and storage of certain messages with TOM.

I don’t know what’s more disturbing, the fact that one of the most vocal net neutrality advocates is colluding with the government of China to finger dissidents, or the fact that they didn’t know they were collaborating. Frankly, this corporate defense raises more questions than it answers.

There are always going to be countries where the local laws are antithetical to post-enlightenment values. I think the correct response to such situations is to just say “no” and go somewhere else. For particularly compelling services, such as Google and Skype, the fact that the foreign service provider can’t do business in the fascist state then becomes a pressure point for change. The companies that collaborate with China are selling out their futures to fund the current quarter. How much money does Skype need to make, anyhow?


FCC fills empty job


Kevin Martin’s FCC has hired a new chief technologist, Jon Peha:

Federal Communications Commission chairman Kevin Martin named John Peha chief technologist, the senior adviser post at the commission on technology issues, based out of the Office of Strategic Planning and Policy Analysis.

I’m a bit disappointed. Peha is the guy who delivered strong testimony denouncing the Comcast management of BitTorrent without bothering to study BitTorrent’s use of TCP connections. His testimony was substantially wrong on a factual basis. Perhaps Peha can persuade me that he means well, but his performance so far has not been encouraging.

UPDATE: What am I talking about? Well take a look at the comments Peha filed in the Comcast matter, which are on-line at the FCC’s web site. He understands what’s at stake:

In the debate over network neutrality, both sides can make points that deserve serious consideration from policymakers. Such consideration requires clear and accurate statements of the facts, to say nothing of the broader issues at stake. Unfortunately, the public debate has often been filled with hyperbole and spin from advocates on both sides. Such rhetoric, combined with issues of technical complexity and subtlety, has made it unnecessarily difficult for policymakers to make informed decisions.

So what did he do? He misrepresented the facts and engaged in advocacy spin, to wit:

Comcast sends Device A a reset packet, with parameters set such that Device A will believe the reset is coming from Device B. Device A is therefore led to believe (incorrectly) that Device B is unwilling or unable to continue the session. The same may be occurring at Device B. Thus, the devices determine that the session must be ended, and no further packets can be sent.

It is factually incorrect to say that the process described above merely delays P2P traffic.

Bzzzttt, wrong answer. BitTorrent “sessions” consist of multiple TCP connections, so terminating one, or two, or any number less than the total number of TCP connections a given instance of BitTorrent can use at any particular time is in fact “delaying” instead of “blocking.” Peha makes the assumption that BitTorrent “sessions” are the same as TCP “sessions” and they clearly aren’t. Most of what makes BitTorrent troublesome, in fact, is the large number of TCP “sessions” it uses. It’s particularly outrageous that Peha charges Comcast with misrepresentation and then goes on to misrepresent in his own right.
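To see why “delaying” is the right word, consider a crude back-of-envelope in Python (the rates and peer counts are invented, and this is in no way a model of the actual protocol): the transfer still completes, it just takes longer as a larger share of the swarm’s TCP connections gets reset.

    torrent_bytes = 700 * 1024 ** 2     # a 700 MB file
    per_peer_rate = 30 * 1024           # 30 KB/s from each peer connection
    peers = 40                          # concurrent TCP connections in the "session"

    def minutes_to_finish(reset_fraction):
        """Crudely treat perpetually-reset connections as lost capacity."""
        effective_rate = per_peer_rate * peers * (1 - reset_fraction)
        return torrent_bytes / effective_rate / 60

    for frac in (0.0, 0.25, 0.5):
        print(f"{int(frac * 100):>3}% of connections being reset -> "
              f"{minutes_to_finish(frac):.0f} minutes to finish")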

He then goes on to contradict himself and admit that it’s really “delaying” after all:

After the flow of P2P from a given sender and recipient is blocked or terminated, the recipient is likely to seek some other source for the content. If the content is extremely popular, there are many options available. Consequently, this leads to a small delay, somewhat decreasing the rate at which this recipient can gather content.

So which is it, Dr. Peha, “blocking” or “delaying?” He can’t even make up his own mind. He then goes on to whack Comcast for targeting P2P:

Comcast has elected to employ mechanisms that degrade service for a particular application, i.e. P2P, instead of relying only on congestion control mechanisms that deal with traffic of all application types. Central to their justification of this approach has been the assertion that it is specifically P2P that has an adverse impact on other traffic. This assertion is untrue.

…and he goes on to talk about blue cars and red cars, a lot of nonsensical fluff. The fact remains that P2P is the only application with such a great ability to consume bandwidth on a non-stop basis as to degrade the Internet experience of web browsing, and that’s what Comcast was trying to protect.

And more significantly, Peha fails to grasp the fact that applications are not created equal in terms of their tolerance for delay. P2P has no particular time constraints when running as a seeder (serving files to the rest of the Internet) but interactive applications like Web browsing and VoIP have very little tolerance for delay. And now we have a standard in place that requires ISPs to ignore these technical distinctions, thanks largely to the inept analysis of people like Peha.
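For reference, here are rough one-way delay budgets that make the point concrete. The VoIP figure follows the familiar ~150 ms guideline from ITU-T G.114; the others are ballpark values I’m assuming for illustration.

    delay_budget_ms = {
        "VoIP / videoconferencing": 150,       # conversation degrades beyond this
        "online gaming": 100,
        "web browsing": 1000,                  # sluggish, but still usable
        "P2P seeding / bulk transfer": None,   # effectively no deadline
    }

    for app, budget in delay_budget_ms.items():
        tolerance = "no hard deadline" if budget is None else f"~{budget} ms"
        print(f"{app:<30} {tolerance}")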

In additional remarks he confesses his ignorance of network management techniques generally, and compares the Comcast method to a “man in the middle attack.” If that’s what he thinks, really and truly, he’s seriously under-informed. A “man in the middle attack” is a means of breaking into a system by stealing passwords. What system did Comcast break into, and what password did they use to do so?

In Kevin Martin’s FCC this outlandish foolishness is a job interview. Peha is smarter than Sarah Palin, but he’s no Dave Farber. Surely the FCC can do better than to employ an advocate in a position that requires depth of technical knowledge and a commitment to impartiality. Kevin Martin has failed the American people again.

A more suitable candidate exists: Just a Girl in Short Shorts Talking about Whatever:

Comcast was regulating the download speeds of peer to peer networks, such as BitTorrent. I like to pirate movies as much as next cheapskate, but I do not think it is necessary that it be given equal priority with VoIP (voice over Internet).

That’s the level of insight we need in a Chief Technologist.
