Google’s Sweet Book Deal

If you read books, you’ll want to know what Robert Darnton has to say about the pending Google book deal, in Google & the Future of Books – The New York Review of Books. Here’s a teaser:

As an unintended consequence, Google will enjoy what can only be called a monopoly—a monopoly of a new kind, not of railroads or steel but of access to information. Google has no serious competitors. Microsoft dropped its major program to digitize books several months ago, and other enterprises like the Open Knowledge Commons (formerly the Open Content Alliance) and the Internet Archive are minute and ineffective in comparison with Google. Google alone has the wealth to digitize on a massive scale. And having settled with the authors and publishers, it can exploit its financial power from within a protective legal barrier; for the class action suit covers the entire class of authors and publishers. No new entrepreneurs will be able to digitize books within that fenced-off territory, even if they could afford it, because they would have to fight the copyright battles all over again. If the settlement is upheld by the court, only Google will be protected from copyright liability.

A policy change of this magnitude should not be negotiated behind closed doors to the detriment of every purveyor of information except Google.

Time Warner Cable bides its time

Not surprisingly, Time Warner Cable has decided to put its consumption-based billing trials on hold:

Time Warner Cable Chief Executive Officer Glenn Britt said, “It is clear from the public response over the last two weeks that there is a great deal of misunderstanding about our plans to roll out additional tests on consumption based billing. As a result, we will not proceed with implementation of additional tests until further consultation with our customers and other interested parties, ensuring that community needs are being met. While we continue to believe that consumption based billing may be the best pricing plan for consumers, we want to do everything we can to inform our customers of our plans and have the benefit of their views as part of our testing process.”

Time Warner Cable also announced that it is working to make measurement tools available as quickly as possible. These tools will help customers understand how much bandwidth they consume and aid in the dialog going forward.
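Out of curiosity about what such a meter would measure, here's a rough sketch of the idea in Python using the psutil package; this is my own illustration, not anything TWC has described, and the one-minute sample window is arbitrary:

    # Minimal consumption-meter sketch: sample the network interface's
    # byte counters and report usage over an interval. Not TWC's tool,
    # just the general idea. Requires the psutil package.
    import time
    import psutil

    start = psutil.net_io_counters()
    time.sleep(60)  # sample over one minute
    end = psutil.net_io_counters()

    down_mb = (end.bytes_recv - start.bytes_recv) / 1e6
    up_mb = (end.bytes_sent - start.bytes_sent) / 1e6
    print(f"down: {down_mb:.1f} MB, up: {up_mb:.1f} MB in one minute")

A real meter would of course measure at the cable modem or the CMTS rather than on one PC, since a household usually has more than one device on the connection.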

The public response was somewhat less public than it may appear, as most of it was ginned up by a few activist bloggers and the interest groups that are generally in the middle of these things, such as Free Press’ “Save the Internet” blog. In this case, the Internet was saved from a plan that Free Press’ chairman Tim Wu had previously lauded for its fairness in allocating network resources:

“I don’t quite see [metering] as an outrage, and in fact is probably the fairest system going — though of course the psychology of knowing that you’re paying for bandwidth may change behavior,” said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Of course, the “psychology of knowing that you’re paying for bandwidth” is actually meant to change behavior.

Free Press is now crowing that the postponement of the trial signals a great victory for the Internet:

“We’re glad to see Time Warner Cable’s price-gouging scheme collapse in the face of consumer opposition. Let this be a lesson to other Internet service providers looking to head down a similar path. Consumers are not going to stand idly by as companies try to squeeze their use of the Internet.”

The Freeps should have chosen their words a bit more carefully. The dilemma TWC faces does indeed relate to “squeezing,” but the squeezing doesn’t originate exclusively (or even primarily) at the cable company’s end of the bargain. TWC’s consumption per user has been increasing roughly 40% per year, and there’s no reason to expect anything but further increases as more HDTV content becomes available on the web, people connect more devices, and video calling becomes more popular. TWC’s capital expenditures are 20% of income, and the company lost $7.3 billion in the course of spinning out from Time Warner, Inc. last year. Some of TWC’s critics have charged that its bandwidth is free (or nearly so), citing “high speed data costs of $146 million.” In reality, TWC pays six times that much ($923 million) for the interest on its capital expenditures alone.
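Compounding is what makes that growth rate bite; a quick back-of-the-envelope calculation (my arithmetic, not TWC's projection) shows what 40% a year does to per-user consumption:

    # Back-of-the-envelope: 40% annual growth in per-user consumption
    # compounds to a 5.4x increase in five years.
    for years in range(1, 6):
        print(f"year {years}: {1.4 ** years:.2f}x today's consumption")
    # year 1: 1.40x ... year 5: 5.38x

A network that has to carry five times the traffic in five years has to pay for upgrades somehow, and the question TWC raised is who should pay for them.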

Heavy users squeeze light users by leaving less bandwidth on the table, and the flat-rate pricing system squeezes light users even more by making them pay a larger share of the costs of bandwidth upgrades than the heavy users who actually drive them. No fair-minded and rational person can look at the costs of operating a network and conclude that flat-rate pricing for a single Quality of Service level is the best we can do.

Continuous upgrades are a fact of life in the broadband business, and aligning their costs with the revenues carriers collect is one of the keys to creating an economically sustainable broadband ecosystem. We’ll take that up in another post.

UPDATE: Dig into the comments for some discussion of transit and peering prices.

See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett, [bio forthcoming]
Harold Feld, Public Knowledge [bio]
Alexander Hoehn-Saric, U.S. Senate Commerce Committee [bio]
Larry Irving, Internet Innovation Alliance [bio]
Blair Levin, Stifel Nicolaus [bio]
Ben Scott, Free Press [bio]
Kevin Werbach, Wharton School of Business [bio]

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. The case studies are how the process worked for Wi-Fi, with the FCC opening up some junk spectrum, the IEEE 802.11 working group writing some rules, and the Wi-Fi Alliance developing compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of the IEEE 802.15.3a task group and the subsequent attempt by the WiMedia Alliance to save it. We probably will have UWB someday (wireless USB and Bluetooth 4.0 will both use it), but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we go about drafting that set of rules. The current state of the art is the four (or five) prongs of the FCC’s Internet Policy Statement, but these principles leave a lot to the imagination on all of the interesting questions: network management, QoS-related billing, third-party payments, and which forms of disclosure are actually useful.

The Internet is a victim of its own success: it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes in its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.

Life in the Fast Lane

No more dirt roads to the Internet for me. Comcast came out and hooked up a DOCSIS 3.0 modem (it’s a Motorola) this morning, speeding up my downloads to 53 Mb/s per Speedtest.net and jacking up the upload to a bit over 4 Mb/s. Both of these numbers are about double what I had before with the Blast! service that’s advertised at 16/2. I had the dude put the modem in the living room to get my router closer to the center of the house and improve my Wi-Fi coverage, which only took a splitter off the TiVo’s feed. The old modem remains installed for phone service, but its MAC address has been removed from the DHCP authorization list. It turns out the backup battery had been installed incorrectly in the old modem, so he fixed that. The only incident that turned up in the install was the discovery that my TiVo HD is feeding a noticeable voltage back into the cable connection, which can apparently cause bad things to happen to the DOCSIS connection. He installed a voltage blocker of some kind to keep that at bay, but I’ll have to complain to TiVo about that feature.

As I had to go to the office as soon as the installation was completed, I haven’t had time to play with my privileged fast lane service, but I did enough to notice a fairly dramatic difference even in ordinary activities like reading e-mail. I use an IMAP server on the host that handles bennett.com, and its location in Florida tends to make for sluggish response when deleting mail or simply scanning a folder. It’s so fast now it’s like a local service. (People who use the more popular POP3 protocol won’t notice this, because a POP3 delete only removes a local copy; my mail lives on the server, so every operation crosses the network.)
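To make the IMAP point concrete, here's a minimal sketch using Python's standard imaplib; the server name and credentials are placeholders, not my actual setup. Every step is a round trip to the remote server, which is why latency matters so much:

    # IMAP delete sketch: the flag change and the expunge both happen on
    # the remote server, so each step pays the network round-trip cost.
    # Server, account, and password are placeholders.
    import imaplib

    conn = imaplib.IMAP4_SSL("mail.example.com")
    conn.login("user", "password")
    conn.select("INBOX")

    # Mark message #1 deleted on the server, then purge it there.
    conn.store("1", "+FLAGS", "\\Deleted")
    conn.expunge()
    conn.logout()

With POP3, by contrast, messages are downloaded and deleted locally, so server latency only shows up when fetching new mail.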

So the main effect of this super-fat Internet pipe is to make network services and content as readily accessible as local services and content. Which is a very wonderful thing for a couple of reasons: accessing content and services from the various machines I have connected to the Internet from home involves maintenance and security hassles that aren’t always worthwhile, so it’s convenient to outsource data to a system in the cloud that’s secure, well maintained, and backed up. It’s very easy to do that now, all the way around. And for the data that I still access locally, such as media files and the like, an off-site backup will be very painless.

One of the next exercises is going to be media streaming from my server in Florida to my TV in California, after I’ve got all my data encrypted and backed up. At this point, I’ve got three devices at home connected to the Internet that are something other than general-purpose computers: a TiVo, a Blu-Ray player that also does Netflix streaming, and a BlackBerry that goes to the web via 802.11a/g Wi-Fi. At any given time, I’ve got two to four general-purpose computers on the ‘net as well (more if we count virtual machines), so it’s clear that the balance is turning in the direction of the special-purpose machines. This is what makes Zittrain sad, but it shouldn’t. It’s in the nature of general-purpose systems not to require much multiplication; one that’s fast but stationary, another that’s lighter and mobile, and one more that’s super light and ultra-mobile is about all you’ll ever need. But special-purpose machines multiply like rabbits, as more and more purposes are discovered for networked devices.

So the future is obviously going to embrace more specialized (“sterile tethered appliance”) machines than general purpose machines; that’s a given. The “Future of the Internet” question is actually whether the general-purpose machines also become more powerful and capable of doing more things than they do now. In other words, don’t just count machines, count functions and applications. The failure to understand this issue is Zittrain’s fundamental error. (Gee, the fast Internet made me smarter already.)

Attaching a controller/monitor to my aquarium that I can access across the Internet is the next exercise, and after that some security cameras and an outdoor Wi-Fi access point. It never ends.
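For anyone curious what the aquarium project involves, the network half is the easy part; here's a sketch using Python's standard http.server, where read_temperature() is a made-up stand-in for whatever interface the actual controller hardware exposes:

    # Skeleton of an Internet-reachable aquarium monitor. The
    # read_temperature() function is a hypothetical stand-in for a real
    # sensor driver; everything else is Python standard library.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def read_temperature():
        return 25.5  # placeholder: a real build would query the probe

    class AquariumHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = f"tank temperature: {read_temperature()} C\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Serve on the LAN; reaching it across the Internet means a port
    # forward on the router and, sensibly, some authentication in front.
    HTTPServer(("0.0.0.0", 8080), AquariumHandler).serve_forever()

The security cameras would hang off the same pattern, which is exactly why the special-purpose device count keeps climbing.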

Digital Britain and Hokey Tools

It’s helpful to see how other countries deal with the typically over-excited accusations of our colleagues regarding ISP management practices. Case in point is the Digital Britain Interim Report from the UK’s Department for Culture, Media and Sport and Department for Business, Enterprise and Regulatory Reform, which says (p. 27):

Internet Service Providers can take action to manage the flow of data – the traffic – on their networks to retain levels of service to users or for other reasons. The concept of so-called ‘net neutrality’ requires those managing a network to refrain from taking action to manage traffic on that network. It also prevents giving preference to the delivery of any one service over the delivery of others. Net neutrality is sometimes cited by various parties in defence of internet freedom, innovation and consumer choice. The debate over possible legislation in pursuit of this goal has been stronger in the US than in the UK. Ofcom has in the past acknowledged the claims in the debate but has also acknowledged that ISPs might in future wish to offer guaranteed service levels to content providers in exchange for increased fees. In turn this could lead to differentiation of offers and promote investment in higher-speed access networks. Net neutrality regulation might prevent this sort of innovation.

Ofcom has stated that, provided consumers are properly informed, such new business models could be an important part of the investment case for Next Generation Access.

On the same basis, the Government has yet to see a case for legislation in favour of net neutrality. In consequence, unless Ofcom find network operators or ISPs to have Significant Market Power and justify intervention on competition grounds, traffic management will not be prevented.

(Ofcom is the UK’s FCC.) Net neutrality is, in essence, a movement driven by fears of hypothetical harm that might be visited upon the Internet under a highly unlikely set of circumstances. Given that 1.4 billion people use the Internet every day, and that the actual instances of harmful discrimination by ISPs can be counted on one hand (and pale in comparison to the harm caused by malicious software and deliberate bandwidth hogging in any case), Ofcom’s stance is the only one that makes any sense: keep an eye on things, and don’t act without provocation. This position would have kept us out of Iraq, BTW.

Yet we have lawmakers in the US drafting bills full of nebulous language and undefined terms aimed at stemming this invisible menace.

Are Americans that much less educated than Brits, or are we just stupid? In fact, we have a net neutrality movement in the US simply because we have some well-funded interests manipulating a gullible public and a system of government that responds to emotion.

A good example of these forces at work is the freshly released suite of network test tools on some of Google’s servers. Measurement Lab checks how quickly interested users can reach Google’s complex in Mountain View, breaking the process down into hops. As far as I can tell, this is essentially a dolled-up version of the Unix “traceroute” utility that speculates about link congestion and takes a very long time to run.
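For readers who want to see the raw material such tools dress up, here's a minimal sketch that shells out to the stock traceroute and prints per-hop results; this is my illustration of the underlying technique, not Measurement Lab's code:

    # Bare-bones per-hop latency check using the system traceroute.
    # -n skips DNS lookups, -q 1 sends one probe per hop. It shows where
    # delay accumulates; it says nothing about who is throttling whom.
    import subprocess

    result = subprocess.run(
        ["traceroute", "-n", "-q", "1", "www.google.com"],
        capture_output=True, text=True, timeout=120,
    )
    for line in result.stdout.splitlines():
        print(line)  # each line: hop number, router IP, round-trip time

Everything interesting in a tool like Measurement Lab lies in how it interprets output like this, and interpretation is exactly where the speculation creeps in.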

The speed, latency, and consistency of access to Google is certainly an important part of the Internet experience, but it’s hardly definitive regarding who’s doing what to whom. But the tech press loves this sort of thing because it’s just mysterious enough in its operation to invite speculation and sweeping enough in its conclusions to get users excited. It’s early days for Measurement Lab, but I don’t have high expectations for its validity.

Doubts about Broadband Stimulus

The New York Times has a front page story today on the broadband stimulus bill which features an extensive quote from Brett:

Critics like Mr. Glass say the legislation being developed in Congress is flawed in various ways that could mean much of the money is wasted, or potentially not spent at all — arguably just as bad an outcome given that the most immediate goal of the stimulus measure is to pump new spending into the economy.

An “open access” requirement in the bill might discourage some companies from applying for grants because any investments in broadband infrastructure could benefit competitors who would gain access to the network down the line.

Meeting minimum speed requirements set forth in the House version could force overly costly investments by essentially providing Cadillac service where an economy car would be just as useful. And some worry that government may pay for technology that will be obsolete even before the work is completed.

“Really the devil is in the details,” Mr. Glass said. “Yes, there is $9 billion worth of good that we can do, but the bill doesn’t target the funds toward those needs.”

The bill is still very rough. Some critics cite the bill’s preference for grants to large incumbents; others highlight the amorphous “open access” provisions and the arbitrary speed requirements as weaknesses. The only interest groups that appear altogether happy with it are Google’s boosters, such as Ben Scott of Free Press. This is a flip-flop for Free Press, which only last week was urging members to call Congress and ask that the bill be killed.

A particularly odd reaction comes from friend of the blog Jeff Jarvis, who took time out from pitching his love letter to Google, What Would Google Do?, to tear into the article’s sourcing:

I found myself irritated by today’s story in the New York Times that asks whether putting money from the bailout toward broadband would be a waste. The question was its own answer. So was the placement of the story atop page one. The reporter creates generic groups of experts to say what he wants to say (I know the trick; I used to be a reporter): “But experts warn…. Other critics say…. Other supporters said…”

I wish that every time he did that, the words “experts,” “critics,” and “supporters” were hyperlinked to a page that listed three of each.

It’s an obvious case of a story with an agenda: ‘I’m going to set out to poke a hole in this.’

The odd bit is that five people are named and quoted, and the terms “expert” and “critic” clearly refer to these named sources. It’s boring to repeat names over and over, so the writer simply uses these terms to avoid the tedium. It’s clear that Brett and Craig Settles are the critics and experts. Jeff seems not to have read the article carefully and simply goes off on his defensive tirade without any basis.

It’s a given in Google’s world that massive government subsidies for broadband are a good thing because they will inevitably lead to more searches, more ad sales, and more revenue for the Big G. But while that’s clearly the case, it doesn’t automatically follow that what’s good for Google is good for America, so it behooves our policy makers to ensure that the money is spent wisely, without too many gimmicks in favor of one technology over another or too many strings attached that don’t benefit the average citizen.

Raising questions about pending legislation and trying to improve it is as American as baseball, and the article in the Times is a step in the right direction. It may not be what Google would do, but it’s good journalism.

I want to make sure that the broadband money is spent efficiently, so I would bag the open access requirement (nobody knows what it means anyway) and give credit for all improvements in infrastructure that increase speed and reduce latency.

The bill needs to support all technologies that have utility in the Internet access space (wireless, coax, and fiber), but it should encourage the laying of new fiber where it’s appropriate, and high-speed wireless in less-populated areas. Eventually, homes and businesses are pretty much all going to have fiber at the doorstep, but that doesn’t need to happen overnight.

What recession?

So here’s your recession-proof business, ladies and gentlemen:

Netflix, the company which mails out DVD rentals and also offers streamed programming via the internet, saw a 45% jump in profits and 26% rise in consumers to 9.4 million in the fourth quarter.

This was the quarter in which Netflix released Watch Instantly on non-PC platforms. It’s so ubiquitous now I have it on three platforms: a home theater PC, a TiVo HD, and a Samsung BD-P2500 Blu-Ray player. It looks best on the Samsung, thanks to its HQV video enhancement chip.

Internet Myths

Among my missions in this life is the chore of explaining networking in general and the Internet in particular to policy makers and other citizens who don’t build network technology for a living. This is enjoyable because it combines so many of the things that make me feel good: gadgetry, technology, public policy, writing, talking, and education. It’s not easy, of course, because there are a lot of things to know and many ways to frame the issues. But it’s possible to simplify the subject matter in a way that doesn’t do too much violence to the truth.

As I see it, the Internet is different from the other networks that we’re accustomed to in a couple of important ways: for one, it allows a machine to connect simultaneously to a number of other machines. This is useful for web surfing, because it makes it possible to build a web page that draws information from other sources. So a blog can reference pictures, video streams, and even text from around the Internet and put it in one place where it can be updated in more-or-less real time. It enables aggregation, in other words. Another thing that’s unique about the Internet is that the underlying transport system can deliver information at very high speed for short periods of time. The connection between a machine and the Internet’s infrastructure is idle most of the time, but when it’s active it can get its information transferred very, very quickly. This is a big contrast to the telephone network, where information is constrained by call setup delays and a very narrow pipe.
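That aggregation property is easy to demonstrate; the sketch below fetches several pages at once from one machine, the way a browser assembles a web page from pieces scattered around the Internet (the URLs are arbitrary examples):

    # Aggregation in miniature: one machine talking to several servers
    # at once. Each connection is idle most of the time but bursts at
    # high speed when it has data to move.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = [
        "https://www.example.com/",
        "https://www.wikipedia.org/",
        "https://www.ietf.org/",
    ]

    def fetch(url):
        with urlopen(url, timeout=10) as response:
            return url, len(response.read())

    # All three transfers proceed concurrently.
    with ThreadPoolExecutor(max_workers=3) as pool:
        for url, size in pool.map(fetch, urls):
            print(f"{url}: {size} bytes")

Try doing that over a circuit-switched telephone call.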

Briscoe explains Re-ECN in plain English

See the current issue of IEEE Spectrum for a nice description of Bob Briscoe’s Re-ECN, A Fairer, Faster Internet Protocol:

Refeedback introduces a second type of packet marking—think of these as credits and the original [ECN] congestion markings as debits. The sender must add sufficient credits to packets entering the network to cover the debit marks that are introduced as packets squeeze through congested Internet pipes. If any subsequent network node detects insufficient credits relative to debits, it can discard packets from the offending stream.

To keep out of such trouble, every time the receiver gets a congestion (debit) mark, it returns feedback to the sender. Then the sender marks the next packet with a credit. This reinserted feedback, or refeedback, can then be used at the entrance to the Internet to limit congestion—you do have to reveal everything that may be used as evidence against you.

Refeedback sticks to the Internet principle that the computers on the edge of the network detect and manage congestion. But it enables the middle of the network to punish them for providing misinformation.

The limits and checks on congestion at the borders of the Internet are trivial for a network operator to add. Otherwise, the refeedback scheme does not require that any new code be added to the network’s equipment; all it needs is that standard congestion notification be turned on. But packets need somewhere to carry the second mark in the “IP” part of the TCP/IP formula. Fortuitously, this mark can be made, because there is one last unused bit in the header of every IP packet.

This is a plan that will allow interactive uses of the Internet to co-exist happily with bulk data transfer. It’s quite brilliant and I recommend it as an alternative to a lot of nonsense that’s been floated around this space.
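To make the credit-and-debit bookkeeping concrete, here's a toy model of the accounting a border policer would perform; this is my illustrative sketch of the scheme Briscoe describes, not his protocol code, and the packet format is invented for the example:

    # Toy Re-ECN bookkeeping: senders declare credits, congested links
    # add debits, and a policer drops the stream when declared credits
    # no longer cover observed debits. Packet format is invented here.
    def police(packets):
        credits = debits = 0
        for pkt in packets:
            credits += pkt.get("credit", 0)  # sender's declared congestion
            debits += pkt.get("debit", 0)    # marks from congested links
            if debits > credits:
                print("policer: debits exceed credits, dropping stream")
                return False
        print(f"stream OK: {credits} credits cover {debits} debits")
        return True

    # An honest sender re-inserts feedback (credits) to match congestion:
    police([{"credit": 1}, {"debit": 1}, {"credit": 1}, {"debit": 1}])
    # A sender that understates congestion gets caught:
    police([{"debit": 1}, {"debit": 1}, {"credit": 1}])

The elegance is that honesty is enforced at the network's edge with a simple counter, while the core only has to mark congestion as it already does with ECN.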

Holy Moly

Hilda Solis is Obama’s labor pick:

WASHINGTON (AP) — A labor official says Rep. Hilda Solis of California will be nominated as labor secretary by President-elect Barack Obama.

The Democratic congresswoman was just elected to her fifth term representing heavily Hispanic portions of eastern Los Angeles County and east L.A. She is the daughter of Mexican and Nicaraguan immigrants and has been the only member of Congress of Central American descent.

I had the pleasure of working issues with and against Solis when she was in the California State Senate back in the day. Her specialty was “women’s issues” such as child custody and support, the marriage tax, domestic violence, affirmative action, and health care, so this comes as a bit of a surprise. She will be the first cabinet member of my personal acquaintance. Now if she owes me a favor…