Time Warner Cable bides its time

Not surprisingly, Time Warner Cable has decided to put its consumption-based billing trials on hold:

Time Warner Cable Chief Executive Officer Glenn Britt said, “It is clear from the public response over the last two weeks that there is a great deal of misunderstanding about our plans to roll out additional tests
on consumption based billing. As a result, we will not proceed with implementation of additional tests until further consultation with our customers and other interested parties, ensuring that community needs are being met. While we continue to believe that consumption based billing may be the best pricing plan for consumers, we want to do everything we can to inform our customers of our plans and have the benefit of their views as part of our testing process.”

Time Warner Cable also announced that it is working to make measurement tools available as quickly as possible. These tools will help customers understand how much bandwidth they consume and aid in the dialog going forward.

The public response was somewhat less public than it may appear, as most of it was ginned-up by a few activist bloggers and the interest groups that are generally in the middle of these things, such as Free Press’ “Save the Internet” blog. In this case, the Internet was saved from a plan that Free Press’ chairman Tim Wu had previously lauded for its fairness in allocating network resources:

“I don’t quite see [metering] as an outrage, and in fact is probably the fairest system going — though of course the psychology of knowing that you’re paying for bandwidth may change behavior,” said Tim Wu, a law professor at Columbia University and chairman of the board of public advocacy group Free Press.

Of course, the “psychology of knowing that you’re paying for bandwidth” is actually meant to change behavior.

Free Press is now crowing that the postponement of the trial signals a great victory for the Internet:

“We’re glad to see Time Warner Cable’s price-gouging scheme collapse in the face of consumer opposition. Let this be a lesson to other Internet service providers looking to head down a similar path. Consumers are not going to stand idly by as companies try to squeeze their use of the Internet.”

The Freeps should have chosen their words a bit more carefully. The dilemma that TWC faces does indeed relate to “squeezing,” but that doesn’t actually originate exclusively (or even primarily) at the cable company’s end of the bargain. TWC’s consumption per user has been increasing roughly 40% per year, and there’s no reason to assume it will do anything but increase as more HDTV content becomes available on the web, people connect more devices, and video calling becomes more popular. TWC’s capital expenditures are 20% of income, and the company lost $7.3 billion in the course of spinning out from Time Warner, Inc. last year. Some of TWC’s critics have charged that their bandwidth is free (or nearly so,) citing “high speed data costs of $146 million.” In reality, TWC pays six times that much for the interest on its capital expenditures alone ($923M.)
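
To make the squeeze concrete, here’s a back-of-the-envelope sketch (in Python) that compounds the 40% annual growth figure cited above against a flat monthly bill. The starting consumption and the price are invented round numbers, so treat it as illustrative arithmetic rather than TWC’s actual economics.

```python
# Illustrative arithmetic only, using the 40%-per-year growth figure cited above.
# The starting consumption and the flat monthly price are invented round numbers.

def project_consumption(start_gb_per_month, annual_growth, years):
    """Yield (year, GB/month) with consumption compounding year over year."""
    usage = start_gb_per_month
    for year in range(years + 1):
        yield year, usage
        usage *= 1 + annual_growth

flat_monthly_price = 45.0  # hypothetical flat-rate bill, held constant
for year, gb in project_consumption(start_gb_per_month=20.0, annual_growth=0.40, years=5):
    print(f"Year {year}: {gb:6.1f} GB/month, revenue per GB = ${flat_monthly_price / gb:.2f}")
```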

Heavy users squeeze light users by leaving less bandwidth on the table, and the flat-rate pricing system squeezes them even more by making them pay a larger share of the costs of bandwidth upgrades than those who actually use them. No fair-minded and rational person can look at the costs of operating a network and conclude that flat-rate pricing for a single Quality of Service level is the best we can do.

Continuous upgrades are a fact of life in the broadband business, and aligning their costs with the revenues carriers collect is one of the keys to creating an economically sustainable broadband ecosystem. We’ll take that up in another post.

UPDATE: Dig into the comments for some discussion of transit and peering prices.

Thinking about Caps

Time-Warner’s bandwidth metering plan continues to attract attention, in part because a couple of prominent tech journalists are taking an activist position against it: Nate Anderson is unabashedly opposed to most revenue-enhancing plans that come from ISPs and carriers, and Stacey Higginbotham imagines she’ll be personally affected since she lives in one of the trial cities, Austin. The latest development is a threat by Rep. Eric Massa of upstate New York to ban usage-based pricing by law:

Massa has wasted no time backing the issue, sending out two statements last week about his displeasure with TWC’s caps. “I am taking a leadership position on this issue because of all the phone calls, emails and faxes I’ve received from my district and all over the country,” he said in one. “While I favor a business’s right to maximize their profit potential, I believe safeguards must be put in place when a business has a monopoly on a specific region.”

TWC’s plan to meter usage, which differs from Comcast’s cap system in several significant respects*, wouldn’t seem odd in most of the world: volume-based service tiers are the norm for commercial Internet services in the US and for residential services nearly everywhere else. This is largely because the costs of providing Internet service are significantly related to volume, owing to the interconnect costs borne by ISPs (the relationship isn’t continuously variable; it’s more like a step function that ratchets upward in chunks as new hardware has to be added to keep up with peak load.) ISPs are essentially wholesale buyers of interconnection to the larger Internet through a transit provider or a carrier. If they’re too small to build an extensive private network, they buy transit, and if they’re larger, they pay for circuits to and from peering centers, which aren’t free even if you build them yourself (they take parts to build, and parts aren’t free.)
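
Here’s a rough sketch of that step function, just to illustrate the shape of the cost curve; the port size and the monthly cost are hypothetical, not anybody’s actual transit or peering rates.

```python
import math

# A toy model of the step function described above: interconnect capacity is
# bought in fixed-size chunks (say, 10 Gbps ports), so cost jumps whenever peak
# load crosses a chunk boundary. Both numbers are hypothetical.

PORT_CAPACITY_GBPS = 10.0       # capacity added per hardware/transit increment
COST_PER_PORT_MONTHLY = 8000.0  # assumed monthly cost of each increment

def monthly_interconnect_cost(peak_load_gbps):
    ports_needed = max(1, math.ceil(peak_load_gbps / PORT_CAPACITY_GBPS))
    return ports_needed * COST_PER_PORT_MONTHLY

for peak in (3, 9, 11, 19, 21, 45):
    print(f"peak load {peak:>2} Gbps -> ${monthly_interconnect_cost(peak):,.0f}/month")
```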

It’s not unreasonable to tie pricing to volume in principle, given that some users consume hundreds or thousands of times more bandwidth than others; we certainly charge 18-wheelers more to use the freeways than Priuses. The argument is over what’s a reasonable fee.

And to answer that question, we have to understand the role that Internet service plays in paying for the infrastructure that supports it. There has never been a case in the United States or any other country where Internet service alone generated enough revenue for a carrier to cover the cost of building an advanced fiber optic network extending all the way from the core to the detached single-family residence, even in the muni fiber networks toward which the neutralists are so partial; in places like Burlington, VT, Lafayette, LA, and Morristown, TN, the service the city offers over fiber is triple play (Internet, TV, and voice.) Without TV and voice, the take-up rate of the service is too low to retire the bonds. It’s simple economics.

So what happens when triple-play customers decide to download all their TV programs from the Internet and replace their phone service with a combination of cell and Skype? Revenues plummet, obviously. So the cable company wants to hedge its bets by replacing triple-play revenue with a higher bill for the higher usage of the remaining indispensable service. That doesn’t seem evil to me, as long as there’s some competition in the market, and the infrastructure is continually upgraded. Over time, the infrastructure will be paid for, and the price per byte will decline.

One of the problems we have with broadband policy in the US is the lack of connection between infrastructure costs and service prices. TWC seems to be trying to solve that problem, and I’d like them to have some freedom to experiment without every member of Congress within striking distance of a camera crew giving them grief.

In the meantime, TWC would help themselves a great deal if they adopted the policy of printing each customer’s monthly usage on the bill. They shouldn’t do anything about it for the time being, just show the amount for the next six months. At the end of that period, if they want to run a trial or two, the consumers will be able to place the service levels in perspective, and there will be a lot less whining. If service levels are adopted, there also needs to be a policy of re-evaluating them every year. If TWC had done these two things, this whole brouhaha could have been avoided. And yes, I’d be glad to sign on as a consultant and keep them out of trouble.
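
For what it’s worth, the bill line I have in mind would be about this simple; the tier boundaries below are hypothetical, loosely modeled on the 5-40 GB trial tiers discussed in the Austin post below.

```python
# The tier boundaries are hypothetical, loosely based on the trial tiers
# described in these posts; the point is just the shape of the bill line.

TIERS_GB = [5, 10, 20, 40, 100]

def bill_line(account, used_gb):
    fitting = [t for t in TIERS_GB if used_gb <= t]
    if fitting:
        note = f"would fit the {fitting[0]} GB tier"
    else:
        note = "exceeds the largest proposed tier"
    return f"{account}: {used_gb:.1f} GB used this month ({note})"

print(bill_line("Account 1234", 12.3))
print(bill_line("Account 5678", 180.0))
```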

*Comcast has an elastic cap that can’t be increased by paying higher fees. If you exceed it for three months in a row, you’re ejected. It’s elastic because it takes three simultaneous conditions to activate.

See you in Washington

I’ve been asked to join a panel at the Congressional Internet Caucus’ short conference on the State of the Mobile Net on April 23rd. I’ll be on the last panel:

What Policy Framework Will Further Enable Innovation on the Mobile Net?

Richard Bennett
Harold Feld, Public Knowledge
Alexander Hoehn-Saric, U.S. Senate Commerce Committee
Larry Irving, Internet Innovation Alliance
Blair Levin, Stifel Nicolaus
Ben Scott, Free Press
Kevin Werbach, Wharton School of Business

I suspect we’ll spend the bulk of our time on the interaction between regulatory agencies, standards bodies, and industry groups. The case studies are how the process worked for Wi-Fi, with the FCC opening up some junk spectrum, the IEEE 802.11 working group writing some rules, and the Wi-Fi Alliance developing compliance tests. In the UWB world, the model was a novel set of rules for high-quality spectrum, followed by the collapse of IEEE 802.15.3a and the subsequent attempt by the WiMedia Alliance to save it. We probably will have UWB someday (wireless USB and Bluetooth 4.0 will both use it,) but the failure of the standards body was a major impediment.

With White Spaces up for grabs, we’d like to have something that’s at least as good as 802.11, but we really need to do a lot better.

Another topic of interest is whether mobile Internet access services should be regulated the same way that wireline services are regulated, and how we go about drafting that set of rules. The current state of the art is the 4 or 5 prongs of the FCC’s Internet Policy Statement, but these principles leave a lot to the imagination, as in all of the interesting questions about network management, QoS-related billing, third party payments, and the various forms of disclosure that may or may not be interesting.

The Internet is troubled by the fact that it’s worked pretty damn well for the past 25 years, so there’s been no need to make major changes in its services model. It’s clear to me that some fairly disruptive upgrades are going to be needed in the near future, and we don’t want to postpone them by applying a legacy regulatory model to a network that’s not fully formed yet.

Verizon’s Vision of the Internet

Despite the fact that I’ve been trying to explain why companies like Time Warner need to impose broadband usage caps on their systems before going to the capital markets for assistance in beefing up their innards, I’m not a fan of usage caps generally. They’re a very crude tool for imposing an equitable distribution of bandwidth, and one that ensures that the actual infrastructure in any given network will not be used efficiently. The key to network efficiency for a truly multi-service network like the Internet of the future is successful discrimination of application needs and traffic types. If the network can be made smart enough to follow orders, users can control their network usage according to their personal economics with no big surprises in the billing cycle. Network operators don’t need to manage traffic streams all the time; they need to manage them during periods of peak load (which better not be all that often.) And their best guidance in doing this comes from users and applications.

Many cities around the world manage access to the city core with something called congestion pricing: if you want to drive into the very heart of Singapore or London during peak hours, you have to pay a fee, which keeps traffic from gridlocking while permitting access by those who really need it. The Internet should work the same way: if you need low-latency service during peak load hours for Skype, you should be able to get it. And if you want to play P2P at the same time, you should be able to do so, but with higher latency (or at least higher jitter.) Accounts can be provisioned to allow a certain amount of congestion traffic for a flat rate, with additional portions available for an added fee. Users who demand a lot of transit from their networks should be able to get it, but at a reduced rate relative to average loads or for an additional fee.
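
As a sketch of how that kind of provisioning might be accounted for, here’s a toy congestion-allowance model; the allowance size and the overage rate are invented for illustration, not anything a carrier actually offers.

```python
from dataclasses import dataclass

# A sketch of the provisioning idea above: traffic sent during declared
# congestion periods draws down a flat-rate allowance, and anything beyond it
# is billed per GB. The allowance and the rate are invented for illustration.

@dataclass
class CongestionAccount:
    allowance_gb: float = 10.0    # congestion-period traffic included in the flat rate
    overage_per_gb: float = 1.50  # fee for congestion-period traffic beyond that
    used_gb: float = 0.0

    def record(self, gb, during_congestion):
        """Return the incremental charge for this transfer."""
        if not during_congestion:
            return 0.0            # off-peak traffic doesn't touch the allowance
        before = max(0.0, self.used_gb - self.allowance_gb)
        self.used_gb += gb
        after = max(0.0, self.used_gb - self.allowance_gb)
        return (after - before) * self.overage_per_gb

acct = CongestionAccount()
print(acct.record(6.0, during_congestion=True))    # 0.0 -- within the allowance
print(acct.record(6.0, during_congestion=True))    # 3.0 -- 2 GB over at $1.50
print(acct.record(50.0, during_congestion=False))  # 0.0 -- off-peak
```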

The point is that networks are never going to be so fat that they can’t be overloaded, and local congestion is always going to occur. So the trick in managing networks is to allocate resources fairly and transparently, and let users control their use of whatever quota they have (not manually, but through home router and application signaling to the network.)

The least congested residential broadband service in the US today is Verizon FiOS. Verizon sells access at up to 50 Mb/s, and has the capacity to increase this as consumers demand more. They can do this because they’ve invested money in a total infrastructure that consists of neighborhood loops, second hop infrastructure, and core network links. Their current system can carry 100 Mb/s per user without any contention short of the core, which is rather awesome. This is why you never hear anything about caps or quotas for FiOS: the system can’t be overloaded short of the core.

Despite that, Verizon’s visionaries realize that network management is going to be a part of the Internet of the future:

In part because most of the attention in the early days of the Internet was on connectivity and ensuring networks and devices could interconnect and communicate successfully, security and quality of service techniques were not a focus of the discussions around network protocols and functionality. Such features have instead often been offered “over the top”, usually as attributes in applications or as functionalities in web sites or distributed services.

The complexity and volume of Internet traffic today – and the fact that much more of it than ever before is “real time” or time sensitive – means that the Internet’s traditional routing and processing schemes are challenged more than ever. It is no longer realistic to expect that all of the heavy lifting to make applications and services work well on the Internet in today’s “two-way, heavy content, complex applications” world can be done through the old models. More work needs to be done at all levels to ensure better quality and improved services. This includes the network level as well.

This need not threaten the basic foundation of the Internet – its ability to provide consumers with access to any content they wish to use and connect any device they want to a broadband network. Competition, broad commitment to openness by industry and advocates, and oversight by regulators helps ensure this foundation remains. But it does mean that enhanced network based features and functionalities should not be automatically viewed with concern. Such features can be an important aspect of the Internet’s improvement and future evolution.

Indeed we shouldn’t fear rational and transparent management; it’s part of what has always made these systems work as well as they have for us.

Pitchforks in Austin: Time-Warner’s Bandwidth Cap

The fledgling high-tech community in the smoky little hipster ghetto called Austin is apoplectic about Time Warner’s announcement that it’s testing bandwidth caps in central Texas:

When it comes to trialing its metered broadband service, Time Warner Cable’s choice to do so in the tech-savvy city of Austin, Texas, was no accident. And residents may not be able to do much about it.

According to TWC spokesman Jeff Simmermon, Austin’s dedication to all things digital was precisely why it was chosen as one of four cities where the company plans to trial consumption-based broadband plans, which range from 5 GB to 40 GB per month (TWC says it has plans for a 100 GB-per-month tier as well). “Austin is a passionate and tech-savvy city, and the spirit that we’re approaching this (metered broadband) test with is that if it’s going to work, it has to work in a tech-savvy market where the use patterns are different,” he told me.

So far, Austin isn’t impressed, but since the local cable franchise it grants only deals with video, there may not be much it can do. Chip Rosenthal, one of seven commissioners on the City of Austin’s Technology and Telecommunications Commission (a strictly advisory body), hopes that concerned citizens will show up at the meeting it’s holding at City Hall this Wednesday and talk about metered broadband. He wants to get the metered bandwidth issue added to the agenda of the commission’s May meeting as well.

Rosenthal, a contract programmer who likes open source, has a blog where he holds forth on the issue, calling its rationale a series of “red herrings,” and complaining that the caps of the present will hurt applications of the future. This is no doubt true, but ultimately another red herring. The caps of the future won’t necessarily be the caps of the present.

The general theory is that TWC wants to stamp out web video in order to keep TV customers in the VoD fold. I don’t doubt that TWC would like to do that, but I doubt they’re dumb enough to believe they could ever get away with it. Austin is a stoner’s throw from San Antonio, the world headquarters of AT&T and the beta site for U-verse, the IPTV service that rides into the home atop VDSL. While U-verse isn’t universally available in Austin yet, it’s under construction so there are alternatives.

TWC’s CEO has issued a blog post by way of clarification that’s not entirely helpful:

With regard to consumption-based billing, we have determined that as broadband usage and penetration grow, there are increasing differences in the amount of bandwidth our customers consume. Our current pricing plans require all users to pay the same amount, whether they check email once a month or download six movies a day. As the amount of usage has dramatically diverged among users, this is becoming inherently unfair and not the way most consumers want to pay for goods they consume.

Like Rosenthal’s post, it’s true as far as it goes, but leaves runners in scoring position. Here’s the real story, as I see it: while Time Warner doesn’t have a large enough network to peer with the big boys (AT&T, Verizon, Qwest, Comcast, and L3,) it does have some peering agreements that protect it from transit charges as long as it delivers its packets to convenient locations, as well as some straight-up transit charges to pay. Its aggregation network – the links that carry data between the Internet exchange points and its CMTSs – isn’t fat enough to support full-on DOCSIS 3 usage, and neither is its transit budget.

Consequently, they’re being hammered by the small number of high-bandwidth consumers in their network, and they’re looking to cut costs by running them off. While there are other ways to ensure fairness across user accounts, the cap is the best way to address the fraction of a percent who use something like half their available bandwidth.

TWC is betting that they can find a cap level that discourages hogs and doesn’t bother more typical users. They’re going into an area close to the heart of AT&T with the experiment to get a good sense of where that limit is.

VoD has a little bit to do with this, but not all that much. TWC customers with TiVos already have unlimited VoD, and the rest of the VoD they provide doesn’t cost transit dollars; it’s delivered over their local tree. DOCSIS 3 also doesn’t have much of anything to do with this, as it’s also a local service, albeit one with the potential to ring up big transit charges if not domesticated.

To a large extent, ISPs play a marketing game where they advertise super-fast services that aren’t backed up by sufficient transit or peering to sustain a heavy duty cycle. This isn’t a bad thing, of course, as the efficient sharing of capacity is actually the Internet’s secret sauce. If we wanted peak and minimum bandwidth to be the same, we would have stuck with narrow-band modems on the PSTN. But we don’t, so we have to get hip to statistical sharing of network resources.
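
A toy calculation shows why statistical sharing is the secret sauce, and why heavy duty cycles break it; the subscriber count, advertised rate, and uplink size below are hypothetical.

```python
# All numbers are hypothetical: many subscribers share an aggregation link far
# smaller than the sum of their advertised peak rates, and the arrangement
# works only while average duty cycles stay low.

subscribers = 400
advertised_mbps = 50.0   # per-subscriber advertised peak rate
uplink_mbps = 1000.0     # shared aggregation/transit capacity

print(f"oversubscription ratio: {subscribers * advertised_mbps / uplink_mbps:.0f}:1")

for duty_cycle in (0.01, 0.05, 0.25):  # average fraction of peak rate in use
    offered = subscribers * advertised_mbps * duty_cycle
    status = "fits" if offered <= uplink_mbps else "congests"
    print(f"average duty cycle {duty_cycle:4.0%}: offered load {offered:6.0f} Mb/s -> link {status}")
```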

I’ll go out on a limb here and predict that the typical Austin consumer won’t switch to U-verse on account of TWC’s caps, but the heaviest users of gaming and BitTorrent will. And I’ll further predict that TWC’s bottom line will be glad to see them go.

The arguments against caps ultimately come down to the assertion that there’s some public good in making light users of Internet access capacity subsidize heavy users. Given that most of the heavy uses are either piracy or personal entertainment, I don’t happen to buy that argument, and moreover I find the alternatives to capping are generally less attractive, as they typically involve duty cycle restrictions of other types. The alternative that TWC should explore is peak/off peak handling that allows downloaders to utilize less restrictive bandwidth budgets at off hours.
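
A peak/off-peak budget could be as simple as discounting off-peak bytes against the monthly allowance; the peak window and the discount below are assumptions, just to show the mechanics.

```python
from datetime import datetime

# Off-peak bytes are discounted against the monthly budget so that bulk
# downloads migrate to quiet hours. The peak window and the discount weight
# are assumptions, not anyone's actual policy.

PEAK_HOURS = range(18, 24)   # assume 6 PM to midnight is the busy period
OFF_PEAK_WEIGHT = 0.25       # off-peak bytes count at a quarter of their size

def budget_charge(gb, when):
    """GB to subtract from the monthly budget for a transfer at time `when`."""
    weight = 1.0 if when.hour in PEAK_HOURS else OFF_PEAK_WEIGHT
    return gb * weight

print(budget_charge(8.0, datetime(2009, 4, 20, 20, 0)))  # peak hour: 8.0
print(budget_charge(8.0, datetime(2009, 4, 21, 3, 0)))   # off-peak: 2.0
```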

I’d prefer to have a network that allowed me to label all of my traffic with the service level I expected, and scheduled and charged it appropriately. We don’t have that network yet, but we will one day as long as neutrality regulations don’t get in the way. Alternatively, a fat pipe to a Tier 1 like Verizon would be a better deal, but we can’t all buy one today either.
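
The labeling part already exists at the edges, for what it’s worth: any application can mark its packets with a DSCP code point through the standard IP_TOS socket option (on platforms that expose it), as in the sketch below. Whether the network honors the mark is exactly the part we don’t have yet.

```python
import socket

# Mark traffic with a requested service level by setting the DSCP field via the
# standard IP_TOS socket option. Whether any network along the path honors the
# marking is exactly the open policy question.

DSCP_EF  = 46  # "expedited forwarding": low-latency traffic such as VoIP
DSCP_CS1 = 8   # a low-priority/background class for bulk transfers

def marked_socket(dscp):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the top six bits of the old IP TOS byte.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

voip_sock = marked_socket(DSCP_EF)   # ask for low latency
bulk_sock = marked_socket(DSCP_CS1)  # declare ourselves background traffic
print(voip_sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on Linux
```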

eComm Spectrum 2.0 Panel Video

Here’s the licensing panel from eComm live and in color. Seeing yourself on TV is weird; my immediate reaction is to fast for about a month.

On a related note, see Saul Hansell’s musings on spectrum.

The issue I wanted to raise at eComm and couldn’t, due to lack of time and the meandering speculations about collision-free networks, is spectrum sharing. Two-way communications systems all need a shared pipe at some level, and the means by which access to the pipe is mediated distinguishes one system from another. So far, the debate on white spaces in particular and open spectrum in general is about coding and power levels, the easy parts of the problem. The hard part is how the system decides which of a number of competing transmitters can access the pipe at any given time. The fact that speculative coding systems might permit multiple simultaneous connections on the same frequency in the same space/time moment doesn’t make this question go away, since they only help point-to-point communications. Internet access is inherently a point-to-multipoint problem, as these systems all aggregate wireless traffic in order to move it to the fiber backbone.
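
The classic slotted-ALOHA result makes the point concrete: uncoordinated transmitters contending for a shared channel top out at 1/e, or about 37%, utilization, no matter how clever the coding is. The little calculation below just plugs offered load into that textbook formula.

```python
import math

# Slotted ALOHA: with uncoordinated transmitters contending for a shared
# channel, throughput is S = G * exp(-G) for offered load G, peaking at 1/e.

def slotted_aloha_throughput(offered_load):
    return offered_load * math.exp(-offered_load)

for g in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"offered load G = {g:4.2f} -> channel utilization {slotted_aloha_throughput(g):.2f}")

print(f"theoretical maximum: {1 / math.e:.3f}")
```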

The advantage of licensing is that it provides the spectrum with an authorized bandwidth manager who can mediate among the desires of competing users and ensure fairness per dollar (or some similar policy.) The idea that we can simply dispense with a bandwidth manager in a wide-area network access system remains to be proved.
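
“Fairness per dollar” could be as simple as dividing the channel among active users in proportion to what they pay; here’s a sketch of that allocation with made-up fees and capacity.

```python
# Divide the available capacity among active users in proportion to what each
# one pays. The capacity figure and the monthly fees are made up.

def proportional_shares(capacity_mbps, payments):
    total = sum(payments.values())
    return {user: capacity_mbps * paid / total for user, paid in payments.items()}

active_users = {"alice": 30.0, "bob": 60.0, "carol": 30.0}  # monthly fees in dollars
for user, share in proportional_shares(100.0, active_users).items():
    print(f"{user}: {share:.1f} Mb/s during contention")
```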

So I would submit that one of the principles that regulators need to consider when deciding between licensed and unlicensed uses is the efficiency of access. The notion that efficiency can be discarded in favor of ever-fatter pipes is obviously problematic in relation to wireless systems; they’re not making more spectrum.

Obama’s Missed Opportunity

According to National Journal, Susan Crawford is joining the Obama administration in a significant new role:

Internet law expert Susan Crawford has joined President Barack Obama’s lineup of tech policy experts at the White House, according to several sources. She will likely hold the title of special assistant to the president for science, technology, and innovation policy, they said.

This does not make me happy. Crawford is not a scientist, technologist, or innovator, and the job that’s been created for her needs to be filled by someone who is, and an exceptional one at that: a person with deep knowledge of technology, the technology business, and the dynamics of research and business that promote innovation. A life as a legal academic is not good preparation for this kind of a job. Crawford is a sweet and well-meaning person who fervently believes that the policy agenda she’s been promoting is good for the average citizen and the general health of the democracy and that sort of thing, but she illustrates the adage that a little knowledge is a dangerous thing.

As much as she loves the Internet and all that it’s done for modern society, she has precious little knowledge about the practical realities of its operation. Her principal background is service on the ICANN Board, where she listened to debates on the number of TLDs that can dance on the head of a pin and similarly weighty matters. IETF engineers generally scoff at ICANN as a bloated, inefficient, and ineffective organization that deals with issues no serious engineer wants anything to do with. Her other qualification is an advisory role at Public Knowledge, a big player on the Google side of the net neutrality and copyright debates.

At my recent net neutrality panel discussion at MAAWG, I warned the audience that Crawford’s selection to co-manage the Obama transition team’s FCC oversight was an indication that extreme views on Internet regulation might become mainstream. It appears that my worst fears have been realized. Crawford has said that Internet traffic must not be shaped, managed, or prioritized by ISPs and core networking providers, which is a mistake of the worst kind. While work is being done all over the world to adapt the Internet to the needs of a more diverse mix of applications than it’s traditionally handled, Crawford harbors the seriously misguided belief that it already handles diverse applications well enough. Nothing could be farther from the truth, of course: P2P has interesting uses, but it degrades the performance of VoIP and video calling unless managed.

This is an engineering problem that can be solved, but which won’t be if the constraints on traffic management are too severe. People who harbor the religious approach to network management that Crawford professes have so far been an interesting sideshow in the network management wars, but if their views come to dominate the regulatory framework, the Internet will be in serious danger.

Creating a position for a special adviser on science, technology and innovation gave President Obama the opportunity to lay the foundation of a strong policy in a significant area. Filling it with a law professor instead of an actual scientist, technologist, or innovator simply reinforces the creeping suspicion that Obama is less about transformational change than about business as usual. That’s a shame.

Cross-posted at CircleID.

Shutting down the Internet

The Internet is dying, according to advocacy group Free Press. The organization has published a report, Deep Packet Inspection: The End of the Internet as We Know It?, that claims technology has evolved to the point that Internet carriers can control everything that we read, see, and hear on the Internet, something they’ve never been able to do before. It’s the backdrop of a just-so story Free Press’s network guru, Robb Topolski, delivered to a House of Lords roundtable in the UK recently. It’s an outlandish claim which echoes the Groundhog Day predictions about the Internet’s imminent demise Free Press has been making since 2005.

Suffice it to say it hasn’t exactly happened. Internet traffic continues to grow at the rate of 50-100% per year, more people than ever – some 1.5 billion – are using the Internet in more places and with more devices, and there hasn’t been an incident of an American ISP choking traffic since the dubiously alarming case of Comcast’s rationing of P2P bandwidth – mainly used for piracy – in 2007.

There are multiple errors of fact and analysis in the Free Press report, pretty much the same ones that the organization has been pumping since they jumped on the net neutrality bandwagon. There’s been no new breakthrough in Internet management. While it’s true that Moore’s Law makes computer chips run faster year after year, it’s also true that it makes networks run faster. So any reduction in the time it takes to analyze a packet on a network has to be balanced against the number of packets that cross the network in a given unit of time. Machines work faster. Some machines analyze Internet packets, and other machines generate Internet packets. They’re both getting faster, and neither is getting faster faster.

Network operators have been analyzing packets and rationing bandwidth as long as there have been IP networks. The first one to go live was at Ford Aerospace, where the discovery was made, more or less instantly, that user access to the network had to be moderated so that users of bulk data transfer applications didn’t crowd out interactive uses. More sophisticated forms of this kind of helpful “discrimination” are the principal uses of DPI today.

The complaint by Free Press is more or less on par with the shocking discovery that the sun has both good and bad effects: it causes plants to grow, and it can also cause skin cancer. Shall we now pass a legislative ban on sunlight?

The important new trend on the Internet is an increasing diversity of applications. Until fairly recently, the Internet’s traffic management system was occupied almost exclusively with a set of applications that had very similar requirements: e-mail, web browsing, and short file transfers are all concerned with getting exact copies of files from point A to point B, with no particular concern for how long it takes, within seconds. Now we’ve added Skype to the mix, which needs millisecond delivery, and P2P transactions that can run for hours and involve gigabytes of data. Add in some gaming and some video calling, and you’ve got a healthy diversity of applications with unique requirements.

The sensible way to manage Internet diversity is to identify application needs and try to meet them, to create “the greatest good for the greatest number” of people. DPI is really, really good at this, and it’s a win for all Internet users when it’s used properly.
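
Here’s a toy of the idea, with traffic sorted into classes by latency tolerance and the latency-sensitive queue served first. Real DPI classification is far more sophisticated than the port lookup below, which is purely illustrative.

```python
from collections import deque

# Sort traffic into classes by latency tolerance and serve the most
# latency-sensitive queue first. The port-to-class mapping is purely
# illustrative; real classifiers are far more sophisticated.

CLASS_BY_PORT = {5060: "realtime",                  # VoIP signaling (illustrative)
                 80: "interactive", 443: "interactive",
                 6881: "bulk"}                      # a common BitTorrent port

queues = {"realtime": deque(), "interactive": deque(), "bulk": deque()}

def enqueue(packet_id, dst_port):
    queues[CLASS_BY_PORT.get(dst_port, "interactive")].append(packet_id)

def dequeue():
    for cls in ("realtime", "interactive", "bulk"):  # strict priority order
        if queues[cls]:
            return queues[cls].popleft()
    return None

enqueue("torrent-1", 6881)
enqueue("web-1", 443)
enqueue("voip-1", 5060)
print(dequeue(), dequeue(), dequeue())  # voip-1 web-1 torrent-1
```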

Free Press’s jihad against helpful technologies echoes their previous war against newspaper consolidation. With the recent closures and printing plant shutdowns of daily papers in Seattle, Denver, and elsewhere, it’s clear that these efforts at media reform have been less than helpful.

Let’s not send the Internet the way of the Seattle Post-Intelligencer. Rather than buying Free Press’s shameless scare tactics, reflect on your own experience. Do you see even the slightest shred of evidence to support the wild claim that the Internet is withering on the vine? I certainly don’t.

Life in the Fast Lane

No more dirt roads to the Internet for me. Comcast came out and hooked up a DOCSIS 3.0 modem (it’s a Motorola) this morning, speeding up my downloads to 53 Mb/s per Speedtest.net and jacking up the upload to a bit over 4 Mb/s. Both of these numbers are about double what I had before with the Blast! service that’s advertised at 16/2. I had the dude put the modem in the living room to get my router closer to the center of the house in order to improve my Wi-Fi coverage, which only took a splitter off the TiVo’s feed. The old modem remains installed for phone service, but its MAC address has been removed from the DHCP authorization list. It turns out the backup battery had been installed incorrectly in the old modem, so he fixed that. The only incident that turned up in the install was the discovery that my TiVo HD is feeding back a noticeable voltage from the cable connection, which can apparently cause bad things to happen to the DOCSIS connection. He installed a voltage blocker of some kind to keep that at bay, but I’ll have to complain to TiVo about that feature.

As I had to go to the office as soon as the installation was completed, I haven’t had time to play with my privileged fast lane service, but I did enough to notice a fairly dramatic difference even in ordinary activities like reading e-mail. I use an IMAP server on the host that handles bennett.com, and its location in Florida tends to make for sluggish response when deleting mail or simply scanning a folder. It’s so fast now it’s like a local service. (People who use the more popular POP3 e-mail protocol won’t understand this, so don’t worry about it – when you delete an e-mail it’s a local copy, but mine is on the network.)

So the main effect of this super-fat Internet pipe is to make network services and content as readily accessible as local services and content. Which is a very wonderful thing for a couple of reasons: accessing content and services from the various machines I have connected to the Internet from home involves maintenance and security hassles that aren’t always worthwhile, so it’s convenient to outsource data to a system in the cloud that’s secure, well maintained, and backed up. It’s very easy to do that now, all the way around. And for the data that I still access locally, such as media files and the like, an off-site backup will be very painless.

One of the next exercises is going to be media streaming from my server in Florida to my TV in California, after I’ve got all my data encrypted and backed up. At this point, I’ve got three devices at home connected to the Internet that are something other than general-purpose computers: a TiVo, a Blu-Ray player that also does Netflix streaming, and a Blackberry that goes to the web via 802.11a/g Wi-Fi. At any given time, I’ve got two to four general-purpose computers on the ‘net as well (more if we count virtual machines,) so it’s clear that the balance is turning in the direction of the special-purpose machines. This is what makes Zittrain sad, but it shouldn’t. It’s in the nature of general-purpose systems not to require much multiplication; one that’s fast but stationary and another that’s lighter and mobile and one more that’s super light and ultra-mobile is about all you’ll ever need. But special purpose machines multiply like rabbits, as more and more purposes are discovered for networked devices.

So the future is obviously going to embrace more specialized (“sterile tethered appliance”) machines than general purpose machines; that’s a given. The “Future of the Internet” question is actually whether the general-purpose machines also become more powerful and capable of doing more things than they do now. In other words, don’t just count machines, count functions and applications. The failure to understand this issue is Zittrain’s fundamental error. (Gee, the fast Internet made me smarter already.)

Attaching a controller/monitor to my aquarium that I can access across the Internet is the next exercise, and after that some security cameras and an outdoor Wi-Fi access point. It never ends.