Critics of Susan Crawford, a member of Obama’s FCC transition team, have correctly pointed out that she’s made some odd-sounding remarks to the effect that Internet access is just another utility, like water, power, or sewer service. If the intent of this remark is to suggest that everybody needs access to the Internet these days, just as much (or nearly as much) as we need electricity and running water, nobody has much of a problem with the observation, other than the standard objections to hyperbole.
But utilities in the USA tend to be provided by the government, so it’s reasonable to draw the implication from Crawford’s comparison that the government should be providing Internet access. This interpretation is underscored by her frequent complaints about the US’s ranking versus other countries in the broadband speed and price sweepstakes.
If you continually advocate for more aggressive spending to win a supposed international broadband arms race, minimize the effectiveness of private investment, and tout the planet’s fastest and cheapest Internet service as a necessity of life, you’re going to be regarded as either a garden-variety socialist or an impractical dreamer.
Some people have taken this “utility” notion to heart, obviously, and have launched various campaigns to bring publicly-funded Internet service to American towns and cities, with decidedly mixed results. Some small towns like Morristown, TN (a place where I spent a good deal of my childhood) have been successful in bringing fiber-to-the-home ahead of their telephone company’s schedule by passing revenue bonds and spending large sums of money. Other towns have been much less successful, creating half-built and poorly run network fragments that have to be sold to private companies to provide any service at all.
Another such example has turned up in Alameda, CA, a little island town south of Oakland. Alameda Power and Telecom built a muni cable system that’s just gone bust:
Comcast Corp. has put in the winning bid to buy the financially strapped municipal cable operation run by Alameda Power & Telecom in the San Francisco Bay Area.
According to a report this week by the muni, Comcast will pay $15 million for the system, which has 9,500 video customers and 6,600 data customers in the community east across the bay from San Francisco.
The municipal utility, which began as an electric company, launched video services in 2001 to leverage its existing infrastructure. However, the cable utility funded its plant and operations with revenue bond anticipation notes. A balloon payment is due next June on those notes of $35 million.
It’s not clear why the Alameda utility felt they had to undertake this effort, as AT&T and Comcast serve the area with perfectly acceptable broadband offerings, but such is life in the Bay Area. There’s no doubt that the system is a real mess. From the presentation to the public utilities board advocating the sale, we get this summary:
Summary of Telecom System Performance
• Revenues
– Limited by competition with Comcast, AT&T and satellite providers
– Cable subscriptions are declining slowly
– Internet subscriptions are growing slowly
• Expenses
– Labor costs are relatively high compared to non-municipal telecom providers
– Programming costs continue to escalate
– Alameda P&T has cut costs significantly (including 1/3 reduction in staff)
• Net revenues are now becoming slightly positive, but substantially less than the interest payments on the Telecom system debt
Alameda Power and Telecom is the oldest municipal utility west of the Mississippi, founded in 1887. If any utility should be capable of handling the intricacies of financing and operation, it’s this one.
One example of a public utility failing to deliver on its promised Internet access service doesn’t prove much of anything, of course. The particular approach taken to financing the effort was to blame for the failure, and that approach was doubtless influenced by the current state of the credit markets. The new ballparks on the drawing boards in the Bay Area for soccer, football, and the National Pastime are also in jeopardy. But why in the world do these municipal utilities want to compete with the private sector for services that are at best simply equivalent? The Morristown project is succeeding because it offers a superior service, not a me-too offering at a lower price.
The annals of Muni Wi-Fi are also littered with tales of failure, as Glenn Fleishman points out from his new perch at Ars Technica today. If we’ve learned anything at all from the history of Internet-as-utility, it’s that this strained analogy only applies in cases where there is no existing infrastructure, and the story probably ends best when a publicly-financed project is sold (or at least leased) to a private company for upgrades and management. We should be especially suspicious of projects aimed at providing Wi-Fi mesh, because mesh networks are slow as molasses on a winter’s day.
I don’t see any examples of long-term success in the publicly-owned and operated networking space. And I also don’t see any examples of publicly-owned and operated Internet service providers doing any of the heavy lifting in the maintenance of the Internet protocols, a never-ending process that’s vital to the continuing growth of the Internet. Comcast, for example, recently conducted an experiment to measure the effectiveness of P4P, a location-sensitive variation on P2P file transfer protocols. In a report to the IETF, Comcast found that download speeds were improved by as much as 80%, a win-win for the ISP and the consumer. The ISP wins, of course, by reducing traffic to and from other networks. While the usual conspiracy theorists are pooh-poohing the P4P method because it’s not helpful to pirates (content has to be clearly identified), this is the kind of advance that the Internet deeply depends on. While anybody can build a new application without permission easily enough, making it work well depends on broad cooperation and (frequently) a bit of re-engineering. I don’t see the government-owned and -operated networks helping, do you?
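To make the idea concrete, here’s a rough sketch of the kind of locality-aware peer selection a P4P-style system performs. This isn’t Comcast’s implementation; the peer list, ASNs, and scoring below are made up for illustration, and a real deployment exchanges network maps between the ISP and the tracker. The gist, though, is simply to hand out on-net peers before reaching across a peering link.

```python
import random

# Hypothetical peer records: (peer_id, autonomous_system_number).
# The ASNs below are from the documentation range; they stand in for
# whatever network map a real P4P tracker would consult.
PEERS = [
    ("peer-a", 64496), ("peer-b", 64496), ("peer-c", 64497),
    ("peer-d", 64498), ("peer-e", 64496), ("peer-f", 64497),
]

def select_peers(my_asn, peers, want=3):
    """Prefer peers inside the requester's own network (same ASN);
    fall back to off-net peers only when the on-net pool runs dry."""
    on_net = [p for p in peers if p[1] == my_asn]
    off_net = [p for p in peers if p[1] != my_asn]
    random.shuffle(on_net)
    random.shuffle(off_net)
    return (on_net + off_net)[:want]

# A subscriber on AS 64496 gets on-net peers first, which keeps most of
# the transfer off the peering links and shortens the network path.
print(select_peers(64496, PEERS))
```

Shorter paths and fewer peering-link crossings are where both the consumer’s speed gains and the ISP’s transit savings come from.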
So I’d much rather we continue with private ownership and meaningful government oversight for Internet access than turn the problem lock, stock, and barrel over to the state. But that means we’re going to need some regulators who actually know what they’re regulating, who can take the protocols apart and put them back together again, who don’t depend on staff reports, and who don’t have a pre-determined agenda.
How much longer will we have to wait for that? If the transition team does its job right, not long. But there’s a great deal of uncertainty around this whole procedure. This is the first transition in history that’s had to re-task the FCC toward the Internet and away from TV and radio censorship, so whatever they do is going to be ground-breaking in some sense. It’s very exciting to watch, especially since the new season of 24 doesn’t start until their work is done.
Technorati Tags: net neutrality, FCC, broadband
Richard, you’re correct that the model of Internet service as a “utility” doesn’t work. This is because the very design of the Internet was intended to depart from this model. When the Internet was created, we already had large, centrally owned and managed telephone monopolies.
The idea of the Internet protocol was to allow traffic to be carried between data networks that were NOT all owned and managed by a single entity, and had DIFFERENT rules of operation — drafted not by some central authority or by the government but by the owners of the networks. This decentralization of control and of authority may be anathema to those whose business is centralized regulation (or lawyers, such as Ms. Crawford, who can be assured of lifetime employment dealing with regulatory bodies), but it is part and parcel of the way the Internet works.
The only situation in which government should intervene in the Internet is in the case of anticompetitive tactics by providers — which are concentrated not on the level of local ISPs such as Comcast but in the markets for backbone bandwidth and wholesale transport. In short, Ms. Crawford and the “network neutrality” lobbyists are barking very much up the wrong tree.
By the way, in your message, there is one thing that you neglected to mention about “P4P.” Because it attempts to lighten the load on providers by causing file transfers to happen within their own networks, it raises two concerns. First of all, it does nothing to ease problems in situations where there are real constraints on bandwidth — e.g. for cellular data. For cellular carriers, the most scarce and precious resource is spectrum — and even if data flows only within the carrier’s network, spectrum is consumed just as heavily by P4P as by any other form of P2P.
Secondly, P4P discriminates against small, competitive, and independent providers. After all, the chances are far smaller that a file is available on the same provider’s network when the provider itself has fewer subscribers! The result of this, if you work out the statistics, is that requiring providers to allow P4P imposes a burden that is roughly inversely proportional to the SQUARE of the number of subscribers. In other words, if Comcast and “Mom and Pop Internet” (a competitive ISP which is 1/100th the size of Comcast) both allow P4P traffic on their networks, the backbone traffic cost to “Mom and Pop” network is 10,000 times as great. So, we may well see larger ISPs embracing P4P as an anticompetitive tool — a good reason NOT to require providers to carry it.
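To put rough numbers on that, here is a toy calculation (Python, with made-up subscriber counts and a deliberately simple swarm model): if the chance that a wanted piece is available on-net equals the provider’s share of the peer population, then the total backbone traffic a provider avoids by allowing P4P grows with the square of its subscriber count, so a provider 1/100th the size sees roughly 1/10,000th of the benefit. The exact exponent depends on how you model the swarm, but the direction is clear.

```python
# Toy model of how much backbone traffic P4P-style locality saves an ISP.
# Assumptions (made up for illustration): every subscriber downloads the
# same amount via P2P, and the chance that a wanted piece is available
# on-net equals the ISP's share of the total peer population.
TOTAL_PEERS = 1_000_000      # peers in the swarm, across all ISPs
BYTES_PER_SUB = 1.0          # normalized P2P download volume per subscriber

def backbone_savings(subscribers):
    """Total bytes kept off the backbone by preferring on-net peers."""
    on_net_share = subscribers / TOTAL_PEERS
    return subscribers * BYTES_PER_SUB * on_net_share   # grows with N**2

big, small = 500_000, 5_000  # hypothetical ISPs, 100x apart in size
print(backbone_savings(big) / backbone_savings(small))  # -> 10000.0
```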
Far be it from me to insist that any ISP be required to implement P4P or any other such thing; I mention it as an example of the kind of work that’s needed on an ongoing basis to keep the Internet running well. The advocates of ISP regulation like to claim that “nobody needs permission to innovate on the Internet,” but the truth is a bit more complicated than that.
All “innovations,” loosely defined as popular new applications, alter the mix of traffic within and across the private networks that make up the Internet, and it always takes a lot of engineering and a great deal of investment to digest them. The publicly-owned networks don’t contribute to this engineering ecosystem, preferring to ride on the backs of the capitalists who do the hard work.
Similarly, every ISP network connected to the others through the Internet is engaged in a never-ending struggle to increase capacity, and in the concomitant struggle to find the capital it takes to do so (and I don’t need to tell you that, of course, as you’re in the middle of it).
It becomes clearer each day that the real problems with American broadband are not so much at the first and last mile as they are at the backhaul and peering points, so all this net neutrality emphasis on “end-to-end principles” is nothing more than a distraction. The “net-to-net principle” is the important one.
I do believe there is promise in the Internet-as-utility model, although it doesn’t necessarily need to be a “public utility” as we understand it. The argument for the Internet utility model, as I see it, is not about access but about consistent speed of service. There are plenty of public and private sector plans to extend broadband Internet to the places that don’t have it. The problem is that there is no consistent choice of medium, no definition of broadband speed, and inconsistent quality of service from medium to medium.
The media currently in play for the last mile are wireless spectrum, coaxial cable, twisted pair copper, fiber optics, and power lines. The media in play for the middle mile and backbone are the same minus twisted pair and coax. The only one of these media that offers consistent QoS, high speeds, and low maintenance, and is, most importantly, future-proof within a reasonable time span, is fiber.
If we set a policy goal that we want broadband Internet for all, and we don’t want to constantly reshape the infrastructure to meet new demand and new bandwidth-eating killer apps, we need to look to fiber for all. Just as rural electrification allowed for a competitive marketplace for refrigerators and the other home appliances we enjoy reasonably cheaply today, a uniform fiber infrastructure would shift competition away from the infrastructure and toward the application market, which has low sunk costs and low barriers to entry.
We have an infrastructure market failure. Let’s roll cheap fiber out to every home and let the tax revenue and positive externalities from the resulting application market cover the cost.
Consistent QoS isn’t something the Internet as currently operated can provide. This is because it’s not a single network operated by a single provider, but rather a loose federation of networks operated by thousands of owners who apply their own management policies.
End-to-end QoS for some applications and/or data streams is achievable only by tweaking BGP (the Internet’s network-to-network routing protocol) or by private contractual agreements between networks. The core is already all fiber, as is most of the middle, but it’s not the wiring that makes QoS happen or not happen, it’s the traffic load.
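As a small illustration of why those tweaks and agreements are needed at all: an application can ask for priority treatment by marking its packets (the DSCP field in the IP header), but the mark is only a request to the first network; every downstream network applies its own policy and is free to ignore or rewrite it. A minimal sketch, assuming a Linux host, a UDP socket, and a documentation-range destination address:

```python
import socket

# DSCP "Expedited Forwarding" (46) occupies the top six bits of the IP
# TOS byte, so the value passed to IP_TOS is 46 << 2.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# The mark travels with each packet, but it is only honored as far as the
# local network's policy allows; a transit or peering partner may remark
# it to best effort, which is why end-to-end QoS takes inter-network
# agreements rather than just a bit in the header.
sock.sendto(b"hello", ("192.0.2.1", 9))   # 192.0.2.1 is a documentation address
```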
Richard is correct. What’s more, fiber isn’t a one-medium-fits-all solution for broadband. If you have fiber running across the country and it happens to pass a single home that has no other way of connecting to the Net, current technology makes it completely financially infeasible to cut into that fiber to serve that one customer. But wireless is much more flexible and can do the job. It is much better in situations where population density is low. (In fact, given a rational spectrum policy that gives it enough bandwidth to work in, it can have advantages in areas where the density is high as well.) As an engineer who operates an ISP (the first wireless ISP, in fact), I pick the technology that’s best for the job. (And, yes, even though wireless is our specialty, I do run fiber or copper if it’s the right solution.) All of these technologies have different strengths and weaknesses in different situations, and it’s inappropriate to rule any one of them out or declare any one of them to be a panacea.