Proponents of network neutrality regulation are cheering the announcement this week that the Federal Communications Commission will seek to reclassify Internet Service Providers as “common carriers” under Title II of the Communications Act. The move would trigger broad regulatory powers over Internet providers—some of which, such as the authority to impose price controls, the FCC has said it will “forbear” from asserting—in the name of “preserving the open internet.”
Two initial thoughts:
First, the scope of the move reminds us that “net neutrality” has always been somewhat nebulously defined and therefore open to mission creep. To the extent there was any consensus definition, net neutrality was originally understood as being fundamentally about how ISPs like Comcast or Verizon treat data packets being sent to users, and whether the companies deliberately configured their routers to speed up or slow down certain traffic. Other factors that might affect the speed or quality of service—such as peering and interconnection agreements between ISPs and large content providers or backbone intermediaries—were understood to be a separate issue. In other words, net neutrality was satisfied so long as Comcast was treating packets equally once they’d reached Comcast’s network. Disputes over who should bear the cost of upgrading the connections between networks—though obviously relevant to the broader question of how quickly end-users could reach different services—were another matter.
Now the FCC will also concern itself with these contracts between corporations, giving content providers a fairly large cudgel to brandish against ISPs if they’re not happy with the peering terms on offer. In practice, even a “treat all packets equally” rule was going to be more complicated than it sounds on its face, because the FCC would still have to distinguish between permitted “reasonable network management practices” and impermissible “packet discrimination.” But that’s simplicity itself next to the problem of determining, on a case-by-case basis, when the terms of a complex interconnection contract between two large corporations are “unfair” or “unreasonable.”
Second, it remains pretty incredible to me that we’re moving toward a broad preemptive regulatory intervention before we’ve even seen what deviations from neutrality look like in practice. Nobody, myself included, wants to see the “nightmare scenario” where ISPs attempt to turn the Internet into a “walled garden” whose users can only access the sites of their ISP’s corporate partners at usable speeds, or where ISPs act to throttle businesses that might interfere with their revenue streams from (say) cable television or voice services. There are certainly hypothetical scenarios that could play out where I’d agree intervention was justified—though I’d also expect targeted interventions by agencies like the Federal Trade Commission to be the most sensible first resort in those cases.
Instead, the FCC is preparing to impose a blanket regulatory structure—including open-ended authority to police unspecified “future conduct” of which it disapproves—in the absence of any sense of what deviations from neutrality might look like in practice. Are there models that might allow broadband to be cheaper or more fairly priced for users—where, let’s say, you buy a medium-speed package for most traffic, but Netflix pays to have high-definition movies streamed to their subscribers at a higher speed? I don’t know, but it would be interesting to find out. Instead, users who want any of their traffic delivered at the highest speed will have to continue paying for all their traffic to be delivered at that speed, whether they need it or not. The extreme version of this is the controversy over “zero-rating” in the developing world, where the Orthodox Neutralite position is that it’s better for those who can’t afford mobile Internet access to go without than to let companies like Facebook and Wikipedia provide poor people with subsidized free access to their sites.
The deep irony here is that “permissionless innovation” has been one of the clarion calls of proponents of neutrality regulation. The idea is that companies at the “edge” of the network introducing new services should be able to launch them without having to negotiate with every ISP in order to get their traffic carried at an acceptable speed. Users like that principle too; it’s why services like CompuServe and AOL ultimately had to abandon a “walled garden” model that gave customers access only to a select set of curated services.
But there’s another kind of permissionless innovation that the FCC’s decision is designed to preclude: innovation in business models and routing policies. As Neutralites love to point out, the neutral or “end-to-end” model has served the Internet pretty well over the past two decades. But is the model that worked for moving static, text-heavy webpages over phone lines also the optimal model for streaming video wirelessly to mobile devices? Are we sure it’s the best possible model, not just now but for all time? Are there different ways of routing traffic, or of dividing up the cost of moving packets from content providers, that might lower costs or improve quality of service? Again, I’m not certain—but I am certain we’re unlikely to find out if providers don’t get to run the experiment. It seems to me that the only reason not to want to find out is the fear that some consumers will like the results of at least some of these experiments, making it politically more difficult to entrench the sacred principle of neutrality in law. After all, you’d think that if provider deviations from neutrality in the future prove uniformly and manifestly bad for consumers or for innovation, it will only be easier to make the case for regulation.
As I argued a few years back, common carrier regimes might make sense when you’re fairly certain there’s more inertia in your infrastructure than in your regulatory structure. Networks of highways and water pipes change slowly, and it’s a good bet that a sound rule today will be a sound rule in a few years. The costs imposed by lag in the regulatory regime aren’t outrageously high, because even if someone came up with a better or cheaper way to get water to people’s homes, reengineering physical networks of pipes is going to be a pretty slow process. But wireless broadband is not a network of pipes, or even a series of tubes. Unless we’re absolutely certain we already know the best way to price and route data packets—both through fiber and over the air—there is something perverse about a regulatory approach that precludes experimentation in the name of “innovation.”