Open Networks and Regulation

Thomas Hazlett, a professor at George Mason and one of the smartest people writing about telecom regulation today, has an interesting column about the iPhone that’s largely framed as a rebuttal to this Slate column by Columbia law professor Tim Wu. Wu’s column, published the week of the iPhone launch, argues that the iPhone isn’t truly revolutionary because, like other cell phones, it’s a “walled garden.” It only works with AT&T’s wireless service, and it only offers the features that Apple and AT&T have approved ahead of time. A truly revolutionary phone, Wu says, would be an open platform that would allow third parties to develop new applications and services.

Hazlett, in contrast, feels that Apple’s walled garden represents the ingenuity of the market process:

Apple could have offered its device as an “open” platform, but instead chose (as with iTunes, iPods and Apple computers) to control how it builds, and how buyers use, its product. It aims for competitive superiority. Quashing its model bops the innovator on the head.

Unbundling phones from networks is suggested as a policy fix in the US. European phones, working with different Sim cards across carriers and borders, are the model. Innovation in the European Union is said to flourish. But the iPhone came first to the US, as did the BlackBerry and advanced broadband networks using CDMA data formats. That is not surprising given that US networks are afforded wide latitude in designing their systems. Licences in the EU mandate a GSM standard. What is recommended as “open” in fact deprives customers of a most basic cellular choice: technology.

Personally, I think they’re both right. Hazlett is right that government regulation of spectrum is a bad idea, and that robust property rights are far preferable. But Wu is right that open platforms tend to be more innovative than closed platforms. For example, during the 1990s the Internet’s open architecture allowed the creation of dozens of innovative startups like Netscape, Yahoo, and Google. The closed networks of companies like AOL and CompuServe simply couldn’t compete. There’s every reason to think a similar explosion of innovation would happen if it became easier for third parties to build new wireless devices and applications.

And indeed, if you read Wu’s article closely, nowhere does it advocate government regulation. Wu’s article is about technology and economics, not public policy. It doesn’t say anything a libertarian couldn’t whole-heartedly endorse. Of course, Wu has argued elsewhere in support of government regulations that would force wireless networks to be more open. And I think he’s wrong about that—you can listen to a conversation Wu and I had on the subject back in June here. But it’s entirely possible to agree with his technological point about the merits of open networks without jumping to the conclusion that government regulations are called for.

Indeed, I think it’s important that when libertarians argue in opposition to some government regulation, we not fall into the trap of reflexively opposing the goal the regulation is trying to achieve. There are a lot of computer geeks who are passionate advocates of open networks because they believe (correctly, in my view) that open networks tend to provide greater opportunities for entrepreneurship. The argument that closed networks are superior is not only dubious on its merits, but also guaranteed to drive a lot of people into the arms of the pro-regulatory side. I think it’s far better to leave debates about network architecture to the geeks and focus on the more fundamental point that government regulations inevitably have unintended consequences such as regulatory capture.

Health Outcomes & Equity: the U.S. vs. Canada

Former CBO director June O’Neill and Dave O’Neill have a working paper comparing health outcomes in the semi-socialized U.S. health care sector and the fully socialized Canadian Medicare system. From the abstract:

Does Canada’s publicly funded, single payer health care system deliver better health outcomes and distribute health resources more equitably than the multi-payer heavily private U.S. system? We show that the efficacy of health care systems cannot be usefully evaluated by comparisons of infant mortality and life expectancy. We analyze several alternative measures of health status… We find a somewhat higher incidence of chronic health conditions in the U.S. than in Canada but somewhat greater U.S. access to treatment for these conditions. Moreover, a significantly higher percentage of U.S. women and men are screened for major forms of cancer. Although health status, measured in various ways, is similar in both countries, mortality/incidence ratios for various cancers tend to be higher in Canada… We also find that Canada has no more abolished the tendency for health status to improve with income than have other countries. Indeed, the health-income gradient is slightly steeper in Canada than it is in the U.S.

There’s also this interesting observation from their concluding comments:

The need to ration when care is delivered “free” ultimately leads to long waits or unavailable services and to unmet needs. In the U.S. costs are more often a source of unmet needs. But costs may be more easily overcome than the absence of services. When asked about satisfaction with health services and the ranking of the quality of services recently received, more U.S. residents than Canadians respond that they are fully satisfied and rank quality of care as excellent.

And this crucial caveat:

One important issue that we do not address concerns the large differential in per capita health care expenditures which are about twice as large in the U.S. Is the U.S. getting sufficient additional benefits to justify these greater expenditures and where should we cut back if cutbacks must be made? Alternatively, what would Canada have to spend to increase their technical capital and specialized medical personnel to match American levels or to eliminate the longer waiting times? And would it be worthwhile to them to do so? To answer these questions more research is needed…

Whitman on an Individual Health Insurance Mandate

Economist and blogger Glen Whitman has an excellent article in the latest Cato Policy Report on the newest fad in health policy: requiring people to purchase health insurance, a.k.a. an “individual mandate.” Hillary, Arnold, Mitt, John … the kids, they just love this individual mandate! If you read only one article on the topic, let this be it.

Whitman adds a postscript to that article in a recent post:

Something I don’t mention in the article is why some free-market types support the individual mandate. In short, I think the reason is that they have given too little attention to the political dynamics of such a mandate, instead naively assuming that the mandate could be crafted once-and-for-all in a wise and lobbying-resistant fashion.

More on Google/DoubleClick

Related to my last post, the New York Times’ technology blog has an excellent write-up about the case that illustrates just how arbitrary the standards in antitrust merger reviews can be:

Google’s $3.1 billion deal in April to buy DoubleClick set off a wave of advertising acquisitions by Microsoft, Yahoo, AOL and others. All of those deals have been completed, but Google is still waiting while regulators in Washington and Europe consider the antitrust and privacy implications of its proposed combination…

Several consumer groups are opposing the merger because they fear that Google and DoubleClick will have too much information about Internet users. Most observers suggest that although the commission regulates privacy, its opinion of the merger must reflect antitrust issues only. The groups opposing the deal have argued that there are in fact precedents for the commission to take privacy concerns into account. (The Electronic Privacy Information Center, which is against the merger, lists many documents supporting its view here. Google’s take is here.)

In the end, Mr. Lindsay writes that Google may well be forced to accept some limitations on its use of data about Internet users. It may be required in the United States to anonymize data about users after 18 months, something it already agreed to do with European regulators. And there may be some limits imposed on how data from DoubleClick’s ad serving system can be used by Google.

Now, as our own Jim Harper will be the first to tell you, there are reasons for consumers to be concerned about the data-retention policies of large Internet companies, Google included. But if new regulations about online privacy are needed, those regulations should be proposed and debated in Congress. It’s totally inappropriate for government regulators who are supposed to review a merger only on antitrust grounds to use that review as a pretext for singling the company out for special, extra-legal privacy regulations. Whatever problems Google’s privacy policies might have, they’re certainly not attributable to monopoly power on Google’s part: even after the merger, Google would have less than a third of the online advertising market.

Unfortunately, the lesson Google is likely to learn from this ordeal is the same one Microsoft learned a decade ago: you can never hire too many lobbyists. Regardless of what the law might say, Washington insiders will find ways to punish successful companies that don’t spend resources cultivating influence in Washington. Is it any wonder that Google has been pouring millions of dollars into a beefed-up Washington presence?

Dueling Antitrust Complaints

A decade ago, Cato scholars argued that the Justice Department’s antitrust case against Microsoft was a witch hunt instigated at the behest of Microsoft’s competitors. They also warned about the inevitably harmful consequences of the politicization of the technology industry. Once technology firms succeed in hobbling a competitor using antitrust law, other companies are likely to respond in kind, leading to a never-ending stream of antitrust litigation.

That prediction has been borne out in spades, as Microsoft, once a principled critic of antitrust law, has discovered the joys of using antitrust as a competitive weapon. This week we learn that Microsoft has enlisted the assistance of a public relations firm to build support for blocking Google’s acquisition of DoubleClick on antitrust grounds. Never mind that there are dozens of firms in the highly competitive online advertising industry, including aQuantive, a company Microsoft snapped up for $6 billion back in May.

Of course, Google’s hands aren’t clean either. In June, we learned that Google had asked the Justice Department to investigate Microsoft for “bundling” search functionality with its operating system, despite the fact that desktop search has been a standard feature of operating systems for decades.

Unfortunately, we seem to have opened a Pandora’s box that will be difficult to close. It’s a shame that Microsoft has backed down from its former, principled stance on antitrust, but it’s hardly surprising. Filing frivolous antitrust complaints is now just a part of doing business in the software industry. That’s great for antitrust lawyers, but it’s hard to see how anyone else benefits.

You Call That Rethinking?

In a maddening discussion with Robert Wright, AEI scholar David Frum promises a “rethinking” of his views on Iraq but, unsurprisingly, I suppose, provides no such thing. I’ll leave it to C@L readers to stomach as much of it as they can.

But at times like this, I am reminded of Anatol Lieven’s takedown of Eliot Cohen in The National Interest:

by contributing in this way to a hasty, poorly-planned military operation, it must be repeated that Dr. Cohen took on himself a measure of the moral, intellectual and political responsibility for precisely those U.S. administration mistakes in Iraq which he now denounces, and which have cost so many American lives. It is disappointing—though not surprising—that Dr. Cohen himself does not realize that this record demands from him, as an honorable man, a lengthy period of quiet, private reflection on his mistakes and the reasons for them.

Lieven is absolutely right, but if his advice were followed, housing prices in Northern Virginia could well plummet as the neocon commentariat flees for the hills to contemplate the error of their ways. We probably shouldn’t hold our breath.

Test Score Story the Media Will Miss

The latest 4th and 8th grade test scores for “The Nation’s Report Card,” or National Assessment of Educational Progress, were released this morning. They show improvement in reading and math, particularly in the 4th grade.

The story that the media will report will revolve around claims by No Child Left Behind advocates that their law is responsible for these improvements. In reality, NCLB almost certainly has little to do with these results, since they simply continue patterns that date back at least to 1990 – a dozen years before the law was passed.

But that’s not the real story. The real story is that none of these improvements persist through to the end of high school. What families and business leaders care about is how well students are prepared for life and work at the end of high school. As the NAEP Long Term Trend results show, the mathematics achievement of 17-year-olds has been flat since 1990, and their reading achievement has actually declined. In fact, achievement among 17-year-olds has been flat or declining in math, reading, and science since the first NAEP tests were administered in the late 1960s and early 1970s – despite the fact that real spending has doubled to more than $11,000 per pupil over that period.

What that means is that the improvements in the earliest grades simply represent a shift in when learning happens, not an increase in what students ultimately learn. We are, in the hackneyed phrase, merely rearranging the deck chairs on the Titanic as it continues to slip beneath the waves.

That’s the sad but true story that the American people need to be told.