
Response to Conor Clarke, Part I

Last week Conor Clarke at The Atlantic blog, apparently as part of a running argument with Jim Manzi, raised four substantive issues with my study, “What to Do About Climate Change,” which Cato published last year. Mr. Clarke deserves a response, and I apologize for not getting to this sooner. Today, I’ll address the first part of his first comment. I’ll address the rest of his comments over the next few days.

Conor Clarke: 

(1) Goklany’s analysis does not extend beyond the 21st century. This is a problem for two reasons. First, climate change has no plans to close shop in 2100. Even if you believe GDP will be higher in 2100 with unfettered global warming than without, it’s not obvious that GDP would be higher in the year 2200 or 2300 or 3758. (This depends crucially on the rate of technological progress, and as Goklany’s paper acknowledges, that’s difficult to model.) Second, the possibility of “catastrophic” climate change events – those with low probability but extremely high cost – becomes real after 2100.

Response: First, I wouldn’t put too much stock in analyses purporting to extend out to the end of the 21st century, let alone beyond it, for numerous reasons, some of which are laid out on pp. 2-3 of the Cato study. As noted there, according to a paper commissioned for the Stern Review, “changes in socioeconomic systems cannot be projected semi-realistically for more than 5–10 years at a time.”

Second, regarding Mr. Clarke’s statement that, “Even if you believe GDP will be higher in 2100 with unfettered global warming than without, it’s not obvious that GDP would be higher in the year 2200 or 2300 or 3758,” I should note that the conclusion that net welfare in 2100 (measured by net GDP per capita) would be higher in the warmest world is not based on a belief. It follows inexorably from Stern’s own analysis.

Third, despite my skepticism of long-term estimates, I have, for the sake of argument, extended the calculation to 2200. See here. Once again, I used the Stern Review’s estimates, not because I think they are particularly credible (see below), but for the sake of argument. Specifically, I assumed that losses in welfare due to climate change under the IPCC’s warmest scenario would, per the Stern Review’s 95th-percentile estimate, be equivalent to 35.2 percent of GDP in 2200. [Recall that Stern’s estimates account for losses due to market impacts, non-market (i.e., environmental and public health) impacts, and the risk of catastrophe, so one can’t argue that only market impacts were considered.]

The results, summarized in the following figure, indicate that even if one uses the Stern Review’s inflated impact estimates under the warmest IPCC scenario, net GDP in 2200 ought to be higher in the warmest world than in cooler worlds for both developing and industrialized countries.


[Figure: Net GDP per capita in 1990 and 2200 for developing and industrialized countries under the IPCC scenarios, net of the Stern Review’s estimated costs of climate change.]

Source: Indur M. Goklany, “Discounting the Future,” Regulation 32: 36-40 (Spring 2009).

The costs of climate change used to develop the above figure are most likely overestimated, because they do not properly account for increases in future adaptive capacity consistent with the level of net economic development resulting from Stern’s own estimates (as shown in the above figure). The figure shows that even after accounting for losses in GDP per capita due to climate change – and inflating these losses – net GDP per capita in 2200 would be between 16 and 85 times higher than it was in the baseline year (1990). No less important, Stern’s estimates of the costs of climate change neglect secular technological change that ought to occur during the 210-year period extending from the base year (1990) to 2200. In fact, as shown here, empirical data show that for most environmental indicators that have a critical effect on human well-being, technology has, over decades-long time frames, reduced impacts by one or more orders of magnitude.
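For readers who want to trace the arithmetic behind those multiples, here is a minimal sketch of the calculation. Only the 35.2 percent loss figure comes from the Stern Review estimate cited above; the GDP-per-capita values below are illustrative placeholders, not numbers from the study.

```python
# Sketch of the net-GDP-per-capita calculation described above. The 35.2%
# loss is the Stern Review's 95th-percentile estimate for 2200 (from the
# text); the GDP-per-capita figures below are illustrative placeholders only.

stern_loss_2200 = 0.352        # climate change losses as a share of GDP

baseline_1990 = 1000           # hypothetical GDP per capita in the base year
projected_2200 = 40_000        # hypothetical unadjusted projection for 2200

net_2200 = projected_2200 * (1 - stern_loss_2200)  # net of climate losses
ratio = net_2200 / baseline_1990                   # multiple of the 1990 baseline

print(f"Net GDP per capita in 2200: {net_2200:,.0f}")   # 25,920 with these inputs
print(f"Multiple of 1990 baseline:  {ratio:.1f}x")      # ~25.9x, within the 16-85x range
```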

As a gedanken experiment, compare technology (and civilization’s adaptive capacity) in 1799 versus 2009. How credible would a projection for 2009 have been if it didn’t account for technological change from 1799 to 2009?

I should note that some people tend to dismiss the above GDP estimates on the grounds that it is unlikely that economic development, particularly in today’s developing countries, will be as high as the figure indicates. My response is that these estimates are based on the very assumptions that drive the IPCC’s and the Stern Review’s emissions and climate change scenarios. So if one disbelieves the above GDP estimates, then one should also disbelieve the IPCC’s and the Stern Review’s projections for the future.

Fourth, even if an analysis that appropriately accounted for increases in adaptive capacity had shown that in 2200 people would be worse off in the richest-but-warmest world than in cooler worlds, I wouldn’t get too excited just yet. Even assuming a 100-year lag time between the initiation of emission reductions and a reduction in global temperature – because of a combination of the inertia of the climate system and the turnover time of the energy infrastructure – we don’t need to do anything drastic until after 2100 (that is, 2200 minus 100 years), unless monitoring shows before then that matters are actually becoming worse (as opposed to merely changing), in which case we should certainly mobilize our responses. [Note that change doesn’t necessarily equate to worsening. One has to show that a change would be for the worse. Unfortunately, much of the climate change literature skips this crucial step.]

In fact, waiting-and-preparing-while-we-watch (AKA watch-and-wait) makes the most sense, just as it does for many problems (e.g., some cancers) where the cost of action is currently high relative to its benefit, benefits are uncertain, and technological change could relatively rapidly improve the cost-benefit ratio of controls. Within the next few decades, we should have a much better understanding of climate change and its impacts, and the cost of controls ought to decline, particularly if we invest in research and development for mitigation. In the meantime we should spend our resources on solving today’s first-order problems – and climate change simply doesn’t make that list, as shown by the only exercises that have ever bothered to compare the importance of climate change relative to other global problems. See here and here. As is shown in the Cato paper (and elsewhere), this also ought to reduce vulnerability and increase resiliency to climate change.

In the next installment, I’ll address the second part of Mr. Clarke’s first point, namely, the fear that “the possibility of ‘catastrophic’ climate change events – those with low probability but extremely high cost – becomes real after 2100.”

Debate over Duncan’s Record in Chicago

At The Quick and the Ed, Chad Aldeman disputes my assertion that Duncan’s impact on Chicago public school achievement was near zero. To make his case, Aldeman cites the fact that scores rose during Duncan’s tenure on 3 of the 4 available NAEP tests. While true, this evidence actually supports my assertion rather than Aldeman’s or Duncan’s.

Chicago’s gains on the NAEP tests ranged from 0.3 to 7.2 points on the 500-point scale, averaging out to a 1% increase in scale scores. I think 1% is pretty darn close to zero, and that’s what I said.
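One plausible way to read that 1% figure is as the mean of each test’s gain relative to its starting score. A minimal sketch of that reading follows; the starting scores and the middle two gains are hypothetical placeholders, not the actual Chicago results.

```python
# Average percent change in scale scores: mean of (gain / starting score).
# Starting scores are hypothetical placeholders, NOT actual NAEP data; the
# gains merely span the 0.3-7.2 point range cited in the text.

starting_scores = [210, 220, 250, 260]   # hypothetical pre-tenure scale scores
gains           = [0.3, 1.0, 2.0, 7.2]   # point gains within the cited range

pct_changes = [g / s * 100 for g, s in zip(gains, starting_scores)]
avg = sum(pct_changes) / len(pct_changes)
print(f"Average change in scale scores: {avg:.1f}%")   # ~1.0% with these placeholders
```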

What’s more, as I wrote yesterday, the minuscule 1% improvement in Chicago NAEP scores was statistically identical to the improvement made by students in large central cities all over the country during the same period, so “The Duncan Effect” – his value-added over other large city superintendents – was precisely zero.

If there are other relevant data that I’m unaware of that paint a different picture, I’ll be happy to look at them. But the NAEP results flatly contradict Duncan’s own claims – routinely repeated in the media – that students made dramatic academic gains under his leadership.

Some Thinking on “Cyber”

Last week, I had the opportunity to testify before the House Science Committee’s Subcommittee on Technology and Innovation on the topic of “cybersecurity.” I have been reluctant to opine on it because of its complexity, but I did issue a short piece a few months ago arguing against government-run cybersecurity. That piece was cited prominently in the White House’s “Cyberspace Policy Review” and – blamo! – I’m a cybersecurity expert.

Not really – but I have been forming some opinions at a high level of generality that are worth making available. They can be found in my testimony, but I’ll summarize them briefly here.

First, “cybersecurity” is a term so broad as to be meaningless. Yes, we are constructing a new “space” analogous to physical space using computers, networks, sensors, and data, but we can no more secure “cyberspace” in its entirety than we can secure planet Earth and the galaxy. Instead, we secure the discrete things that are important to us – houses, cars, buildings, power lines, roads, private information, money, and so on. And we secure these things in thousands of different ways. We should secure “cyberspace” the same way – thousands of different ways.

By “we,” of course, I don’t mean the collective. I mean that each owner or controller of a prized thing should look out for its security. It’s the responsibility of designers, builders, and owners of houses, for example, to ensure that they properly secure the goods kept inside. It’s the responsibility of individuals to secure the information they wish to keep private and the money they wish to keep. It is the responsibility of network operators to secure their networks, data holders to secure their data, and so on.

Second, “cyber” threats are being over-hyped by a variety of players in the public policy area. Invoking “cyberterrorism” or “cyberwar” is near-boilerplate in white papers addressing government cybersecurity policy, but there is very limited strategic logic to “cyberwarfare” (aside from attacking networks during actual war-time), and “cyberterrorism” is a near-impossibility. You’re not going to panic people – and that’s rather integral to terrorism – by knocking out the ATM network or some part of the power grid for a period of time.

(There has been no shortage of careless discussion about defending against “cyber attack,” but L. Gordon Crovitz provided yet another example in yesterday’s Wall Street Journal. As Ben Friedman pointed out, Evgeny Morozov has the better of it in the most recent Boston Review.)

This is not to deny the importance of securing digital infrastructure; it’s to say that it’s serious, not scary. Precipitous government cybersecurity policies – especially to address threats that don’t even have a strategic logic – would waste our wealth, confound innovation, and threaten civil liberties and privacy.

In the cacophony over cybersecurity, an important policy seems to be getting lost: keeping true critical infrastructure offline. I noted Senator Jay Rockefeller’s (D-WV) awesomely silly comments about cybersecurity a few months ago. They were animated by the premise that all the good things in our society should be connected to the Internet or managed via the Internet. This is not true. Removing true critical infrastructure from the Internet takes care of the lion’s share of the cybersecurity problem.

Since 9/11, the country has suffered significant “critical-infrastructure inflation” as companies gravitate to the special treatments and emoluments government gives owners of “critical” stuff. If “criticality” is to be a dividing line for how assets are treated, it should be tightly construed: If the loss of an asset would immediately and proximately threaten life or health, that makes it critical. If danger would materialize over time, that’s not critical infrastructure – the owners need to get good at promptly repairing their stuff. And proximity is an important limitation, too: The loss of electric power could kill people in hospitals, for example, but ensuring backup power at hospitals can intervene and relieve us of treating the entire power grid as “critical infrastructure,” with all the expense and governmental bloat that would entail.

So how do we improve the state of cybersecurity? It’s widely believed that we are behind on it. Rather than figuring out how to do cybersecurity – which is impossible – I urged the committee to consider what policies or legal mechanisms might get these problems figured out.

I talked about a hierarchy of sorts. First, contract and contract liability. The government is a substantial purchaser of technology products and services – and a highly knowledgeable one, thanks to entities like the National Institute of Standards and Technology. Yes, I would like it to be a smaller purchaser of just about everything, but while it is a large market actor, it can drive standards and practices (like secure settings by default) into the marketplace that redound to the benefit of the cybersecurity ecology. The government could also form contracts that rely on contract liability: when products or services fail to serve the purposes for which they’re intended, including security, sellers would lose money. That would focus them as well.

A prominent report by a working group at the Center for Strategic and International Studies – co-chaired by one of my fellow panelists before the Science Committee last week, Scott Charney of Microsoft – argued strenuously for cybersecurity regulation.

But that begs the question of what regulation would say. Regulation is poorly suited to the process of discovering how to solve new problems amid changing technology and business practices.

There is some market failure in the cybersecurity area. Insecure technology can harm networks and users of networks, and these costs don’t accrue to the people selling or buying technology products. To get them to internalize these costs, I suggested tort liability rather than regulation. While courts discover the legal doctrines that unpack the myriad complex problems with litigating about technology products and services, they will force technology sellers and buyers to figure out how to prevent cyber-harms.

Government has a role in preventing people from harming each other, of course, and the common law could develop to meet “cyber” harms if it is left to its own devices. Tort litigation has been abused, and the established corporate sector prefers regulation because regulation is a stable environment for it, helps it exclude competition, and can be used to avoid liability for causing harm, making it easier to lag on security. Litigation itself isn’t the goal, and we don’t want lots of it – we just want the incentive structure that tort liability creates.

As the distended policy issue it is, “cybersecurity” is ripe for shenanigans. Aggressive government agencies are looking to get regulatory authority over the Internet, computers, and software. Some of them wouldn’t mind getting to watch our Internet traffic, of course. Meanwhile, the corporate sector would like to use government to avoid the hot press of market competition, while shielding itself from liability for harms it may cause.

The government must secure its own assets and resources – that’s a given. Beyond that, not much good can come from government cybersecurity policy, except the occasional good, long blog post.

Morozov vs. Cyber-Alarmism

I’m no information security expert, but you don’t have to be to realize that an outbreak of cyber-alarmism afflicts American pundits and reporters.

As Jim Harper and Tim Lee have repeatedly argued (with a little help from me), while the internet created new opportunities for crime, spying, vandalism and military attack, the evidence that the web opens a huge American national security vulnerability comes not from events but from improbable what-ifs. That idea is, in other words, still a theory. Few pundits bother to point out that hackers don’t kill, that cyberspies don’t seem to have stolen many (or any?) important American secrets, and that our most critical infrastructure is not run on the public internet and thus is relatively invulnerable to cyberwhatever. They never note that to the extent that future wars have an online component, this redounds to the U.S. advantage, given our technological prowess.  Even the Wall Street Journal and New York Times recently published breathless stories exaggerating our vulnerability to online attacks and espionage.

So it’s good to see that the July/August Boston Review has a terrific article by Evgeny Morozov taking on the alarmists. He provides not only a sober net assessment of the various worries gathered under the vague modifier “cyber” but also offers a theory about why hype wins:

Why is there so much concern about “cyber-terrorism”? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies—which need to justify their own existence—and cyber-security companies—which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armageddon draw attention, even if they are relatively short on facts.

I agree.

Social Control as a Profit Center

Here’s an idea that should be killed in the crib: scanning automobiles for up-to-date insurance.

Says Gizmodo (via ars technica and the Chicago Sun-Times):

The system is anticipated to raise yearly earnings “well in excess” of $100 million (possibly even double that figure or more), with InsureNet taking a modest 30% for their services. Of course, all of this cash would be contingent on uninsured drivers actually paying their fines.

There will be thousands more reasons like this put forward for mass public surveillance. The answer should almost always be no, because of the accumulations of data about law-abiding citizens that such programs would deposit in government (and government-contractor) databases.

Taxpayers and the Federal Diary

The Federal Diary column in the Washington Post is a curious piece of newspaper real estate. Most newspaper columns are aimed at the broad general public, but this column is aimed directly at the few hundred thousand government workers in the DC region. The result is that it takes a very government- and union-centric view of the world. The fact that the federal civilian workforce costs taxpayers an enormous $300 billion or so every year is beside the point for the column.

In a briefing with reporters yesterday, the head of the Office of Personnel Management complained about a Lou Dobbs television segment that featured data I assembled from the Bureau of Economic Analysis. The Federal Diary columnist called me yesterday about the data, and I explained to him the shortcomings of the OPM’s claims that federal workers are underpaid.

Unfortunately, the Federal Diary today simply parrots the OPM’s claims, calling the Dobbs/Edwards/BEA data “misleading.” Yet this data clearly shows that federal compensation has taken off like a rocket this decade.

Today’s column, like many of the Federal Diary columns, is about how to improve the pay, benefits, and working conditions of federal workers. What about the taxpayers who foot the bill? To provide some balance, the Post ought to at least have a side-by-side column entitled “Federal Taxpayers’ Diary.”

Euro VAT for America?

Desperate for fresh revenues to feed the giant spending appetite of President Obama, Democratic policymakers are talking up “tax reform” as a way to reduce the deficit. Some are considering a European-style value-added tax (VAT), which would have much the same effect as a national sales tax and would impose a large new burden on American families.

A VAT would raise hundreds of billions of dollars a year for the government, even at a 10 percent rate. The math is simple: total U.S. consumption in 2008 was $10 trillion. VATs usually tax about half of a nation’s consumption or less, say $5 trillion. That means a 10 percent VAT would raise about $500 billion a year in the United States, or about $4,300 from every household. Obviously such a huge tax hit would fundamentally change the American economy and society, and for the worse.
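Here is that back-of-the-envelope math as a short sketch. The consumption figure, taxable share, and rate come from the paragraph above; the household count (roughly 116 million, the approximate 2008 U.S. figure) is my assumption for the per-household division.

```python
# Back-of-the-envelope VAT revenue arithmetic from the text above.
consumption_2008 = 10e12   # total U.S. consumption in 2008, dollars (from the text)
taxable_share = 0.5        # VATs typically reach about half of consumption or less
vat_rate = 0.10            # illustrative 10 percent rate

revenue = consumption_2008 * taxable_share * vat_rate
households = 116e6         # assumed: approximate number of U.S. households in 2008

print(f"Annual VAT revenue: ${revenue / 1e9:,.0f} billion")   # ~$500 billion
print(f"Per household:      ${revenue / households:,.0f}")    # ~$4,300
```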

Some fiscal experts think that a VAT would solve the government’s budget problems and reduce the deficit, as the Washington Post noted yesterday. That certainly has not happened in Europe, where the average VAT rate is a huge 20 percent and most nations face large budget deficits just as we do. The hard truth for policymakers to swallow is that the only real cure for our federal fiscal crisis is to cut spending.

Liberals like VATs because of the revenue-raising potential, but some conservatives are drawn to the idea of using VAT revenues to reduce the corporate tax rate. The Post story reflected this in noting “A 21 percent VAT has permitted Ireland to attract investment by lowering the corporate tax rate.” That implies that the Irish government lost money when it cut its corporate rate, but actually the reverse happened in the most dramatic way.

Ireland installed a 10% corporate rate for certain industries in the 1980s, but also steadily cut its regular corporate rate during the 1990s. It switched over to a 12.5% rate for all corporations in 2004. OECD data show that as the Irish corporate tax rate fell, corporate tax revenues went through the roof – from 1.6% of GDP in 1990, to 3.7% in 2000, to 3.8% in 2006.

In sum, a VAT would not solve our deficit problems because Congress would simply boost its spending even higher, as happened in Europe as VAT rates increased over time. Also, a VAT is not needed to cut the corporate income tax rate because a corporate rate cut would be self-financing over the long-term as tax avoidance fell and economic growth increased.