Tag: ipcc

A Clear Example of IPCC Ideology Trumping Fact

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

———
 

When it comes to global warming, facts often take a back seat to fiction. This is especially true with proclamations coming from the White House. But who can blame them, as they are just following the lead from Big Green groups (aka, “The Green Blob”), the U.S. Climate Change Research Program (responsible for the U.S. National Climate Assessment Report), and of course, the United Nations’ Intergovernmental Panel on Climate Change (IPCC).

We have documented this low regard for the facts (some might say, deception) on many occasions, but recently we have uncovered a particularly clear example where the IPCC’s ideology trumps the plain facts, giving the impression that climate models perform a lot better than they actually do. This is an important façade for the IPCC to keep up, for without the overheated climate model projections of future climate change, the issue would be a lot less politically interesting (and government money could be used for other things … or simply not taken from taxpayers in the first place).

The IPCC is given deference when it comes to climate change opinion at all Northwest Washington, D.C. cocktail parties (which means also by the U.S. federal government) and by other governments around the world. We tirelessly point out why this is not a good idea. By the time you get to the end of this post, you will see that the IPCC does not seek to tell the truth—the inconvenient one being that it dramatically overstated the case for climate worry in its previous reports. Instead, it continues to obfuscate.

This exacts a cost. The IPCC is harming the public health and welfare of all humankind as it pressures governments to limit energy choices instead of seeking ways to help expand energy availability (or, one would hope, just stay out of the market).

Everyone knows that global warming (as represented by the rise in the earth’s average surface temperature) has stopped for nearly two decades now. As historians of science have noted, scientists can be very creative when defending the paradigm that pays. In fact, there are already several dozen explanations on offer for why the warming has stalled.

Climate modelers are scrambling to try to save their creations’ reputations because the one thing that they do not want to have to admit is that they exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the projected impacts that derive from the model projections—and that would be a disaster for all those pushing for regulations limiting the use of fossil fuels for energy. It’s safe to say the number of people employed by creating, legislating, lobbying, and enforcing these regulations is huge, as in “The Green Blob.”

Current Wisdom: Observations Now Inconsistent with Climate Model Predictions for 25 (going on 35) Years

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.  

 

Question: How long will the fantasy that climate models are reliable indicators of the earth’s climate evolution persist in the face of overwhelming evidence to the contrary?

Answer: Probably for as long as there is a crusade against fossil fuels.  

Without the exaggerated alarm conjured from overly pessimistic climate model projections of climate change from carbon dioxide emissions, fossil fuels—coal, oil, gas—would regain their image as the celebrated agents of  prosperity that they are, rather than being labeled as pernicious agents of our destruction.  

Just how credible are these climate models?  

In two words, “they’re not.”  

Everyone has read that over the past 10-15 years, most climate models’ forecasts of the rate of global warming have been wrong. Most predicted that a hefty warming of the earth’s average surface temperature would have taken place by now, while in the real world there has been no significant change.

But very few people know that the same situation has persisted for 25, going on 35 years, or that over the past 50-60 years (since the middle of the 20th century), the same models expected about 33 percent more warming to have taken place than was observed.

The Current Wisdom: Better Model, Less Warming

The Current Wisdom is a series of monthly posts in which Senior Fellow Patrick J. Michaels reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

The Current Wisdom only comments on science appearing in the refereed, peer-reviewed literature, or that has been peer-screened prior to presentation at a scientific congress.


Better Model, Less Warming

Bet you haven’t seen this one on TV:  A newer, more sophisticated climate model has lost more than 25% of its predicted warming!  You can bet that if it had predicted that much more warming it would have made the local paper.

The change resulted from a more realistic simulation of the way clouds work, resulting in a major reduction in the model’s “climate sensitivity,” which is the amount of warming predicted for a doubling of the concentration of atmospheric carbon dioxide over what it was prior to the industrial revolution.

Prior to the modern era, atmospheric carbon dioxide concentrations, as measured in air trapped in ice in the high latitudes (which can be dated year by year), were pretty constant at around 280 parts per million (ppm).  No wonder CO2 is called a “trace gas”—there really is not much of it around.

The current concentration is pushing about 390 ppm, an increase of about 40% in 250 years.  This is a pretty good indicator of the amount of “forcing” or warming pressure that we are exerting on the atmosphere.  Yes, there are other global warming gases going up, like the chlorofluorocarbons (refrigerants now banned by treaty), but the modern climate religion is that these are pretty much being cancelled by reflective  “aerosol” compounds that go in the air along with the combustion of fossil fuels, mainly coal.
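For readers who like to check the arithmetic, here is a minimal sketch of those figures. The percentage increase uses only the concentrations quoted above; the logarithmic forcing expression is the standard simplified approximation used in the literature, not something taken from this post or from the MIROC work discussed below.

```python
import math

C0 = 280.0     # pre-industrial CO2 concentration, ppm (quoted above)
C_now = 390.0  # approximate current concentration, ppm (quoted above)

increase = (C_now - C0) / C0
print(f"Concentration increase: {increase:.0%}")  # ~39%, i.e. "about 40%"

# Standard simplified approximation for CO2 radiative forcing (a textbook
# formula, not from the text): dF ~= 5.35 * ln(C/C0) W/m^2
dF_now = 5.35 * math.log(C_now / C0)
dF_2x = 5.35 * math.log(2.0)
print(f"Forcing from the increase so far: {dF_now:.2f} W/m^2")
print(f"Forcing at a doubling:            {dF_2x:.2f} W/m^2")
print(f"Fraction of a doubling's forcing already realized: {dF_now / dF_2x:.0%}")
```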

Most projections have carbon dioxide doubling to a nominal 600 ppm somewhere in the second half of this century, assuming no major technological changes (which history tells us is a very shaky assumption).  But the “sensitivity” is not reached as soon as we hit the doubling, thanks to the fact that it takes a lot of time to warm the ocean (just as it takes a lot of time to warm up a big pot of water with a small burner).

So the “sensitivity” is much closer to the temperature rise that a model projects about 100 years from now – assuming (again, shakily) that we ultimately switch to power sources that don’t release dreaded CO2 into the atmosphere somewhere around the time its concentration doubles.

The bottom line is that lower sensitivity means less future warming as a result of anthropogenic greenhouse gas emissions. So our advice… keep on working on the models; eventually, they may actually arrive at something close to the puny rate of warming that is being observed.

At any rate, improvements to the Japanese-developed Model for Interdisciplinary Research on Climate (MIROC) are the topic of a new paper by Masahiro Watanabe and colleagues in the current issue of the Journal of Climate. This modeling group has been working on a new version of their model (MIROC5) to be used in the upcoming 5th Assessment Report of the United Nations’ Intergovernmental Panel on Climate Change, due in late 2013. Two incarnations of the previous version (MIROC3.2) were included in the IPCC’s 4th Assessment Report (2007) and contribute to the IPCC “consensus” of global warming projections.

The high resolution version (MIROC3.2(hires)) was quite a doozy – responsible for far and away the greatest projected global temperature rise (see Figure 1). And the medium resolution model (MIROC3.2(medres)) is among the Top 5 warmest models. Together, the two MIROC models undoubtedly act to increase the overall model ensemble mean warming projection and expand the top end of the “likely” range of temperature rise.

FIGURE 1

Global temperature projections under the “midrange” scenario for greenhouse-gas emissions produced by the IPCC’s collection of climate models.  The MIROC high resolution model (MIROC3.2(hires)) is clearly the hottest one, and the medium resolution one isn’t very far behind.

The reason that the MIROC3.2 versions produce so much warming is that their sensitivity is very high, with the high-resolution version at 4.3°C (7.7°F) and the medium-resolution version at 4.0°C (7.2°F).  These sensitivities are very near the high end of the distribution of climate sensitivities from the IPCC’s collection of models (see Figure 2).

FIGURE 2

Equilibrium climate sensitivities of the models used in the IPCC AR4 (with the exception of the MIROC5). The MIROC3.2 sensitivities are highlighted in red and lie near the upper end of the collection of model sensitivities.  The new, improved MIROC5, which was not included in the IPCC AR4, is highlighted in magenta and lies near the low end of the model climate sensitivities (data from IPCC Fourth Assessment Report, Table 8.2, and Watanabe et al., 2010).

Note that the highest sensitivity is not necessarily in the hottest model, as observed warming is dependent upon how the model deals with the slowness of the oceans to warm.

The situation is vastly different in the new MIROC5 model.  Watanabe et al. report that the climate sensitivity is now 2.6°C (4.7°F) – more than 25% less than in the previous version of the model.[1] If the MIROC5 had been included in the IPCC’s AR4 collection of models, its climate sensitivity of 2.6°C would have been found near the low end of the distribution (see Figure 2), rather than pushing the high extreme as MIROC3.2 did.
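The “more than 25%” figure is easy to verify from the sensitivities quoted in the text and in footnote 1; a minimal sketch:

```python
# Percent reduction in equilibrium climate sensitivity implied by the
# numbers quoted in the text and in footnote 1.
miroc5 = 2.6  # MIROC5 sensitivity, deg C

comparisons = {
    "vs. Watanabe et al.'s MIROC3.2(medres) value (3.6 C)": 3.6,
    "vs. the AR4-reported MIROC3.2(medres) value (4.0 C)":  4.0,
}
for label, old in comparisons.items():
    print(f"{label}: {(old - miroc5) / old:.0%} reduction")
# ~28% and ~35% -- either way, "more than 25%" less equilibrium warming.
```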

And to what do we owe this large decline in the modeled climate sensitivity?  According to Watanabe et al., a vastly improved handling of cloud processes involving “a prognostic treatment for the cloud water and ice mixing ratio, as well as the cloud fraction, considering both warm and cold rain processes.”  In fact, the improved cloud scheme—which produces clouds which compare more favorably with satellite observations—projects that under a warming climate low altitude clouds become a negative feedback rather than acting as positive feedback as the old version of the model projected.[2] Instead of enhancing the CO2-induced warming, low clouds are now projected to retard it.

Here is how Watanabe et al. describe their results:

A new version of the global climate model MIROC was developed for better simulation of the mean climate, variability, and climate change due to anthropogenic radiative forcing….

MIROC5 reveals an equilibrium climate sensitivity of 2.6K, which is 1K lower than that in MIROC3.2(medres)…. This is probably because in the two versions, the response of low clouds to an increasing concentration of CO2 is opposite; that is, low clouds decrease (increase) at low latitudes in MIROC3.2(medres) (MIROC5).[3]

Is the new MIROC model perfect? Certainly not.  But is it better than the old one? It seems quite likely.  And the net result of the model improvements is that the climate sensitivity and therefore the warming projections (and resultant impacts) have been significantly lowered. And much of this lowering comes as the handling of cloud processes—still among the most uncertain of climate processes—is improved upon. No doubt such improvements will continue into the future as both our scientific understanding and our computational abilities increase.

Will this lead to an even greater reduction in climate sensitivity and projected temperature rise?  There are many folks out there (including this author) that believe this is a very distinct possibility, given that observed warming in recent decades is clearly beneath the average predicted by climate models. Stay tuned!

References:

Intergovernmental Panel on Climate Change, 2007.  Fourth Assessment Report, Working Group 1 report, available at http://www.ipcc.ch.

Watanabe, M., et al., 2010. Improved climate simulation by MIROC5: Mean states, variability, and climate sensitivity. Journal of Climate, 23, 6312-6335.


[1] Watanabe et al. report that the sensitivity of MIROC3.2(medres) is 3.6°C (6.5°F), which is less than what was reported in the 2007 IPCC report.  So 25% is likely a conservative estimate of the reduction in warming.

[2] Whether enhanced cloudiness enhances or cancels carbon-dioxide warming is one of the core issues in the climate debate, and is clearly not “settled” science.

[3] Temperature differences expressed in kelvins (K) are the same size as differences expressed in degrees Celsius (°C); the two scales differ only in their zero points.

The Shocking Truth: The Scientific American Poll on Climate Change

November’s Scientific American features a profile of Georgia Tech atmospheric scientist Judith Curry, who has committed the mortal sin of reaching out to other scientists who hypothesize that global warming isn’t the disaster it’s been cracked up to be.  I have personal experience with this, as she invited me to give a research seminar in Tech’s prestigious School of Earth and Atmospheric Sciences in 2008.  My lecture summarizing the reasons for doubting the apocalyptic synthesis of climate change was well-received by an overflow crowd.

Written by Michael Lemonick, who hails from the shrill blog Climate Central, the article isn’t devoid of the usual swipes, calling her a “heretic,” which is hardly true.  She’s simply another hardworking scientist who lets the data take her wherever it must, even if that leads her to question some of our more alarmist colleagues.

But, as a make-up call for drawing attention to Curry, Scientific American has run a poll of its readers on climate change.  Remember that SciAm has been shilling for the climate apocalypse for years, publishing a particularly vicious series of attacks on the Danish author Bjorn Lomborg’s The Skeptical Environmentalist.  The magazine also featured NASA’s James Hansen and his outlandish claims on sea-level rise. Hansen has stated, under oath in a deposition, that a twenty-foot rise is quite possible within the next 89 years; oddly, he has failed to note that in 1988 he predicted that the West Side Highway in Manhattan would go permanently under water in twenty years.

SciAm probably expected a lot of people would agree with the key statement in their poll that the United Nations’ Intergovernmental Panel on Climate Change (IPCC) is “an effective group of government representatives and other experts.”

Hardly. As of this morning, only 16% of the 6655 respondents agreed.  84%—that is not a typo—described the IPCC as “a corrupt organization, prone to groupthink, with a political agenda.” 

The poll also asks “What should we do about climate change?” 69% say “nothing, we are powerless to stop it.” When asked about policy options, an astonishingly low 7% support cap-and-trade, which passed the U.S. House of Representatives in June, 2009, and cost approximately two dozen congressmen their seats.

The real killer is the question “What is causing climate change?” For this one, multiple answers are allowed.  26% said greenhouse gases from human activity, 32% solar variation, and 78% “natural processes.” (In reality, all three are causes of climate change.)

And finally, “How much would you be willing to pay to forestall the risk of catastrophic climate change?”  80% of the respondents said “nothing.”

Remember that this comes from what is hardly a random sample.  Scientific American is a reliably statist publication and therefore appeals to a readership that is skewed to the left of the political center.  This poll demonstrates that virtually everyone now acknowledges that the UN has corrupted climate science, that climate change is impossible to stop, and that futile attempts like cap-and-trade do nothing but waste money and burn political capital, things that Cato’s scholars have been saying for years.

Atomic Dreams

Last week I was on John Stossel’s (most excellent) new show on Fox Business News to discuss energy policy – in particular, popular myths that Republicans have about energy markets.  One of the topics I touched upon was nuclear power.  My argument was the same that I have offered in print: Nuclear power is a swell technology but, given the high construction costs associated with building nuclear reactors, it’s a technology that cannot compete in free markets without a massive amount of government support.  If one believes in free markets, then one should look askance at such policies. 

As expected, the atomic cult has taken offense. 

Now, it is reasonable to argue that excessive regulatory oversight has driven up the cost of nuclear power and that a “better” regulatory regime would reduce costs.  Perhaps.  But I have yet to see any concrete accounting of exactly which regulations are “bad” along with associated price tags for the same.  If anyone out there in Internet-land has access to a good, credible accounting like that, please, send it my way.  But until I see something tangible, what we have here is assertion masquerading as fact.

Most of those who consider themselves “pro-nuke” are unaware of the fact that the current federal regulatory regime was thoroughly reformed in the late 1990s to comport with the industry’s model of what a “good” federal regulatory regime would look like.  As Oliver Kingsley Jr., the President of Exelon Nuclear, put it in Senate testimony back in 2001:

The current regulatory environment has become more stable, timely, and predictable, and is an important contributor to improved performance of nuclear plants in the United States.  This means that operators can focus more on achieving operational efficiencies and regulators can focus more on issues of safety significance.  It is important to note that safety is being maintained and, in fact enhanced, as these benefits of regulatory reform are being realized.  The Nuclear Regulatory Commission – and this Subcommittee – can claim a number of successes in their efforts to improve the nuclear regulatory environment.  These include successful implementation of the NRC Reactor Oversight Process, the timely extension of operating licenses at Calvert Cliffs and Oconee, the establishment of a one-step licensing process for advanced reactors, the streamlining of the license transfer process, and the increased efficiency in processing licensing actions.

It’s certainly possible that the industry left some desirable reforms undone, but it seems relevant to me that the Nuclear Energy Institute – the trade association for the nuclear energy industry and a fervent supporter of all these government assistance programs – does not complain that it is being unfairly hammered by costly red tape.

For the most part, however, the push-back against the arguments I offered last week has little to do with this.  It has to do with bias.  According to a post by Rod Adams over at “Atomic Insights Blog,” I am guilty of ignoring subsidies doled out to nuclear’s biggest competitor – natural gas – and, because Cato gets money from Koch Industries, my convenient neglect of that matter is supposedly part of a corporate-funded attack on nuclear power.  Indeed, Mr. Adams claims that he has unearthed a “smoking gun” with this observation.

Normally, I would ignore attacks like this.  This particular post, however, offers the proverbial “teachable moment” that should not be allowed to go to waste.

First, let’s look at the substance of the argument.  Did I “give natural gas a pass” as Mr. Adams contends? Well, yes and no; the show was about the cost of nuclear power, not the cost of natural gas.  I did note that natural gas-fired electricity was more attractive in this economic environment than nuclear power, something that happens to be true.  Had John Stossel asked me whether gas’s economic advantage was due to subsidy, I would have told him that I am against natural gas subsidies as well – a position I have staked out time and time again in other venues (while there are plenty of examples, this piece I co-authored with Daniel Becker – then of the Sierra Club – for The Los Angeles Times represents my thinking on energy subsidies across the board; a blog post a while back about the Democratic assault on oil and gas subsidies found me arguing that the D’s should actually go further; dozens of other similar arguments against fossil fuel subsidies can be found on my publications page).  So let’s dispose of Mr. Adams’ implicit suggestion that I am some sort of tool for the oil and gas industry, arguing against subsidies here but not against subsidies there.

Second, let’s consider the implicit assertion that Mr. Adams makes – that natural gas-fired electricity is more attractive than nuclear power primarily because of subsidy.  The most recent and thorough assessment of this matter comes from Prof. Gilbert Metcalf, an economist at Tufts University.  Prof. Metcalf agrees with a 2004 report from the Energy Information Administration which contended that preferences for natural gas production in the tax code do little to increase natural gas production and thus do little to make natural gas less expensive than it might otherwise be.  They are wealth transfers for sure, but they do not do much to change natural gas supply or demand curves and thus do not affect consumer prices.  Prof. Metcalf argues that if we had a truly level regulatory playing field without any tax distortions, the price of natural gas-fired electricity would actually go down, not up!  Government intervention in energy markets does indeed distort gas-fired electricity prices.  It makes those prices higher than they otherwise would be!

The Energy Information Administration (EIA) identified five natural gas subsidies in 2007 that were relevant to the electricity sector (table 5).  Only two are of particular consequence.  They are:

  • Expensing of Exploration and Development Costs – Gas producers are allowed to expense exploration and development expenditures rather than capitalize and depreciate those costs over time.  Oil and gas producers (combined) took advantage of this tax break to the tune of $860 million per year.  How much goes to gas production rather than to oil production is unclear.
  • Excess of Percentage over Cost Depletion Deferral – Under cost depletion, producers are allowed to make an annual deduction equal to the non-recovered cost of acquisition and development of the resource times the proportion of the resource removed that year.  Under percentage depletion, producers deduct a percentage of gross income from resource production.  Oil and gas producers (combined) take advantage of this tax break to the tune of $790 million per year.  How much goes to gas production rather than to oil production is unclear. 

Even if we put aside the fact that these subsidies don’t impact final consumer prices in any significant manner, it’s useful to keep in mind the fact that the subsidy per unit of gas-fired electricity production – as calculated by EIA – works out to 25 cents per megawatt hour (table 35).  Subsidy per unit of nuclear-fired electricity production works out to $1.59 per megawatt hour.  Hence, the argument that nuclear subsidies are relatively small in comparison with natural gas subsidies is simply incorrect.
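Put side by side, the per-unit figures quoted above look like this (a minimal sketch using only the EIA-derived numbers in the preceding paragraph):

```python
# Federal subsidy per unit of electricity generated, per the EIA-derived
# figures quoted above (dollars per megawatt hour).
subsidy_per_mwh = {
    "gas-fired electricity":     0.25,  # 25 cents per MWh
    "nuclear-fired electricity": 1.59,  # $1.59 per MWh
}

ratio = subsidy_per_mwh["nuclear-fired electricity"] / subsidy_per_mwh["gas-fired electricity"]
print(f"Nuclear receives roughly {ratio:.1f} times the per-MWh subsidy of gas.")  # ~6.4x
```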

Some would argue that the Foreign Tax Credit – a generally applicable credit available to corporations doing business overseas that allows firms to treat royalty payments to foreign governments as a tax that can be deducted from domestic corporate income taxes – should likewise be on the subsidy list.  The Environmental Law Institute calculates that this credit saves the fossil fuel industry an additional $15.3 billion.  There is room for debate about the wisdom of that credit, but regardless, it doesn’t appear as if the Foreign Tax Credit affects domestic U.S. prices for gas-fired electricity.     

The bigger point is that without government help, few doubt that the natural gas industry would still be humming and electricity would still be produced in large quantities from gas-fired generators.  But without government production subsidies, without loan guarantees, and without liability protection via the Price-Anderson Act, even the nuclear power industry concedes that it would disappear.

Now, to be fair, Prof. Metcalf reports that nuclear power is cheaper than gas-fired power under both current law and under a no-subsidy, no-tax regime.  His calculations, however, were made at a time when natural gas prices were at near historic highs that were thought to be the new norm in energy markets, and they were governed by fairly optimistic assumptions about nuclear power plant construction costs.  Those assumptions have not held up well with time.  For a more recent assessment, see my review of this issue in Reason, along with this study from MIT, which warns that if more government help isn’t forthcoming, “nuclear power will diminish as a practical and timely option for deployment at a scale that would constitute a material contribution to climate change risk mitigation.”

Third, Mr. Adams argues that federal nuclear loan guarantee program is a self-evidently good deal and implies that only an anti-industry agitprop specialist (like me) could possibly refuse to see that.  “That program, with its carefully designed and implemented due diligence requirements for project viability, should actually produce revenue for the government.”  Funny, but when private investors perform those due diligence exercises, they come to a very different conclusion … which is why we have a federal loan guarantee program in the first place. 

Who do you trust to watch over your money – investment bankers or Uncle Sam?  The former don’t have the best track record in the world these days, but note that the popular indictment of that crowd is that investment banks weren’t tight fisted enough when it came to lending.  If even these guys were saying no to nuclear power – and at a time when money was flowing free and easy – what makes Mr. Adams think that a bunch of politicians are right about the glorious promise of nuclear power, particularly given the “too cheap to meter” rhetoric we’ve heard from the political world now for the better part of five decades? 

Anyway, for what it’s worth, the Congressional Budget Office has taken a close look at this alleged bonanza for the taxpayer and judged the risk of default on these loan guarantees to be around 50 percent.  They may be wrong of course, but the risks are there, something Moody’s acknowledged last year in a published analysis warning that they were likely to downgrade the credit-worthiness of nuclear power plant construction loans.

Fourth and finally, Mr. Adams cites Cato’s skepticism about “end-is-near” climate alarmism as yet more evidence that we are on the take from the fossil fuels industry.  I don’t know if Mr. Adams has been following current events lately, but I would think that we’re looking pretty good right now on that front.  Der Spiegel – no hot-bed of “Big Oil” agitprop – sums up the state of the debate rather nicely in the wake of the ongoing collapse of IPCC credibility.  Matt Ridley – another former devotee of climate alarmism – likewise sifts through the rubble that is now the infamous Michael Mann “hockey stick” analysis (which allegedly demonstrated an unprecedented degree of warming in the 20th Century) and finds thorough and total rot at the heart of the alarmist argument.  Mr. Adams is perhaps unaware that our own Pat Michaels has been making these arguments for years and Cato has no apologies to make on that score. 

Regardless, ad hominem is the sign of a man running out of arguments.  There aren’t many here to rebut, but the form of the complaints offered by Mr. Adams speaks volumes about how little the pro-nuclear camp has to offer right now in defense of nuclear power subsidies.

I have no animus towards nuclear power per se.  If nuclear power could compete without government help, I would be as happy as Mr. Adams or the next MIT nuclear engineer.  But I am no more “pro” nuclear power than I am “pro” any power.  It is not for me to pick winners in the market place.  That’s the invisible hand’s job.  If there is bad regulation out there harming the industry, then by all means, let’s see a list of said bad regulations and amend them accordingly.  But once those regulations are amended (if there are indeed any that need amending), nuclear power should still be subject to an unbiased market test.  Unlike Mr. Adams, I don’t want to see that test rigged.

Cherry Picking Climate Catastrophes: Response to Conor Clarke, Part II

Conor Clarke, at The Atlantic blog, raised several issues with my study, “What to Do About Climate Change,” which Cato published last year.

One of Conor Clarke’s comments was that my analysis did not extend beyond the 21st century. He found this problematic because, as Conor put it, climate change would extend beyond 2100, and even if GDP is higher in 2100 with unfettered global warming than without, it’s not obvious that this GDP would continue to be higher “in the year 2200 or 2300 or 3758”. I addressed this portion of his argument in Part I of my response. Here I will address the second part of this argument, that “the possibility of ‘catastrophic’ climate change events — those with low probability but extremely high cost — becomes real after 2100.”

The examples of potentially catastrophic events that could be caused by anthropogenic greenhouse gas induced global warming (AGW) that have been offered to date (e.g., melting of the Greenland or West Antarctic Ice Sheets, or the shutdown of the thermohaline circulation) contain a few drops of plausibility submerged in oceans of speculation. There are no scientifically justified estimates of the probability of their occurrence by any given date. Nor are there scientifically justified estimates of the magnitude of damages such events might cause, not just in biophysical terms but also in socioeconomic terms. Therefore, to call these events “low probability” — as Mr. Clarke does — is a misnomer. They are more appropriately termed as plausible but highly speculative events.

Consider, for example, the potential collapse of the Greenland Ice Sheet (GIS). According to the IPCC’s WG I Summary for Policy Makers (p. 17), “If a negative surface mass balance were sustained for millennia, that would lead to virtually complete elimination of the Greenland Ice Sheet and a resulting contribution to sea level rise of about 7 m” (emphasis added). Presumably the same applies to the West Antarctic Ice Sheet.

But what is the probability that a negative surface mass balance can, in fact, be sustained for millennia, particularly after considering the amount of fossil fuels that can be economically extracted and the likelihood that other energy sources will not displace fossil fuels in the interim? [Remember we are told that peak oil is nigh, that renewables are almost competitive with fossil fuels, and that wind, solar and biofuels will soon pay for themselves.]

Second, for an event to be classified as a catastrophe, it should occur relatively quickly precluding efforts by man or nature to adapt or otherwise deal with it. But if it occurs over millennia, as the IPCC says, or even centuries, that gives humanity ample time to adjust, albeit at a socioeconomic cost. But it need not be prohibitively dangerous to life, limb or property if: (1) the total amount of sea level rise (SLR) and, perhaps more importantly, the rate of SLR can be predicted with some confidence, as seems likely in the next few decades considering the resources being expended on such research; (2) the rate of SLR is slow relative to how fast populations can strengthen coastal defenses and/or relocate; and (3) there are no insurmountable barriers to migration.

This would be true even if the so-called “tipping point” had already been passed and the ultimate disintegration of the ice sheet were inevitable, so long as it takes millennia for the disintegration to be realized. In other words, the issue isn’t just whether the tipping point is reached; rather, it is how long it actually takes to tip over. Consider, for example, a hand grenade tossed into a crowded room. Whether this results in tragedy — and the magnitude of that tragedy — depends upon how much time it takes for the grenade to go off, the reaction time of the occupants, and their ability to respond.

Lowe, et al. (2006, p. 32-33), based on a “pessimistic, but plausible, scenario in which atmospheric carbon dioxide concentrations were stabilised at four times pre-industrial levels,” estimated that a collapse of the Greenland Ice Sheet would over the next 1,000 years raise sea level by 2.3 meters (with a peak rate of 0.5 cm/yr). If one were to arbitrarily double that to account for potential melting of the West Antarctic Ice Sheet, that means a SLR of ~5 meters in 1,000 years with a peak rate (assuming the peaks coincide) of 1 meter per century.

Such a rise would not be unprecedented. Sea level has risen 120 meters in the past 18,000 years — an average of 0.67 meters/century — and by as much as 4 meters/century during the meltwater pulse 1A episode 14,600 years ago (Weaver et al. 2003; subscription required). Neither humanity nor, from the perspective of millennial time scales (per the above quote from the IPCC), the rest of nature seems the worse for it. Coral reefs, for example, evolved and their compositions changed over millennia as new reefs grew while older ones were submerged in deeper water (e.g., Cabioch et al. 2008). So while there have been ecological changes, it is unknown whether the changes were for better or worse. For a melting of the GIS (or WAIS) to qualify as a catastrophe, one has to show, rather than assume, that the ecological consequences would, in fact, be for the worse.
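The rates in the last two paragraphs are straightforward to check; a minimal sketch using only the figures cited above from Lowe et al. and Weaver et al.:

```python
# Sea level rise (SLR) rates implied by the figures quoted above.

# Lowe et al. (2006): Greenland Ice Sheet collapse scenario
gis_rise_m, gis_years = 2.3, 1000.0
print(f"GIS average rate: {gis_rise_m / gis_years * 100:.2f} m/century "
      f"(quoted peak: 0.5 cm/yr, i.e., 0.5 m/century)")

# Arbitrarily doubled for a hypothetical WAIS contribution, as in the text
print(f"Doubled (GIS + WAIS): ~{2 * gis_rise_m:.1f} m over 1,000 years, "
      f"peak ~1 m/century if the peaks coincide")

# Post-glacial comparison (Weaver et al. 2003): 120 m in 18,000 years
print(f"Post-glacial average: {120.0 / 18000.0 * 100:.2f} m/century, "
      f"with up to ~4 m/century during meltwater pulse 1A")
```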

Human beings can certainly cope with sea level rise of such magnitudes if they have centuries or millennia to do so. In fact, if necessary they could probably get out of the way in a matter of decades, if not years.

Can a relocation of such a magnitude be accomplished?

Consider that the global population increased from 2.5 billion in 1950 to 6.8 billion this year. Among other things, this meant creating the infrastructure for an extra 4.3 billion people in the intervening 59 years (as well as improving the infrastructure for the 2.5 billion counted in the baseline, many of whom barely had any infrastructure whatsoever in 1950). These improvements occurred at a time when everyone was significantly poorer. (Global per capita income today is more than 3.5 times greater than it was in 1950.) Therefore, while relocation will be costly, in theory tomorrow’s much wealthier world ought to be able to relocate billions of people to higher ground over the next few centuries, if need be. In fact, once a decision is made to relocate, the cost differential of relocating, say, 10 meters higher rather than a meter higher is probably marginal. It should also be noted that over millennia the world’s infrastructure will have to be renewed or replaced dozens of times – and the world will be better for it. [For example, the ancient city of Troy, once on the coast but now a few kilometers inland, was built and rebuilt at least 9 times in 3 millennia.]

Also, so long as we are concerned about potential geological catastrophes whose probability of occurrence and impacts have yet to be scientifically estimated, we should also consider equally low or higher probability events that might negate their impacts. Specifically, it is quite possible — in fact probable — that somewhere between now and 2100 or 2200, technologies will become available that will deal with climate change much more economically than currently available technologies for reducing GHG emissions. Such technologies may include ocean fertilization, carbon sequestration, geo-engineering options (e.g., deploying mirrors in space) or more efficient solar or photovoltaic technologies. Similarly, there is a finite, non-zero probability that new and improved adaptation technologies will become available that will substantially reduce the net adverse impacts of climate change.

The historical record shows that this has occurred over the past century for virtually every climate-sensitive sector that has been studied. For example, from 1900-1970, U.S. death rates due to various climate-sensitive water-related diseases — dysentery, typhoid, paratyphoid, other gastrointestinal disease, and malaria —declined by 99.6 to 100.0 percent. Similarly, poor agricultural productivity exacerbated by drought contributed to famines in India and China off and on through the 19th and 20th centuries killing millions of people, but such famines haven’t recurred since the 1970s despite any climate change and the fact that populations are several-fold higher today. And by the early 2000s, deaths and death rates due to extreme weather events had dropped worldwide by over 95% of their earlier 20th century peaks (Goklany 2006).

With respect to another global warming bogeyman — the shutdown of the thermohaline circulation (AKA the meridional overturning circulation), the basis for the deep freeze depicted in the movie, The Day After Tomorrow — the IPCC WG I SPM notes (p. 16), “Based on current model simulations, it is very likely that the meridional overturning circulation (MOC) of the Atlantic Ocean will slow down during the 21st century. The multi-model average reduction by 2100 is 25% (range from zero to about 50%) for SRES emission scenario A1B. Temperatures in the Atlantic region are projected to increase despite such changes due to the much larger warming associated with projected increases in greenhouse gases. It is very unlikely that the MOC will undergo a large abrupt transition during the 21st century. Longer-term changes in the MOC cannot be assessed with confidence.”

Not much has changed since then. A shutdown of the MOC doesn’t look any more likely now than it did then. See here, here, and here (pp. 316-317).

If one wants to develop rational policies to address speculative catastrophic events that could conceivably occur over the next few centuries or millennia, as a start one should consider the universe of potential catastrophes and then develop criteria as to which should be addressed and which not. Rational analysis must necessarily be based on systematic analysis, and not on cherry picking one’s favorite catastrophes.

Just as one may speculate on global warming induced catastrophes, one may just as plausibly also speculate on catastrophes that may result absent global warming. Consider, for example, the possibility that absent global warming, the Little Ice Age might return. The consequences of another ice age, Little or not, could range from the severely negative to the positive (if that would buffer the negative consequences of warming). That such a recurrence is not unlikely is evident from the fact that the earth entered and, only a century and a half ago, retreated from a Little Ice Age, and that history may indeed repeat itself over centuries or millennia.

Yet another catastrophe that greenhouse gas controls may cause stems from the fact that CO2 not only contributes to warming but is also a key building block of life as we know it. All vegetation is created by the photosynthesis of CO2 drawn from the atmosphere. In fact, according to the IPCC WG I report (2007, p. 106), net primary productivity of the global biosphere has increased in recent decades, partly due to greater warming, higher CO2 concentrations and nitrogen deposition. Thus, there is a finite probability that reducing CO2 emissions would reduce the net primary productivity of the terrestrial biosphere, with potentially severe negative consequences for the amount and diversity of wildlife that it could support, as well as for agricultural and forest productivity, with adverse knock-on effects on hunger and health.

There is also a finite probability that costs of GHG reductions could reduce economic growth worldwide. Even if only industrialized countries sign up for emission reductions, the negative consequences could show up in developing countries because they derive a substantial share of their income from aid, trade, tourism, and remittances from the rest of the world. See, for example, Tol (2005), which examines this possibility, although the extent to which that study fully considered these factors (i.e., aid, trade, tourism, and remittances) is unclear.

Finally, one of the problems with the argument that society should address low probability high impact events (assuming a probability could be estimated rather than assumed or guessed) is that it necessarily means there is a high probability that resources expended on addressing such catastrophic events will have been squandered. This wouldn’t be a problem but for the fact that there are opportunity costs associated with this.

According to the 2007 IPCC Science Assessment’s Summary for Policy Makers (p. 10), “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” In plain language, this means that the IPCC believes there is at least a 90% likelihood that anthropogenic greenhouse gas emissions (AGHG) are responsible for 50-100% of the global warming since 1950. In other words, there is an up to 10% chance that anthropogenic GHGs are not responsible for most of that warming.

This means there is an up to 10% chance that resources expended in limiting climate change would have been squandered. Since any effort to significantly reduce climate change will cost trillions of dollars (see Nordhaus 2008, p. 82), that would be an unqualified disaster, particularly since those very resources could be devoted to reducing urgent problems humanity faces here and now (e.g., hunger, malaria, safer water and sanitation) — problems we know exist for sure unlike the bogeymen that we can’t be certain about.
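To make the expected-value logic explicit, here is a minimal sketch. The probability comes from the IPCC’s own “very likely” calibration quoted above; the mitigation price tag is a purely illustrative placeholder, since the text (citing Nordhaus 2008) says only “trillions of dollars.”

```python
# Expected value of resources squandered if the attribution turns out to be wrong.
p_attribution_wrong = 0.10       # "up to 10%," from the IPCC's "very likely" (>= 90%) language
mitigation_cost_trillions = 2.0  # hypothetical placeholder; the text says only "trillions"

expected_waste = p_attribution_wrong * mitigation_cost_trillions
print(f"Expected squandered resources: ~${expected_waste:.1f} trillion")
# These are resources that could otherwise go to hunger, malaria, water and sanitation.
```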

Spending money on speculative, even if plausible, catastrophes instead of on problems we know exist for sure is like a starving man giving up a fat, juicy bird in hand in the hope of catching several other birds sometime in the next few centuries, even though he knows those birds don’t exist today and may never exist in the future.

Response to Conor Clarke, Part I

Last week Conor Clarke at The Atlantic blog, apparently as part of a running argument with Jim Manzi, raised four substantive issues with my study, “What to Do About Climate Change,” that Cato published last year. Mr. Clarke deserves a response, and I apologize for not getting to this sooner. Today, I’ll address the first part of his first comment. I’ll address the rest of his comments over the next few days.

Conor Clarke: 

(1) Goklany’s analysis does not extend beyond the 21st century. This is a problem for two reasons. First, climate change has no plans to close shop in 2100. Even if you believe GDP will be higher in 2100 with unfettered global warming than without, it’s not obvious that GDP would be higher in the year 2200 or 2300 or 3758. (This depends crucially on the rate of technological progress, and as Goklany’s paper acknowledges, that’s difficult to model.) Second, the possibility of “catastrophic” climate change events – those with low probability but extremely high cost – becomes real after 2100.

Response:  First, I wouldn’t put too much stock in analyses purporting to extend out to the end of the 21st century, let alone beyond that, for numerous reasons, some of which are laid out on pp. 2-3 of the Cato study. As noted there, according to a paper commissioned for the Stern Review, “changes in socioeconomic systems cannot be projected semi-realistically for more than 5–10 years at a time.”

Second, regarding Mr. Clarke’s statement that, “Even if you believe GDP will be higher in 2100 with unfettered global warming than without, it’s not obvious that GDP would be higher in the year 2200 or 2300 or 3758,” I should note that the conclusion that net welfare in 2100 (measured by net GDP per capita) would be higher in the warmer world is not based on a belief.  It follows inexorably from Stern’s own analysis.

Third, despite my skepticism of long term estimates, I have, for the sake of argument, extended the calculation to 2200. See here. Once again, I used the Stern Review’s estimates, not because I think they are particularly credible (see below), but for the sake of argument. Specifically, I assumed that losses in welfare due to climate change under the IPCC’s warmest scenario would, per the Stern Review’s 95th percentile estimate, be equivalent to 35.2 percent of GDP in 2200. [Recall that Stern’s estimates account for losses due to market impacts, non-market (i.e., environmental and public health) impacts and the risk of catastrophe, so one can’t argue that only market impacts were considered.]

The results, summarized in the following figure, indicate that even if one uses the Stern Review’s inflated impact estimates under the warmest IPCC scenario, net GDP in 2200 ought to be higher in the warmest world than in cooler worlds for both developing and industrialized countries.


Source: Indur M. Goklany, “Discounting the Future,” Regulation 32: 36-40 (Spring 2009).

The costs of climate change used to develop the above figure are most likely overestimated because they do not properly account for increases in future adaptive capacity consistent with the level of net economic development resulting from Stern’s own estimates (as shown in the above figure).  This figure shows that even after accounting for losses in GDP per capita due to climate change – and inflating these losses – net GDP per capita in 2200 would be between 16 and 85 times higher than it was in the baseline year (1990).  No less important, Stern’s estimate of the costs of climate change neglects secular technological change that ought to occur during the 210-year period extending from the base year (1990) to 2200. In fact, as shown here, empirical data show that for most environmental indicators that have a critical effect on human well-being, technology has, over decades-long time frames, reduced impacts by one or more orders of magnitude.
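A stylized version of that calculation follows. The 35.2 percent loss is the Stern Review figure quoted above; the baseline growth multipliers are hypothetical placeholders chosen only to illustrate the arithmetic, not the actual IPCC or Stern scenario values.

```python
# Stylized version of the net-GDP-per-capita calculation described above.
stern_loss_2200 = 0.352  # Stern Review 95th-percentile welfare loss in 2200, per the text

baseline_multiplier_2200 = {           # GDP per capita in 2200 relative to 1990
    "industrialized countries": 30.0,  # hypothetical placeholder
    "developing countries":     50.0,  # hypothetical placeholder
}

for region, multiplier in baseline_multiplier_2200.items():
    net = multiplier * (1.0 - stern_loss_2200)
    print(f"{region}: net GDP per capita ~{net:.0f}x the 1990 level after climate losses")
```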

As a gedanken experiment, compare technology (and civilization’s adaptive capacity) in 1799 versus 2009. How credible would a projection for 2009 have been if it didn’t account for technological change from 1799 to 2009?

I should note that some people tend to dismiss the above estimates of GDP on the grounds that it is unlikely that economic development, particularly in today’s developing countries, will be as high as indicated in the figure.  My response to this is that they are based on the very assumptions that drive the IPCC and the Stern Review’s emissions and climate change scenarios. So if one disbelieves the above GDP estimates, then one should also disbelieve the IPCC and the Stern Review’s projection for the future.

Fourth, even if analysis that appropriately accounted for increases in adaptive capacity had shown that in 2200 people would be worse off in the richest-but-warmest world than in cooler worlds, I wouldn’t get too excited just yet. Even assuming a 100-year lag time between the initiation of emission reductions and a reduction in global temperature because of a combination of the inertia of the climate system and the turnover time for the energy infrastructure, we don’t need to do anything drastic till after 2100 (= 2200 minus 100 years), unless monitoring shows before then that matters are actually becoming worse (as opposed to merely changing), in which case we should certainly mobilize our responses. [Note that change doesn’t necessarily equate to worsening. One has to show that a change would be for the worse.  Unfortunately, much of the climate change literature skips this crucial step.]

In fact, waiting-and-preparing-while-we-watch (AKA watch-and-wait) makes most sense, just as it does for many problems (e.g., some cancers) where the cost of action is currently high relative to its benefit, benefits are uncertain, and technological change could relatively rapidly improve the cost-benefit ratio of controls. Within the next few decades, we should have a much better understanding of climate change and its impacts, and the cost of controls ought to decline in the future, particularly if we invest in research and development for mitigation.  In the meantime we should spend our resources on solving today’s first order problems – and climate change simply doesn’t make that list, as shown by the only exercises that have ever bothered to compare the importance of climate change relative to other global problems.  See here and here.  As is shown in the Cato paper (and elsewhere), this also ought to reduce vulnerability and increase resiliency to climate change.

In the next installment, I’ll address the second point in Mr. Clarke’s first point, namely, the fear that “the possibility of ‘catastrophic’ climate change events – those with low probability but extremely high cost – becomes real after 2100.”