The first of these assessments was released in the fall of 2000. Despite the four-year mandate, none was produced during the George W. Bush administration; a second assessment was finally published in 2009. While the 2009 report is still the document of record, a draft of the third report, scheduled for publication late this year, was circulated for public comment early this year. Unfortunately, none of the assessments provides a comprehensive summary of the scientific literature; instead, they highlight material that tends to view climate change as a serious and emergent problem. The extent of the neglected literature is documented in a recent Cato Institute publication, Addendum: Global Climate Change Impacts in the United States.
2000 Assessment
In the 2000 assessment, known officially as the “U.S. National Assessment of Climate Change,” the USGCRP examined nine different general circulation climate models (GCMs) to assess climate change impacts on the nation. It chose two of those GCMs for its projections of climate change: one, from the Canadian Climate Center, forecast the largest temperature changes of all the models considered, and the other, from the Hadley Center in the United Kingdom, forecast the largest precipitation changes.
The salient feature of those models is that they achieved something very difficult in science: they generated “anti-information”—projections that were of less utility than no forecasts whatsoever. This can be demonstrated by comparing the output of the Canadian Climate Center’s model with actual temperatures.
The top half of Figure 1 displays the observed 10-year smoothed average maximum temperature departures from the climatological mean over the lower 48 states, using data through 1998. The bottom half displays the difference between the model projections over the same period and those same temperature observations. Statisticians refer to such differences as “residuals.” The figure includes the value of the variance (a measure of the variability of the data about its average value) for both the observed data and the climate model residuals. In this case, the residuals have over twice the variance of the raw data.
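The arithmetic behind that comparison is straightforward. The sketch below uses synthetic stand-in series (not the actual observations or Canadian model output) to show how residual variance is set against observed variance; the climatology-based skill score is a standard illustration, not necessarily the exact metric used in the assessment.

import numpy as np

# Synthetic stand-in series (hypothetical, not the actual USGCRP/NCDC data):
# observed anomalies with a mild trend, and a model that warms too fast.
rng = np.random.default_rng(0)
years = np.arange(1900, 1999)
observed = 0.005 * (years - 1900) + rng.normal(0, 0.15, years.size)
model = 0.011 * (years - 1900) + rng.normal(0, 0.20, years.size)

# Residuals: model projections minus observations.
residuals = model - observed

var_obs = observed.var()
var_res = residuals.var()
print(f"variance of observations: {var_obs:.3f}")
print(f"variance of residuals:    {var_res:.3f}")  # roughly twice var_obs here

# Standard skill score against climatology: S = 1 - var(residuals)/var(obs).
# S < 0 means the model is a worse guide than the long-term mean --
# the "anti-information" described above.
print(f"skill score: {1 - var_res / var_obs:.2f}")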
To analogize the relationship between the Canadian model and the empirical data, imagine the model output took the form of 100 answers to a four-option multiple choice test. A model that simply generated answers at random would, within statistical limits, answer about 25 percent of the questions correctly. But, analogously speaking, the Canadian model would do worse than random: it would answer only about 12 of the 100 correctly. That is “anti-information”: using the model provides less information than guessing randomly.
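To put a number on “worse than random,” consider how improbable a score of 12 out of 100 would be for a pure guesser. The check below uses the ordinary binomial distribution; it illustrates the analogy’s arithmetic only and is not how the 12-out-of-100 figure was originally derived.

from scipy.stats import binom

# 100 four-option questions; random guessing is Binomial(n=100, p=0.25).
n, p = 100, 0.25
print(binom.mean(n, p))  # expected score under random guessing: 25.0

# Probability a pure guesser scores 12 or fewer: about 0.2 percent.
# A predictor that reliably lands down here is not merely uninformative;
# it is anti-informative.
print(binom.cdf(12, n, p))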
We communicated this problem to Tom Karl, director of the National Climatic Data Center and the highest-ranking scientist in the USGCRP. He responded that the models were never meant to predict 10-year running means of surface average temperature. He then repeated our test using 25-year running means and obtained the same result that we did. But the assessment was issued unchanged.
2009 Assessment
After a hiatus of nine years, the USGCRP produced its second national assessment, titled “Global Climate Change Impacts in the United States.” The environmental horrors detailed in the 2009 assessment derive primarily from a predicted increase in global surface temperature and changed precipitation patterns. But global surface temperatures have recently increased at nowhere near the rate projected by the consensus of so-called “midrange emission scenario” climate models that formed much of the basis for the 2009 document. This has caused an intense debate about whether the pause in significant warming, now in its 16th year according to the most widely cited data set, from the Climatic Research Unit (CRU) at the University of East Anglia, indicates that the models are failing because they are too “sensitive” to changes in carbon dioxide. That is, the models’ estimate of the temperature increase resulting from a doubling of carbon dioxide concentration may simply be too high.
Figure 2 compares the trends (through the year 2012) in the CRU’s observed global temperature history, computed over periods ranging from five to 15 years, with the trends from the complete collection of climate model runs in the most recent climate assessment from the United Nations’ Intergovernmental Panel on Climate Change (IPCC). This suite of models is run under the midrange emissions scenario, which has tracked actual emissions rates fairly accurately, especially given the worldwide displacement of coal by cheaper, abundant natural gas from shale formations.
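For readers who want to reproduce this kind of comparison, the sketch below computes least-squares trends over trailing windows of five to 15 years ending in 2012. The temperature series here is a hypothetical stand-in, and the function name trailing_trends is ours; the actual annual global anomalies are published by the Climatic Research Unit.

import numpy as np

def trailing_trends(years, temps, end_year=2012, windows=range(5, 16)):
    """Least-squares linear trend (deg C per decade) over each trailing
    window ending in end_year -- the comparison shown in Figure 2."""
    trends = {}
    for w in windows:
        mask = (years > end_year - w) & (years <= end_year)
        slope = np.polyfit(years[mask], temps[mask], 1)[0]  # deg C per year
        trends[w] = 10 * slope                              # -> per decade
    return trends

# Hypothetical stand-in for the CRU annual global anomalies (deg C);
# substitute the real series to reproduce the observed side of Figure 2.
rng = np.random.default_rng(1)
years = np.arange(1985, 2013)
temps = 0.015 * (years - 1985) + rng.normal(0, 0.08, years.size)

for w, trend in sorted(trailing_trends(years, temps).items()):
    print(f"{w:2d}-year trend ending 2012: {trend:+.3f} deg C per decade")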