The endangerment finding is so important because in 2007 the Supreme Court ruled that the EPA is mandated by law to regulate greenhouse gas emissions if the agency determines that they endanger human health and welfare. The 2009 endangerment finding did precisely that.
The endangerment finding itself depends heavily on an EPA “Technical Support Document,” largely based on forecasts from global climate models. These models, in aggregate, fail miserably at simulating the observed temperatures of the last two decades, especially those in the lower atmosphere.
Thanks to a recent paper called “The Art and Science of Climate Model Tuning,” published in the Bulletin of the American Meteorological Society and written by Frederic Hourdin and 14 other climate modelers, we now have an idea why.
The paper discusses the phenomenal amount of adjustment that has been applied to the models in order to get them to produce what the scientists called an “anticipated acceptable range” of future warming. Among modelers, this is known as “tuning” an experiment in order to get a desired answer.
The problem, as noted recently by Paul Voosen in an article summarizing Hourdin et al., is that, left to their own devices, these models cannot even replicate the observed climatic history of the 20th century. The degree of adjustment to get them to do this, he noted, was purposefully withheld from the public. Per Voosen, modelers “had been mum in public about their ‘secret sauce’ … this taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about (the) models.”
The inescapable conclusion from the paper is that each fiddling of the models, which includes adjusting everything from the earth’s reflectivity to the mixing of heat in the ocean, gives a different forecast of how much the earth will warm for a doubling of atmospheric carbon dioxide, a quantity known as the equilibrium climate sensitivity (ECS). If the ECS can be changed to a wide range of values, depending upon the “tuning” of the model, then it is the modeler and not the underlying physics that decides this number.
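The dependence of the ECS on tuned parameters can be sketched with a textbook zero-dimensional energy-balance relation (this is an illustration, not the method used in Hourdin et al.): at equilibrium, the forcing from doubled carbon dioxide is balanced by the net climate feedback, so ECS = F_2x / λ, where F_2x ≈ 3.7 W/m² is the standard forcing estimate and λ is the net feedback parameter. The feedback parameter is exactly the sort of quantity that tuning choices about clouds, reflectivity, and ocean heat mixing effectively adjust.

```python
# Illustrative zero-dimensional energy-balance sketch of equilibrium
# climate sensitivity (ECS); not the method of Hourdin et al.
# At equilibrium, forcing from doubled CO2 is balanced by the net
# feedback response lambda * dT, so ECS = F_2x / lambda.

F_2X = 3.7  # radiative forcing from doubled CO2, W/m^2 (standard estimate)

def ecs(feedback_param):
    """Equilibrium warming (K) for doubled CO2, given the net climate
    feedback parameter in W/m^2 per K."""
    return F_2X / feedback_param

# Modest changes in the net feedback yield very different sensitivities,
# which is why tuning choices end up deciding the "answer":
for lam in (2.4, 1.2, 0.8):
    print(f"lambda = {lam:.1f} W/m^2/K  ->  ECS = {ecs(lam):.1f} K")
```

Halving the assumed feedback doubles the projected warming, with no change to the underlying physics being modeled.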
And who defines an “acceptable” ECS? In these cases, it is the very same people jiggling the models in the first place. It isn’t science to decide the right answer and then get a model to compute it!
Every science graduate student soon learns the perils of fiddling with models. Put in enough predictors and you can simulate the past behavior of anything, but a model with too many variables will blow up when tested in the real world, producing results no better than a table of random numbers. Similarly, “tune” the internal workings of a model beyond physical reality in search of an “anticipated acceptable” warming, and that model is likely to make bad forecasts.
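The grad-student lesson above is ordinary overfitting, and a toy sketch (my own illustration, not an example from the paper) makes it concrete: fit a polynomial with as many free parameters as data points to a noisy record and it reproduces the "past" essentially exactly, yet fails on points it was not fitted to.

```python
# Toy illustration of overfitting: a model with one free parameter per
# data point matches history perfectly but forecasts poorly.
import random

random.seed(0)

def lagrange_fit(xs, ys):
    """Return the unique polynomial (as a callable) passing exactly
    through the points (xs, ys) -- one parameter per observation."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# The "historical record": a simple linear trend plus noise.
truth = lambda x: 0.5 * x
xs = [float(i) for i in range(8)]
ys = [truth(x) + random.gauss(0, 0.3) for x in xs]

model = lagrange_fit(xs, ys)

# In-sample: the over-parameterized model replicates the past
# to within floating-point error.
in_sample_err = max(abs(model(x) - y) for x, y in zip(xs, ys))

# Out-of-sample: evaluate between and beyond the fitted points.
out_err = max(abs(model(x) - truth(x)) for x in (0.5, 3.5, 6.5, 8.5))

print(f"max in-sample error:     {in_sample_err:.2e}")
print(f"max out-of-sample error: {out_err:.2f}")
```

The in-sample fit is perfect by construction, while the out-of-sample error is dominated by the noise the model was allowed to chase.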
This explains the profound mismatch, which has developed over the past two decades, between atmospheric observations and the models. The Obama EPA’s endangerment finding depends on these same flawed models.
Because these models are now known to rest on a personal philosophy of what is “acceptable” rather than on rigorous science, there is very little to keep future EPA Administrator Pruitt from concluding that the technical support for an endangerment finding no longer exists. If that support no longer exists, then neither does the basis for the EPA’s carbon dioxide regulations.