Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
There is a new paper out in the journal Climatic Change that takes a look into the issue of publication bias in the climate change literature. This is something that we have previously looked into ourselves. The results of our initial investigation (from back in 2010) were written up and published in the paper “Evidence for ‘Publication bias’ concerning global warming in Science and Nature” in which we concluded that there was an overwhelming propensity for Nature and Science—considered among the world’s leading scientific journals—to publish findings that concluded climate change was “worse than expected.” We noted the implications:
This has considerable implications for the popular perception of global warming science, for the nature of “compendia” of climate change research, such as the reports of the United Nations’ Intergovernmental Panel on Climate Change, and for the political process that uses those compendia as the basis for policy…
The consequent synergy between [publication bias], public perception, scientific “consensus” and policy is very disturbing. If the results shown for Science and Nature are in fact a general character of the scientific literature concerning global warming, our policies are based upon a unidirectionally biased stream of new information, rather than one that has a roughly equiprobable distribution of altering existing forecasts or implications of climate change in a positive or negative direction. This bias exists despite the stated belief of the climate research community that it does not.
In their investigation into publication bias, the authors of the new paper, Christian Harlos, Tim C. Edgell, and Johan Hollander, looked more broadly across scientific journals (drawing articles from 31 different journals), but a bit more narrowly at the field of climate change, limiting themselves to a subset of articles dealing with marine responses to climate change (they selected, via random sampling, 120 articles in total).
Harlos et al. were primarily interested in whether there was a bias in these articles resulting from an under-reporting of non-significant results. This type of bias is known as the “file drawer” problem—research findings that aren’t statistically significant are rarely published (and therefore sit in a “file drawer”). The result is that the published literature overstates both the prevalence and the size of truly significant effects. The “file drawer” problem has received a lot of attention in recent years (see here for example) and it continues to be an active research area.
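The mechanics of the “file drawer” effect are easy to demonstrate with a toy simulation. The sketch below is purely illustrative—it is not taken from Harlos et al., and every parameter value (true effect size, sample size, number of studies) is an arbitrary assumption. It simulates many studies of the same modest true effect and then “publishes” only those that clear a conventional two-sided significance test, showing how the published average drifts upward.

```python
import random
import statistics

def simulate_file_drawer(true_effect=0.2, sigma=1.0, n=30,
                         studies=5000, seed=42):
    """Toy file-drawer simulation with illustrative (made-up) parameters.

    Each 'study' estimates a true effect of `true_effect` by averaging
    n noisy observations. A study is 'published' only if its estimate
    passes a two-sided z-test at alpha = 0.05. Returns the mean effect
    across all studies and across published studies only.
    """
    rng = random.Random(seed)
    crit = 1.96 * sigma / n ** 0.5   # significance cutoff for the sample mean
    all_estimates, published = [], []
    for _ in range(studies):
        est = statistics.mean(rng.gauss(true_effect, sigma) for _ in range(n))
        all_estimates.append(est)
        if abs(est) > crit:          # only significant results leave the file drawer
            published.append(est)
    return statistics.mean(all_estimates), statistics.mean(published)

mean_all, mean_published = simulate_file_drawer()
print(f"mean effect, all studies:       {mean_all:.2f}")
print(f"mean effect, published studies: {mean_published:.2f}")
```

Under these assumptions the average across all simulated studies recovers the true effect, while the average across the “published” subset is roughly double it—the literature looks stronger than reality simply because the null results stayed in the drawer.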