Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
There is a new paper out in the journal Climatic Change that examines publication bias in the climate change literature. This is something that we have previously looked into ourselves. The results of our initial investigation (from back in 2010) were published in the paper “Evidence for ‘Publication bias’ concerning global warming in Science and Nature,” in which we concluded that there was an overwhelming propensity for Nature and Science—considered among the world’s leading scientific journals—to publish findings that concluded climate change was “worse than expected.” We noted the implications:
This has considerable implications for the popular perception of global warming science, for the nature of “compendia” of climate change research, such as the reports of the United Nations' Intergovernmental Panel on Climate Change, and for the political process that uses those compendia as the basis for policy…
The consequent synergy between [publication bias], public perception, scientific “consensus” and policy is very disturbing. If the results shown for Science and Nature are in fact a general character of the scientific literature concerning global warming, our policies are based upon a unidirectionally biased stream of new information, rather than one that has a roughly equiprobable distribution of altering existing forecasts or implications of climate change in a positive or negative direction. This bias exists despite the stated belief of the climate research community that it does not.
In their investigation into publication bias, the authors of the new paper, Christian Harlos, Tim C. Edgell, and Johan Hollander, looked more broadly across scientific journals (drawing articles from 31 different journals) but more narrowly within the field of climate change, limiting themselves to a subset of articles dealing with marine responses to climate change (they selected, via random sampling, 120 articles in total).
Harlos et al. were primarily interested in whether these articles showed a bias resulting from the under-reporting of non-significant results. This type of bias is known as the “file drawer” problem—research findings that aren’t statistically significant are rarely published (and therefore sit in a “file drawer”), which leads to an inflated (and non-robust) estimate of the number of truly significant results. The “file drawer” problem has received a lot of attention in recent years (see here for example) and it continues to be an active research area.
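The file-drawer mechanism is easy to see in a quick simulation. The sketch below is purely illustrative (the effect size, sample size, and study count are assumptions of ours, not figures from Harlos et al.): it draws many small studies around a modest true effect, “publishes” only the statistically significant ones, and shows how the average effect in the published pile overstates the truth.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # modest true effect (illustrative assumption)
N_PER_STUDY = 20    # small studies, as is common in this literature
N_STUDIES = 2000

def run_study():
    """Simulate one study: estimate the effect from a small sample and
    run a crude two-sided z-test of the estimate against zero."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    est = statistics.mean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    significant = abs(est / se) > 1.96   # roughly p < 0.05, two-sided
    return est, significant

results = [run_study() for _ in range(N_STUDIES)]
all_estimates = [est for est, _ in results]
# The "file drawer": only statistically significant results get published.
published = [est for est, sig in results if sig]

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean over all studies:  {statistics.mean(all_estimates):.3f}")
print(f"mean over 'published':  {statistics.mean(published):.3f}")
print(f"fraction published:     {len(published) / N_STUDIES:.2%}")
```

The unpublished majority of studies sits in the file drawer, so a reader of the published record alone sees an effect several times larger than the one actually present in the full set of studies.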
From their examination, however, the Harlos team did not find firm evidence that file-drawer-type bias was strongly at work. Importantly, though, they did find several other types of bias, including bias in how scientific findings were being communicated:
However, our meta-analysis did find multiple lines of evidence of biases within our sample of articles, which were perpetuated in journals of all impact factors and related largely to how science is communicated: The large, statistically significant effects were typically showcased in abstracts and summary paragraphs, whereas the lesser effects, especially those that were not statistically significant, were often buried in the main body of reports. Although the tendency to isolate large, significant results in abstracts has been noted elsewhere (Fanelli 2012), here we provide the first empirical evidence of such a trend across a large sample of literature.
The authors note that, in particular, this bias was worst in the high impact journals (like Science and Nature), and that:
[O]ur results corroborate with others by showing that high impact journals typically report large effects based on small sample sizes (Fraley and Vazire 2014), and high impact journals have shown publication bias in climate change research (Michaels 2008, and further discussed in Radetzki 2010).
Ultimately, importantly, and significantly, they conclude:
…[M]ost audiences, especially non-scientific ones, are more likely to read article abstracts or summary paragraphs only, without perusing technical results. The onus to effectively communicate science does not fall entirely on the reader; rather, it is the responsibility of scientists and editors to remain vigilant, to understand how biases may pervade their work, and to be proactive about communicating science to non-technical audiences in transparent and un-biased ways. Ironically, articles in high impact journals are those most cited by other scientists; therefore, the practice of sensationalizing abstracts may bias scientific consensus too, assuming many scientists may also rely too heavily on abstracts during literature reviews and do not spend sufficient time delving into the lesser effects reported elsewhere in articles.
Despite our sincerest aim of using science as an objective and unbiased tool to record natural history, we are reminded that science is a human construct, often driven by human needs to tell a compelling story, to reinforce the positive, and to compete for limited resources—publication trends and communication bias is a proof of that.
These findings are yet another compelling reason (recall the problem with the bias in climate model tuning) why a re-examination of our government’s previous climate change assessment reports (such as those underlying the EPA’s endangerment finding) should be undertaken by the new Administration at the soonest possible opportunity.
Harlos, C., T.C. Edgell, and J. Hollander, 2017. No evidence of publication bias in climate change science. Climatic Change, 140, 375–385, doi:10.1007/s10584-016-1880-1
Michaels, P.J., 2008. Evidence for “Publication bias” concerning global warming in Science and Nature. Energy and Environment, 19, 287–301, doi:10.1260/095830508783900735