
Comments on the USGCRP Climate and Health Assessment

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

On June 8th, the public comment period on the draft report on climate and health from the U.S. Global Change Research Program (USGCRP) closed. Never liking to miss an opportunity to add our two cents’ worth to the conversation, we submitted a set of comments that focused on the weakness of the report’s underlying premise more than on its specific details (although we did include a sample set of those to show just how pervasive the selective use and misuse of science is throughout the report).

Our entire Comments are available here. But, for convenience, here’s the highlight reel. In summary, we found:

What is clear from this report, The Impacts of Climate Change on Human Health in the United States: A Scientific Assessment, and all other similar ones that have come before, is that the USGCRP simply chooses not to accept the science on human health and climate and instead prefers to forward alarming narratives, many based on science fiction rather than actual science. To best serve the public, this report should be withdrawn. By going forward without a major overhaul, its primary service [will] be to misinform and mislead the general public and policymakers alike.

Here we lay out the general problem:

The authors of the USGCRP draft of The Impacts of Climate Change on Human Health in the United States: A Scientific Assessment report have an outstanding imagination for coming up with ways that climate change may negatively impact the health and well-being of Americans, but a profound lack of understanding of the manner in which health and well-being are impacted by climate (including climate change).

Is There No “Hiatus” in Global Warming After All?

A new paper posted today on Science Express (from Science magazine) by Thomas Karl, Director of NOAA’s National Centers for Environmental Information, and several co-authors[1] seeks to disprove the “hiatus” in global warming, and it prompts many serious scientific questions.

The main claim[2] by the authors, that they have uncovered a significant recent warming trend, is dubious. The significance level they report for their findings (0.10) is hardly normative, and its use should prompt members of the scientific community to question the reasoning behind such a lax standard.
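The role the chosen threshold plays can be made concrete with a toy trend test. The sketch below uses entirely made-up anomaly numbers and a simple permutation test on the fitted slope (not the method used by Karl et al.); its only point is that whether a trend is declared “significant” depends on whether the bar is set at the conventional 0.05 or the laxer 0.10:

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1998, 2015)
# Illustrative anomaly series: a weak linear trend plus noise (made-up numbers)
temps = 0.006 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

obs_slope = np.polyfit(years, temps, 1)[0]  # degC per year

# Permutation test: how often does a shuffled (trend-free) series produce a
# slope at least as large in magnitude as the observed one?
n_perm = 2000
perm_slopes = np.array([np.polyfit(years, rng.permutation(temps), 1)[0]
                        for _ in range(n_perm)])
p_value = np.mean(np.abs(perm_slopes) >= abs(obs_slope))

print(f"slope = {obs_slope:+.4f} degC/yr, p = {p_value:.3f}")
print("significant at 0.05:", p_value < 0.05)
print("significant at 0.10:", p_value < 0.10)
```

A p-value falling between 0.05 and 0.10 passes the second check but not the first, which is exactly the gap the footnoted claim relies on.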

In addition, the authors’ treatment of buoy sea-surface temperature (SST) data was guaranteed to create a warming trend. The data were adjusted upward by 0.12°C to make them “homogeneous” with the longer-running temperature records taken from engine intake channels in marine vessels. 

As has been acknowledged by numerous scientists, the engine intake data are clearly contaminated by heat conduction from the engine itself, and as such were never intended for scientific use. On the other hand, environmental monitoring is the specific purpose of the buoys. Adjusting good data upward to match bad data seems questionable, and the fact that the buoy network becomes increasingly dense in the last two decades means that this adjustment must put a warming trend in the data.
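The arithmetic behind that last point can be illustrated with a toy blend. Assuming, purely for illustration, a flat “true” temperature, a constant +0.12°C ship bias, and a buoy share of observations that grows over the period (all numbers invented), adjusting the buoys upward shifts the blended trend in the warming direction by construction:

```python
import numpy as np

years = np.arange(1998, 2015)
n = years.size
true_anomaly = np.zeros(n)  # assume a flat true temperature for illustration

# Buoy share of observations grows from 10% to 70% over the period (assumed)
buoy_frac = np.linspace(0.10, 0.70, n)
SHIP_BIAS = 0.12  # ship engine-intake readings assumed to run warm by 0.12 degC

ship = true_anomaly + SHIP_BIAS
buoy = true_anomaly.copy()

# Unadjusted blend: the growing share of cooler buoy readings drags it down
raw_blend = buoy_frac * buoy + (1 - buoy_frac) * ship
# Adjusted blend: buoys shifted up by 0.12 degC to match the ship data
adj_blend = buoy_frac * (buoy + SHIP_BIAS) + (1 - buoy_frac) * ship

def trend(y):
    """Least-squares slope in degC per year."""
    return np.polyfit(years, y, 1)[0]

print(f"raw blend trend:      {trend(raw_blend):+.4f} degC/yr")
print(f"adjusted blend trend: {trend(adj_blend):+.4f} degC/yr")
```

In this toy setup the unadjusted blend trends downward (the buoy share rises linearly, so the blend cools at 0.12 × 0.6/16 ≈ 0.0045 degC/yr) while the adjusted blend is flat, so the adjustment adds warming relative to the raw series whenever the buoy fraction is increasing.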

The extension of high-latitude Arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, meaning the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.

Additionally, there exist multiple measures of bulk lower-atmosphere temperature, independent of surface measurements, which indicate the existence of a “hiatus”[3]. If the Karl et al. result were in fact robust, it could only mean that the disparity between surface and mid-tropospheric temperatures is even larger than previously noted.

Getting the vertical distribution of temperature wrong invalidates virtually every forecast of sensible weather made by a climate model, as much of that weather (including rainfall) is determined in large part by the vertical structure of the atmosphere.

Instead, it would seem more logical to seriously question the Karl et al. result in light of the fact that, compared to those bulk temperatures, it is an outlier, showing a recent warming trend that is not in line with these other global records.

And finally, even presuming all the adjustments applied by the authors ultimately prove to be accurate, the temperature trend reported during the “hiatus” period (1998-2014) remains significantly below (using Karl et al.’s measure of significance) the mean trend projected by the collection of climate models used in the most recent report from the United Nations’ Intergovernmental Panel on Climate Change (IPCC).

It is important to recognize that the central issue of human-caused climate change is not a question of whether it is warming or not, but rather a question of how much. And to this relevant question, the answer has been, and remains, that the warming is taking place at a much slower rate than is being projected.

The distribution of trends of the projected global average surface temperature for the period 1998-2014 from 108 climate model runs used in the latest report of the U.N.’s Intergovernmental Panel on Climate Change (IPCC) (blue bars). The models were run with historical climate forcings through 2005 and extended to 2014 with the RCP4.5 emissions scenario. The surface temperature trend over the same period, as reported by Karl et al. (2015), is included in red. It falls at the 2.4th percentile of the model distribution and indicates a value that is (statistically) significantly below the model mean projection.
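The percentile placement described in the caption is straightforward to compute. A minimal sketch, using a synthetic stand-in distribution of 108 model trends and an invented observed value (these are not the actual CMIP5 or Karl et al. numbers):

```python
import numpy as np

rng = np.random.default_rng(42)

# 108 hypothetical model trends in degC/decade -- a synthetic stand-in for the
# climate model runs, NOT actual model output
model_trends = rng.normal(loc=0.23, scale=0.10, size=108)

# Illustrative observed 1998-2014 trend; not the actual Karl et al. value
observed = 0.11

# Percentile rank: share of model runs whose trend is at or below the observed one
pct = 100.0 * np.mean(model_trends <= observed)
print(f"observed trend sits at the {pct:.1f}th percentile of the model distribution")
```

An observed trend falling in the far left tail of the model distribution, as the caption reports, is what grounds the claim that the models run significantly warmer than the measurements.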



[1] Karl, T. R., et al., 2015. Possible artifacts of data biases in the recent global surface warming hiatus. Science Express, embargoed until 1400 EDT June 4, 2015.

[2] “It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998-2012…”

[3] Both the UAH and RSS satellite records are now in their 21st year without a significant trend, for example.

You Ought to Have a Look: Intimidation in Science

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

Talk of interference, intimidation, and abridgement of scientific freedom continues to make the news this week—and increasingly is taking the form of pushback against recently announced congressional investigations into sources of scientific research funding.

On Tuesday, the Wall Street Journal ran an editorial offering a “round of applause for those pushing back, providing the bullies a public lesson in the First Amendment.” Highlighted in their coverage were efforts by the Cato Institute, the Heartland Institute, and Koch Industries condemning attempts to “silence public debate” on climate change. From the WSJ:

Democrats and their allies have failed to persuade Americans that climate change is so serious that it warrants sweeping new political controls on American energy and industry. So liberals are trying to silence those who are winning the argument. We’re glad to see the dissenters aren’t intimidated.

Also unintimidated by attempts to abridge academic freedom is Alice Dreger, professor of Medical Education-Medical Humanities and Bioethics at Northwestern University and a historian of science and medicine. Dreger has a new book out titled Galileo’s Middle Finger: Heretics, Activists and the Search for Justice in Science that describes how activists try to intimidate researchers when the activists disagree with the researchers’ work.

Roger Pielke Jr. reviews the book for Nature. From his blog, leading into his review, Roger describes why he empathizes with Dreger.

You Ought to Have a Look: An Overreaching Investigation


Over the past couple of weeks, prominent members of the climate science/climate policy community have come under attack for not toeing the (Presidential) party line when it comes to how human-caused climate change is being billed and sold via the President’s Climate Action Plan.

The attacks began with Harvard Smithsonian Center for Astrophysics researcher Willie Soon, and thanks to the attention afforded by Justin Gillis in the New York Times, were expanded by Representative Raul Grijalva (D-AZ), to include Richard Lindzen, David Legates, John Christy, Judith Curry, Robert Balling, Roger Pielke Jr., and Steven Hayward.

In this You Ought to Have a Look, we provide links to the subsequent public comments from those researchers under question (who have made them available) in response to this line of investigation—one which many have termed a “witch hunt.”

How to Stop Wasting Money on Science

In Thursday’s Wall Street Journal, former Energy Secretary (and Stanford professor) Steven Chu and his colleague Thomas R. Cech penned an opinion piece entitled “How to Stop Winning Nobel Prizes in Science,” in which they argue for better long-term planning and consistency in the public funding of science. Cato adjunct scholar Dr. Terence Kealey agrees, suggesting the right amount would be consistently $0.

In August 2013, Kealey wrote precisely about this in that month’s edition of Cato Unbound. Since then, he has stepped down after a long and successful tenure as vice-chancellor (the equivalent of college president in the U.S.) of the University of Buckingham in the United Kingdom.

First, Kealey considers the notion that science is a “public good,” i.e., something that should rightly be funded by government because scientific developments would otherwise be underprovided from the perspective of society as a whole.  

The myth [that Science is a public good] may be the longest-surviving intellectual error in western academic thought, having started in 1605 when a corrupt English lawyer and politician, Sir Francis Bacon, published his Advancement of Learning.

Kealey went on to document that there is no evidence the public good model (as opposed to laissez faire) is more efficient at providing for the betterment of the public:

Reflections on Rapid Response to Unjustified Climate Alarm

The Cato Institute’s Center for the Study of Science today kicks off its rapid response center that will identify and correct inappropriate and generally bizarre claims on behalf of climate alarm. I wish them luck in this worthy enterprise, but more will surely be needed to deal with this issue.

To be sure, there is an important role for such a center. It is not to convince the ‘believers.’ Nor do I think that there is any longer a significant body of sincere and intelligent individuals who are simply trying to assess the evidence. As far as I can tell, the issue has largely polarized that relatively small portion of the population that has chosen to care about the issue. The remainder quite reasonably have chosen to remain outside the polarization. Thus the purpose of a rapid response center will be to reassure those who realize that this is a fishy issue that there remain scientists who are still concerned with the integrity of science. There is also a crucial role in informing those who wish to avoid the conflict as to what is at stake. While these are important functions, there are other issues that I feel a think tank ought to consider. Moreover, there is a danger that rapid response to trivial claims lends unwarranted seriousness to these claims.

Climate alarm belongs to a class of issues characterized by a claim for which there is no evidence, that nonetheless appeals strongly to one or more interests or prejudices. Once the issue is adopted, evidence becomes irrelevant. Instead, the believer sees what he believes. Anything can serve as a supporting omen. Three very different previous examples come to mind (though there are many more examples that could be cited): Malthus’ theory of overpopulation, social Darwinism and the Dreyfus Affair. Although each of these issues engendered opposition, only the Dreyfus Affair led to widespread societal polarization. More commonly, only the ‘believers’ are sufficiently driven to form a movement. We will briefly review these examples (though each has been subject to book length analyses), but the issue of climate alarm is somewhat special in that it appeals to a sizeable number of interests, and has strong claims on the scientific community. It also has the potential to cause exceptional harm to an unprecedented number of people. This has led to persistent opposition amidst widespread lack of interest. However, all these issues are characterized by profound immorality pretending to virtue. 

Education Policy, The Use of Evidence, and the Fordham Institute

In recent weeks, the Fordham Institute has repeatedly called for government testing and reporting mandates to be imposed on private schools participating in school choice programs (here and here), on the grounds that such “public accountability” improves private school academic outcomes. In defense of this claim, the Fordham Institute cites a study of Milwaukee’s voucher program in which test scores rose following the introduction of such mandates.

Patrick Wolf, director of the research team that conducted the study, has now responded, explaining that his team’s results do not necessarily support Fordham’s claim:

[B]y taking the standardized testing seriously in that final year, the schools simply may have produced a truer measure of student’s actual (better) performance all along, not necessarily a signal that they actually learned a lot more in the one year under the new accountability regime….

What about the encouraging trend that lower-performing schools in the MPCP are being closed down?  [Fordham] mentions that as well and attributes it to the stricter accountability regulations on the program.  That phenomenon of Schumpeterian “creative destruction” pre-dated the accountability changes in the choice program, however, and appears to have been caused mainly by low enrollments in low-performing choice schools, as parents “voted with their feet” against such institutional failure. Sure, the new high-stakes testing and public reporting requirements might accelerate the creative destruction of low-performing choice schools in Milwaukee, but that remains to be seen. [emphasis added]

But there is a deeper problem with the Fordham claim, to which Wolf alludes: a single study, no matter how carefully executed, is not a scientific basis for policy. Because a single study is not science. Science is a process of making and testing falsifiable predictions. It is about patterns of evidence. Bodies of evidence. Fordham offers only a toe.

And Fordham’s preferred policy not only lacks a body of supporting evidence, it is undermined by a large body of evidence. When I reviewed the within-country studies comparing outcomes among different types of school systems worldwide in 2009, I sorted the results into two categories: 1) all studies that compared “public” schools to “private” schools, where those terms were loosely defined; and 2) studies that compared “market” schools to “monopoly” schools. “Market” schools were those paid for at least in part directly by parents and only minimally regulated. “Monopoly” schools were public school systems such as those common in the U.S.

The purpose of these separate categorizations was to see if limited regulation and direct parent funding make a real difference, or if private schools that are paid for entirely by the state and subjected to Fordham’s “public accountability” have the same advantages as their more market-like counterparts.

The result of this breakdown of the literature was stark. Studies looking at truly market-like education systems are twice as consistent in finding a private sector advantage as those looking at “private” schools more broadly construed (and thus including state-funded and regulated private schools).

The pattern of evidence thus seems to contradict Fordham’s belief in the merits of “public accountability” in market education systems. What it favors are policies that promote the rise of minimally regulated education markets in which parents pay at least some of the cost of their own children’s education directly themselves, whenever possible.  That’s just the sort of system likely to arise under education tax credit programs.
