Last week, the Senate Appropriations Committee filed a report along with the appropriations bill for the Departments of Labor, Health and Human Services, and Education. The report mostly consists of broad policy recommendations and guidance for how to spend the appropriated money. On page 108 of the 273-page report, however, is a discussion of “barriers to research,” specifically, how the “Committee is concerned that restrictions associated with Schedule 1 of the Controlled Substance Act effectively limit the amount and type of research that can be conducted on certain Schedule 1 drugs, especially marijuana or its component chemicals and certain synthetic drugs.”
While the report is not law, it signals a welcome change in attitude. For decades, marijuana’s Schedule 1 status has made it very difficult for researchers and scientists to investigate the plant’s medicinal and harmful properties. In order to research marijuana legally for clinical purposes, even if you’re in Colorado and could just purchase some, you must first get a license from the DEA, then get approval from the FDA, and finally get access to the one federally authorized marijuana supply, which is grown at the University of Mississippi and run by the National Institute on Drug Abuse (NIDA). On top of that, the federally sourced marijuana is often moldy and of unpredictable quality. And then there’s funding, which has often not been forthcoming to those trying to research the possible beneficial uses of cannabis.
Taken together, all of those steps make researching marijuana more difficult than researching almost any other drug on the planet, including other Schedule 1 substances such as heroin and LSD. As the Appropriations Committee report says, “At a time when we need as much information as possible about these drugs, we should be lowering regulatory and other barriers to conducting this research.” The report thus directs NIDA to “provide a short report on the barriers to research that result from the classification of drugs and compounds as Schedule 1 substances.”
The report comes at a time when Attorney General Jeff Sessions is blocking an Obama administration attempt to make marijuana more easily available to researchers. In August 2016, the DEA began accepting applications to become an authorized marijuana supplier. Twenty-six applications were submitted but, after the administration changed over, Attorney General Sessions stalled the approval process. In response, Senators Orrin Hatch (R-UT) and Kamala Harris (D-CA) sent a letter to Sessions asking him to stop blocking research. Hatch has also introduced the MEDS Act, which is a more permanent legislative fix to the problems around marijuana research.
Federal marijuana prohibition, at least as a Schedule 1 drug, is on its last legs. Nine states and the District of Columbia have legalized recreational cannabis and 30 states have legalized medical marijuana. No one is putting that genie back in the bottle. Federal law is so antiquated, in fact, that it makes no distinction between medical and recreational use. Schedule 1 drugs have no accepted medical uses, and the difficulty of carrying out medical research is one reason marijuana still has that status. The Senate Appropriations report is just another step toward the inevitable revision of federal marijuana laws.
You Ought to Have a Look is a regular feature from the Center for the Study of Science. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.
This week we focus on an in-depth article in Slate authored by Sam Apple that profiles John Arnold, “one of the least known billionaires in the U.S.” Turns out Mr. Arnold is very interested in “fixing” science. His foundation, the Arnold Foundation, has provided a good deal of funding to various research efforts across the country and across disciplines aimed at investigating how the scientific incentive structure results in biased (aka “bad”) science. His foundation has supported several high-profile efforts to replicate scientific findings, such as those being run by Stanford’s John Ioannidis (whose work we are very fond of) and the University of Virginia’s Brian Nosek, who runs a venture called the “Reproducibility Project” (and who pioneered the badge system of rewards for open science that we previously discussed). The Arnold Foundation has also provided support for the re-examination of nutritional science, an effort led by Gary Taubes (also a favorite of ours), as well as investigations into the scientific review process behind the U.S. government’s dietary guidelines, spearheaded by journalist Nina Teicholz.
Apple writes that:
In my conversations with Arnold and his grantees, the word incentives seems to come up more than any other. The problem, they claim, isn’t that scientists don’t want to do the right thing. On the contrary, Arnold says he believes that most researchers go into their work with the best of intentions, only to be led astray by a system that rewards the wrong behaviors.
This is something that we, too, repeatedly highlight at the Center for the Study of Science; investigating its impact is what we are built around. As Apple describes it:
[S]cience, itself, through its systems of publication, funding, and advancement—had become biased toward generating a certain kind of finding: novel, attention grabbing, but ultimately unreliable…
“As a general rule, the incentives related to quantitative research are very different in the social sciences and in financial practice,” says James Owen Weatherall, author of The Physics of Wall Street. “In the sciences, one is mostly incentivized to publish journal articles, and especially to publish the sorts of attention-grabbing and controversial articles that get widely cited and picked up by the popular media. The articles have to appear methodologically sound, but this is generally a lower standard than being completely convincing. In finance, meanwhile, at least when one is trading with one’s own money, there are strong incentives to work to that stronger standard. One is literally betting on one’s research.”
Another term for “betting on one’s research” is having some “skin in the game”—a concept that Judy Curry expounds upon in her blog piece on her transition from academia to her weather and climate forecasting business, for
reasons hav[ing] to do with my growing disenchantment with universities, the academic field of climate science and scientists…The reward system that is in place for university faculty members is becoming increasingly counterproductive to actually educating students to be able to think and cope in the real world, and in expanding the frontiers of knowledge in a meaningful way (at least in certain fields that are publicly relevant such as climate change).
Further, she said
I said in my post JC in transition that I thought that the private sector is a more ‘honest’ place for a scientist than academia. In this context, in the private sector you have skin in the game with regards to weather forecasts (and shorter term climate forecasts), whereas in academia scientists have no skin in the game in terms of the climate change projections.
Making shorter term weather or climate forecasts, with some skin in the game, would be very good experience for academic climate scientists. And this experience just might end up changing their perspectives on uncertainty and forecast confidence.
In the Slate article, Apple sees the ultimate efforts of Arnold and those he’s helping to support as trying to tap into scientists’ natural inclination towards providing valuable research, and, much as Curry suggests, finding ways to alter the existing incentive structure towards that goal:
Scientists really do want to discover things that make a difference in people’s lives. In a sense, that’s the strongest weapon that we have. We can feed off that. Figuring out exactly which rewards work best and how to simultaneously change the incentives for researchers, institutions, journals, and funders is now a key area of interest…
We couldn’t agree more.
Be sure to check out Apple’s full article for some great background on John Arnold and more details from the efforts that he supports, as well as Judy’s excellent set of posts on her reasons for leaving academia and heading full-time into the private sphere. You ought to have a look.
And before we go, we wanted to pass along this bit of carbon tax news:
Tesla Motors Inc. founder Elon Musk is pressing the Trump administration to adopt a tax on carbon emissions, raising the issue directly with President Donald Trump and U.S. business leaders at a White House meeting Monday regarding manufacturing.
A senior White House official said Musk floated the idea of a carbon tax at the meeting but got little or no support among the executives at the White House, signaling that Trump’s conservative political orbit remains tepid on the issue.
Hold firm fellas!
Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”
A new paper has been published in the journal Geophysical Research Letters that examines trends in heavy rainfall amounts across the U.S. The paper is authored by Newcastle University’s Renaud Barbero and colleagues, and, to summarize, finds that the heaviest rainfall events of the year have been increasing in magnitude since 1951 when averaged across nearly 500 stations distributed across the U.S. (note: results from individual stations may differ from the general finding).
Someone with a critical eye might ask the real question, which is “how much?” That such a number does not jump out of this paper—a cynic would say—probably means it is very small. Read on and you will find the answer.
That rainfall on the rainiest day of the year is increasing is, of itself, hardly surprising considering that the total annual rainfall amount averaged across the U.S. has also been increasing during this same period (again, results from individual locations/regions may (and do) depart from this generality).
Changes in heavy rainfall like this are often luridly described as a “disproportionate increase” in extreme events, or that extreme precipitation increases are “worse than expected.”
This is not the case—or at least wasn’t the case when we published a paper in the International Journal of Climatology on this very subject back in 2004. In that paper we found the same thing that Barbero and colleagues found with regards to the heaviest daily rainfall events of each year—they were getting heavier. But, we were careful to note, so too was the total yearly rainfall, and as a result, the increase in the heaviest daily rainfall events was completely proportional to the increase in annual rainfall. In other words, the increase on the wettest day each year was neither “disproportionate” nor “worse than expected.”
In fact, this result was not a surprise at all. It’s basic climatology.
Let’s have a look, and while we’re at it, we’ll update our old analysis (which ended with data from 2001) with data available from a new “extremes” dataset which runs through 2010.
The “extremes” data set we’ll use is the one compiled by Markus Donat and colleagues and includes gridded data (3.75° longitude x 2.5° latitude grid) on annual daily rainfall extremes as well as data on annual total rainfall. This data is available using this handy on-line tool. We selected the region of interest as shown in Figure 1.
Figure 1. Study area used in this analysis.
Now, let’s look at the relationship between total annual rainfall and daily maximum rainfall for each year, averaged across the gridcells contained in our study area, as shown in Figure 2. Years with more total rain also have more rainfall on the wettest day of the year. This is no different during the first half of the 20th century (1901-1950; blue points) than it has been in the time since (1951-2010; red points). Any human-caused climate change (which should be more evident in the latter period) hasn’t led to a change in this general climatological relationship.
Figure 2. The average of the annual total precipitation (across the grids in the boxed area in Figure 1) plotted against the average of the annual daily maximum precipitation for two different time periods.
Let’s dig in a bit further. In Figure 3a (top) we plot the amount of precipitation falling during the wettest day of the year averaged across our study area. The data run from 1901-2010 and show a small, but statistically significant increase. Figure 3b (middle) shows the average total annual precipitation over the same period. Again, a slight and significant increase. And finally, Figure 3c (bottom) shows that the proportion of annual precipitation delivered during the wettest day of the year (Figure 3a divided by Figure 3b) exhibits no overall trend. In other words, the relationship between total rain and heavy rain remains unchanged. Each of these precipitation histories shows a good bit of temporal variation. If human-caused climate change is playing a role here, it is neither readily discernible nor resulting in unusual behavior.
Figure 3. (a, top) The daily precipitation on the wettest day of the year averaged across the region depicted in Figure 1, 1901-2010; (b, middle) The total annual precipitation averaged across the region depicted in Figure 1, 1901-2010; (c, bottom) the proportion of total annual precipitation delivered during the wettest day of the year, 1901-2010.
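For anyone who wants to run this kind of check on other records, the Figure 3 calculations reduce to a few lines. Here is a minimal sketch with synthetic data standing in for the HadEX2 regional averages (the numbers below are invented assumptions, not the actual dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1901, 2011)

# Hypothetical stand-ins for the regional averages used above:
# total annual precipitation and wettest-day precipitation (inches).
annual_total = 30 + 0.01 * (years - 1901) + rng.normal(0, 2, years.size)
wettest_day = 0.07 * annual_total + rng.normal(0, 0.2, years.size)

# Proportion of the year's rain delivered on the wettest day (as in Fig. 3c).
proportion = wettest_day / annual_total

# Least-squares trend and its significance for each series.
for name, series in [("wettest day", wettest_day),
                     ("annual total", annual_total),
                     ("proportion", proportion)]:
    res = stats.linregress(years, series)
    print(f"{name}: slope={res.slope:.5f} per year, p={res.pvalue:.3f}")
```

With real data, the point of the exercise is the third line of output: if the wettest-day and annual-total trends are proportional, the trend in the ratio is statistically indistinguishable from zero.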
From Figure 3a we can also see the actual increase in the amount of rainfall falling on the rainiest day of the year. It’s just a tad more than one-tenth of an inch. We submit that that isn’t “extreme,” “disproportionate” or even “consequential.” It’s just a very small number. To emphasize to the point of ridiculousness, hold your thumb and index finger a tenth of an inch apart and imagine the horrific damage that this increase in rain must cause!
In fact, if global warming has a hand in increasing annual precipitation totals, this would seem to be a good thing (a positive externality) considering the anticipated increase in water-use demand. According to a recent report from the Pacific Institute on our nation’s water-use trends:
We conclude that considerable progress has been made in managing the nation’s water — but the current pace is not likely to counter the demands of continued population and economic growth, climate change, and increasing tensions over scarce water resources. National water use remains high, and many freshwater systems are under stress from overuse. While there is reason to believe this may be changing, we must continue efforts to improve water-use efficiency in our homes, businesses, and on our nation’s farms.
All of this confirms that our original finding from 2004 remains valid, and thus our conclusions from that paper are worth repeating:
Our results support the contention that, where changes are significant, there is an increase in the amount of rain occurring on heavy rain days. However, our results provide no support for the argument that the increase in total annual rainfall observed across the USA is disproportionately occurring on the wettest days — a contention that may have arisen from methodological constraints rather than true changes in the nature of precipitation delivery…
Our results argue strongly that the increase in rainfall in the coterminous 48 states that has been observed in the last 100 years has not resulted in any systematic disproportion in the percentage of that increase allocated to the heaviest rain days.
But then, as is the case now, the actual amount of the increase is so small as to be operationally meaningless, even if it were caused by emissions of dreaded carbon dioxide (a hypothesis that is rather difficult to prove). The same holds true for the large majority of other impacts from anthropogenic climate change—despite what you may read in the other media outlets.
Barbero, R., H. J. Fowler, G. Lenderink, and S. Blenkinsop, 2016. Is the intensification of precipitation extremes better detected at hourly than daily resolutions? Geophysical Research Letters, doi: 10.1002/2016GL071917.
Donat, M. G., et al., 2013. Updated analysis of temperature and precipitation extremes indices since the beginning of the twentieth century: The HadEX2 dataset. Journal of Geophysical Research-Atmospheres, 118, 1-16, doi: 10.1002/jgrd.50150.
Michaels, P. J., P. C. Knappenberger, O. W. Frauenfeld, and R. E. Davis, 2004. Trends in precipitation on the wettest days of the year across the contiguous USA. International Journal of Climatology, 24, 1873-1882, doi: 10.1002/joc.1102.
While the twitterverse is chirping with concern over Donald Trump’s handling of the global warming science, we offer a few realities that should be key parts of any transitional team’s synthesis.
1. Carbon dioxide is a greenhouse gas that by itself will result in a slight warming of the lower atmosphere and surface temperatures, as well as a cooling of the stratosphere.
a. All of these have been observed.
2. Additional warming is provided by a complicated feedback with water vapor. If it were large and positive, so would be future warming.
a. The observed warming is far below values consistent with a high temperature sensitivity. Therefore future warming will run considerably below any high-sensitivity estimate.
b. The disparity between observed and forecast warming continues to grow.
3. Any attempts to mitigate significant future warming with the current suite of politically acceptable technologies are doomed to failure. The Paris Agreement, according to the EPA’s own models, only prevents 0.1 to 0.2°C of warming by 2100.
a. The Paris Agreement is meaningless, unenforceable, and compels developed nations to tender funds to the developing world. That makes it a treaty that should be submitted to the Senate for ratification, where it will be soundly rejected.
4. Having the government mandate politically correct and inefficient technologies such as solar energy and wind inevitably squanders resources that would better be used for investments in a much more efficient future. Unfortunately, this is what President Obama’s Clean Power Plan does.
a. Voiding the Clean Power Plan will therefore ultimately lead more quickly to competitive, more efficient technologies.
5. Affluent societies have the resources for private investment in novel and efficient technologies, and inevitably are more protective of their environment than are poor ones.
a. Environmental protection is a priority in a vibrant economy. Promoting economic development is the key to a cleaner planet.
6. There is no evidence that government funding of most science is better than a more diversified base of private support. The current dependency of the academy on this funding is creating perverse incentives that are demonstrably harming science.
a. All government-funded science outside of the clandestine realm must be perfectly transparent with data, research methods and results available to any party.
We’ve recently taken a look to see how these comport with the views of Myron Ebell of the Competitive Enterprise Institute, who was recently named to head the transitional team for the new administration’s EPA. We think he agrees with us on this synthesis—something no one in any previous administration has ever done!
You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.
We came across a pair of interesting, but somewhat involved, reads this week on the interface of science and science policy when it comes to climate change. We’ll give you a little something to chew on from each one, but suggest that you ought to have a look at them at length to appreciate them in full.
First up is a piece, “The Limits of Knowledge and the Climate Change Debate” appearing in the Fall 2016 issue of the Cato Journal by Brian J. L. Berry, Jayshree Bihari, and Euel Elliott in which the authors examine the “increasingly contentious confrontation over the conduct of science, the question of what constitutes scientific certainty, and the connection between science and policymaking.”
Here’s an extended abstract:
As awareness of the uncertainties of global warming has trickled out, polling data suggests that the issue has fallen down the American public’s list of concerns. This has led some commentators to predict “the end of doom,” as Bailey (2015) puts it. In light of this, it seems odd to keep hearing that “the science is settled” and that there is little, if anything, more to be decided. The global warming community still asks us to believe that all of the complex causal mechanisms that drive climate change are fully known, or at least are known well enough that we, as a society, should be willing to commit ourselves to a particular, definitive and irreversible, course of action.
The problem is that we are confronted by ideologically polarized positions that prevent an honest debate in which each side acknowledges the good faith positions of the other. Too many researchers committed to the dominant climate science position are acting precisely in the manner that Kuhnian “normal science” dictates. The argument that humanity is rushing headlong toward a despoiled, resource-depleted world dominates the popular media and the scientific establishment, and reflects a commitment to the idea that climate change represents an existential or near-existential threat. But as Ellis (2013) says, “These claims demonstrate a profound misunderstanding of the ecology of human systems. The conditions that sustain humanity are not natural and never have been. Since prehistory, human populations have used technologies and engineered ecosystems to sustain populations well beyond the capabilities of unaltered natural ecosystems.”
The fundamental mistake that alarmists make is to assume that the natural ecosystem is at some level a closed system, and that there are therefore only fixed, finite resources to be exploited. Yet the last several millennia, and especially the last two hundred years, have been shaped by our ability—through an increased understanding of the world around us—to exploit at deeper and deeper levels the natural environment. Earth is a closed system only in a very narrow, physical sense; it is humanity’s ability to exploit that ecology to an almost infinite extent that is important and relevant. In other words, the critical variables of creativity and innovation are absent from alarmists’ consideration.
In that sense, there is a fundamental philosophical pessimism at work here—perhaps an expression of the much broader division between cultural pessimists and optimists in society as a whole. Both Deutsch (2011) and Ridley (2015b) view much of the history of civilization as being the struggle between those who view change through the optimistic lens of the ability of humanity to advance, to solve the problem that confronts it and to create a better world, and those who believe that we are at the mercy of forces beyond our control and that efforts to shape our destiny through science and technology are doomed to failure. Much of human history was under the control of the pessimists; it has only been in the last three hundred years that civilization has had an opportunity to reap the benefits of a rationally optimistic world view (see Ridley 2010).
Yet the current “debate” over climate change—which is really, in Ridley’s (2015a) terms, a “war” absent any real debate—has potentially done grave harm to this scientific enterprise. As Ridley documents, one researcher after another who has in any way challenged the climate orthodoxy has met with withering criticism of the sort that can end careers. We must now somehow return to actual scientific debate, rooted in Popperian epistemology, and in so doing try to reestablish a reasonably nonpolitical ideal for scientific investigation and discovery. Otherwise, the poisoned debate over climate change runs the risk of contaminating the entire scientific endeavor.
It seems the idea that the way climate change science is being conducted is proving a detriment to the good of science is becoming a common theme these days (see a new examination of the general topic by Paul Smaldino and Richard McElreath here, as well as our reflections from last week).
Our second piece this week is an opinion paper by Oliver Geden in the publication Wiley Interdisciplinary Reviews: Climate Change titled “The Paris Agreement and the inherent inconsistency of climate policymaking.” In it, Geden outlines how international climate negotiations are basically broken and how the role of climate scientists (especially those who want to act as climate policy advisors) is largely contradictory to what these (self-ordained) well-intentioned folks seem to think. While most policymakers assume consistency from talk to decision to action, in reality, Geden points out, inconsistency is the true way of the world when addressing complex issues involving a “deliberately transformative agenda such as energy and climate policy.” This fundamental misunderstanding, or improper assumption, only furthers the ineptitude (foolhardiness?) of international climate negotiations.
Here’s an excerpt:
Until now, there has been no serious questioning of the intention to limit the temperature increase to 2 or even 1.5 °C. Not that many in the climate research community seem to grasp the political rationalities behind the setting of long-term policy targets. Even the mainstream policy discourse assumes consistency between talk, decisions, and actions. Accordingly, a decision on a certain climate target is presented and perceived as an act of deliberate choice, that will be followed up with the deployment of appropriate measures. In real-world policymaking, however, many decisions are viewed as independent organizational products, not necessarily requiring appropriate action. Despite the cultural norm of consistency, inconsistency is an inherent and inevitable feature of policymaking.
…Against this backdrop, the most challenging task ahead for policy-driven researchers and scientific advisors is that of critical self-reflection. In a world of inherently inconsistent climate policymaking, simply delivering the best available knowledge to policymakers might have counterintuitive effects. This means that those providing expertise cannot rely solely on their good intentions but also have to consider results. They must critically assess how their work is actually being interpreted and used in policymaking processes. This is not to say that researchers and scientific advisors should try to actively influence policymaking, as occasionally suggested, since that would almost inevitably lead to more inconsistency in experts’ knowledge production as a result of an increased politicization of climate research.
Climate researchers and scientific advisors should resist the temptation to act like political entrepreneurs peddling their advice, for example, by exaggerating how easy it is to transform the world economy. It is by no means their task to spread optimism about the future achievements of climate policy. Instead, to provide high-quality expertise, it is sufficient to critically analyze the risks and benefits of political efforts and contribute empirically sound—and sometimes unwelcome—perspectives to the global climate policy discourse.
This latter advice seems to have been lost on the 375 members of the National Academy of Sciences who this week were signatories (aka “Responsible Scientists”) of an open letter expressing their “concern” that pulling out of the Paris Accord (as advocated by the “Republican nominee for President”) “would make it far more difficult to develop effective global strategies for mitigating and adapting to climate change. The consequences of opting out of the global community would be severe and long-lasting – for our planet’s climate and for the international credibility of the United States.”
Sure, whatever you say.
Air temperature and precipitation, in the words of Chattopadhyay and Edwards (2016), are “two of the most important variables in the fields of climate sciences and hydrology.” Understanding how and why they change has long been the subject of research, and reliable detection and characterization of trends in these variables is necessary, especially at the scale of a political decision-making entity such as a state. Chattopadhyay and Edwards evaluated trends in precipitation and air temperature for the Commonwealth of Kentucky in the hopes that their analysis would “serve as a necessary input to forecasting, decision-making and planning processes to mitigate any adverse consequences of changing climate.”
Data used in their study originated from the National Oceanic and Atmospheric Administration and consisted of time series of daily precipitation and maximum and minimum air temperatures for each Kentucky county. The two researchers focused on the 61-year period from 1950-2010 to maximize standardization among stations and to ensure acceptable record length. In all, a total of 84 stations met their initial criteria. Next, Chattopadhyay and Edwards subjected the individual station records to a series of statistical analyses to test for homogeneity, which reduced the number of stations analyzed for precipitation and temperature trends to 60 and 42, respectively. Thereafter, these remaining station records were subjected to non-parametric Mann-Kendall testing to assess the presence of significant trends and the Theil-Sen approach to quantify the magnitude of any linear trends in the time series. What did these procedures reveal?
For precipitation, Chattopadhyay and Edwards report only two of the 60 stations exhibited a significant trend in precipitation, leading the two University of Kentucky researchers to state “the findings clearly indicate that, according to the dataset and methods used in this study, annual rainfall depths in Kentucky generally exhibit no statistically significant trends with respect to time.” With respect to temperature, a similar result was found. Only three of the 42 stations examined had a significant trend. Once again, Chattopadhyay and Edwards conclude the data analyzed in their study “indicate that, broadly speaking, mean annual temperatures in Kentucky have not demonstrated a statistically significant trend with regard to time.”
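Readers who want to run a similar screen on their own station records can do so in a few lines of Python. This is a sketch, not the authors’ code: scipy’s `kendalltau` against time is a common implementation of the Mann-Kendall test (equivalent for serially independent data), `theilslopes` gives the Theil-Sen slope, and the station record below is invented:

```python
import numpy as np
from scipy import stats

def trend_test(years, values, alpha=0.05):
    """Mann-Kendall-style trend test (Kendall's tau against time)
    plus a Theil-Sen estimate of the linear trend's magnitude."""
    tau, p_value = stats.kendalltau(years, values)
    slope, intercept, lo_slope, hi_slope = stats.theilslopes(values, years)
    return {"tau": tau, "p": p_value, "slope": slope,
            "significant": p_value < alpha}

# Hypothetical 61-year station record (1950-2010) with no imposed trend.
rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
annual_precip = rng.normal(45, 6, years.size)  # inches, made up

print(trend_test(years, annual_precip))
```

Applied station by station, a screen like this yields exactly the kind of tally the paper reports: the count of records whose p-value falls below the chosen significance level.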
Given such findings, it would seem that the vast bulk of anthropogenic CO2 emissions that have been emitted into the atmosphere since 1950 have had little impact on Kentucky temperature and precipitation, because there have been no systematic trends in either variable.
Chattopadhyay, S. and Edwards, D.R. 2016. Long-term trend analysis of precipitation and air temperature for Kentucky, United States. Climate 4: 10; doi:10.3390/cli4010010.
We’ve put together an interesting collection of articles this week for your consideration.
First up is a shout out to lukewarming from Bloomberg View columnist Megan McArdle. In her piece “Global Warming Alarmists, You’re Doing It Wrong,” McArdle suggests that lukewarmers have a lot to bring to the climate change table, but are turned away by the entrenched establishment and tarred with labels like climate “denier”—a label which couldn’t be further from the truth. McArdle writes:
Naturally, proponents of climate-change models have welcomed the lukewarmists' constructive input by carefully considering their points and by advancing counterarguments firmly couched in the scientific method.
No, of course I’m just kidding. The reaction to these mild assertions is often to brand the lukewarmists “deniers” and treat them as if what they were saying was morally and logically equivalent to suggesting that the Holocaust never happened.
In her article, McArdle calls for less name calling and less heel digging and more open, constructive discussion:
There is a huge range of possible beliefs that go into assessing the various complicated theories about how the climate works, and the global-warming predictions generated by those theories range from “could well be catastrophic” to “probably not a big deal.” I know very smart, well-informed, decent people who fall at either end of the spectrum, and others who are somewhere in between. Then there are folks like me who aren’t sure enough to make a prediction, but are very sure we wouldn’t like to find out, too late, that the answer is “oops, catastrophic.”
These are not differences that can be resolved by name calling. Nor has the presumed object of this name calling -- to delegitimize thoughtful opposition, and thereby increase the consensus in favor of desired policy proposals -- been a notable political success, at least in the U.S. It has certainly rallied the tribe, and produced a lot of patronizing talk about science by people who aren’t actually all that familiar with the underlying scientific questions. Other than that, we remain pretty much where we were 25 years ago: holding summits, followed by the dismayed realization that we haven’t, you know, really done all that much except burn a lot of hydrocarbons flying people to summits. Maybe last year's Paris talks will turn out to be the actual moment when things started to change -- but having spent the last 15 years as a reporter listening to people tell me that no, really, we’re about to turn the corner, I retain a bit of skepticism.
How was this bit of advice from McArdle received by some of the loudest name-callers? Not well, as she describes in this follow-up:
In response, climate scientist Michael Mann tweeted this:
Then he blocked me. You will correctly infer that I was also inundated with other interlocutors on social media and e-mail. Many of them were respectful. Others were … less so. At worst, they suggested, I was a paid shill for fossil fuel interests. (Not so. I accept no pay from anyone other than Bloomberg.) At best, they said, I was a fool who was giving aid and comfort to the enemy. My editor was thusly chided for the column: “shame on you for publishing it, especially if you have children.”
This should come as a big surprise to no one.
Next, we point you to Judith Curry’s (herself no stranger to treatment like McArdle's, and worse) excellent blog post in which she provides a 21st-century update to Michael Polanyi’s 1962 essay titled “The Republic of Science: Its Political and Economic Theory.” Curry delivers an introduction to his work (“Polanyi provides an interesting perspective from the mid 20th century, as the U.S. and Europe were contemplating massive public investments in science. Polanyi’s perspective was colored by his early years in Hungary, which led him to oppose central planning in the sciences.”) and excerpts from Polanyi’s work and then follows with an offering of comments as to how Polanyi’s perspective stands up some half a century later. For instance [embedded links in original]:
Polanyi’s analogy of the scientific process with markets captures the pure incentives that drive scientists – search of truth, intellectual satisfaction and individual ego. What happens when the externalities of the Republic of Science produce perverse incentives, and careerism becomes a dominant incentive that requires publishing a lot of papers rapidly and producing headline-worthy results (who even cares if these papers don’t survive scrutiny beyond their press release)? (see What is the measure of scientific success?) What happens is that you get increasing incidence of scientific fraud (see Science: in the doghouse?), cherry picking and meaningless papers on headline grabbing topics that don’t stand up to the test of time (see Trust and don’t bother to verify).
And what happens when the ‘hand’ guiding science isn’t ‘invisible’, i.e. science is driven by politics, such as a political imperative to move away from fossil fuels and towards renewable energy? Federal funding can bias science, particularly in terms of selecting which scientific problems receive attention (link).
And what of Polanyi’s statement: “Such self-coordination of independent initiatives leads to a joint result which is unpremeditated by any of those who bring it about.” The ‘result’ of dangerous anthropogenic climate change and the harms of dietary fat were hardly unpremeditated.
We also have our own humble opinion on Polanyi, and how he influenced Thomas Kuhn, who, as a result of Polanyi’s view, noted that the intellectual market may not be all that fluid. From “Lukewarming: The New Climate Science that Changes Everything”:
…Polanyi…recognized the horrors of government intervention in science and the pernicious influence of central planning. He argued that science should be considered a free market with spontaneous order, a perspective akin to [a list of libertarian economist luminaries]. Thomas Kuhn, a physicist and philosopher who attended several of Polanyi’s lectures, went him one better and argued in his classic The Structure of Scientific Revolutions that order created paradigms, or encompassing philosophical structures, that lie at the core of science.
We went on to demonstrate that paradigms must become even more entrenched when the government becomes the monopoly provider of funding for science with political and policy consequences.
Curry offers up these suggestions as to how to improve on the current sad state of scientific affairs [again, links in original]:
So, what should the Republic of Science look like in the 21st century? The overwhelming issue for the health of science is to reassert the importance of intellectual and political diversity in science, and to respect and even nurture scientific mavericks. The tension between pure (curiosity driven) science and use-inspired and applied science [see Pasteur’s quadrant] needs to be resolved in a way that supports all three, with appropriate roles for universities, government and the private sector. And finally, the reward structure for university scientists need to change to reward more meaningful science that stands the test of time, versus counting papers and press releases, which may not survive even superficial scrutiny even after being published in prestigious journals that are more interested in impact than in rigorous methods and appropriate conclusions.
Failure to give serious thought to these issues risks losing the public trust and support for elite university science (at least in certain fields). Scientists are becoming their own worst enemy when they play into the hands of politicians and others seeking to politicize their science.
We urge you to read the whole thing. As always, Curry is insightful, interesting, informative, and right on target.
And finally, we suggest that you ought to have a look at Julie Kelly’s “The EPA vs. Science” in National Review. In this article, Kelly looks at recent developments in the long-running controversy surrounding the use of the herbicide glyphosate (i.e., Roundup) and the EPA’s recently released and then withdrawn report on glyphosate’s health effects. Here’s a teaser:
On April 29, the EPA posted a report concluding that glyphosate (the active ingredient in Roundup herbicide and other products) is “not likely to be carcinogenic.” The committee found no relationship between glyphosate exposure and a number of cancers, including leukemia, multiple myeloma, and Hodgkin lymphoma. The 86-page assessment was signed by the EPA’s cancer review committee back in October 2015 and marked “final.”
But the EPA took it down on May 2, claiming the documents were “inadvertently” posted and only a preliminary report. “EPA has not completed our cancer review. We will look at the work of other governments … our assessment will be peer reviewed and completed by end of 2016,” said an EPA spokeswoman.
Kelly notes “GMO foes are now targeting glyphosate in their ongoing campaign against genetically engineered crops” and “[a]ctivists are also using the court system to punish companies that use glyphosate” and adds “[i]t seems that the EPA may be taking some cues from these anti-GMO activists.”
House Science Committee chairman Lamar Smith (R-TX) is looking into what prompted the rather unusual move by the EPA. According to Kelly:
Chairman Smith also senses that EPA foot-dragging might be based more on politics than on science: “That the EPA would remove a report, which was marked as a ‘Final Report’ and signed by thirteen scientists, appears to be yet another example of this agency’s attempt to allow politics rather than science [to] drive its decision making. Sound, transparent science should always be the basis for EPA’s decisions.”
Kelly smartly concludes:
If the science indeed shows (again) that glyphosate does not cause cancer, the anti-pesticide Center for Biological Diversity says it will be a “major roadblock” for the anti-GMO movement, which wants to ban genetically engineered crops worldwide. It will be a blow the anti-GMO movement richly deserves.
You can check out her full story here.