Topic: Energy and Environment

A Sea-Surface Temperature Picture Worth a Few Hundred Words!

On January 7, a paper by Veronika Eyring and 28 coauthors, titled “Taking Climate Model Evaluation to the Next Level,” appeared in Nature Climate Change, Nature’s journal devoted exclusively to this one obviously under-researched subject.

For years, you, dear readers, have been subjected to our railing about the unscientific way in which we forecast this century’s climate: we take 29 groups of models and average them. Anyone who knows weather forecasting, we have repeatedly pointed out, realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions. Consequently, the daily forecast is usually a blend of a subset of available models, or, perhaps (as can be the case for winter storms), only one might be relied upon.

Finally, the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says “there is now evidence that giving equal weight to each available model projection is suboptimal.”
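The point about unequal weighting can be made concrete with a toy example. The sketch below uses hypothetical projections and skill scores (not values from Eyring et al.) to compare a straight ensemble mean with a skill-weighted one:

```python
import numpy as np

# Hypothetical end-of-century warming projections (°C) from five models,
# and made-up "skill" scores reflecting how well each tracks observations.
projections = np.array([4.1, 3.6, 3.9, 2.9, 1.9])  # last entry: a low-sensitivity model
skill = np.array([0.4, 0.5, 0.45, 0.7, 0.9])       # higher = better match to observations

# Equal weighting: the simple average criticized in the text.
equal_weight_mean = projections.mean()

# Skill weighting: better-performing models count for more.
weights = skill / skill.sum()
skill_weighted_mean = weights @ projections

print(f"equal-weight mean:   {equal_weight_mean:.2f} °C")
print(f"skill-weighted mean: {skill_weighted_mean:.2f} °C")
```

Because the best-scoring model in this made-up ensemble is also the coolest, the weighted mean comes out lower than the simple average, which is the kind of shift a skill-based weighting scheme can produce.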

A map of sea-surface temperature errors calculated when all the models are averaged up shows the problem writ large:

Annual sea-surface temperature error (modelled minus observed) averaged over the current family of climate models. From Eyring et al.


First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed. But, more important, the biggest errors are over some of the most climatically critical places on earth.

Start with the Southern Ocean. The models have almost the entire circumpolar sea too warm, much of it off by more than 1.5°C. Down around 60°S (the bottom of the map), water temperatures get down to near 0°C (because of its salinity, sea water freezes at around -2.0°C). Making errors in this range means making errors in ice formation. Further, all the moisture that falls upon Antarctica originates in this ocean, and simulating an ocean 1.5°C too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will then be precipitated over the continent as nonexistent snow.

The problem is that, down there, the models are making errors over massive zones of whiteness, which by their nature absorb very little solar radiation. Where it’s not white, the surface warms up more quickly.

(To appreciate that, sit outside on a sunny but calm winter’s day and change your khakis from light to dark; the latter will be much warmer.)

There are two other error fields that merit special attention: the hot blobs off the western coasts of South America and Africa. These are regions where relatively cool water upwells to the surface, driven in large part by the trade winds that blow toward the earth’s thermal equator. For not-completely-understood reasons, these winds sometimes slow or even reverse, upwelling is suppressed, and the warm anomaly known as El Niño emerges (a similar, but much more muted, version sometimes appears off Africa).

There’s a current theory that El Niños are one mechanism contributing to atmospheric warming; it holds that temperatures tend to jump in steps that occur after each big one. It’s not hard to see that systematically simulating these conditions more persistently than they actually occur could put more nonexistent warming into the forecast.

Finally, to beat ever more manfully on the dead horse—averaging up all the models and making a forecast—we again note that of all the models, one, the Russian INM-CM4, has actually tracked the observed climate quite well. It is by far the best of the lot. Eyring et al. also examined the models’ independence from each other—a measure of which models are (and which are not) making the same systematic errors. Among the most independent, not surprisingly, is INM-CM4.

(Its update, INM-CM5, is slowly being leaked into the literature, but we don’t have the all-important climate sensitivity figures in print yet.)

The Eyring et al. study is a step forward. It brings climate model application into the 20th century.

Is Greenland Melt “Off the Chart?”

That’s what the second author said about a new paper on Greenland’s ice, which arrived just in time for the annual meeting of the signatories of the UN’s 1992 treaty on climate change, held this time in Katowice, Poland. In the December 6 issue of Nature, Rowan University geologist Luke Trusel and several coauthors claimed that ice-core data from central-western Greenland reveal melting in the last two decades that has been “exceptional over at least the last 350 years.”

How exceptional?

“Our results show a pronounced 250% to 575% increase in melt intensity over the last 20 years,” as measured in four ice cores from west-central Greenland. Three of the cores were in the Jakobshavn Glacier, the largest-discharging glacier in the entire Northern Hemisphere. The Ilulissat icefjord, created by the glacier and some 25 miles in length, has historically calved nearly 50 cubic kilometers of ice per year into Disko Bay, near the town of Ilulissat.

They then correlated their ice-core data with a model of ice behavior for all of Greenland. The correlations, while significant, were modest, with the explained variance of island-wide melting topping out at around 36%. The melt reached its maximum in the very strange summer of 2012, when the amount at the Summit site, near Greenland’s highest elevation, was the largest since the summer of 1889—worth noting because that was well over 100 years ago.
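For readers unused to “explained variance”: it is simply the square of the correlation coefficient, so 36% explained variance corresponds to r ≈ 0.6. A minimal illustration with synthetic series (not the paper’s data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a "core" melt series, and an island-wide series that
# shares 36% of its variance with it (signal 0.6, noise 0.8: 0.6² + 0.8² = 1).
core_melt = rng.normal(size=2000)
island_melt = 0.6 * core_melt + 0.8 * rng.normal(size=2000)

r = np.corrcoef(core_melt, island_melt)[0, 1]
print(f"r ≈ {r:.2f}, explained variance r² ≈ {r**2:.0%}")
```

With a long enough sample, the estimated r lands near 0.6 and r² near the 36% figure quoted above.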

There’s a long-standing, quality weather station at Ilulissat, and it certainly shows summer warming of about 2°C from its beginning around 1850 to the 1920s.

For a broader comparison, we looked at the summer temperature anomalies for the 5° × 5° grid cell that includes Disko Bay and the icefjord. Because the region is relatively hospitable and settled, there are a number of stations within the cell, so the data are quite reliable. The data we show are from the Climatic Research Unit at the University of East Anglia, version HadCRUT4.

There’s very little to see in this temperature record. The authors are well aware of this and offer a rather unsatisfactory explanation:

The non-linear melt-temperature sensitivity also helps explain why episodes of mid-twentieth-century warmth resulted in less intense and less sustained melting compared to the last two decades, despite being only marginally cooler…Additional factors, such as recent sea-ice losses, as well as regional and teleconnected general circulation changes may also play a part in amplifying the melt response.

Assessing the Fourth “National Assessment” of Climate Change

The 1990 Global Change Research Act requires quadrennial “Assessments” of the effects of global climate change on the U.S. The first was published in 2000, the second in 2009 (the G.W. Bush Administration chose to ignore the law), the third in 2014, and the fourth, last Black Friday.

We contributed extensive public comments on the penultimate draft of the latest Assessment, which changed very little between the review draft and the final copy. The final version contains the same fatal flaws we noted earlier this year. It is based upon a family of climate models that predict far more warming than has been occurring in the all-important tropical atmosphere. It should have used the one model (out of the 102 available runs) that actually gets things right, the Russian INM-CM4, but instead it relied upon the average warming produced by all 102. INM-CM4 has the least warming of all of them, but doing the right thing—using the one that works—would have pretty much gutted climate change as a serious issue.

These reports take several years to produce, and the current one was largely a product of the Obama Administration. If there’s a Trump Administration when the next one is due (2022), it is likely to be very different. The likely reason the current Administration didn’t simply ignore the 1990 law, as Bush did, is so that it will get its own crack at the Assessment in 2022.

Our lengthy technical comments still apply.

Recent Hurricane Activity in Perspective

Harvey. Irma. Maria. Michael. Four strong (category 3 or higher) hurricanes in 14 months. Something is happening, right?

When category 4 hurricane Harvey banged into Rockport, Texas, and then decided to hang around for five days visiting the fine folks of Houston and vicinity, it broke the 11.8-year “hurricane drought,” by far the longest period in the record without a major (category 3 or higher) landfall.[1],[2] Because of its unfortunate stall, Harvey also broke the record for single-storm rainfall in the U.S., with 60.58 inches at Nederland, Texas.

What about a human influence from dreaded carbon dioxide? The National Oceanic and Atmospheric Administration’s (NOAA) Geophysical Fluid Dynamics Laboratory (GFDL) dryly stated on September 20:

In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.

More From Ed Calabrese on Environmental Regulation

University of Massachusetts toxicologist (and Cato adjunct scholar) Edward J. Calabrese has arrived. On October 3, he testified before the Senate Subcommittee on Superfund, Waste Management, and Regulatory Oversight, part of the larger Committee on Environment and Public Works, chaired by John Barrasso (R-WY).

Calabrese was asked for his expert opinion on a draft EPA proposal to consider alternative regulatory models, including ditching the “linear no-threshold” (LNT) model that it employs, as does almost every other regulatory agency on earth. You can read about EPA’s proposal here.

The LNT model assumes that the first photon of ionizing radiation (or the first molecule of a carcinogen) is capable of inducing a genetic mutation (i.e., altered DNA) that can then be transmitted to future generations.

Many years ago, Calabrese went looking for the scientific basis for the LNT, for it ran counter to what he was finding in his toxicological research—that low doses of some toxins or ionizing radiation may actually confer benefits. That, of course, is also the basis for much of modern chemical pharmacology.
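The competing dose-response shapes are easy to state in code. This sketch uses stylized curves with made-up parameters (the threshold location, slope, and size of the low-dose benefit are illustrative assumptions, not fitted to any toxicological data):

```python
import numpy as np

def lnt(dose, slope=1.0):
    """Linear no-threshold: any dose above zero carries proportional excess risk."""
    return slope * dose

def threshold(dose, t=0.5, slope=1.0):
    """Threshold model: no excess risk until the dose exceeds t."""
    return np.maximum(0.0, slope * (dose - t))

def hormetic(dose, t=0.5, slope=1.0, benefit=0.2):
    """Hormesis: low doses confer a small benefit (negative excess risk),
    with harm appearing only beyond the threshold."""
    dose = np.asarray(dose, dtype=float)
    return np.where(dose < t,
                    -benefit * np.sin(np.pi * dose / t),  # shallow beneficial dip
                    slope * (dose - t))                   # harm above threshold

low = 0.1  # a dose well below the (assumed) threshold
print(f"LNT:       {lnt(low):+.3f}")              # positive: harm from the first increment
print(f"threshold: {threshold(low):+.3f}")        # zero: below the threshold
print(f"hormetic:  {float(hormetic(low)):+.3f}")  # negative: a small benefit
```

The regulatory stakes live entirely in that low-dose region: the three models agree at high doses but disagree about whether the first tiny exposure is harmful, harmless, or beneficial.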

Try as he might (and he tried for years), he could not locate the seminal science that gave rise to the LNT. But he did find its progenitor, Hermann Muller, who claimed to have induced heritable point mutations with X-rays in the fruit fly Drosophila. But where were the data and the peer-reviewed study? Muller did author a brief article in Science on July 22, 1927, but, as Calabrese notes in his brand-new paper, “He made this gene mutation claim/interpretation in an article that discussed his findings, but failed to include any data.” The Science article said the data would appear in a subsequent publication.

In fact, the data underlying what may have been the most important claim in the history of regulatory science were never published in a peer-reviewed journal.

Nonetheless, amidst public concern about atomic radiation, the National Academy of Sciences formed the Biological Effects of Atomic Radiation (BEAR-1) panel, which reported its findings in Science in 1956. Muller was obviously highly influential, and the Science report clearly established the LNT:

Any radiation dose, however small, can induce some mutations. There is no minimum amount of radiation dose, that is, which must be exceeded before any harmful mutations occur.

Calabrese documents that Muller’s good friend and fellow Drosophila geneticist, Edgar Altenburg, confidentially challenged Muller’s interpretation that he was inducing point mutations. Rather, the very large doses of X-rays to which Muller subjected the fruit flies were simply knocking out wholesale portions of the chromosomes.

But Altenburg never went public with his criticism. Perhaps, Calabrese speculates, it was because of personal loyalty and a deep relationship. When Muller attempted suicide in 1932, his final note was addressed not to his family but to Altenburg. Muller and Altenburg both lived until 1967, dying within months of each other.

Muller’s Science publication allowed him to claim research primacy, which landed him both prestige and the eventual 1946 Nobel Prize in Physiology or Medicine.

That prize validated Muller’s hypothesis and ultimately enshrined the LNT model as gospel, and it spread beyond ionizing radiation to other carcinogens and mutagens, as well as to many toxic chemicals in which, literally, the dose makes the poison. In Calabrese’s words,

…it has been Muller’s incorrect gene mutation interpretation and its legacy that created the LNT dose response model, leading to its recommendation by the US National Academy of Sciences in 1956…and then subsequently adopted by all regulatory programs throughout the world.

As a result of his recent testimony and publication, Calabrese may be changing the regulatory world. 

Unlinked References:

Biological Effects of Atomic Radiation Panel, 1956. Genetic effects of atomic radiation. Science 123, 1157–1164.

Muller, H.J., 1927. Artificial transmutation of the gene. Science 66, 84–87.

Bloomberg/NYU Center Embeds Lawyers In AG Offices To Pursue Green Causes

When is it appropriate to privatize the work of public prosecutors? And does it make things better or worse when “cause” lawyering is at issue? As Jeff Patch reports at Real Clear Investigations, a project called the State Energy & Environmental Impact Center at New York University supplies seasoned lawyers to nine state attorney general offices, plus D.C.’s. They serve there in such roles as special assistant attorney general while being paid by the NYU project, which is funded by and closely identified with former New York City Mayor Michael Bloomberg. The catch, which explains why the program is not likely to hold appeal for AGs in some other states: “Under terms of the arrangement, the fellows work solely to advance progressive environmental policy at a time when Democratic state attorneys general have investigated and sued ExxonMobil and other energy companies over alleged damages due to climate change.”

Private funding of lawyers inside public prosecutors’ offices is not a new idea. Iowa’s AG office, for example, told Patch that it has employed legal talent from an American Bar Association-supported program. In another variation, it is not unusual for prosecutors to accept funding from the insurance industry for efforts to combat insurance fraud. Undergirding the political viability of these schemes is the (perhaps wobbly) premise that the state office is not farming out influence over politically or ideologically sensitive policy matters to outside groups that may have their own agenda.  

The AG offices participating in the program (Illinois, Maryland, Massachusetts, New Mexico, New York, Oregon, Pennsylvania, Virginia, and Washington state, as well as the District of Columbia) might plausibly argue that the projects they’re paying the Bloomberg embeds to work on are mostly ones they’d want to pursue zealously in any case, such as suing the EPA and other federal agencies over alleged lapses. Critics point to the ideologically fraught nature of the work and say the arrangement could violate some states’ ethics rules or generate improper conflicts of interest, as through an obligation to report activities back to the Bloomberg center. 

The spotlight on backstage doings at state AG offices arises from reports by Chris Horner of the Competitive Enterprise Institute based on public records requests that were fought tooth and nail by various AGs. (Besides the CEI report on attorneys general, Horner has written a companion report on governors.) CEI is anything but a disinterested party in all this, of course, having been hit with an AG subpoena (later beaten back in court) over its supposedly wrongful advocacy on climate issues. That was itself part of a subpoena campaign targeting more than 100 research and advocacy groups, scientists, and private figures on the putatively wrong side of climate debates, which we and others decried at the time as a flagrant attack on rights protected by the First Amendment.

Donald Trump, 60 Minutes, and Global Warming

Earlier this week, Lesley Stahl and 60 Minutes got into the subject of global warming with President Trump.

Her question, “Do you still think climate change is a hoax?” followed background on recent hurricanes Michael, Florence, Maria, and Harvey.

The President’s response was “I think something’s happening. Something’s changing and it’ll change back again. I don’t think it’s a hoax, I think there’s probably a difference. But I don’t know that it’s manmade.” 

This is a huge walk-back from his old rhetoric, which was enough to make scientists like me cringe. 

And in the context of hurricanes, his comment is also consistent with what the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (GFDL) said in its September 20 statement titled “Global Warming and Hurricanes”: “In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.”

It is noteworthy that GFDL’s statement was in an update, and that “Global Warming and Hurricanes” has said the same about Atlantic hurricanes for years, long predating the Trump Administration.

Stahl then went on to Greenland.  Here’s the relevant transcript:

Lesley Stahl: I wish you could go to Greenland, watch these huge chunks of ice just falling into the ocean, raising the sea levels.

President Donald Trump: And you don’t know whether that would have happened with or without man. You don’t know.

Another reasonable response. For reasons having nothing to do with humans, ice-covered areas of Greenland endured 6,000 years of warming, centered around 118,000 years ago, that in terms of integrated heating was larger than anything humans can do to it. Yet Greenland lost only about 30% of its ice. There were certainly more “huge chunks of ice just falling into the ocean, raising sea levels” back then, with no human influence on climate.

It’s also true that the current high-latitude north polar warming is largely (but not completely) consistent with global warming theory.

Finally, they got into a “he said, he said” discussion about climate scientists’ various viewpoints.  Here’s how it ended:

Lesley Stahl: Yeah, but what about the scientists who say it’s worse than ever?

President Donald Trump: You’d have to show me the scientists because they have a very big political agenda, Lesley.

Lesley Stahl: I can’t bring them in.

President Donald Trump: Look, scientists also have a political agenda.

No, 60 Minutes cannot be expected to bring in hundreds of scientists on either side of this debate to investigate whether or not they have a political agenda.  But Al Gore may have been on to something in his comments on the recent UN report claiming temperature increases of a mere 0.6°C will be catastrophic.  He said it was “torqued up a little bit, appropriately – how [else] do they get the attention of policy-makers around the world”[?].

Hmmm. Seems like a political agenda.
