
The Hurricane Last Time

As of this writing, Tuesday, September 11, Hurricane Florence is threatening millions of folks from South Carolina to Delaware. It’s currently forecast to be near the threshold of the dreaded Category 5 by tomorrow afternoon. Current thinking is that its environment will become a bit less conducive as it nears the North Carolina coast on Thursday afternoon, but it is still expected to hit as a major hurricane (Category 3+). It’s also forecast to slow down or stall shortly thereafter, which means it will dump disastrous amounts of water on southeastern North Carolina. Isolated totals of more than two feet are likely.

At the same time it makes landfall, the celebrity-studded “Global Climate Action Summit” will be underway in San Francisco, and no doubt Florence will be its poster girl.

There’s likely to be the usual hype about tropical cyclones (the generic term for hurricanes) getting worse because of global warming, even though their integrated energy and frequency, as published by Cato Adjunct Scholar Ryan Maue, show no warming-related trend whatsoever.

Maue’s Accumulated Cyclone Energy index shows no increase in global power or strength.

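For readers curious about what actually goes into that index: ACE is simply the sum of the squares of a storm's six-hourly maximum sustained winds (in knots) while it is at tropical-storm strength or stronger, scaled by 10^-4. Here is a minimal sketch in Python; the wind values and the function name are made up for illustration and are not Maue's code or data.

```python
# Minimal sketch of the Accumulated Cyclone Energy (ACE) calculation.
# The wind history below is hypothetical; real ACE is computed from
# best-track data. Units: knots in, 10^4 kt^2 out.

def accumulated_cyclone_energy(six_hourly_max_winds_kt):
    """ACE for one storm: sum of squared winds at or above 35 kt, times 1e-4."""
    return sum(v ** 2 for v in six_hourly_max_winds_kt if v >= 35) * 1e-4

# Hypothetical six-hourly maximum sustained winds (knots) for a single storm:
winds = [30, 40, 55, 70, 90, 110, 120, 115, 95, 70, 50, 35, 25]
print(f"Storm ACE: {accumulated_cyclone_energy(winds):.1f}")  # 7.5

# A basin-wide or global index is simply the sum over every storm in the period.
```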

Here is the prevailing consensus opinion of the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (NOAA GFDL): “In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.”

We’ll also hear that associated rainfall is increasing along with oceanic heat content. Everything else being equal (dangerous words in science), that’s true. And if Florence does stall out, hey, we’ve got a climate change explanation for that, too! The jet stream is “weirding” because of atmospheric blocking induced by Arctic sea-ice depletion. This is a triple bank shot on the climate science billiards table. If that seems a stretch, it is, but climate models can be and are “parameterized” to give what the French climatologist Frédéric Hourdin recently called “an anticipated acceptable range” of results.

The fact is that hurricanes are temperamental beasts. On September 11, 1984, Hurricane Diana, also a Category 4, took aim at pretty much the same spot where Florence is forecast to make landfall—Wilmington, North Carolina. And then—34 years ago—it stalled and turned a tight loop for a day, upwelling the cold water that lies beneath the surface, and it rapidly withered to a Category 1 before finally moving inland. (Some recent model runs for Florence have it looping over the exact same place.) The point is that what is forecast to happen on Thursday night—a major (Category 3+) landfall—darned near happened over three decades earlier… and exactly 30 years before that, in 1954, Hurricane Hazel made a destructive Category 4 landfall just south of the NC/SC border. The shape of the Carolina coastline and its barrier islands makes the two states very susceptible to destructive hits. Fortunately, this proclivity for taking direct hits from hurricanes has also taught the locals to adapt—many homes are on stilts, and there is a resilience built into their infrastructure that is lacking further north.

There’s long been a running research thread on how hurricanes may change in a warmer world. One thing that seems plausible is that the maximum potential power may shift a bit further north. What would that look like? Dozens of computers have cranked through thousands of years of simulations, and we have a mixture of results, but the consensus is that there will be slightly fewer but more intense hurricanes by the end of the 21st century.

We actually have an example of how far north a Category 4 can make landfall: on August 27, 1667, one struck the tidewater region of southeast Virginia. It prompted the publication of a pamphlet in London called “Strange News from Virginia, being a true relation of the great tempest in Virginia.” The late, great weather historian David Ludlum published an excerpt:

Having this opportunity, I cannot but acquaint you with the Relation of a very strange Tempest which hath been in these parts (with us called a Hurricane) which began on Aug. 27 and continued with such Violence that it overturned many houses, burying in the Ruines much Goods and many people, beating to the ground such as were in any ways employed in the fields, blowing many Cattle that were near the Sea or Rivers, into them, (!!-eds), whereby unknown numbers have perished, to the great affliction of all people, few escaped who have not suffered in their persons or estates, much Corn was blown away, and great quantities of Tobacco have been lost, to the great damage of many, and the utter undoing of others. Neither did it end here, but the Trees were torn up by their roots, and in many places the whole Woods blown down, so that they cannot go from plantation to plantation. The Sea (by the violence of the winds) swelled twelve Foot above its usual height, drowning the whole country before it, with many of the inhabitants, their Cattle and Goods, the rest being forced to save themselves in the Mountains nearest adjoining, where they were forced to remain many days in great want.

Ludlum also quotes from a letter from Thomas Ludwell to Virginia Governor Lord Berkeley about the great tempest:

This poore Country…is now reduced to a very miserable condition by a continual course of misfortune…on the 27th of August followed the most dreadful Harry Cane that ever the colony groaned under. It lasted 24 hours, began at North East and went around to Northerly till it came to South East when it ceased. It was accompanied by a most violent raine, but no thunder. The night of it was the most dismal time I ever knew or heard of, for the wind and rain raised so confused a noise, mixed with the continual cracks of falling houses…the waves were impetuously beaten against the shores and by that violence forced and as it were crowded the creeks, rivers and bays to that prodigious height that it hazarded the drownding of many people who lived not in sight of the rivers, yet were then forced to climb to the top of their houses to keep themselves above water…But then the morning came and the sun risen it would have comforted us after such a night, hat it not lighted to us the ruins of our plantations, of which I think not one escaped. The nearest computation is at least 10,000 house blown down.

It is too bad that there were no anemometers at the time, but the damage and storm surge are certainly consistent with a Category 4 storm. And this was in 1667, at the nadir of the Little Ice Age.

Greenland Update: New Evidence for Post Ice-Age Warmth

Last month, we summarized evidence for the long-term stability of Greenland’s ice cap, even in the face of dramatically warmer summer temperatures. We drew particular attention to the heat in northwest Greenland at the beginning of the previous (as opposed to the current) interglacial. A detailed ice core shows around 6,000 years of summer temperatures averaging 6–8°C (11–14°F) warmer than the 20th-century average, beginning around 118,000 years ago. Despite six millennia of temperatures likely warmer than anything we can produce over a mere 500 years, Greenland lost only about 30% of its ice. That translates to only about five inches of sea level rise per century from meltwater.

We also cited evidence that after the beginning of the current interglacial (nominally 10,800 years ago) it was also several degrees warmer than the 20th century, but not as warm as it was at the beginning of the previous interglacial.

Not so fast. Work just published online in the Proceedings of the National Academy of Sciences by Jamie McFarlin (Northwestern University) and several coauthors now shows that July temperatures averaged 4–7°C (7–13°F) warmer than the 1952–2014 average over northwestern Greenland from 8,000 to 10,000 years ago. She also had some less precise data for maximum temperatures in the last interglacial, and they agree with (and may even run a tad warmer than) what was found in the ice core data mentioned above.

Award McFarlin some serious hard-duty points. Her paleoclimate indicator was the assemblage of midges buried in the annual sediments under Wax Lips Lake (we don’t make this stuff up), a small freshwater body in northwest Greenland between the ice cap and Thule Air Base, on the shore of the channel between Greenland and Ellesmere Island. Midges are horrifically irritating, tiny biting flies that infest most high-latitude summer locations. They’re also known as no-see-ums, and they are just as nasty now as they were thousands of years ago.

Getting the core samples from Wax Lips Lake means being out there during the height of midge season.

She acknowledges the seeming paradox in the ice core data: how could it have been so warm even as Greenland retained so much of its ice? Her (reasonable) hypothesis is that it must have snowed more over the ice cap—something recently demonstrated to have been occurring for the last 200 years in Antarctica as the surrounding ocean warmed a tad.

The major moisture source for snow in northwesternmost Greenland is the Arctic Ocean and the broad passage between Greenland and Ellesmere. The only way it could snow enough to compensate for the two massive warmings that have now been detected is for the water to have been warmer, increasing the amount of moisture in the air. As we noted in our last Greenland piece, the Arctic Ocean was periodically ice-free for millennia after the ice age.

McFarlin’s results are further consistent, at least in spirit, with other research showing northern Eurasia to have been much warmer than previously thought at the beginning of the current interglacial.

Global warming apocalypse scenarios are driven largely by the rapid loss of massive amounts of Greenland ice, but the evidence keeps coming in that, in toto, it’s remarkably immune to extreme changes in temperature, and that an ice-free Arctic Ocean has been common in both the current and the last interglacial period. 

Our Review of the Draft Fourth “National Assessment” of Climate Change Impacts on the U.S.

Public comments on the draft fourth “National Assessment” of present and future climate change impacts on the U.S. are due at 11:59 PM tonight and will be embargoed from public release until after then. As soon as it is made public, we’ll link to our comments. Until then, just think about the previous three Assessments.

Reviewing the first one in 2000, Chip Knappenberger and I discovered that the science team just happened to choose the two most extreme models (for temperature and precipitation) out of the 14 they considered. And then we discovered that they were worse than bad: when applied to a really simple record of temperature, they performed worse than a table of random numbers. Really, it was the same situation as if you took a multiple-choice test with four possible answers and somehow managed to get less than 25% right. That’s the highly sought-after “negative knowledge,” something you might think impossible!

The second one (2009) was so bad that we covered it with a 211-page palimpsest, a document that looked exactly like the federal original in design and layout. Except that it contained all the missing science, as well as corrections of as many half-truths and incomplete statements as we could find. Like we said, that took 211 pages of beautifully typeset and illustrated prose.

The National Oceanic and Atmospheric Administration was instrumental in producing the third (2014) Assessment, and in their press release at its debut, gushed that “it is a key deliverable in President Obama’s Climate Action Plan.” That has been recently undelivered.

So what did we say in our review of the upcoming fourth one? Well, you’ll have to wait until tomorrow. 

UPDATE: Comments by Ryan Maue and me are now available on the Cato website.

More Data Fiddling—Is Another Warming “Pause” About to Start?

Yesterday Jim Hansen, now with Columbia University, and several of his colleagues released their summary of 2017 global temperatures. Their history, published by the NASA Goddard Institute for Space Studies, has constantly been evolving in ways that make the early years colder and the later years warmer. I recently posted on how this can happen, and on the differences between these modified datasets and those determined objectively (i.e., without human meddling).

For a couple of years I have been pointing out (along with Judith Curry and others) that the latest fad—which puts a lot of warming in recent data—is to extend high-latitude land weather station data far out over the Arctic Ocean. Hansen’s crew takes stations north of 64°N latitude and extends them an astounding 1200 kilometers into the ocean.

This, plainly speaking, violates one of the most fundamental principles of thermodynamics: when matter is changing state (from, say, solid to liquid), a stirred mixture will remain at the freezing point until all of it is liquid, whereupon warming will commence.

This also applies in the Arctic, where the fluid is often stirred by strong winds. So if, say, Resolute, one of the northernmost land stations, is at 50°F, and the Arctic Ocean is a water-ice mixture (it always is), that 50-degree reading will be extended out 1200 kilometers over a surface where the air-sea boundary temperature has to be around 30°F, the approximate freezing point of seawater up there.

Hansen et al. did pay some attention to this, noting that this extension, which they normally apply to their data, was responsible for making 2017 the second-warmest year in their record. If they “only” extended 250 km (still dicey), it would drop their “global” temperature by a tenth of a degree, which would send the year down a rank. The result of all of this is that the big “spike” at the end of their record is in no small part due to the 1200 km extension that turns thermodynamics on its head.
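To make the objection concrete, here is an illustrative sketch of the difference between simply carrying a warm land reading offshore and pinning the offshore estimate near the ice-water boundary temperature. This is not the GISS algorithm; the function names, the two-degree air-sea offset, and the numbers are hypothetical.

```python
# Illustrative sketch (not the actual GISS procedure) of why carrying a warm
# land reading far out over a mixed ice-and-water ocean overstates the air-sea
# boundary temperature, which should sit near the freezing point of seawater.

SEAWATER_FREEZING_F = 28.8  # approximate freezing point of seawater (deg F)

def extrapolated_estimate(land_temp_f):
    """Naive extrapolation: the land reading is simply carried offshore."""
    return land_temp_f

def constrained_estimate(land_temp_f, ice_and_water_mixed=True):
    """Cap the offshore estimate near freezing while ice and open water coexist."""
    if ice_and_water_mixed:
        # allow a small (hypothetical) air-sea offset above the freezing point
        return min(land_temp_f, SEAWATER_FREEZING_F + 2.0)
    return land_temp_f

resolute_f = 50.0  # the 50 F land reading used in the example above
print(extrapolated_estimate(resolute_f))   # 50.0 -- carried 1200 km offshore as-is
print(constrained_estimate(resolute_f))    # 30.8 -- pinned near the ice-water boundary
```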

There’s another interesting pronouncement in the NASA announcement: many people have noted that the sun has been a bit cool in recent years and that its output continues to trend slightly downward. The changes in its radiance are probably good for a tenth of a degree Celsius of surface temperature or so. Hansen et al. use this to provide covering fire should warming stall out yet again:

Therefore, because of the combination of the strong 2016 El Niño and the phase of the solar cycle, it is plausible, if not likely, that the next 10 years of global temperature change will leave an impression of a ‘global warming hiatus’.

The significance of this will all fall out in the next year or so. If temperatures head back down all the way to their pre-El Niño levels, that will ultimately bring back the post-1996 “pause.” We’re going to guess they will instead remain a couple of tenths of a degree above that, based on what happened after the big El Niño of 1998, when temperatures settled a small amount above their pre-El Niño levels of the early 1990s.

If the recent warming rate (adjusting for El Niño) continues, we’ll hear that it is doing so “despite” the sun. Given that one year (2018) can have little influence on a recent trendline, that copy may already have been written!

All of this raises a question: Hansen notes in his release that the warming rate since 1970 has been fairly constant, about 0.17°C per decade, but he does not note that the average of the UN’s climate models says it should now be about twice that. More lukewarming.

Global Science Report: Another Indication of Lukewarming

In March 1990, NASA’s Roy Spencer and University of Alabama-Huntsville’s (UAH) John Christy dropped quite a bomb when they published the first record of lower atmospheric temperatures sensed by satellites’ microwave sounding units (MSUs). While they only had ten years of data, it was crystal clear there was no significant warming trend.

It was subsequently discovered by Frank Wentz of Remote Sensing Systems (RSS), a Santa Rosa (CA) consultancy, that the orbits of the sensing satellites gradually decay (i.e., become lower), and this introduces a spurious but slight cooling trend. Using a record ending in 1995, Wentz showed a slight warming trend of 0.07°C/decade, about half of what was being observed by surface thermometers.

In 1994, Christy and another UAH scientist, Richard McNider, attempted to remove “natural” climate change from the satellite data by backing out El Niño/La Niña fluctuations and the cooling associated with the two big volcanic eruptions of 1982 and 1991. They arrived at a warming trend of 0.09°C/decade after their removal.

Over the years, Spencer and Christy have repeatedly made small revisions to their record, and its latest iteration shows a total warming trend of 0.13°C/decade, which includes natural variability. But it is noteworthy that this figure is biased upward by very warm readings near the end of the record, thanks to the 2015–16 El Niño.
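The general idea behind that sort of adjustment can be sketched as a multiple regression: fit the monthly anomalies to time plus an ENSO index and a volcanic-aerosol index, and read the adjusted trend off the time coefficient. The sketch below uses synthetic data and is not the Christy-McNider procedure or the UAH dataset.

```python
# Simplified sketch of "backing out" El Nino and volcanic effects from a
# temperature series: regress monthly anomalies on time, an ENSO index, and a
# volcanic aerosol index, then report the coefficient on time as the adjusted
# trend. All data here are synthetic stand-ins.
import numpy as np

def adjusted_decadal_trend(anomalies, enso_index, aerosol_index):
    """Decadal trend (deg C/decade) after regressing out ENSO and volcanoes."""
    n = len(anomalies)
    t = np.arange(n) / 120.0                     # months converted to decades
    X = np.column_stack([np.ones(n), t, enso_index, aerosol_index])
    coeffs, *_ = np.linalg.lstsq(X, anomalies, rcond=None)
    return coeffs[1]                             # slope with respect to time

# Synthetic 40-year monthly series: a 0.10 C/decade trend, ENSO-shaped noise,
# and a temporary volcanic cooling episode.
rng = np.random.default_rng(0)
months = 480
enso = rng.normal(size=months)
aerosols = np.zeros(months)
aerosols[120:160] = 1.0                          # crude volcanic aerosol pulse
temps = (0.10 * np.arange(months) / 120.0        # underlying trend
         + 0.10 * enso                           # ENSO-shaped variability
         - 0.20 * aerosols                       # volcanic cooling
         + rng.normal(scale=0.05, size=months))  # weather noise
print(f"{adjusted_decadal_trend(temps, enso, aerosols):.2f} C/decade")  # ~0.10
```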

Global Science Report: JRA-55—Better Than the BEST Global Surface Temperature History, and Cooler Than the Rest

This is the first in a series of posts on global temperature records. The problems with surface thermometric records are manifold. Are there more reliable methods for measuring the temperature of the surface and the lower atmosphere?

Let’s face it, global surface temperature histories measured by thermometers are a mess. Recording stations come on- and offline seemingly at random. The time of day when the high and low temperatures for the previous 24 hours are recorded varies, often changing at the same station. This has a demonstrable biasing effect on high or low readings. Local conditions can further bias temperatures. What is the effect of a free-standing tree 100 feet away from a station growing into maturity? And the “urban heat island,” often very crudely accounted for, can artificially warm readings from population centers with as few as 2,500 residents. Neighboring reporting stations can diverge significantly from each other for no known reason.

The list goes on. Historically, temperatures have been recorded by mercury-in-glass thermometers housed in a ventilated white box. But, especially in poorer countries, there’s little financial incentive to keep these boxes the right shade of white, so they may darken over time. That’s guaranteed to make the thermometers read hotter than the air actually is. And the transition from glass to electronic thermometers (which record different high temperatures) has hardly been uniform.

Some of these problems are corrected for via a process called homogenization, and the corrections produce dramatic alterations of the original climate records (see here for the oft-noted New York Central Park adjustments). Others, like the problem of station darkening, are not accounted for, even though there’s pretty good evidence that it is artificially warming reported temperatures in poor tropical nations.
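For the curious, the core logic of homogenization can be illustrated with a toy example: compare a candidate station with the average of its neighbors and look for a persistent jump in the difference series, which flags a non-climatic break such as a station move, a new instrument, or a darkened shelter. This is a deliberately simplified sketch with synthetic data, not NOAA's actual pairwise homogenization algorithm.

```python
# Toy illustration of break detection in station homogenization: a persistent
# shift in (candidate minus neighbor-average) points to a non-climatic change.
# The station data below are synthetic.
import numpy as np

def largest_break(candidate, neighbor_mean):
    """Return (index, size) of the largest step change in candidate - neighbors."""
    diff = candidate - neighbor_mean
    best_idx, best_size = 0, 0.0
    for i in range(12, len(diff) - 12):          # require a year on each side
        step = diff[i:].mean() - diff[:i].mean()
        if abs(step) > abs(best_size):
            best_idx, best_size = i, step
    return best_idx, best_size

rng = np.random.default_rng(1)
months = 240
neighbors = rng.normal(scale=0.3, size=months)   # shared regional climate signal
station = neighbors + rng.normal(scale=0.2, size=months)
station[150:] += 0.8                             # synthetic break, e.g. a new instrument
idx, size = largest_break(station, neighbors)
print(idx, round(size, 2))                       # roughly (150, 0.8)
```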

What You Won’t Find in the New National Climate Assessment

Under the U.S. Global Change Research Act of 1990, the federal government has been charged with producing large National Climate Assessments (NCA), and today the most recent iteration arrived. It is typical of these sorts of documents: much about how the future of mankind is doomed to suffer through increasingly erratic weather and other tribulations. It is also missing a few tidbits of information that convincingly argue that everything in it regarding the upcoming 21st-century climate needs to be taken with a mountain of salt.

The projections in the NCA are all based upon climate models. If there is something big that is systematically wrong with them, then the projections aren’t worth making or believing. 

Here’s the first bit of missing information:
