Topic: Energy and Environment

AAAS’s Guide to Climate Alarmism

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

Back in the Bush II Administration, the American Association for the Advancement of Science (AAAS) nakedly tried to nudge the political process surrounding the passage of the environmentally-horrific ethanol fuel mandate.  It hung a large banner from the side of its Washington headquarters, picturing a corn stalk morphing into a gas pump, all surrounded by a beautiful, pristine, blue ocean.  They got their way, and we got the bill, along with a net increase in greenhouse gas emissions.

So it’s not surprising that AAAS is on the Washington Insider side of global warming, releasing  a report today that is the perfect 1-2-3 step-by-step how-to guide to climate change alarm.

This is how it is laid out in the counterfactually-titled AAAS report  “What We Know”:

Step 1: State that virtually all scientists agree that humans are changing the climate,

Step 2: Highlight that climate change has the potential to bring low-probability but high-impact outcomes, and

Step 3: Proclaim that by acting now, we can do something to avert potential catastrophe.

To make this most effective, appeal to authority, or in this case, make the case that you are the authority. From the AAAS:

“We’re the largest general scientific society in the world, and therefore we believe we have an obligation to inform the public and policymakers about what science is showing about any issue in modern life, and climate is a particularly pressing one,” said Dr. Alan Leshner, CEO of AAAS. “As the voice of the scientific community, we need to share what we know and bring policymakers to the table to discuss how to deal with the issue.”

But despite promising to inform us as to “what the science is showing,” the AAAS report largely sidesteps the best and latest science that points to a much lowered risk of extreme climate change, choosing instead to inflate and then highlight what meager evidence exists for potential catastrophic outcomes—evidence that in many cases has been scientifically challenged (for example here and here).

New NWF Report “Mascot Madness: How Climate Change Is Hurting School Spirit”—They’re Kidding, Right?

The latest from the National Wildlife Federation has to rank among the most absurd global warming reports I have encountered.  And, after 30 years of encountering all sorts of wacky warming hype, this is saying a lot.

This NWF doozy is entitled “Mascot Madness: How Climate Change is Hurting School Spirit” and was timed to take advantage of the run-up coverage of the upcoming March Madness—the popular annual NCAA college basketball tournament. Apparently linking climate change to negative impacts on sports is a new green tactic.

The NWF’s premise is that human-caused global warming is threatening the natural version of school mascots, and, in some cases, causing them to be dissociated from the region that includes the university that they represent, presumably dampening “school spirit.”

The NWF offered up its solution to this vexing problem:

• Passing effective laws that reduce carbon pollution and other air pollutants that drive climate change and endanger the health of our communities and wildlife.

• Investing in clean, wildlife-friendly, renewable energy sources to replace our dangerous dependence on dirty fossil fuels.

• Practicing “climate-smart conservation” by taking climate change into account in our wildlife and natural resource management efforts.

Of course.

Even if it were true that anthropogenic climate change could be scientifically linked to changes in the location and/or health of the various school mascot species—which it almost certainly can’t—how this impacts “school spirit” is completely beyond me.

If the real-world situation that the mascots find themselves in is reflected in school spirit, can you imagine the level of dejection in the fan base of, say, the San Diego State Aztecs, the University of Southern California Trojans, the University of Calgary Dinos, or the Indiana University-Purdue University Fort Wayne Mastodons? It is a wonder that a single seat is filled for home games.

And as to the relationship between the natural territory of the mascot and the degree of rah-rahness, consider what must be the struggle facing the booster clubs behind the UC Irvine Anteaters, the Pittsburg (Kansas) State Gorillas, the Youngstown State Penguins, or the University of Missouri-Kansas City Kangaroos. Global warming’s impact is small beans compared to this kind of territorial displacement!

The NWF draws special attention to the worrisome case of the rivalry between the University of Michigan Wolverines and the Ohio State Buckeyes, fretting that climate change is driving the wolverine out of the state of Michigan while simultaneously driving the buckeye tree into Michigan (and out of Ohio).

But, according to this webpage from the University of Michigan athletic association, how the University’s mascot became the Wolverines is a matter of some debate. Interestingly, the page goes on to note that an actual wolverine has never been captured in the state of Michigan, and the first verified sighting of one didn’t occur until 2004!

And a quick peek at the USDA Plant Guide’s distribution map for the Ohio buckeye indicates that while the tree may extend its natural boundary northward in a warming climate, there is still plenty of territory south of Ohio to keep the tree in the state for a long time to come.  So, everyone (including the NWF) can rest assured that climate change will not serve to lessen the Michigan/Ohio State rivalry.

In keeping with the ringing of the global warming alarm bells, I am a bit surprised that the NWF didn’t compile a companion report titled “Mascot Madness: How Climate Change is Boosting School Spirit to Unhealthy Levels.” In that report, they could have featured the Miami Hurricanes, the University of British Columbia-Okanagan Heat, the Geneva College Golden Tornadoes, the Southeastern Oklahoma Savage Storm, and, of course, the most obvious of all, the Dartmouth College Big Greens.

Lessons from the New Transit Data

The American Public Transportation Association (APTA) argues that a 0.7 percent increase in annual transit ridership in 2013 is proof that Americans want more “investments” in transit–by which the group means more federal funding. However, a close look at the actual data reveals something entirely different.

It turns out that all of the increase in transit ridership took place in New York City. New York City subway and bus ridership grew by 120 million trips in 2013; nationally, transit ridership grew by just 115 million trips. Add in New York commuter trains (Long Island Railroad and Metro North) and New York City transit ridership grew by 123 million trips, which means transit in the rest of the nation declined by 8 million trips. As the New York Times observes, the growth in New York City transit ridership resulted from “falling unemployment,” not major capital improvements. 
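The arithmetic behind that conclusion is simple enough to check. A quick sketch, using the ridership figures quoted above (in millions of trips):

```python
# Changes in 2013 transit ridership, in millions of trips (figures quoted above)
national_growth = 115            # nationwide growth in transit trips
nyc_subway_bus_growth = 120      # New York City subway and bus alone
nyc_with_commuter_rail = 123     # adding Long Island Railroad and Metro North

# Everything outside New York is the national total minus the New York total
rest_of_nation = national_growth - nyc_with_commuter_rail
print(rest_of_nation)  # -8: transit trips outside New York declined by 8 million
```

Note that New York City subway and bus growth by itself (120 million) already exceeds the national total (115 million), so the rest-of-nation decline holds even before counting the commuter railroads.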

Meanwhile, light-rail and bus ridership both declined in Portland, which is often considered the model for new transit investments. Light-rail ridership grew in Dallas by about 300,000 trips, but bus ridership declined by 1.7 million trips. Charlotte light rail gained 27,000 new rides in 2013, but Charlotte buses lost 476,000 rides. Declines in bus ridership offset part or all of the gains in rail ridership in Chicago, Denver, Salt Lake City, and other cities. Rail ridership declined in Albuquerque, Baltimore, Minneapolis, Sacramento, and on the San Francisco BART system, among other places. 

APTA wants people to believe that transit is an increasingly important form of transportation. In fact, it is increasingly irrelevant. Although urban driving experienced a downward blip after the 2008 crash, it is now rising again, while transit outside of New York City is declining.

Figure: Urban driving and transit passenger miles (Transit PM = transit passenger miles). Source: urban driving data from the Federal Highway Administration, urban population from the Census Bureau, and transit numbers from APTA.

Rail and bus ridership have grown in Seattle and a few other cities, but the point is that construction of expensive transit projects with federal funds is not guaranteed to boost transit ridership. In many cases, overall transit ridership declines because the high cost of running the rail systems forces transit agencies to cut bus service.

APTA wants more federal funding because many of its associate members are rail contractors who depend on federal grants to build obsolete transit systems. Light-rail lines being planned or built today cost an average of more than $100 million per mile, while some cities have built new four-lane freeways for $10 million to $20 million per mile, and each of those freeway lanes will move far more people per day than a light-rail line. 

Congress will be reconsidering federal funding for highways and transit this year, and APTA wants as much money as possible diverted to transit. President Obama has proposed a 250 percent increase in deficit spending on transportation, most of which would go to transit.

Transit only carries about 1 percent of urban travel, yet it already receives more than 20 percent of federal surface transportation dollars. Since most of those federal dollars come out of gas taxes, auto drivers are being forced to subsidize rail contractors, often to the detriment of low-income transit riders whose bus services are cut in order to pay for rail lines into high-income neighborhoods.

The real problem with our transportation system is not a shortage of funds, but too much money being spent in the wrong places. New York City transit was the only major transit system in the country that covered more than half its operating costs out of fares in 2012; the average elsewhere was less than 30 percent. Funding transportation out of user fees, such as mileage-based user fees and transit fares, would give transportation agencies incentives to spend the money where it is needed by transport users, not where it will create the most pork for politicians. 

Climate Insensitivity: What the IPCC Knew and Didn’t Tell Us, Part II


The bottom line from the new report from the Global Warming Policy Foundation (GWPF) is that the U.N.’s Intergovernmental Panel on Climate Change (IPCC) knew, but didn’t highlight, the fact that the best available scientific evidence suggests that the earth’s climate is much less sensitive to atmospheric carbon dioxide than portrayed by the climate models the IPCC relied upon to forecast future global warming.

We covered the GWPF report and its implications in this post. But one implication is worth mentioning again, from the report’s conclusions:

The [climate models] overestimate future warming by 1.7–2 times relative to an estimate based on the best observational evidence.

While the report’s authors, Nicholas Lewis and Marcel Crok, are talking about the future, the same thing should apply to the past. In fact, a strong test of Lewis and Crok’s prediction is whether the same climate models have predicted more warming to date than observations indicate.

There is perhaps no better general assessment of past model behavior than the analysis we developed for a post back in the fall.

The figure below is our primary finding. It shows how the observed rate of global warming compares with the rate of global warming projected to have occurred by the collection of climate models used by the IPCC. We performed this comparison over all time scales ranging from 10 to 63 years. Our analysis ended in 2013 and included the global temperature trend beginning in each year from 1950 through 2004.

As can be clearly seen in our figure, climate models have consistently overestimated the amount of warming that has taken place. In fact, they are so bad that over the course of the past 25 years (and even at some lengths as long as 35 years) the observed trend falls outside of the range which includes 95 percent of all model runs. In statistical parlance, this situation means that the observed trend cannot be reliably considered to be part of the collection of modeled trends. In other words, the real world is not accurately captured by the climate models—the models predict that the world should warm up much faster than it actually does.
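The comparison works along the lines of the following sketch: for every start year, compute the least-squares trend of the observations and of each model run over the window ending in 2013, then check whether the observed trend falls inside the central 95 percent of the model trends. (The data below are made-up stand-ins, since the actual model archive and observational series aren’t reproduced here.)

```python
import numpy as np

# Illustrative stand-in data: `obs` is an observed annual global temperature
# series ending in 2013; rows of `models` are individual model runs.
rng = np.random.default_rng(0)
years = np.arange(1950, 2014)
obs = 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)
models = 0.02 * (years - 1950) + rng.normal(0, 0.1, (100, years.size))

def trend(series, yrs):
    """Least-squares linear trend (degrees per year)."""
    return np.polyfit(yrs, series, 1)[0]

# Trend lengths run from 10 years (start 2004) to 63 years (start 1950),
# all ending in 2013.
for start in range(1950, 2005):
    i = start - 1950
    t_obs = trend(obs[i:], years[i:])
    t_mod = np.array([trend(run[i:], years[i:]) for run in models])
    lo, hi = np.percentile(t_mod, [2.5, 97.5])
    outside = not (lo <= t_obs <= hi)  # observed trend outside the 95% model range?
```

With the stand-in numbers above (models trending twice as fast as the observations), the observed trend falls below the model envelope at most window lengths, which is the qualitative pattern the figure shows.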

Climate Insensitivity: What the IPCC Knew But Didn’t Tell Us


In a remarkable example of scientific malfeasance, it has become apparent that the IPCC knew a lot more than it revealed in its 2013 climate compendium about how low the earth’s climate sensitivity is likely to be.

The importance of this revelation cannot be overstated. If the UN had played it straight, the “urgency” of global warming would have evaporated, but, recognizing that this might cause problems, they preferred to mislead the world’s policymakers.

Strong words? Judge for yourself.

The report, “Oversensitive—how the IPCC hid the good news on global warming,” was released today by the Global Warming Policy Foundation (GWPF)—a U.K. think-tank which is “concerned about the costs and other implications of many of the policies currently being advocated” regarding climate change (disclosure: our Dick Lindzen is a member of the GWPF Academic Advisory Council).

The new GWPF report concluded:

We believe that, due largely to the constraints the climate model-orientated IPCC process imposed, the Fifth Assessment Report failed to provide an adequate assessment of climate sensitivity – either ECS [equilibrium climate sensitivity] or TCR [transient climate response] – arguably the most important parameters in the climate discussion. In particular, it did not draw out the divergence that has emerged between ECS and TCR estimates based on the best observational evidence and those embodied in GCMs. Policymakers have thus been inadequately informed about the state of the science.

The study was authored by Nicholas Lewis and Marcel Crok. Crok is a freelance science writer from The Netherlands and Lewis, an independent climate scientist, was an author on two recent important papers regarding the determination of the earth’s equilibrium climate sensitivity (ECS)—that is, how much the earth’s average surface temperature will rise as a result of a doubling of the atmospheric concentration of carbon dioxide.

The earth’s climate sensitivity is the most important climate factor in determining how much global warming will result from our greenhouse gas emissions (primarily from the burning of fossil fuels to produce reliable, cheap energy). But the problem is that we don’t know the value of the climate sensitivity—this makes projections of future climate change—how should we say this?—a bit speculative.

Some Like It Hot

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels and Paul C. “Chip” Knappenberger, from Cato’s Center for the Study of Science, review interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

 

With all the stern talk about global warming and widespread concern over climate change, you would think that we humans would have a propensity for cooler temperatures. Everywhere you look, the misery that rising temperatures (and the associated evils) will supposedly heap upon us seems to dominate reports about the coming climate. But do patterns of population movement really support the idea that we prefer cooler locations?

Increased Mobility

Since 1900, the population of the United States increased from about 76 million people to about 309 million people in 2010. Accompanying that population growth were major advances in technology and industry, including vast improvements in our nation’s system of transportation. As planes, trains, and automobiles replaced the horse and buggy, Americans became more mobile, and where we live was no longer connected primarily with proximity to where we were born. Instead, we became much freer to choose our place of residence based on considerations other than ease of getting there.

Where has our new-found freedom of mobility led us? Figure 1 shows the rate of population change from 1900 to 2010 for each of the contiguous 48 states. Notice the increases in states with warm climates such as Florida, Texas, and California, and also in states with big industry (that is, jobs), such as New York, Michigan, and Ohio.

 

 

Figure 1. The state-by-state population trend (people/year) from 1900 to 2010 (data from U.S. Census Bureau).

More Evidence for a Low Climate Sensitivity


We have two new entries to the long (and growing) list of papers appearing in the recent scientific literature that argue that the earth’s climate sensitivity—the ultimate rise in the earth’s average surface temperature from a doubling of the atmospheric carbon dioxide content—is close to 2°C, or near the low end of the range of possible values presented by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). With low-end warming come low-end impacts and an overall lack of urgency for federal rules and regulations (such as those outlined in the President’s Climate Action Plan) to limit carbon dioxide emissions and limit our energy choices.

The first is the result of a research effort conducted by Craig Loehle and published in the journal Ecological Modelling. The paper is a pretty straightforward determination of the climate sensitivity.  Loehle first uses a model of natural modulations to remove the influence of natural variability (such as solar activity and ocean circulation cycles) from the observed temperature history since 1850. The linear trend in the post-1950 residuals from Loehle’s natural variability model was then assumed to be largely the result, in net, of human carbon dioxide emissions.  By dividing the total temperature change (as indicated by the best-fit linear trend) by the observed rise in atmospheric carbon dioxide content, and then applying that relationship to a doubling of the carbon dioxide content, Loehle arrives at an estimate of the earth’s transient climate sensitivity—transient, in the sense that at the time of CO2 doubling, the earth has yet to reach a state of equilibrium and some warming is still to come. 

Loehle estimated the equilibrium climate sensitivity from his transient calculation based on the average transient:equilibrium ratio projected by the collection of climate models used in the IPCC’s most recent Assessment Report. In doing so, he arrived at an equilibrium climate sensitivity estimate of 1.99°C with a 95% confidence range of it being between 1.75°C and 2.23°C.
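In rough code, the chain of reasoning described above looks something like the following sketch. The specific numbers here are illustrative assumptions chosen for the example, not Loehle’s actual inputs, though they land near his published result:

```python
# Illustrative inputs (assumptions for this sketch, not Loehle's actual values)
delta_t = 0.4                      # residual warming trend attributed to CO2, deg C
co2_start, co2_end = 310.0, 395.0  # atmospheric CO2 in ppm, roughly 1950 and 2013

# Warming per ppm of added CO2 implied by the observed change
warming_per_ppm = delta_t / (co2_end - co2_start)

# Apply that relationship to a doubling of CO2 (here, doubling the 310 ppm start)
tcr = warming_per_ppm * co2_start   # transient climate sensitivity, deg C

# Scale transient to equilibrium using a model-derived ratio (assumed value)
equilibrium_over_transient = 1.37
ecs = tcr * equilibrium_over_transient
print(round(ecs, 2))  # ≈ 2.0 deg C per CO2 doubling with these inputs
```

The key point of the method survives the rough numbers: an observationally anchored warming-per-CO2 relationship, scaled to equilibrium, yields a sensitivity near 2°C rather than the higher values typical of the models themselves.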

Compare Loehle’s estimate to the IPCC’s latest assessment of the earth’s equilibrium climate sensitivity which assigns a 66 percent or greater likelihood that it lies somewhere in the range from 1.5°C to 4.5°C. Loehle’s determination is more precise and decidedly towards the low end of the range.

The second entry to our list of low climate sensitivity estimates comes from Roy Spencer and William Braswell and was published in the Asia-Pacific Journal of Atmospheric Sciences. Spencer and Braswell used a very simple climate model to simulate the global temperature variations averaged over the top 2000 meters of the global ocean during the period 1955-2011. They first ran the simulation using only volcanic and anthropogenic influences on the climate. They ran the simulation again adding a simple take on the natural variability contributed by the El Niño/La Niña process. And they ran the simulation a final time adding in a more complex situation involving a feedback from El Niño/La Niña onto natural cloud characteristics. They then compared their model results with the set of real-world observations.

What they found was that the complex situation involving El Niño/La Niña feedbacks onto cloud properties produced the best match to the observations. And this situation also produced the lowest estimate for the earth’s climate sensitivity to carbon dioxide emissions—a value of 1.3°C.

Spencer and Braswell freely admit that using their simple model is just the first step in a complicated diagnosis, but they also point out that the results from simple models provide insight that should help guide the development of more complex models, and ultimately could help unravel some of the mystery as to why full climate models produce high estimates of the earth’s equilibrium climate sensitivity while estimates based on real-world observations are much lower.

Our Figure below helps to illustrate the discrepancy between climate model estimates and real-world estimates of the earth’s equilibrium climate sensitivity. It shows Loehle’s determination as well as that of Spencer and Braswell, along with 16 other estimates reported in the scientific literature beginning in 2011. Also included in our Figure are both the IPCC’s latest assessment of the literature and the characteristics of the equilibrium climate sensitivity from the collection of climate models on which the IPCC bases its impacts assessment.

Figure 1. Climate sensitivity estimates from new research beginning in 2011 (colored), compared with the assessed range given in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the collection of climate models used in the IPCC AR5. The “likely” (greater than a 66% likelihood of occurrence) range in the IPCC Assessment is indicated by the gray bar. The arrows indicate the 5 to 95 percent confidence bounds for each estimate along with the best estimate (median of each probability density function, or the mean of multiple estimates; colored vertical line). Ring et al. (2012) present four estimates of the climate sensitivity and the red box encompasses those estimates. The right-hand side of the IPCC AR5 range is actually the 90% upper bound (the IPCC does not actually state the value for the upper 95 percent confidence bound of their estimate). Spencer and Braswell (2013) produce a single ECS value best-matched to ocean heat content observations and internal radiative forcing.

Quite obviously, the IPCC is rapidly losing its credibility.

As a result, the Obama Administration would do better to come to grips with this fact and stop deferring to the IPCC findings when trying to justify increasingly burdensome federal regulation of carbon dioxide emissions, with the combined effects of manipulating markets and restricting energy choices.

References:

Loehle, C., 2014. A minimal model for estimating climate sensitivity. Ecological Modelling, 276, 80-84.

Spencer, R.W., and W. D. Braswell, 2013. The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model. Asia-Pacific Journal of Atmospheric Sciences, doi:10.1007/s13143-014-0011-z.