Topic: Energy and Environment

Free Market Recycling

Environmentalists often assume that free markets work against their goals. But the market is the best friend of the natural world because it generates constant pressure to innovate, to cut costs, and to use resources efficiently. The price system prompts consumers and businesses to minimize consumption of dwindling resources. To ease California’s water problems, for example, we need markets, not regulatory controls.

The Wall Street Journal today has a pair of stories on scrap metal recycling:

Waste has long been a major U.S. export, providing material to be melted in foreign steel mills or made into new paper products. But the strength of the dollar has made American waste pricier abroad, cutting demand…

… That has been hard on the network of waste dealers and scrap gatherers who are the backbone of the industry. Bob Hooper, who goes by Hoop, finds discarded metal on curbs and in dumpsters around Pittsburgh and carries it to scrapyards in a rusting Chevy pickup with a bungee cord to keep the driver’s door shut.

… On a recent day, he hauled in more than 1,000 pounds of scrap, including two discarded refrigerators, a water heater and a broken microwave buried in egg shells and other moist trash. After gasoline expenses, he netted about $80.

From a related story in today’s Journal:

Wherever he goes in his Chevy pickup, Bob “Hoop” Hooper scans for discarded metal—a mangled bike, a broken microwave, even a beer can. “That’s like the No. 1 rule of scrapping,” Mr. Hooper, 48 years old, explained recently. “Don’t pass up metal.”

Scrapping—gathering metal and selling it to scrap dealers—is a tough job, involving excavations inside dumpsters, forays into dangerous neighborhoods and, lately, falling metal prices.

… Most mornings he hits the road around 9 a.m., and by late afternoon has filled the back of his pickup and earned anywhere from $40 to several hundred dollars at scrapyards. In the evening, he dismantles appliances and sorts valuable metals like copper and brass into plastic buckets. “It gives me something to do while I’m watching TV,” he said.

One regular stop is a housing complex with 31 dumpsters. On a recent morning, he found an umbrella and a mop in one. “It don’t seem like much, but as long as you’re getting something from every stop, it piles up,” he said.

Green groups often confer awards on politicians who press for more control over markets. But they should instead champion people like Bob Hooper. He is devoting his career to recycling, which is helping to reduce landfill waste. His work also boosts the economy, which we know because he is earning a net return in the marketplace.

Bob Hooper has a dirty job, but he is creating a cleaner environment the market-based way.

Spin Cycle: Whither the Hiatus

The Spin Cycle is a recurring feature rating just how much the latest weather or climate story, policy pronouncement, or simple poo-bah blather spins the truth. Statements are given a rating of 1 to 5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

Today’s press buzz is about a new paper appearing in this week’s Science magazine which concludes that the “hiatus” in global warming is but a byproduct of bad data. The paper, “Possible artifacts of data biases in the recent global surface warming hiatus,” was authored by a research team led by Dr. Thomas Karl, director of the National Oceanic and Atmospheric Administration’s National Climatic Data Center. Aside from missing the larger point—that the relevant question is not whether the earth is warming, but why it’s warming so much more slowly than the computer model projections—the paper’s conclusions have been well-run through the spin cycle.

The spin was largely conducted by the American Association for the Advancement of Science (AAAS), publisher of Science magazine, through its embargo campaign and the courting of major science writers in the media before the article had been made available to the general public (and other scientists). Given the obvious weaknesses in the new paper (see below and here, for starters), there seems to be potential for more trouble at Science—something that Editor-in-Chief Marcia McNutt is already up to her eyeballs in.

One major problem with the new Karl and colleagues paper is that the headline-making finding turns out not even to be statistically significant at the standard scientific level—that is, having a less than 1-in-20 chance of being due to chance (unexplained processes) alone.

Instead, the results are reported as being “statistically significant” if they have less than a 1-in-10 chance of being caused by randomness.
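The arithmetic behind these two bars is easy to make concrete. Below is a minimal, self-contained sketch, using synthetic data and a large-sample normal approximation for the slope test (not the paper’s actual series or method), that estimates a linear trend by ordinary least squares and applies both thresholds. Whatever the data, anything that clears the conventional 1-in-20 bar automatically clears the laxer 1-in-10 bar, but not the reverse:

```python
import math
import random

def ols_trend_test(y):
    """Fit y = a + b*t by OLS and return (slope, t_statistic).

    Uses a large-sample normal approximation for the slope test,
    adequate for a decade-plus of monthly data.
    """
    n = len(y)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    b = sxy / sxx                    # trend (per time step)
    a = ybar - b * tbar
    resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se_b = math.sqrt(s2 / sxx)                 # std. error of the slope
    return b, b / se_b

# Synthetic 17-year monthly series: a small trend plus noise
random.seed(0)
series = [0.001 * i + random.gauss(0.0, 0.1) for i in range(204)]

slope, tstat = ols_trend_test(series)

# Two-sided critical values (normal approximation)
sig_05 = abs(tstat) > 1.96    # the conventional 1-in-20 standard
sig_10 = abs(tstat) > 1.645   # the laxer 1-in-10 standard

# The 0.10 bar is strictly easier to clear: significance at 0.05
# implies significance at 0.10, but not vice versa.
print(slope, sig_05, sig_10)
```

(One caveat omitted for brevity: real temperature series are autocorrelated, which inflates the apparent significance of a trend even further if uncorrected.)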

More and more we are seeing lax statistical testing applied in high-profile papers (see here and here for recent examples). This tendency is extremely worrisome, coming at the same time that the validity of large portions of the scientific literature is being questioned on the basis of flawed methodological design and poor application and interpretation of statistics. An illuminating example of how easily poor statistics can make it into the scientific literature and exert a huge influence on the media was given last week in the backstory of a made-up paper claiming eating chocolate could enhance weight-loss efforts.

But, as the Karl et al. paper (as well as the other recent papers linked above) shows, some climate scientists are pushing forward with less than robust results anyway.

Why? Here’s a possible clue.

Recall an op-ed in the New York Times a few months back by Naomi Oreskes titled “Playing Dumb on Climate Change.” In it, Oreskes, a science historian (and author of the conspiratorial Merchants of Doubt), argued that since climate change is such an urgent problem, we shouldn’t have to hold results to the rigorous 1-in-20 statistical standard—doing so slows the push for action. Climate scientists, Oreskes argued, were being too conservative in the face of a well-known threat, and therefore “lowering the burden of proof” should be acceptable.

Is There No “Hiatus” in Global Warming After All?

A new paper posted today on ScienceXpress (from Science magazine) by Thomas Karl, director of NOAA’s National Climatic Data Center, and several co-authors[1] seeks to disprove the “hiatus” in global warming, and it prompts many serious scientific questions.

The main claim[2] by the authors, that they have uncovered a significant recent warming trend, is dubious. The significance level they report for their findings (0.10) is hardly normative, and its use should prompt members of the scientific community to question the reasoning behind such a lax standard.

In addition, the authors’ treatment of buoy sea-surface temperature (SST) data was guaranteed to create a warming trend. The data were adjusted upward by 0.12°C to make them “homogeneous” with the longer-running temperature records taken from engine intake channels in marine vessels. 

As numerous scientists have acknowledged, the engine intake data are clearly contaminated by heat conduction from the engine itself and were never intended for scientific use. The buoys, on the other hand, were designed specifically for environmental monitoring. Adjusting good data upward to match bad data seems questionable, and because the buoy network has become increasingly dense over the last two decades, this adjustment must put a warming trend in the data.
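The mechanics of that critique can be shown with a toy calculation. The sketch below is a stylized model of our own devising, not NOAA’s actual homogenization procedure: a constant true SST, engine-intake readings biased 0.12°C warm, accurate buoys, and a buoy share of the observing mix that grows over time. Raising the buoy readings to match the ships then lifts the later, buoy-heavy years more than the earlier ones, adding warming relative to the unadjusted blend:

```python
# Stylized illustration (hypothetical numbers, not NOAA's procedure):
# constant true SST, ships biased warm by engine heat, unbiased buoys,
# and a buoy share growing linearly over 20 years.
BIAS = 0.12     # assumed warm bias of engine-intake readings, deg C
YEARS = 20
TRUE_SST = 15.0

raw, adjusted = [], []
for yr in range(YEARS):
    f_buoy = 0.1 + 0.8 * yr / (YEARS - 1)   # buoy share: 10% -> 90%
    ship = TRUE_SST + BIAS                   # contaminated reading
    buoy = TRUE_SST                          # accurate reading
    raw.append(f_buoy * buoy + (1 - f_buoy) * ship)
    adjusted.append(f_buoy * (buoy + BIAS) + (1 - f_buoy) * ship)

# The adjustment raises each year by (buoy share x 0.12), so the late,
# buoy-heavy years gain more than the early ones -- a warming trend
# added relative to the unadjusted blend:
added = [a - r for a, r in zip(adjusted, raw)]
print(added[0], added[-1])   # grows from ~0.012 to ~0.108
```

The direction of the effect does not depend on the particular numbers chosen; any upward offset applied to an instrument whose share of the record grows over time imparts a warming trend relative to the raw blend.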

The extension of high-latitude arctic land data over the Arctic Ocean is also questionable. Much of the Arctic Ocean is ice-covered even in high summer, meaning the surface temperature must remain near freezing. Extending land data out into the ocean will obviously induce substantially exaggerated temperatures.

Additionally, there exist multiple measures of bulk lower-atmosphere temperature, independent of the surface measurements, which indicate the existence of a “hiatus”[3]. If the Karl et al. result were in fact robust, it could only mean that the disparity between surface and mid-tropospheric temperatures is even larger than previously noted.

Getting the vertical distribution of temperature wrong invalidates virtually every forecast of sensible weather made by a climate model, as much of that weather (including rainfall) is determined in large part by the vertical structure of the atmosphere.

Instead, it would seem more logical to seriously question the Karl et al. result in light of the fact that, compared to those bulk temperatures, it is an outlier, showing a recent warming trend that is not in line with these other global records.

And finally, even presuming all the adjustments applied by the authors ultimately prove to be accurate, the temperature trend reported during the “hiatus” period (1998-2014) remains significantly below (using Karl et al.’s measure of significance) the mean trend projected by the collection of climate models used in the most recent report from the United Nations’ Intergovernmental Panel on Climate Change (IPCC).

It is important to recognize that the central issue of human-caused climate change is not a question of whether it is warming or not, but rather a question of how much. And to this relevant question, the answer has been, and remains, that the warming is taking place at a much slower rate than is being projected.
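The “how much” comparison comes down to a percentile-rank calculation: where does the observed trend fall within the distribution of model-projected trends? The sketch below uses hypothetical stand-in numbers (a synthetic set of 108 model trends and an observed value merely of the order of Karl et al.’s reported trend), but the method is the standard one:

```python
import random
from bisect import bisect_left

# Illustrative only: synthetic stand-ins, not the actual model trends.
# Suppose 108 model-run trends scatter around 0.20 C/decade and the
# observed 1998-2014 trend is 0.11 C/decade (hypothetical numbers).
random.seed(1)
model_trends = sorted(random.gauss(0.20, 0.06) for _ in range(108))
observed = 0.11

# Percentile rank: the share of model runs whose projected trend
# falls below the observed trend.
rank = bisect_left(model_trends, observed) / len(model_trends)
print(f"observed trend at the {100 * rank:.1f}th percentile of the models")

# A rank below 2.5% puts the observation outside the central 95% of
# the model distribution -- significantly below the model mean at the
# conventional two-sided 1-in-20 level.
outside_95 = rank < 0.025
```

A rank near the 2.4th percentile, as reported for the actual comparison, means that roughly 97 or 98 of every 100 model runs projected more warming over the period than was observed.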

The distribution of trends of the projected global average surface temperature for the period 1998-2014 from 108 climate model runs used in the latest report of the U.N.’s Intergovernmental Panel on Climate Change (IPCC) (blue bars). The models were run with historical climate forcings through 2005 and extended to 2014 with the RCP4.5 emissions scenario. The surface temperature trend over the same period, as reported by Karl et al. (2015), is included in red. It falls at the 2.4th percentile of the model distribution and indicates a value that is (statistically) significantly below the model mean projection.


[1] Karl, T. R., et al., Possible artifacts of data biases in the recent global surface warming hiatus. Scienceexpress, embargoed until 1400 EDT June 4, 2015.

[2] “It is also noteworthy that the new global trends are statistically significant and positive at the 0.10 significance level for 1998-2012…”

[3] Both the UAH and RSS satellite records are now in their 21st year without a significant trend, for example.

Sen. Whitehouse: Bring RICO Charges against Climate Wrongthink

Another step toward criminalizing advocacy: writing in the Washington Post, Sen. Sheldon Whitehouse (D-R.I.) urges the U.S. Department of Justice to consider filing a racketeering suit against the oil and coal industries for having promoted wrongful thinking on climate change, with the activities of “conservative policy” groups an apparent target of the investigation as well. A trial balloon, or perhaps an effort to prepare the ground for enforcement actions already afoot?

Sen. Whitehouse cites as precedent the long legal war against the tobacco industry. When the federal government took the stance that pro-tobacco advocacy could amount to a legal offense, some of us warned tobacco wouldn’t remain the only or final target. To quote what I wrote in The Rule of Lawyers:

In a drastic step, the agreement ordered the disbanding of the tobacco industry’s former voices in public debate, the Tobacco Institute and the Council for Tobacco Research (CTR), with the groups’ files to be turned over to anti-tobacco forces to pick over the once-confidential memos contained therein; furthermore, the agreement attached stringent controls to any newly formed entity that the industry might form intended to influence public discussion of tobacco. In her book on tobacco politics, Up in Smoke, University of Virginia political scientist Martha Derthick writes that these provisions were the first aspect in news reports of the settlement to catch her attention. “When did the governments in the United States get the right to abolish lobbies?” she recalls wondering. “What country am I living in?” Even widely hated interest groups had routinely been allowed to maintain vigorous lobbies and air their views freely in public debate.

By the mid-2000s, calls were being heard, especially in other countries, for making denial of climate change consensus a legally punishable offense or even a “crime against humanity,” while widely known advocate James Hansen had publicly called for show trials of fossil fuel executives. Notwithstanding the tobacco precedent, it had been widely imagined that the First Amendment to the U.S. Constitution might deter image-conscious officials from pursuing such attacks on their adversaries’ speech. But it has not deterred Sen. Whitehouse.

Law professor Jonathan Adler, by the way, has already pointed out that Sen. Whitehouse’s op-ed “relies on a study that doesn’t show what he (it) claims.” And Sen. Whitehouse, along with Sens. Barbara Boxer (D-Calif.) and Edward Markey (D-Mass.), has been investigating climate-dissent scholarship in a fishing expedition that drew a pointed rebuke from then-Cato Institute President John Allison as an “obvious attempt to chill research into and funding of public policy projects you don’t like…. you abuse your authority when you attempt to intimidate people who don’t share your political beliefs.”

P.S. Kevin Williamson notes that if the idea of criminalizing policy differences were ever something to dismiss as an unimportant fringe position, it no longer is. (cross-posted from Overlawyered)

You Ought to Have a Look: Climate Change Subtleties, Hurricanes, and Chocolate Bunnies

You Ought to Have a Look is a feature from the Center for the Study of Science posted by Patrick J. Michaels and Paul C. (“Chip”) Knappenberger. While this section will feature all of the areas of interest that we are emphasizing, the prominence of the climate issue is driving a tremendous amount of web traffic. Here we post a few of the best in recent days, along with our color commentary.

We highlight a couple of headlines this week that made us chuckle a bit, although what they portend is far from funny.

The first was from the always amusing “Energy and Environment” section of the Washington Post, where climate change beat writer Chris Mooney penned a hit-you-over-the-head piece headlined “The subtle — but real — relationship between global warming and extreme weather events” about how human-caused global warming could be linked to various weather disasters of the past week, including the floods in Houston, the heat wave in India, and hurricanes in general.

Mooney starts out lamenting:

Last week, some people got really mad at Bill Nye the Science Guy. How come? Because he had the gall to say this on Twitter:

Billion$$ in damage in Texas & Oklahoma. Still no weather-caster may utter the phrase Climate Change.

Nye’s comments, and the reaction to them, raise a perennial issue: How do we accurately parse the relationship between climate change and extreme weather events, as they occur in real time?

It’s a particularly pressing question of late, following not only catastrophic floods in Texas and Oklahoma, but also a historic heatwave in India that has killed over 2,000 people so far, and President Obama’s recent trip to the National Hurricane Center in Miami, where he explicitly invoked the idea that global warming will make these storms worse (which also drew criticism).

As the Nye case indicates, there is still a lot of pushback whenever anyone dares to link climate change to extreme weather events. But we don’t have to be afraid to talk about this relationship. We merely have to be scrupulously accurate in doing so, and let scientists lead the way.

Life In One D.C. Suburb: “Town Has Become Farcically Overregulated”

Discontent at a land-use control process perceived as “condescending and obnoxious” helped fuel a surprise voter revolt in affluent Chevy Chase, Md., just across the D.C. border in Montgomery County, reports Bill Turque at the Washington Post. Aside from intensive review of requests to expand a deck or convert a screened-in porch to year-round space, there are the many tree battles:

[Insurgents] cite the regulations surrounding tree removal as especially onerous. Property owners seeking to cut down any tree 24 inches or larger in circumference must have a permit approved by the town arborist and town manager attesting that the tree is dead, dying or hazardous.

If turned down, residents can appeal to a Tree Ordinance Board, which applies a series of nine criteria to its decision, including the overall effect on the town’s tree canopy, the “uniqueness” or “desirability” of the tree in question and the applicant’s willingness to plant replacement trees.

More: Philip K. Howard with ideas for fixing environmental permitting. [cross-posted from Overlawyered and Free State Notes]

The Spin Cycle: Accelerating Sea Level Rise

The Spin Cycle is a recurring feature rating just how much the latest weather or climate story, policy pronouncement, or simple poo-bah blather spins the truth. Statements are given a rating of 1 to 5 spin cycles, with fewer cycles meaning less spin. For a more in-depth description, visit the inaugural edition.

A popular media story of the week was that sea level rise was accelerating and that this was worse than we thought. The stories were based on a new paper published in the journal Nature Climate Change by an author team led by the University of Tasmania’s Christopher Watson.

Watson and colleagues re-examined the satellite-based observations of sea level rise (available since the early 1990s) using a new methodology that supposedly better accounts for changes in the orbital altitude of the satellites—obviously a key factor when assessing sea levels by determining the height difference between the ocean’s surface and the satellites, the basic idea behind altimetry-based sea level measurements.

So far so good.

Their research produced two major findings: 1) their new adjusted measurements produced a lower rate of sea level rise than the old measurements (for the period 1993 to mid-2014), but 2) the rate of sea level rise was accelerating.

It was the latter that got all of the press.

But, it turns out, in neither case were the findings statistically significant at even the most basic levels used in scientific studies. Generally speaking, scientists report a finding as being “significant” if there is a less than 1-in-20 chance that the same result could have been produced by random (i.e., unexplained) processes. In some fields, the bar is set even higher (like 1 in 3.5 million). We can’t think of any scientific field that accepts a lower than 1-in-20 threshold (although occasional individual papers do try to get away with applying a slightly lower standard).