Topic: Energy and Environment

GE and Obama: A Betrothal at the Altar of Industrial Policy

The angry Left has been calling for President Obama to fire Jeffrey Immelt from his position as head of the President’s Council on Jobs and Competitiveness. I think that would be a good idea, but for different reasons.

Sen. Russ Feingold, MoveOn.org, and the regular scribes at the Huffington Post see Immelt, the chairman and CEO of General Electric, as unfit to advise the president because GE invests some of its resources abroad and, despite worldwide profits of $14.2 billion, paid no taxes in 2010. No illegalities are alleged, mind you; GE — like every other U.S. multinational — responds to incentives, including those resulting from tax policy and regulations concocted in Washington.

But there are more substantive reasons why Immelt is unfit to advise the president.  In particular, GE is a major player in several industries that President Obama has been promoting as part of his administration’s cocksure embrace of industrial policy. With over $100 billion in direct subsidies and tax credits already devoted to “green technology,” President Obama is convinced that America’s economic future depends on the ability of U.S. firms to compete and succeed in solar-panel, wind-power, battery, and other energy-storage technologies. Concerning those industries, the president said: “Countries like China are moving even faster… I’m not going to settle for a situation where the United States comes in second place or third place or fourth place in what will be the most important economic engine of the future.”

Well, just yesterday GE announced plans to open the largest solar panel production facility in the United States, which nicely complements its role as the largest U.S. producer of wind turbines (and one of the largest in the world). The 2011 Economic Report of the President describes the taxpayer largesse devoted to subsidizing these green industries:

[T]he Recovery Act directed over $90 billion in public investment and tax incentives to increasing renewable energy sources such as wind and solar power, weatherizing homes, and boosting R&D for new technologies. Looking forward, the President has proposed a Federal Clean Energy Standard to double the share of electricity produced by clean sources to 80 percent by 2035, a substantial commitment to cleaner transportation infrastructure, and has increased investments in energy efficiency and clean energy R&D.

And Box 6.2 on page 129 of the 2011 ERP conveniently breaks out those subsidies by specific industry, most of which are spaces in which GE competes.

Tim Carney gave his impressions of this budding relationship between GE and the Obama administration in the DC Examiner last July:

First, there’s the policy overlap: Obama wants cap-and-trade, GE wants cap-and-trade. Obama subsidizes embryonic stem-cell research, GE launches an embryonic stem-cell business. Obama calls for rail subsidies, GE hires Linda Daschle [wife of former South Dakota Senator and Obama confidante Tom Daschle] as a rail lobbyist. Obama gives a speech, GE employee Chris Matthews feels a thrill up his leg. I could go on.

And Carney does go on in a December 2009 Examiner piece:

Look at any major Obama policy initiative — healthcare reform, climate-change regulation, embryonic stem-cell research, infrastructure stimulus, electrical transmission smart-grids — and you’ll find GE has set up shop, angling for a way to pocket government handouts, gain business through mandates, or profit from government regulation.

One month after President Obama proposed subsidizing high-speed rail because, in his words, “everybody stands to benefit,” the head of GE’s Transportation division proclaimed, “GE has the know-how and the manufacturing base to develop the next generation of high-speed passenger locomotives. We are ready to partner with the federal government and Amtrak to make high-speed rail a reality.”

About the optics of these related events, Carney writes: “This was typical — an Obama policy pronouncement in close conjunction with a GE business initiative. It happens across all sectors of the economy and in all corners of GE’s sprawling enterprise.” And he goes on to list other examples.

Jeff Immelt should step down as head of the President’s Council on Jobs and Competitiveness because there is simply no avoiding a conflict of interest.  Even if he recommends courses of action to the president that don’t advance GE’s bottom line, it’s hard to see how that wouldn’t be an abdication of his fiduciary responsibility to GE’s shareholders.

But more troubling is that Immelt and the president appear to be two peas in a pod when it comes to faith in government-directed industrial policy.  Immelt admires the German model of industrial policy because the Germans believe in “government and business working as a pack.”  He admires China’s “incredible unanimity of purpose from top to bottom.”  And days after Obama’s inauguration, Immelt wrote to shareholders:

[W]e are going through more than a cycle. The global economy, and capitalism, will be “reset” in several important ways. The interaction between government and business will change forever. In a reset economy, the government will be a regulator; and also an industry policy champion, a financier, and a key partner.

Citizens of a country that owes so much of its unmatched economic success to innovation and entrepreneurship and an absence of heavy-handed top-down mandates should be wary of the changes the president and Mr. Immelt are fostering.

Cato Unbound - There Ain’t No Such Thing As Free Parking

This month at Cato Unbound we’re discussing a practical, everyday issue – parking!

Yes, Cato Unbound is supposed to cover big ideas, deep thoughts, and the like, but parking policy is both important in its own right and also points to what I consider a very interesting problem: Given a theoretical or abstract commitment to free markets, well, how do we get there in the real world? What would a free-market policy look like in this or that issue area?

The answer isn’t always obvious, and the map isn’t the territory. Parking is interesting in this respect and possibly helpful. Parking is all around us, most of us deal with it every day, and the unintended consequences of parking policy are, I think, easier to see than the unintended consequences in other fields. Parking affects how we live, how we shop, and how we work. It touches our cities, our family life, our environment, and even our health. Learning to look for such unintended consequences is part of developing a political culture that values economic insights and puts them to work.

That’s why this month we’ve invited four urban economists, each of whom can fairly be said to value the free market. Still, there will be a few disagreements among them – as I said, the map isn’t the territory. Donald Shoup leads the issue with his essay “Free Parking or Free Markets?” – arguing that our expectation of abundant free parking is both bad for our communities and the product of anti-market planning.

The conversation will continue throughout the month, with contributions from Professor Sanford Ikeda, Dr. Clifford Winston of the Brookings Institution, and Cato’s own Randal O’Toole. Be sure to stop by throughout the month, or else subscribe via RSS.

Energy Error Continued

When Barack Obama emerged as a serious contender for the presidency, he offered a core menu of cures: increased federal intervention in health care, education, and energy. Whenever new problems arose that lessened the urgency of earlier concerns, Obama crafted assertions that his original prescriptions would also resolve the new difficulties. In energy, this has involved extending his program to new, even more dubious projects. He also has a habit of incessantly repeating the same tired arguments in the vain hope that his skill at persuasion will win the day.

His March 30, 2011 energy speech and accompanying Blueprint are typical. About the only differences between these and his June 15, 2010 speech on energy were more bad ideas. To the panic-driven slowdown in offshore oil and gas drilling permits, now rationalized as a prudent response, he added a post-Japan-crisis review of nuclear power and another of new methods of producing natural gas. For no good reason, he argued that Brazilian oil development needed U.S. government support, despite a long history of successful oil development in some of the most backward countries in the world without major U.S. government aid. (In fact, the aid offered was an Export-Import Bank loan and thus more an exercise in crony capitalism than a useful move.)

Otherwise Obama continued to display the central characteristic of his philosophy — that he and his advisers possess such superior insight that they can guide the average American to better decisions. This is precisely the Progressive error that has led to the present political mess and the cause of the dramatic 2010 shift in the composition of the U.S. House of Representatives. Whenever concerns arise that he has overreached, he claims that he was doing the sensible thing.

His Blueprint constitutes Exhibit A in the case against this interventionism. It is essentially a list of the many mandates that Obama has achieved or desires, ranging from high-speed rail to micromanaging the design of every new building in the United States. This list is dominated by the many provisions of the infamous stimulus bill that indiscriminately threw money at every favored area including energy. Obama seems to believe that seeing where the money went will counteract the outrage at ill-conceived, unnecessary, and counterproductive spending. At least to energy specialists, what actually appears is resounding proof that the voters were right — every idea is bad.

The speech also showcased Obama’s talent at making dubious assertions. Many have commented that he does not deserve the credit that he seems to claim for the rise in U.S. oil output. The very long lead times, which Democrats traditionally use to oppose expanded oil-and-gas leasing, imply that the rise was facilitated by actions in prior administrations. An even greater whopper was his intimation that the existence of many undeveloped leases suggests that no rush exists to lease and license more. The more obvious criticism is that his cumbersome licensing policy contributes to the inability to develop. Less apparent is the likelihood that many of those leases proved, after further examination, to be unattractive while more promising areas are being withheld from leasing.

He similarly selected the most misleading possible way to understate U.S. oil-production potential. He indicated correctly that the United States has only 2 percent of world “proved” reserves of oil. What he ignored is that proved reserves cover only already-known sources, and that wild methodological differences among countries in how reserves are calculated make cross-country comparisons dubious. (This situation was worsened by 1970s hysteria. The highly efficient existing U.S. system was replaced because it was run by the supposedly untrustworthy industry. The government created its own far more expensive and far less satisfactory system.) The more reliable measure of actual production shows an 8.5 percent U.S. share in 2009. Neither measure satisfactorily indicates what really matters — the potential to add production efficiently. Obama thus adds to his prior unjustifiable aim to reduce petroleum use by also misstating the petroleum potential. Substantial oil imports remain desirable for the U.S. because of the underlying economics. Nevertheless, the federal government has imposed undesirable restrictions on oil and gas production.

Energy Independence: Obama Embraces the Department of Nutty Ideas

Every president since Richard Nixon has asserted that we are sitting ducks for those who brandish the oil weapon. To keep the evildoers at bay, the government must adopt policies that ensure our energy independence. Like his predecessors, President Obama is worshiping at this altar. And why not? How many elections have been lost by blaming foreigners for an impending crisis?

Despite their cynicism about politicians, most people actually believe that mineral resources, including oil, are doomed to disappear. It’s obvious: Start with a given stock of provisions in the cupboard, subtract consumption and eventually the cupboard will be bare.

But what is obvious is often wrong. We never run out of minerals. At some point it just costs too much to produce them profitably. In the 19th century, the big energy scare was in Europe. Most thought Europe was running out of coal. That doomsday scenario never materialized. Thanks to a plethora of substitutes, the prices that European coal could fetch today are far below its development and extraction costs. Consequently, Europe sits on top of billions of tons of worthless coal.

Once economics enters the picture, the notion of fixed reserves becomes meaningless. Reserves are not fixed. Proven oil reserves, for example, represent a warehouse inventory of the expected cumulative profitable output, not a fixed stock of oil thought to be in the ground.

When thinking about oil reserves, we must also acknowledge another economic reality: Oil is sold in a world market in which every barrel, regardless of its source, competes with every other barrel. Think globally, not locally. When we do, the dwindling reserves dogma becomes nonsense. In 1971, the world’s proven oil reserves were 612 billion barrels. Since then the world has produced approximately 990 billion barrels. We should have run out of reserves fourteen years ago, but we didn’t. In fact, today’s proven reserves are 1,354 billion barrels, or 742 billion barrels more than in 1971.

How could this be? Thanks to improved exploration and development techniques, costs have declined, investments have been made and reserves have been created. The sky is not falling.
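The reserve arithmetic above is easy to verify. A minimal sketch in Python, using only the barrel counts quoted in the text:

```python
# Figures quoted above, in billions of barrels.
reserves_1971 = 612      # world proven oil reserves in 1971
produced_since = 990     # approximate cumulative production since 1971
reserves_today = 1354    # proven reserves at the time of writing

# If reserves were a fixed stock, cumulative production alone
# would have exhausted the 1971 inventory long ago.
overrun = produced_since - reserves_1971
print(f"Production exceeded the 1971 'fixed stock' by {overrun} billion barrels")

# Instead, reserves grew. Net growth matches the 742 billion barrels
# cited above, and implied gross additions are production plus growth.
growth = reserves_today - reserves_1971
additions = produced_since + growth
print(f"Reserves grew by {growth} billion barrels")
print(f"Implied new reserves booked since 1971: {additions} billion barrels")
```

In other words, exploration and development added far more barrels to the books than the world consumed over the same four decades.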

What’s Wrong with Imported Oil?

In a speech today at Georgetown University, President Obama called for a goal of cutting America’s oil imports by one-third within a decade. Like all efforts to wean Americans from big, bad imports, such a policy will mean we will all pay more than we need to for the energy that helps to power our economy.

I’ll leave it to my able Cato colleagues to dissect the president’s proposal in terms of energy policy, but in terms of trade policy, this is about as bad as it gets.

We Americans benefit tremendously from our relatively free trade in petroleum products. Like all forms of trade, the importation of oil produced abroad allows us to acquire it at a price far lower than we would pay if we had to rely more heavily on domestic oil supplies.

The money we save buying oil more cheaply on global markets allows our whole economy to operate more efficiently. Oil is the ultimate upstream input that virtually all U.S. producers use to make their final products, either in the product itself or for shipping. If U.S. manufacturers and other sectors are forced to pay sharply higher prices for petroleum products because of import restrictions, their final goods will cost more and will be less competitive in global markets. If households are forced to pay more for gasoline and heating oil, consumers will have less to spend on domestic goods and services.

The president talked in the speech about the goal of not being “dependent” on foreign suppliers, but most of our oil imports come from countries that are either friendly or at least not in any way an adversary. According to the U.S. Department of Commerce, one third of our oil imports in 2010 came from our two closest neighbors and NAFTA partners, Canada and Mexico. Another third came from the problematic providers in the Arab Middle East and Venezuela (none from Iran, less than one-third of 1 percent from Libya). The rest came from places such as Nigeria, Angola, Colombia, Brazil, Russia, Ecuador and Great Britain.

Even if, by the force of government, we could reduce our imports by a third, there is no reason to expect that the reduction would be concentrated in the problematic providers. In fact, oil is generally cheaper to extract in the Middle East, so a blanket reduction would probably tilt our imports away from our friends and toward our real and potential adversaries.

In one speech, the president has managed to state a policy goal that is bad trade policy, bad security policy, and bad foreign policy.

Pielke’s Problem

I generally admire the work of Roger Pielke Jr., a political scientist in the University of Colorado-Boulder’s Center for Science and Technology Policy Research. His new book on climate change is refreshingly honest and non-ideological, if a bit overly technophilic. His broader work offers the important insight that science alone cannot direct public policy, but rather it can only lay out possible results of different policy choices.

Given the quality of his work, I was disappointed by Pielke’s op-ed in today’s NYT defending Congress’s legislated obsolescence of the incandescent light bulb. He argues that government standard-setting is an important contribution to human welfare, and the light bulb standard is just part of that standard-setting (though he does suggest some minor policy tweaks to allow limited future availability of incandescents). 

To justify his argument, Pielke points out the great benefit of government-established standard measures, as well as quality standards:

Indeed, [in the United States of the late 19th century] the lack of standards for everything from weights and measures to electricity — even the gallon, for example, had eight definitions — threatened to overwhelm industry and consumers with a confusing array of incompatible choices.

This wasn’t the case everywhere. Germany’s standards agency, established in 1887, was busy setting rules for everything from the content of dyes to the process for making porcelain; other European countries soon followed suit. Higher-quality products, in turn, helped the growth in Germany’s trade exceed that of the United States in the 1890s.

America finally got its act together in 1894, when Congress standardized the meaning of what are today common scientific measures, including the ohm, the volt, the watt and the henry, in line with international metrics. And, in 1901, the United States became the last major economic power to establish an agency to set technological standards.

Alas, this argument doesn’t support Pielke’s light bulb standard.

The weights-and-measures and product standards that he cites are examples of government response to market failures — instances where private action is unable to reach efficient results. Concerning weights and measures, a type of market failure known as the collective action problem can make it difficult to establish standard measures privately. Getting everyone to agree can be like herding cats, and there is ample incentive to secretly defect from that standard — e.g., a gas station would love to sell you a 120-ounce “gallon” that you assume is a standard 128 ounces. (On the other hand, there are plenty of examples of private action overcoming this problem, such as the standardization of railroad track gauges in the late 19th century.) Likewise, quality standards can be understood as a response to a kind of market failure known as the information asymmetry problem — e.g., a producer of low-quality goods may knowingly try to pass them off as high-quality goods. (Again, there are plenty of examples of private action overcoming this problem.)

As libertarians, we recognize that there are market failures, and that government can sometimes mitigate them. (That’s why we’re not anarchists.) Also as libertarians, we recognize that government intervention can result in outcomes even less efficient than the original market failure. (That’s why we’re not run-of-the-mill Democrats or Republicans.)

But where is the market failure with incandescent bulbs? After nearly 125 years of use, people know the drawbacks and advantages of incandescents—that they use more electricity than other types of bulbs and have a shorter lifespan, but they cost very little and work much better in certain applications—from dimmer switches to Easy-Bake Ovens—than other bulbs. Besides, CFL bulbs were widely available before Congress’s 2007 legislation, and LED lights were already in the R&D pipeline.

Perhaps Pielke would argue that there is a market failure with incandescents: the negative externality of air pollution, including greenhouse gas emissions. But incandescent lighting is only one of many, many electricity-using devices, and electricity generation is just one of many, many sources of air pollution. So why the focus on just this one externality source instead of advocating a policy that broadly addresses emissions? And why devote his op-ed to discussing technology standards, and make no mention of air pollution?

The Current Wisdom: Overplaying the Human Contribution to Recent Weather Extremes

The Current Wisdom is a series of monthly posts in which Senior Fellow Patrick J. Michaels reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

The Current Wisdom only comments on science appearing in the refereed, peer-reviewed literature, or that has been peer-screened prior to presentation at a scientific congress.

**********

The recent publication of two articles in Nature magazine linking rainfall extremes (and flooding) to global warming, added to the heat in Russia and the floods in Pakistan in the summer of 2010, and the back-to-back cold and snowy winters in the eastern U.S. and western Europe, has gotten a lot of public attention.  This includes a recent hearing in the House of Representatives, despite its Republican majority.  Tying weather extremes to global warming, or using them as “proof” that warming doesn’t exist (see: snowstorms), is a popular rhetorical flourish by politicos of all stripes.

The hearing struck many as quite odd, inasmuch as it is much clearer than apocalyptic global warming that the House is going to pass meaningless legislation commanding the EPA to cease and desist from regulating greenhouse gas emissions.  “Meaningless” means that it surely will not become law.  Even on the long-shot probability that it passes the Senate, the President will surely veto, and there are nowhere near enough votes to override such an action.

Perhaps “wolf!” has been cried yet again.  A string of soon-to-be-published papers in the scientific literature finds that despite all hue and cry about global warming and recent extreme weather events, natural climate variability is to blame.

Where to start?  How about last summer’s Russian heat wave?

The Russian heat wave (and to some degree the floods in Pakistan) have been linked to the same large-scale, stationary weather system, called an atmospheric “blocking” pattern. When the atmosphere is “blocked” it means that it stays in the same configuration for a period of several weeks (or more) and keeps delivering the same weather to the same area for what can seem like an eternity to people in the way.  Capitalizing on the misery in Russia and Pakistan, atmospheric blocking was added to the list of things that were supposed to be “consistent with” anthropogenically stimulated global warming, which already, of course, included heat waves and floods. And thus the Great Russian Heat Wave of 2010 became part of global warming lore.

But then a funny thing happened – scientists with a working knowledge of atmospheric dynamics started to review the situation and found scant evidence of a global warming connection.

The first chink in the armor came back in the fall of 2010, when scientists from the Physical Sciences Division (PSD) of the Earth System Research Laboratory (ESRL) of the National Oceanic and Atmospheric Administration (NOAA) presented the results of their preliminary investigation on the web, and concluded that “[d]espite this strong evidence for a warming planet, greenhouse gas forcing fails to explain the 2010 heat wave over western Russia. The natural process of atmospheric blocking, and the climate impacts induced by such blocking, are the principal cause for this heat wave.”

The PSD folks have now followed this up with a new peer-reviewed article in the journal Geophysical Research Letters that rejects the global warming explanation. The paper is titled “Was There a Basis for Anticipating the 2010 Russian Heat Wave?” Turns out that there wasn’t.

To prove this, the research team, led by PSD’s Randall Dole, first reviewed the observed temperature history of the region affected by the heat wave (western Russia, Belarus, the Ukraine, and the Baltic nations). To start, they looked at the recent antecedent conditions: “Despite record warm globally-averaged surface temperatures over the first six months of 2010, Moscow experienced an unusually cold winter and a relatively mild but variable spring, providing no hint of the record heat yet to come.” Nothing there.

Then they looked at the long-term temperature record: “The July surface temperatures for the region impacted by the 2010 Russian heat wave shows no significant warming trend over the prior 130-year period from 1880 to 2009…. A linear trend calculation yields a total temperature change over the 130 years of -0.1°C (with a range of 0 to -0.4°C over the four data sets [they examined]).” There’s not a hint of a build-up to a big heat wave.

And as to the behavior of temperature extremes: “There is also no clear indication of a trend toward increasing warm extremes. The prior 10 warmest Julys are distributed across the entire period and exhibit only modest clustering earlier in this decade, in the 1980s and in the 1930s…. This behavior differs substantially from globally averaged annual temperatures, for which eleven of the last twelve years ending in 2006 rank among the twelve warmest years in the instrumental record since 1850….”

With regard to any indication that “global” warming was pushing temperatures higher in Russia and thus helped to fuel the extreme heat last summer, Dole et al. say this: “With no significant long-term trend in western Russia July surface temperatures detected over the period 1880-2009, mean regional temperature changes are thus very unlikely to have contributed substantially to the magnitude of the 2010 Russian heat wave.”

Next the PSD folks looked to see whether the larger-scale antecedent conditions, fed into climate models, would produce the atmospheric circulation patterns (i.e., blocking) that gave rise to the heat wave.  The tested “predictors” included patterns of sea surface temperature and arctic ice coverage, which most people feel have been subject to some human influence.  No relationship: “These findings suggest that the blocking and heat wave were not primarily a forced response to specific boundary conditions during 2010.”

In fact, the climate models exhibited no predilection for projecting increases in the frequency of atmospheric blocking patterns over the region as greenhouse gas concentrations increased. Just the opposite: “Results using very high-resolution climate models suggest that the number of Euro-Atlantic blocking events will decrease by the latter half of the 21st century.”

At this point, Dole and colleagues had about exhausted all lines of inquiry and summed things up:

Our analysis points to a primarily natural cause for the Russian heat wave. This event appears to be mainly due to internal atmospheric dynamical processes that produced and maintained an intense and long-lived blocking event. Results from prior studies suggest that it is likely that the intensity of the heat wave was further increased by regional land surface feedbacks. The absence of long-term trends in regional mean temperatures and variability together with the model results indicate that it is very unlikely that warming attributable to increasing greenhouse gas concentrations contributed substantially to the magnitude of this heat wave.

Can’t be much clearer than that.

But that was last summer. What about the past two winters? Both were very cold in the eastern U.S., with record snow events and/or totals scattered about the country.

Cold, snow, and global warming? On Christmas Day 2010, the New York Times ran an op-ed by Judah Cohen, a long-range forecaster for the private forecasting firm Atmospheric and Environmental Research, outlining his theory as to how late summer Arctic ice declines lead to more fall snow cover across Siberia which in turn induces atmospheric circulation patterns to favor snowstorms along the East Coast of the U.S. Just last week, the Union of Concerned Scientists held a news conference where they handed out a press release headlined “Climate Change Makes Major Snowstorms Likely.” In that release, Mark Serreze, director of the National Snow and Ice Data Center, laid out his theory as to how the loss of Arctic sea ice is helping to provide more moisture to fuel winter snowstorms across the U.S. as well as altering atmospheric circulation patterns into a preferred state for big snowstorms. Weather Underground’s Jeff Masters chimed in with “Heavy snowstorms are not inconsistent with a warming planet.”

As is the wont for this Wisdom, let’s go back to the scientific literature.

Another soon-to-be released paper to appear in Geophysical Research Letters describes the results of using the seasonal weather prediction model from the European Centre for Medium-Range Weather Forecasts (ECMWF) to help untangle the causes of the unusual atmospheric circulation patterns that gave rise to the harsh winter of 2009-2010 on both sides of the Atlantic. A team of ECMWF scientists led by Thomas Jung went back and did experiments changing initial conditions that were fed into the ECMWF model and then assessed how well the model simulated the known weather patterns of the winter of 2009-2010. The different sets of initial conditions were selected so as to test all the pet theories behind the origins of the harsh winter.  Jung et al. describe their investigations this way: “Here, the origin and predictability of the unusual winter of 2009/10 are explored through numerical experimentation with the ECMWF Monthly forecasting system. More specifically, the role of anomalies in sea surface temperature (SST) and sea ice, the tropical atmospheric circulation, the stratospheric polar vortex, solar insolation and near surface temperature (proxy for snow cover) are examined.”

Here is what they found after running their series of experiments.

Arctic sea ice and sea surface temperature anomalies.  These are often associated with global warming caused by people. Finding:  “These results suggest that neither SST nor sea ice anomalies explain the negative phase of the NAO during the 2009/10 winter.”

(NAO are the commonly used initials for the North Atlantic Oscillation – an atmospheric circulation pattern that can act to influence winter weather in the eastern U.S. and western Europe. A negative phase of the NAO is associated with cold and stormy weather, and during the winter of 2009-10 the NAO value was the lowest ever observed.)

A global warming-induced weakening stratospheric (upper-atmosphere) jetstream. “Like for the other experiments, these stratospheric relaxation experiments fail to reproduce the magnitude of the observed NAO anomaly.”

Siberian snow cover.  “The resulting [upper air patterns] show little resemblance with the observations…. The implied weak role of snow cover anomalies is consistent with other research….”

Solar variability.  “The experiments carried out in this study suggest that the impact of anomalously low incoming [ultraviolet] radiation on the tropospheric circulation in the North Atlantic region are very small… suggesting that the unusually low solar activity contributed little, if any, to the observed NAO anomaly during the 2009/10 winter.”

Ok then, well what did cause the unusual weather patterns during the 2009-10 winter?

The results of this study, therefore, increase the likelihood that both the development and persistence of negative NAO phase resulted from internal atmospheric dynamical processes.

Translation: Random variability.

To drive this finding home, here’s another soon-to-be-released paper (D’Arrigo et al., 2011) that uses tree ring-based reconstructions of atmospheric circulation patterns and finds that a similar set of conditions (including a negative NAO value second only to the 2009-10 winter) was responsible for the historically harsh winter of 1783-84 in the eastern U.S. and western Europe, which was widely noted by historians. It followed the stupendous eruption of the Icelandic volcano Laki the previous summer. The frigid and snowy winter conditions have been blamed on the volcano. In fact, Benjamin Franklin even commented as much.

But in their new study, Roseanne D’Arrigo and colleagues conclude that the harshness of that winter primarily was the result of anomalous atmospheric circulation patterns that closely resembled those observed during the winter of 2009-10, and that the previous summer’s volcanic eruption played a far less prominent role:

Our results suggest that Franklin and others may have been mistaken in attributing winter conditions in 1783-4 mainly to Laki or another eruption, rather than unforced variability.

Similarly, conditions during the 2009-10 winter likely resulted from natural [atmospheric] variability, not tied to greenhouse gas forcing… Evidence thus suggests that these winters were linked to the rare but natural occurrence of negative NAO and El Niño events.

The point is that natural variability can and does produce extreme events on every time scale, from days (e.g., individual storms) and weeks (e.g., the Russian heat wave), to months (e.g., the winter of 2009-10), decades (e.g., the lack of global warming since 1998), centuries (e.g., the Little Ice Age), millennia (e.g., the cycle of major Ice Ages), and eons (e.g., snowball earth).

Folks would do well to keep this in mind next time global warming is being posited for the weather disaster du jour. Almost assuredly, it is all hype and little might.

Too bad these results weren’t given a “hearing” in the House!

References:

D’Arrigo, R., et al., 2011. The anomalous winter of 1783-1784: Was the Laki eruption or an analog of the 2009–2010 winter to blame? Geophysical Research Letters, in press.

Dole, R., et al., 2011. Was there a basis for anticipating the 2010 Russian heat wave? Geophysical Research Letters, in press.

Jung et al., 2011. Origin and predictability of the extreme negative NAO winter of 2009/10. Geophysical Research Letters, in press.

Min, S-K., et al., 2011. Human contribution to more-intense precipitation extremes. Nature, 470, 378-381.

Pall, P., et al., 2011. Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000. Nature, 470, 382-386.