Archives: December, 2012

Exporting Natural Gas

Suddenly, due to improved drilling techniques, the U.S. is overflowing with natural gas, driving down domestic prices. But foreign prices remain high, which means there is an opportunity for us to export natural gas.  Unfortunately, the infrastructure does not currently exist. To transport natural gas across the ocean, you have to liquefy it first. We have the facilities to import liquefied natural gas, but not to liquefy it ourselves and export.  In order to start exporting, we need to build the appropriate facilities, which requires regulatory approval from the Energy Department. A number of applications have been made to build new facilities.

So why wouldn’t the Energy Department approve this?  Some are arguing that allowing exports would raise prices for domestic consumers and manufacturers, and this would be bad for American users of natural gas.

For more on all this, see the Washington Post here and the NY Times here.

Normally, free trade is about whether or not to allow imports, but preventing exports in an effort to help domestic interest groups is really just the same situation in reverse.  The Washington Post has a good editorial in which they argue for allowing exports.  As they put it:

USUALLY, OPPONENTS of freer trade argue that Americans shouldn’t be buying so many cheap products from abroad, sending their cash overseas. But when it comes to exporting some of this nation’s abundant supplies of natural gas, those who oppose opening up to the world turn that logic on its head — arguing, strangely, that Americans shouldn’t be trying to sell this particular product to other nations, bringing money into the country in the process. Both arguments are unconvincing, and for the same reason: When countries can buy and sell to each other, their economies do what they are best at, producing more with less and driving economic growth.

That’s well put, but let me just add one thing: Under our international trade agreements, we have promised not to restrict exports.  We can’t restrict exports just to keep domestic prices down. In fact, we have already brought a successful WTO complaint against China for doing similar things.  If we want others to play by the rules, we have to do so as well.

So, not only is allowing exports of natural gas good policy, it is what we have promised to do, and what we are demanding of others.

It Is the Same Banks That Repeatedly Get in Trouble

When the December issue of the Journal of Finance landed on my desk, it was almost like Christmas had come early.  Among the articles was an interesting examination of banks that failed (or were rescued) during the recent financial crisis (for a non-paywall working paper version, see here).  The authors set out to answer a simple question: how well does the performance of individual banks in 1998 predict their performance in the recent crisis?

Recall that in 1998 Russia defaulted on some of its debts.  It was generally (and erroneously) believed that nuclear powers did not default.  Market participants did not take the news well, prompting a flight to quality and a spike in lending spreads.  Then-Treasury Secretary Robert Rubin called it “the worst financial crisis in the last 50 years” (sounds a little familiar).

While the authors find that other factors, such as leverage and reliance on short-term funding, were significant predictors of failure, 1998 performance predicted well which banks got into trouble in the recent crisis.  This effect likely captures a variety of bank-specific characteristics, such as firm culture, risk tolerance, and management style.

One of the central debates about financial crises is whether shocks are contagious, like a disease that spreads from one bank to another, or whether shocks, such as recessions, simply separate weak firms from strong ones.  If the former, then broad-based Geithner-Bernanke-style rescues might be appropriate.  If, however, failures are limited to weak firms, then rescues keep these weak firms, with their dysfunctional cultures, around.

The results of this paper suggest to me the importance of allowing firms to fail, rather than resorting to bailouts.  One of the fundamental problems of our current bank regulatory regime is that it rests on its own flawed theory of intelligent design: if only enlightened regulators are given sufficient power, the thinking goes, they can design the best system.  I believe reality is quite different.  Only by allowing the evolutionary sorting of banks, and their firm cultures, can we improve the stability and efficiency of our financial system.

How Government Actually Works, Especially Unaccountable, Multi-Jurisdictional Government

In my book Libertarianism: A Primer, I have a chapter of pop public choice called “What Big Government Is All About.” The Metropolitan Washington Airports Authority isn’t really big government, just a local D.C.-Virginia-Maryland authority to run a couple of airports. But it demonstrates some of the problems you can expect from economic entities that don’t face a market test. Here’s how the Washington Post story today begins:

Meet the Kulle family: mom Helen, daughter Ann Kulle-Helms, son-in-law Douglas Helms, son Albert, daughter-in-law Michele Kulle and Michele’s brother, Jeffrey Thacker.

They all worked for the Metropolitan Washington Airports Authority. All at the same time.

And what about Dad, I wonder. No job for Dad?

Anyway, officers of the agency don’t seem perturbed by the story.

“There were no clear-cut guidelines,” said MWAA board member H.R. Crawford, who will leave the board next month when his term expires.

Crawford, who has had at least three relatives, including a daughter-in-law, work at the agency, said family members are employed frequently, particularly among board members.

“If you ask a third of those folks, their relatives work there,” he said. “I never thought that we were doing anything wrong.”…

“This is a government town and an agency town,” Crawford said. “If there’s a possibility that you can hire a relative . . . it was the norm.”…

“This is not a patronage mill,” said Davis, whose daughter worked in the fire department for two months in 2011. “Dozens of employees’ kids worked there.”

At this point the response of good-government liberals is always: Pass an ethics law. Yeah, that ought to work.

MWAA’s ethics code prohibits employees from hiring, supervising or working with relatives. They also cannot supervise family members — directly or indirectly — or “have influence over their work.”

The Current Wisdom: ‘Dumb People’ Syndrome

The Current Wisdom is a series of monthly articles in which Patrick J. Michaels, director of the Center for the Study of Science, reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press. Occasionally — as in this edition — we examine recent global warming perceptions that are at odds with reality.

“The habitability of this planet for human beings really is at risk.”
–Al Gore, July 18, 2007

The notion that people just can’t adapt to change (and therefore that governments must regulate change) is known as “Dumb People Syndrome” (DPS).  Given that the planet is “habitable” (meaning that large numbers of people live there) over a mean annual temperature range of approximately 40°C, Gore’s statement, which concerns at most a few degrees C, is quintessential DPS.

DPS has its subtypes, such as “Dumb Farmer Syndrome,” in which agricultural Armageddon ensues as the world’s farmers fail to adapt to warming conditions.  That scenario is not only preposterous, it is inconsistent with history.

Farmers aren’t dumb, and there are incentives for their supply chain—breeders, chemical manufacturers, equipment companies, etc.—to produce adaptive technologies.  Corn is already much more water-use efficient than it was, thanks to changes in genetics, tillage practices, and farm equipment.  The history of U.S. crop yield bears strong witness (Figure 1).

Figure 1. U.S. national corn and wheat yields, 1900-2012 (source: USDA National Agricultural Statistics Service).

A look at the horrible crop year of 2012 is instructive. Corn yield dropped about 38 bushels per acre from what’s known as the “technological trend line.”  Because the “expected” yield with good weather is, thanks to technology, so high (around 160 bushels/acre), that’s a drop of about 24%, which is simply unremarkable compared with the other lousy weather years of 1901 (36%), 1947 (21%), 1983 (29%), and 1988 (30%).  Did we mention that the direct fertilization effect of atmospheric CO2 has raised corn yields by approximately seven percent?
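
The 24% figure is simple arithmetic, easy to verify from the two approximate numbers quoted above (both are the rounded values given in the text, not exact USDA data):

```python
# Back-of-envelope check of the 2012 shortfall figures quoted in the text.
trend_yield = 160.0   # approximate "technological trend" corn yield, bu/acre
shortfall = 38.0      # approximate 2012 drop below that trend, bu/acre

pct_below_trend = shortfall / trend_yield * 100
print(f"2012 corn yield: about {pct_below_trend:.0f}% below trend")
```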

Most assessments of the impacts of climate change give some credence to DPS. Below is one of the “Key Findings” from the report Global Climate Change Impacts in the United States, produced by the U.S. Global Change Research Program (USGCRP), which was used as major support for the U.S. Environmental Protection Agency’s “Endangerment Finding” that human carbon dioxide emissions are a threat to health and welfare. According to the USGCRP:

Crop and livestock production will be increasingly challenged.

Many crops show positive responses to elevated carbon dioxide and low levels of warming, but higher levels of warming often negatively affect growth and yields. Increased pests, water stress, diseases, and weather extremes will pose adaptation challenges for crop and livestock production.

Now compare that to the corresponding “Key Finding” from our report Addendum: Global Climate Change Impacts in the United States which is an independent (from the USGCRP) assessment of the scientific literature relating to environmental changes and how they may impact U.S. agriculture:

Crop and livestock production will adapt to climate change.

There is a large body of evidence that demonstrates substantial untapped adaptability of U.S. agriculture to climate change, including crop-switching that can change the species used for livestock feed. In addition, carbon dioxide itself is likely increasing crop yields and will continue to do so in increasing increments in the future.

Another example of DPS relates to projections of the effects of more frequent or stronger heat waves on human mortality.  Everyone has heard, especially after last summer, how human use of fossil fuels to produce energy will increase the frequency and severity of killer heat waves.

Here is how the USGCRP sees it, according to the “Key Messages” from the “Human Health” chapter of their report:

Increases in the risk of illness and death related to extreme heat and heat waves are very likely.

History shows that things don’t work this way.

Why? Because people are not dumb. Instead of dying in increasing numbers as temperatures rise, people take better precautions to protect themselves from the heat.

Examples of this abound, including some pioneering work that we did on the subject about 10 years ago.  We clearly demonstrated that across the U.S., people were becoming less sensitive to high temperatures, even as high temperatures were increasing. In other words, adaptation was taking place in the face of (or perhaps even because of) rising temperatures. Adaptations include expanded use of air conditioning, increased public awareness, and more widespread community action programs.

What was interesting about our work is that we didn’t even need global warming to drive increasing heat waves.  All we needed was economic activity, which concentrates in cities.  As cities grow, buildings and pavement retain the heat of the day and impede the flow of ventilating winds.  In recent years, the elevation of night temperatures here in Washington (where your tax dollars virtually guarantee economic growth) relative to the countryside has become truly remarkable.  But you won’t find an increase in heat-related mortality.  Instead, there’s been a decrease.

Our research was limited to major cities across the United States, but similar findings have since been reported for other regions of the world, the most recent being from the Czech Republic.

Czech researchers Jan Kyselý and Eva Plavcová recently published the results of their investigation of changes in heat-related mortality impacts there from 1986 through 2009.  What they found wasn’t surprising to us, but it surely came as quite a shock to the fans of DPS.

Declining trends in the mortality impacts are found in spite of rising temperature trends. The finding remains unchanged if possible confounding effects of within-season acclimatization to heat and the mortality displacement effect are taken into account. Recent positive socioeconomic development, following the collapse of communism in Central and Eastern Europe in 1989, and better public awareness of heat-related risks are likely the primary causes of the declining vulnerability. The results suggest that climate change may have relatively little influence on heat-related deaths, since changes in other factors that affect vulnerability of the population are dominant instead of temperature trends. It is essential to better understand the observed nonstationarity of the temperature-mortality relationship and the role of adaptation and its limits, both physiological and technological, and to address associated uncertainties in studies dealing with climate change projections of temperature-related mortality.

Findings like these, along with our own work, caused us to conclude in our Addendum report that:

“In U.S. cities, heat-related mortality declines as heat waves become stronger and/or more frequent.”

The evidence is much more compelling in support of a “smart people” diagnosis than its opposite.  In fact, if humankind were really as dumb as the fans of DPS would have us believe, we wouldn’t be around today to hear their doomsaying: Homo sapiens would have been wiped out during the vastly larger environmental swings of our past (in and out of ice ages, for example) than those expected as a consequence of burning fossil fuels to produce the energy that powers our world.  And in that world, human life expectancy, perhaps the best measure of our “dumbness” or “smartness,” has more than doubled over the last century and continues to grow ever longer.

Simply put, we are not “dumb” when it comes to our survival and our ability to adapt to changing environmental conditions, but “scientific” assessments that assume otherwise most certainly are.


Davis, R.E., Knappenberger, P.C., Novicoff, W.M., Michaels, P.J., 2002. Decadal changes in heat-related human mortality in the Eastern US. Climate Research, 22, 175–184.

Davis, R.E., Knappenberger, P.C., Novicoff, W.M., Michaels, P.J.,2003a. Decadal changes in summer mortality in U.S. cities. International  Journal of Biometeorology, 47, 166–175.

Davis, R.E., Knappenberger, P.C., Michaels, P.J., Novicoff, W.M., 2003b. Changing heat-related mortality in the United States. Environmental Health Perspectives, 111, 1712–1718.

Davis, R.E., Knappenberger, P.C., Michaels, P.J., Novicoff, W.M., 2004. Seasonality of climate-human mortality relationships in US cities and impacts of climate change. Climate Research, 26, 61–76.

Kyselý, J., and E. Plavcová, 2012. Declining impacts of hot spells on mortality in the Czech Republic, 1986–2009: adaptation to climate change? Climatic Change, 113, 437-453.

Climate Sensitivity Going Down

Global Science Report is a weekly feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

“Climate sensitivity” is the amount that the average global surface temperature will rise, given a doubling of the atmospheric carbon dioxide (CO2) concentration from its pre-industrial value. This metric is the key to understanding how much global warming will occur as we continue to burn fossil fuels for energy and emit the resultant CO2 into the atmosphere.
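
Because radiative forcing grows roughly logarithmically with CO2 concentration, a given sensitivity maps any concentration level to an equilibrium warming estimate. Here is a minimal sketch using the textbook log-base-2 convention; the function name and the 280 ppm pre-industrial baseline are my assumptions for illustration, not figures stated in this post:

```python
import math

def equilibrium_warming(sensitivity_c, co2_ppm, co2_preindustrial_ppm=280.0):
    """Warming under the standard logarithmic-forcing approximation:
    each doubling of CO2 adds `sensitivity_c` degrees C.
    (A textbook convention, not a formula from this post; 280 ppm is
    an assumed round-number pre-industrial baseline.)"""
    return sensitivity_c * math.log2(co2_ppm / co2_preindustrial_ppm)

# By definition, one doubling (280 -> 560 ppm) warms by exactly the sensitivity:
print(equilibrium_warming(3.0, 560.0))   # 3.0
# The same CO2 level implies half the warming if the sensitivity is 1.5 C:
print(equilibrium_warming(1.5, 560.0))   # 1.5
```

This is why pinning down the sensitivity matters so much: the same emissions path implies very different warming depending on which end of the range is correct.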

The problem is that we don’t know what the value of the climate sensitivity really is.

In its Fourth Assessment Report, released in 2007, the United Nations’ Intergovernmental Panel on Climate Change (IPCC) had this to say about the climate sensitivity:

It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3.0°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded…

In IPCC parlance, the term “likely” means a probability of greater than 66% and “very likely” means a greater than 90% chance of occurrence. The IPCC’s 90% range for the climate sensitivity includes values at the low end which, if proven true, would engender very little concern over our use of fossil fuels as a primary energy source, and values at the high end which would generate calls for frantic efforts (which would likely fail) to lower carbon dioxide emissions.

While a lot of effort has been expended over the past several decades to better constrain estimates of the sensitivity, little progress has been made in narrowing the range.  The IPCC’s First Assessment Report, released back in 1990, gave a range of 1.5°C to 4.5°C.  It’s not that climate science hasn’t progressed since then; it’s just that the improved understanding has not led to substantially better constraints.

But what has occurred over the past several decades is that greenhouse gas emissions have continued to rise (in fact, half of all anthropogenerated carbon dioxide emissions have occurred since the mid-1980s), and global temperature observations have continued to be collected.  We now have much more data with which to try to determine the sensitivity.

While global carbon dioxide emissions continue to rise year-over-year (primarily driven by the rapid growth in developing countries such as China), global temperatures have not kept up—in fact, there has been little to no overall global temperature increase (depending upon the record used) over the past decade and a half.

That doesn’t bode well for the IPCC’s high-end temperature sensitivity estimates. The scientific literature is now starting to reflect that reality.

Never mind that Pat Michaels and I published a paper in 2002 showing that the sensitivity lies near the low end of the IPCC’s range.  This idea (and those in similar papers subsequently published by others) was largely ignored by the “mainstream” scientists self-selected to produce the IPCC Assessments.  But new results supporting lower and tighter estimates of the climate sensitivity are now appearing with regularity, a testament to just how strong the evidence has become, for such results had to get past the guardians of the IPCC’s so-called “consensus of scientists,” which the Climategate emails showed to be less than gentlemanly.

Figure 1 shows estimates of the climate sensitivity from five research papers that have appeared in the past two years, including the recent contributions from Ring et al. (2012) and van Hateren (2012), both of which put the central estimate of the climate sensitivity at 2°C or lower, values at or below the bottom of the IPCC’s current “likely” range.

Figure 1. Climate sensitivity estimates from new research published in the past two years (colored), compared with the range given in the IPCC Fourth Assessment Report (black). The arrows indicate the 5 to 95% confidence bounds for each estimate along with the mean (vertical line) where available. Ring et al. (2012) present four estimates of the climate sensitivity and the red box encompasses those estimates.  The right-hand side of the IPCC range is dotted to indicate that the IPCC does not actually state the value for the upper 95% confidence bound of their estimate. The thick gray line represents the IPCC’s “likely” range.

The IPCC is scheduled to release its Fifth Assessment Report in 2013.  We’ll see whether these new, lower, and better-constrained estimates of climate sensitivity that are increasingly populating the literature result in a modification of the IPCC estimates, or whether the IPCC authors manage to wave them all away (or simply ignore them, as was the case with our 2002 paper).

Regardless of how the IPCC ultimately assesses climate science in 2013, the fact of the matter is that there is growing evidence that anthropogenic climate change from the burning of fossil fuels will not be as bad as climate alarmists have made it out to be.


Annan, J.D., and J.C. Hargreaves, 2011. On the generation and interpretation of probabilistic estimates of climate sensitivity. Climatic Change, 104, 423-436.

Lindzen, R.S., and Y-S. Choi, 2011. On the observational determination of climate sensitivity and its implications. Asia-Pacific Journal of Atmospheric Sciences, 47, 377-390.

Michaels, P.J., P.C. Knappenberger, O.W. Frauenfeld, and R.E. Davis, 2002. Revised 21st century temperature predictions. Climate Research, 23, 1-9.

Ring, M.J., et al., 2012. Causes of the global warming observed since the 19th century. Atmospheric and Climate Sciences, 2, 401-415, doi:10.4236/acs.2012.24035.

Schmittner, A., et al., 2011. Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum. Science, 334, 1385-1388, doi:10.1126/science.1203513.

Solomon, S., et al., (eds.), 2007. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, 996pp.

van Hateren, J.H., 2012. A fractal climate response function can simulate global average temperature trends of the modern era and the past millennium. Climate Dynamics, doi:10.1007/s00382-012-1375-3.

Tarullo: No Return to Glass-Steagall

Finally, a senior banking regulator has acknowledged that the so-called repeal of Glass-Steagall had nothing to do with the 2008 financial crisis. In a recent speech, Fed governor Daniel Tarullo noted that most firms at the center of the 2008 financial crisis were either stand-alone commercial banks or investment banks, and therefore would not have been affected by the repeal. Tarullo also expressed concern that a reinstatement of Glass-Steagall would be costly for banks and their clients and would result in less product diversification.

Of all the myths underpinning the response to the 2008 financial crisis, one of the most persistent is that the repeal of Glass-Steagall was a major contributing factor. So Tarullo’s comments are heartening. But he still misses one key piece of the puzzle: multifunctional, diversified financial firms are not just more efficient and cost-effective than their more specialized counterparts; they are frequently more stable.

The banks that got into trouble in 2008 did so because they concentrated their risk in one kind of asset. The firms that did comparatively well throughout the crisis avoided this particular mistake and were able to come to the rescue, admittedly with some government assistance, of their ailing counterparts—think Wells Fargo or JPMorgan. Firms fail when they make bad investment decisions, regardless of their structure.

The U.S. Postal Service vs. Greece

Postmaster General Patrick Donahoe has occasionally remarked that the U.S. Postal Service will end up in a Greek-like crisis if Congress doesn’t allow it to reduce costs and operate with more flexibility. Michael Schuyler, now with the Tax Foundation, examines the analogy between Greece and the USPS in a paper released on Monday.

The “good” news for the USPS is that its fiscal situation isn’t as bad as what the Greeks are dealing with, at least not yet. Whereas previous Greek governments intentionally understated deficits and debt until reality caught up with the country in 2009, the USPS hasn’t tried to hide the fact that its prospects are bleak. In addition, the USPS has been able to shed excess workers (through attrition) over the past several years, while recent attempts by the Greek government to cut its bloated workforce have been met with rioting.

The problem is that powerful interests maintain convenient opinions on some of the biggest issues facing the USPS. Mike singles out for particular scrutiny the postal employee unions, which continue to pretend that the USPS would be alright if it didn’t have to make annual payments to “prefund” retiree health care benefits. Both of us have been critical of this claim, but I think Mike’s latest reality check is worth sharing in its entirety:

If Congress did not require the Service to put aside money to pay the costly health benefits it promises its workers after they retire, the deficits it reported in the last several years would have been substantially reduced and so would its reported deficits in the near future. Some stakeholders claim from this that the Service’s problems are artificial, the fault of a funding requirement Congress imposed in 2006 as part of the Postal Accountability and Enhancement Act (PAEA, P.L. 109-435). They assert that the Service is, in reality, in fairly good shape. For example, Fredric Rolando, president of the National Association of Letter Carriers, declared, “The Postal Service has performed well in operational terms, nearly breaking even despite the worst recession in 80 years.” It should be noted, however, that even if the RHBF is entirely ignored, the Service would have lost $4.8 billion in 2012, $5.1 billion in 2011, $3.0 billion in 2010, and $2.4 billion in 2009. Losses of $4.8 billion, $5.1 billion, $3.0 billion, and $2.4 billion caused by problems other than the RHBF do not equal performing well. No wonder Postmaster General Donahoe characterized as “irresponsible” the argument that the Service would be fine except for retiree health benefit contributions and said, “The idea that if we just eliminate the prefunding…we’ll be OK—wrong!”

Mr. Rolando and others also argue that because the RHBF “already has $45 billion [of assets], enough to pay for decades of future retiree health care,” Congress should not require the Service to make further contributions. The flaw in that argument is that although its projected assets in the fund were $45.7 billion at the end of 2012, its projected liabilities were $93.6 billion, leaving an unfunded liability of $47.8 billion. If Congress let it cease contributing to the retiree health fund without also enacting reforms to dramatically reduce projected liabilities, it would virtually guarantee a huge taxpayer bailout of the Service down the road. The call for a prolonged contribution holiday is reminiscent of the approach that has landed the Greeks in so much trouble.

Never mind the fact that we’re talking about a benefit that a small and shrinking number of private sector workers are offered.

There’s one more point that Mike makes that I found interesting. He notes that defenders of the status quo often accuse USPS management and certain members of Congress of pushing for reforms that will ultimately lead to postal privatization. But Mike argues that reforms that would help fix the USPS’s financial imbalances would make privatization less likely:

[O]ne of the major postal reforms bills in Congress, the Postal Reform Act of 2011 (H.R. 2309), is often accused of paving the way for privatization. In fact, it would do the opposite. The House bill takes a tough-love approach and contains several controversial provisions. These include the creation of a Commission on Postal Reorganization Act, modeled on the successful military Base Realignment and Closure (BRAC) Commissions, as well as the creation of a Financial Responsibility and Management Assistance Authority, modeled on the effective District of Columbia Financial Control Board. No position is taken here on whether those provisions ought to be part of postal reform, but it should be noted that BRAC Commissions have saved the military billions of dollars and the DC Financial Control Board helped revitalize the District of Columbia. Those earlier, bipartisan efforts were not intended to privatize the Defense Department or the District; the goal was to help government run better.

Hmmm… I don’t want government to “run better.” And I think the U.S. Postal Service should be privatized regardless of how it’s run or the state of its finances. So if Mike’s right, perhaps free-market fans should be hoping that the USPS goes the way of Greece. After all, because Greece’s finances are such a mess, privatization of Hellenic Post is now on the table.

The danger, of course, is that Congress would just bail it out with taxpayer money.