Topic: Energy and Environment

Property Rights and the Takoma Park Tree Tussle

It’s enviro vs. enviro in Washington’s most “progressive” suburb, Takoma Park. Indeed, the Washington Post reports on “a potentially bough-breaking debate between sun-worshipers and tree-huggers.” That is, which is more environmentally desirable, solar power or tree cover?

The modest gray house in Takoma Park was nearly perfect, from Patrick Earle’s staunchly environmentalist point of view. It was small enough for wood-stove heating, faced the right way for good solar exposure and, most important, was in a liberal suburb that embraces all things ecological.

Or almost all. When Earle and his wife, Shannon, recently sought to add solar panels to the house, which they have been turning into a sustainability showplace, the couple discovered that Takoma Park values something even more than new energy technologies: big, old trees.

When they applied to cut down a partially rotten 50-foot silver maple that overshadowed their roof, the Earles ran into one of the nation’s strictest tree-protection ordinances. Under the law, the town arborist would approve removing the maple only if the couple agreed to pay $4,000 into a city tree-replacement fund or plant 23 saplings on their own.

So now the rival environmentalists are squaring off in front of the city council:

Takoma Park City Council members, who are considering revising the 1983 tree-protection law, listened Monday night as otherwise like-minded activists vied to claim the green high ground.

Tree partisans hailed the benefits of the leafy canopy that shades 59 percent of the town: Trees absorb carbon, take up stormwater, control erosion and provide natural cooling….

Solar advocates at the hearing said that they are tree lovers, too, but that scientific studies support the idea of poking select holes in the tree cover to let a little sun power through.

Being an environmentalist homeowner can become a full-time job:

But even some veteran solar users don’t like the idea of trading trees for panels. Mike Tidwell, founder of the Chesapeake Climate Action Network, installed solar panels on his Takoma Park house 10 years ago. As the trees have grown, the panels’ effectiveness has diminished, and Tidwell now buys wind power credits to supplement them.

Still, he said, “I don’t believe you should cut down trees for solar.” Rather, he thinks neighbors should work together to place shared panels on the sunniest roofs.

The city’s “official arborist” turned down Earle’s application to cut down one rotting tree to accommodate his solar panels. Now the council is debating the issue.

The Earles’ council member, Josh Wright, said he was sympathetic to their plight. He said it should remain hard to cut down a tree, but he’d like to see a break for people installing solar power. Wright also wants all homeowners to get credit for trees they may have planted in the years before they remove a tree.

It all sounds very complicated. And who knows what the right answer is? Or if there is a right answer? Or if the right answer might change next year?

And that’s where property rights come in. They allocate both jurisdiction and liability over scarce resources, like roofs, trees, and access to sunlight. A little “law and economics” can help us understand the Takoma Park Tree Tussle. Nobel Laureate in Economics Ronald Coase, who just turned 100, brought law and economics together to study the way that people externalize costs (make others pay for them) or internalize them (take them into account when making decisions). When property rights are well defined and legally secure, and rights can be exchanged at low cost, resources will be directed to their most highly valued use. In fact, if transfers are freely and easily negotiable, the initial allocation of property rights doesn’t affect the final allocation of resources.

That, unfortunately, is no longer the case in Takoma Park, where instead of a fairly straightforward transaction (facilitated by a purchase), there is a tussle over ill-defined rights and obligations with little or no legal security, in a protracted and costly process of negotiation that will almost certainly consume more wood pulp for memos than is contained in the tree in question. Well-defined and legally secure property rights save us the rather substantial trouble of sitting down like the Takoma Park City Council and trying to judge the advisability of every proposed purchase, all the while consuming large amounts of paper and exuding large amounts of hot air.

The Traffic Congestion Problem

A new report says that traffic congestion is worse, and the American Public Transportation Association urges Congress to … spend more money on public transportation.

Cato senior fellow Randal O’Toole has been challenging the received wisdom on traffic and mass transit for years. See his book Gridlock: Why We’re Stuck in Traffic and What to Do About It, and lots of other studies. In November he debated the head of the American Public Transportation Association at a Cato Policy Forum:

Preparing for Life as a Light Bulb Black Marketeer

I’ve decided the time has come to become an entrepreneur – as a black market operator.

Come next January, 100-watt incandescent light bulbs will be illegal, courtesy of Congress and President George W. Bush.  Lower wattages will be banned the following year.  As usual, politicians in Washington believe they know best and are determined to inconvenience the public in the name of saving energy.

No matter that incandescent lights offer a softer light and are a better value than fluorescent bulbs if turned on only briefly. And no matter that breaking a fluorescent light spills mercury, creating what in any other circumstance would be considered a biohazard.

There are other consequences of the coming prohibition.  Notes Tim Carney of the Washington Examiner:

  • Citing this law, GE has closed its incandescent light plant in Virginia. For the coming years, while they’re still legal, Americans will be buying their GE incandescents from Mexico. This probably means less efficient manufacturing and more shipping.
  • GE makes its CFLs in China. The factories are likely dirtier and less efficient, and certainly there will be more shipping costs.
  • Because of the warm-up time for CFLs and the knowledge that they use less energy, people are more likely to leave them on for longer, I imagine.
  • In northern latitudes, incandescents’ inefficiency is not wasted. Think about it: in Alaska, summer nights are very short and winter nights are very long. That means a vast majority of light-bulb time happens in the winter. The incandescents waste energy in the form of heat, but if it’s cold, that added heat slightly reduces your need to use a furnace.

Of course, it’s hard to decide how many bulbs to buy. What would be a lifetime supply of 100-watt bulbs?
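
For what it’s worth, the stockpiling arithmetic is easy to sketch. Every number below is an illustrative assumption on my part (a typical incandescent is often rated around 1,000 hours; burn time, fixture count, and time horizon vary widely by household), not a figure from anywhere in this post:

```python
# Back-of-the-envelope "lifetime supply" of 100-watt incandescent bulbs.
# All inputs are illustrative assumptions, not data from the post.

BULB_LIFE_HOURS = 1_000   # assumed rated life of one incandescent bulb
HOURS_PER_DAY = 4         # assumed average daily burn time per fixture
FIXTURES = 10             # assumed number of 100-watt fixtures in the house
YEARS = 40                # assumed remaining years of use

total_bulb_hours = HOURS_PER_DAY * 365 * YEARS * FIXTURES
bulbs_needed = total_bulb_hours / BULB_LIFE_HOURS
print(f"Roughly {bulbs_needed:.0f} bulbs")  # ~584 under these assumptions
```

Under assumptions like these, a few hundred bulbs would do it – a big box in the basement, not a warehouse.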

And why stop there?  I could become an incandescent bulb pusher once the prohibition takes effect.  I don’t think drug prohibition makes any sense, but I have no desire to get into that market.  Customers and competitors are an ugly lot and I really don’t want to go to prison.  But selling light bulbs – now there’s something I could do!

I’d be even happier, however, if the new Congress dropped the coming prohibition.  Fluorescent bulbs often are a wise choice, but not always.  A supposedly free society should leave at least a few choices to people – like deciding which light bulbs to use.

Norfolk Light-Rail Scandal

Another city has discovered that light rail is not the road to utopia. In 2007, Norfolk, Virginia, decided to revitalize its downtown by building a rail transit line. That line is now 45 percent over budget, and its opening has been delayed by more than 16 months.

When Flickr user DearEdward took this construction photo in July 2008, Norfolk officials were promising to open the light-rail line in December 2009 at a cost of $232 million. Now the cost has grown to $338 million and the opening has been delayed to late 2011.

A 45-percent cost overrun is about average for rail transit construction, but it has hit Norfolk particularly hard. In 2007, the Federal Transit Administration agreed to fund 72 percent of the then-projected $232 million cost, with the Commonwealth of Virginia and city of Norfolk each funding about half the remainder. Since the feds did not agree to cover any of the cost overruns, the overruns represent a near-tripling of the costs to state and city taxpayers.
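
The “near-tripling” follows directly from the numbers quoted above; here is a minimal sketch of that arithmetic (the roughly even state/city split is as described in the post):

```python
# Norfolk light-rail cost shares, reconstructed from the figures in the post:
# $232M original estimate, $338M current estimate, feds fund 72% of the
# original cost only, and overruns fall entirely on state and city taxpayers.

original_cost = 232.0                        # $ millions (2007 estimate)
current_cost = 338.0                         # $ millions (latest estimate)
federal_share = 0.72 * original_cost         # fixed at ~$167M

state_city_before = original_cost - federal_share   # ~$65M, split about evenly
state_city_after = current_cost - federal_share     # ~$171M after the overrun

print(f"Federal contribution (fixed): ${federal_share:.0f}M")
print(f"State + city, original plan:  ${state_city_before:.0f}M")
print(f"State + city, after overrun:  ${state_city_after:.0f}M")
print(f"Increase: {state_city_after / state_city_before:.1f}x")  # ~2.6x, a near-tripling
```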

Furious about the unexpected costs and delays, the Hampton Roads Transit board (which consists of city councilors from the cities the agency serves) forced the agency’s general manager to retire. A month ago, a state audit found that the Norfolk city manager and transit planners knew about the cost overrun long before they bothered to tell the city. As a result, both the city manager and the agency’s head planner resigned this week.

On top of the cost overruns are construction delays. The line was originally scheduled to open to the public in December 2009; the opening was then pushed back to May 2011. But now the transit agency admits it won’t even meet that date and can’t say when the line will open.

Meanwhile, far from becoming a place “where people can work, shop, eat, and play,” Norfolk’s downtown now features “hundreds of empty condos, apartments, and storefronts.” The latest news is that a major supermarket is closing after a mere three years in business. While the truth is that the relative handful of people who are expected to ride the light-rail line are not enough to revitalize any downtown, lengthy construction delays certainly don’t help.

The best thing Norfolk might do now is to abandon the light-rail line and dedicate the $10 million or so that rail operating subsidies will cost to improving bus service in the city and its surroundings. Unfortunately, doing so means that it would have to refund the $168 million that the federal government contributed to the line. As a result, city taxpayers will have to throw good money after bad by funding tens of millions in operating subsidies for the next several decades.

The Current Wisdom: Better Model, Less Warming

The Current Wisdom is a series of monthly posts in which Senior Fellow Patrick J. Michaels reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

The Current Wisdom only comments on science appearing in the refereed, peer-reviewed literature, or that has been peer-screened prior to presentation at a scientific congress.


Better Model, Less Warming

Bet you haven’t seen this one on TV:  A newer, more sophisticated climate model has lost more than 25% of its predicted warming!  You can bet that if it had predicted that much more warming it would have made the local paper.

The change resulted from a more realistic simulation of the way clouds work, resulting in a major reduction in the model’s “climate sensitivity,” which is the amount of warming predicted for a doubling of  the concentration of atmospheric carbon dioxide over what it was prior to the industrial revolution.

Prior to the modern era, atmospheric carbon dioxide concentrations, as measured in air trapped in ice in the high latitudes (which can be dated year-by-year), were pretty constant, around 280 parts per million (ppm). No wonder CO2 is called a “trace gas”—there really is not much of it around.

The current concentration is pushing 390 ppm, an increase of about 40% in 250 years. This is a pretty good indicator of the amount of “forcing,” or warming pressure, that we are exerting on the atmosphere. Yes, there are other global warming gases going up, like the chlorofluorocarbons (refrigerants now banned by treaty), but the modern climate religion is that these are pretty much being cancelled by reflective “aerosol” compounds that go into the air along with the combustion of fossil fuels, mainly coal.

Most projections have carbon dioxide doubling to a nominal 600 ppm somewhere in the second half of this century, absent major technological changes (an assumption that history tells us is very shaky). But the “sensitivity” is not reached as soon as we hit the doubling, thanks to the fact that it takes a lot of time to warm the ocean (like it takes a lot of time to warm up a big pot of water with a small burner).

So the “sensitivity” is much closer to the temperature rise that a model projects about 100 years from now – assuming (again, shakily) that we ultimately switch to power sources that don’t release dreaded CO2 into the atmosphere somewhere around the time its concentration doubles.
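
To make “sensitivity” concrete, here is a minimal sketch. The logarithmic form of the relationship is a textbook convention and an assumption on my part (the post doesn’t specify a formula); the concentrations and sensitivities come from this post:

```python
# Minimal sketch: equilibrium warming under the textbook logarithmic
# approximation, delta_T = S * log2(C / C0). The formula choice is my
# assumption; the numbers are from the surrounding discussion.

import math

def equilibrium_warming(sensitivity_c, co2_ppm, baseline_ppm=280.0):
    """Eventual warming (deg C) once the slow ocean catches up."""
    return sensitivity_c * math.log2(co2_ppm / baseline_ppm)

# Two sensitivities discussed below: old MIROC3.2(medres) and new MIROC5.
for s in (4.0, 2.6):
    print(f"S = {s}C: {equilibrium_warming(s, 390):.1f}C at today's ~390 ppm, "
          f"{equilibrium_warming(s, 600):.1f}C at a nominal 600 ppm")
```

Note that 600 ppm is slightly more than a true doubling of 280 ppm (560 ppm), which is why the computed warming at 600 ppm comes out a bit above the sensitivity itself.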

The bottom line is that lower sensitivity means less future warming as a result of anthropogenic greenhouse gas emissions. So our advice: keep working on the models; eventually, they may actually arrive at something close to the puny rate of warming that is being observed.

At any rate, improvements to the Japanese-developed Model for Interdisciplinary Research on Climate (MIROC) are the topic of a new paper by Masahiro Watanabe and colleagues in the current issue of the Journal of Climate. This modeling group has been working on a new version of their model (MIROC5) to be used in the upcoming 5th Assessment Report of the United Nations’ Intergovernmental Panel on Climate Change, due in late 2013. Two incarnations of the previous version (MIROC3.2) were included in the IPCC’s 4th Assessment Report (2007) and contribute to the IPCC “consensus” of global warming projections.

The high resolution version (MIROC3.2(hires)) was quite a doozy – responsible for far and away the greatest projected global temperature rise (see Figure 1). And the medium resolution model (MIROC3.2(medres)) is among the Top 5 warmest models. Together, the two MIROC models undoubtedly act to increase the overall model ensemble mean warming projection and expand the top end of the “likely” range of temperature rise.

FIGURE 1

Global temperature projections under the “midrange” scenario for greenhouse-gas emissions produced by the IPCC’s collection of climate models. The MIROC high resolution model (MIROC3.2(hires)) is clearly the hottest one, and the medium resolution one isn’t very far behind.

The reason that the MIROC3.2 versions produce so much warming is that their sensitivity is very high: 4.3°C (7.7°F) for the high-resolution version and 4.0°C (7.2°F) for the medium-resolution version. These sensitivities are very near the high end of the distribution of climate sensitivities from the IPCC’s collection of models (see Figure 2).

FIGURE 2

Equilibrium climate sensitivities of the models used in the IPCC AR4 (with the exception of MIROC5). The MIROC3.2 sensitivities are highlighted in red and lie near the upper end of the collection of model sensitivities. The new, improved MIROC5, which was not included in the IPCC AR4, is highlighted in magenta and lies near the low end of the model climate sensitivities (data from IPCC Fourth Assessment Report, Table 8.2, and Watanabe et al., 2010).

Note that the model with the highest sensitivity is not necessarily the hottest model, as the warming a model produces by any given date also depends upon how it deals with the slowness of the oceans to warm.

The situation is vastly different in the new MIROC5 model. Watanabe et al. report that the climate sensitivity is now 2.6°C (4.7°F) – more than 25% less than in the previous version of the model.[1] If MIROC5 had been included in the IPCC’s AR4 collection of models, its climate sensitivity of 2.6°C would have been found near the low end of the distribution (see Figure 2), rather than pushing the high extreme as MIROC3.2 did.
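
As a quick check of that figure (all of the numbers are quoted above or in footnote 1):

```python
# How big is the sensitivity reduction? Compare MIROC5 against the two
# reported values for MIROC3.2(medres): 3.6C per Watanabe et al. and
# 4.0C per the IPCC AR4 table.

miroc5 = 2.6
for old in (3.6, 4.0):
    reduction = (1 - miroc5 / old) * 100
    print(f"vs. {old}C: {reduction:.0f}% reduction")
# ~28% and ~35% -- either way, "more than 25%" holds.
```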

And to what do we owe this large decline in the modeled climate sensitivity? According to Watanabe et al., a vastly improved handling of cloud processes involving “a prognostic treatment for the cloud water and ice mixing ratio, as well as the cloud fraction, considering both warm and cold rain processes.” In fact, the improved cloud scheme—which produces clouds that compare more favorably with satellite observations—projects that under a warming climate low-altitude clouds become a negative feedback rather than acting as a positive feedback, as the old version of the model projected.[2] Instead of enhancing the CO2-induced warming, low clouds are now projected to retard it.

Here is how Watanabe et al. describe their results:

A new version of the global climate model MIROC was developed for better simulation of the mean climate, variability, and climate change due to anthropogenic radiative forcing….

MIROC5 reveals an equilibrium climate sensitivity of 2.6K, which is 1K lower than that in MIROC3.2(medres)…. This is probably because in the two versions, the response of low clouds to an increasing concentration of CO2 is opposite; that is, low clouds decrease (increase) at low latitudes in MIROC3.2(medres) (MIROC5).[3]

Is the new MIROC model perfect? Certainly not.  But is it better than the old one? It seems quite likely.  And the net result of the model improvements is that the climate sensitivity and therefore the warming projections (and resultant impacts) have been significantly lowered. And much of this lowering comes as the handling of cloud processes—still among the most uncertain of climate processes—is improved upon. No doubt such improvements will continue into the future as both our scientific understanding and our computational abilities increase.

Will this lead to an even greater reduction in climate sensitivity and projected temperature rise? There are many folks out there (including this author) who believe this is a very distinct possibility, given that observed warming in recent decades is clearly beneath the average predicted by climate models. Stay tuned!

References:

Intergovernmental Panel on Climate Change, 2007.  Fourth Assessment Report, Working Group 1 report, available at http://www.ipcc.ch.

Watanabe, M., et al., 2010. Improved climate simulation by MIROC5: Mean states, variability, and climate sensitivity. Journal of Climate, 23, 6312-6335.


[1] Watanabe et al. report that the sensitivity of MIROC3.2(medres) is 3.6°C (6.5°F), which is less than what was reported in the 2007 IPCC report. So 25% is likely a conservative estimate of the reduction in warming.

[2] Whether enhanced cloudiness enhances or cancels carbon-dioxide warming is one of the core issues in the climate debate, and is clearly not “settled” science.

[3] Degrees Kelvin (K) are the same as degrees Celsius (C) when looking at relative, rather than absolute temperatures.

The Current Wisdom

The Current Wisdom is a series of monthly posts in which Senior Fellow Patrick J. Michaels reviews interesting items on global warming in the scientific literature that may not have received the media attention that they deserved, or have been misinterpreted in the popular press.

The Current Wisdom only comments on science appearing in the refereed, peer-reviewed literature, or that has been peer-screened prior to presentation at a scientific congress.

History to Repeat:  Greenland’s Ice to Survive, United Nations to Continue Holiday Party

This year’s installment of the United Nations’ annual climate summit (technically known as the 16th meeting of the Conference of the Parties to the Framework Convention on Climate Change) has come and gone in Cancun. Nothing substantial came of it policy-wise; just the usual attempts by the developing world to shake down our already shaky economy in the name of climate change. News-wise, probably the biggest story was that during the conference, Cancun broke an all-time daily low temperature record. Last year’s confab in Copenhagen was pelted by snowstorms and subsumed in miserable cold. President Obama attended, failed to forge any meaningful agreement, and fled back to beat a rare Washington blizzard. He lost.

But as surely as every holiday season now includes one of these enormous jamborees, dire climate stories appeared daily. Polar bear cubs are endangered! Glaciers are melting!!

Or so beat the largely overhyped drums, based upon this or that press release from Greenpeace or the World Wildlife Fund.

And, of course, no one bothered to mention a blockbuster paper appearing in Nature the day before the end of the Cancun confab, which reassures us that Greenland’s ice cap and glaciers are a lot more stable than alarmists would have us believe. That would include Al Gore, fond of his lurid maps showing the melting of all of Greenland’s ice submerging Florida.

Ain’t gonna happen.

The disaster scenario goes like this: Summer temperatures in Greenland are warming, leading to increased melting and the formation of ephemeral lakes on the ice surface. This water eventually finds a crevasse and then a way down thousands of feet to the bottom of a glacier, where it lubricates the underlying surface, accelerating the seaward march of the ice. Increase the temperature even more and massive amounts of ice slide into the ocean by the year 2100, catastrophically raising sea levels.

According to Christian Schoof of the University of British Columbia (UBC), “The conventional view has been that meltwater permeates the ice from the surface and pools under the base of the ice sheet….This water then serves as a lubricant between the glacier and the earth underneath it….”

And, according to Schoof, that’s just not the way things work. A UBC press release about his Nature article noted that he found that “a steady meltwater supply from gradual warming may in fact slow down the glacier flow, while sudden water input could cause glaciers to speed up and spread.”

Indeed, Schoof finds that sudden water inputs, such as would occur with heavy rain, are responsible for glacial accelerations, but these last only one or a few days.

The bottom line?  A warming climate has very little to do with accelerating ice flow, but weather events do.

How important is this?  According to University of Leeds Professor Andrew Shepherd, who studies glaciers via satellite, “This study provides an elegant solution to one of the two key ice sheet instability problems” noted by the United Nations in their last (2007) climate compendium.  “It turns out that, contrary to popular belief, Greenland ice sheet flow might not be accelerated by increased melting after all,” he added.

I’m not so sure that those who hold the “popular belief” can explain why Greenland’s ice didn’t melt away thousands of years ago. For millennia after the end of the last ice age (approximately 11,000 years ago), strong evidence indicates that the Eurasian Arctic averaged nearly 13°F warmer in July than it does now.

That’s because there are trees buried and preserved in the acidic Siberian tundra, and they can be carbon dated.  Where there is no forest today—because it’s too cold in summer—there were trees, all the way to the Arctic Ocean and even on some of the remote Arctic islands that are bare today. And, back then, thanks to the remnants of continental ice, the Arctic Ocean was smaller and the North American and Eurasian landmasses extended further north.

That work was by Glen MacDonald, of UCLA’s Geography Department. In his landmark 2000 paper in Quaternary Research, he noted that the only way the Arctic could become so warm is for there to be a massive incursion of warm water from the Atlantic Ocean. The only “gate” through which that water can flow is the passage between Greenland and Scandinavia.

So, Greenland had to have been warmer for several millennia, too.

Now let’s do a little math to see if the “popular belief” about Greenland ever had any basis in reality.

In 2009, the University of Copenhagen’s B. M. Vinther and 13 coauthors published the definitive history of Greenland climate back to the ice age, studying ice cores taken over the entire landmass. An exceedingly conservative interpretation of their results is that Greenland was 1.5°C (2.7°F) warmer for the period from 5,000 to 9,000 years ago, which is also the warm period in Eurasia that MacDonald detected. The integrated warming is given by multiplying the time (4,000 years) by the warming (1.5°C), and works out to 6,000 “degree-years.”

Now let’s assume that our dreaded emissions of carbon dioxide spike the temperature there some 4°C. Since we cannot burn fossil fuel forever, let’s spread this over 200 years. That’s a pretty liberal estimate, given that the temperature there still hasn’t exceeded values seen earlier in the 20th century. Anyway, we get 800 (4 × 200) degree-years.

If the ice didn’t come tumbling off Greenland after 6,000 degree-years, how is it going to do so after only 800? The integrated warming of Greenland in the post-ice-age warm period (referred to as the “climatic optimum” in textbooks published prior to global warming hysteria) is over seven times what humans can accomplish in 200 years. Why do we even worry about this?
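
The whole comparison fits in a few lines (all of the figures are from the paragraphs above):

```python
# Integrated "degree-years" of warming: the post-ice-age warm period vs. a
# hypothetical 200-year, 4C anthropogenic spike.

holocene_warming_c = 1.5       # deg C warmer than present (Vinther et al.)
holocene_duration_yr = 4_000   # roughly 5,000 to 9,000 years ago
future_warming_c = 4.0         # assumed CO2-driven temperature spike
future_duration_yr = 200       # assumed length of the fossil-fuel era

holocene = holocene_warming_c * holocene_duration_yr   # 6,000 degree-years
future = future_warming_c * future_duration_yr         # 800 degree-years

print(f"Post-ice-age warm period: {holocene:.0f} degree-years")
print(f"Hypothetical future spike: {future:.0f} degree-years")
print(f"Ratio: {holocene / future:.1f}x")  # 7.5x, i.e. "over seven times"
```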

So we can all sleep a bit better.  Florida will survive.  And, we can also rest assured that the UN will continue its outrageous holiday parties, accomplishing nothing, but living large.  Next year’s is in Durban, South Africa, yet another remote warm spot hours of Jet-A away.

References:

MacDonald, G. M., et al., 2000.  Holocene treeline history and climatic change across Northern Eurasia.  Quaternary Research 53, 302-311.

Schoof, C., 2010. Ice-sheet acceleration driven by melt supply variability. Nature 468, 803-805.

Vinther, B.M., et al., 2009.  Holocene thinning of the Greenland ice sheet. Nature 461, 385-388.

Bad Advice from Gov. Polar Star

In 2006, Michigan Gov. Jennifer Granholm told citizens, “In five years, you’re going to be blown away by the strength and diversity of Michigan’s transformed economy.” When those words were uttered, Michigan’s unemployment rate was 6.7 percent. It’s now almost 13 percent.

Although Michigan’s economic doldrums can’t entirely be pinned on Granholm, her fiscal policies have not helped, such as her higher taxes on businesses.

The Mackinac Center’s Michael LaFaive explains why Granholm’s grandiose proclamation in 2006 hasn’t panned out:

In this case, Gov. Granholm was promoting her administration and the Legislature’s massive expansion of discriminatory tax breaks and subsidies for a handful of corporations. The purpose and main effect of this policy is to provide “cover” for the refusal of the political class to adopt genuine tax, labor and regulatory reforms, which they shy away from because it would anger and diminish the privileges and rewards of unions and other powerful special interests.

LaFaive’s colleague James Hohman recently pointed out that “Michigan’s economy produced 8 percent less in 2009 than it did in 2000 when adjusted for inflation. The nation rose 15 percent during this period.”

Granholm has written an op-ed in Politico on how federal policymakers can “win the race for jobs.” This would be like Karl Rove penning an op-ed complaining about Obama spending too much. Oh wait, bad example.

Granholm advises federal policymakers to create a “Jobs Race to the Top” modeled after the president’s education Race to the Top, which, as Neal McCluskey explains, has not worked as she claims. Granholm’s plan boils down to more federal subsidies to state and local governments and privileged businesses to develop “clean energy” industries.

Typical of the dreamers who believe that the government can effectively direct economic activity, Granholm never considers the costs of government handouts and central planning. A Cato essay on federal energy interventions explains:

The problem is that nobody knows which particular energy sources will make the most sense years and decades down the road. But this level of uncertainty is not unique to the energy industry—every industry faces similar issues of innovation in a rapidly changing world. In most industries, the policy solution is to allow the decentralized market efforts of entrepreneurs and early adopting consumers figure out the best route to the future. Government efforts to push markets in certain directions often end up wasting money, but they can also delay the development of superior alternatives that don’t receive subsidies.

Granholm recently received “Sweden’s Insignia of First Commander, Order of the Polar Star for her work in fostering relations between Michigan and Sweden to promote a clean energy economy” from His Majesty King Carl XVI Gustaf. Unfortunately, her prescription for economic growth would be a royal mistake.