Archives: 11/2013

Cato at the Federalist Society Convention

The Federalist Society came into being in 1982 after a small group of conservatives and libertarians, concerned about the state of the law and the legal academy in particular, gathered for a modest conference at the Yale Law School, after which two law-student chapters were formed at Yale and at the University of Chicago. Quickly thereafter, chapters sprang up at other law schools across the country. And in 1986 those students, now lawyers, started forming lawyer chapters in the cities where they practiced. Today the Federalist Society is more than 55,000 strong, its membership drawn from all corners of the law and beyond.

Toward the end of this past week many of those members gathered in Washington for the society’s 27th annual National Lawyers Convention, highlighted on Thursday evening by a gala black-tie dinner, at the conclusion of which Judge Diane Sykes of the Seventh Circuit Court of Appeals treated the audience to a wide-ranging interview of Justice Clarence Thomas. The convention sessions, concluding late Saturday, have now been posted at the Federalist Society’s website. As a look at the various panels and programs will show, this year’s theme, “Textualism and the Role of Judges,” was addressed in a wide variety of domains.

Concerning the role of judges, classical liberals and libertarians, who have long urged judges to be more engaged than many conservatives have thought proper, will find several panels of particular interest. Our own Walter Olson spoke about the new age of litigation financing, for example, while Nick Rosenkranz addressed textualism and the Bill of Rights – a panel that also included the spirited remarks of Cato adjunct scholar Richard Epstein. See also Epstein’s discussion of intellectual property on another panel that first day.

Then too you won’t want to miss senior fellow Randy Barnett’s treatment of textualism and constitutional interpretation the next day, especially as he spars with two opponents on the left, or his Saturday debate against Judge J. Harvie Wilkinson III of the Fourth Circuit Court of Appeals, where the proposition before the two was “Resolved: Courts are Too Deferential to the Legislature.” And finally, our own Trevor Burrus was on hand for a book signing: The book he edited, A Conspiracy Against Obamacare: The Volokh Conspiracy and the Health Care Case, has just come out and is must reading for those who want to see how the issue of the day, and many days to come, was teed up, legally, by a dedicated band of libertarians before it reached the Supreme Court.

Intellectual Property in Trade Agreements at a Crossroads

Last week, the big news in the trade agreement arena was the leak of a draft text on intellectual property (IP) in the Trans-Pacific Partnership (TPP) talks.  Tim Lee of the Washington Post (and formerly a Cato adjunct scholar) explains what’s in it:

The leaked draft is 95 pages long, and includes provisions on everything from copyright damages to rules for marketing pharmaceuticals. Several proposed items are drawn from Hollywood’s wish list. The United States wants all signatories to extend their copyright terms to the life of the author plus 70 years for individual authors, and 95 years for corporate-owned works. The treaty includes a long section, proposed by the United States, requiring the creation of legal penalties for circumventing copy-protection schemes such as those that prevent copying of DVDs and Kindle books.

The United States has also pushed for a wide variety of provisions that would benefit the U.S. pharmaceutical and medical device industries. The Obama administration wants to require the extension of patent protection to plants, animals, and medical procedures. It wants to require countries to offer longer terms of patent protection to compensate for delays in the patent application process. The United States also wants to bar the manufacturers of generic drugs from relying on safety and efficacy information that was previously submitted by a brand-name drug maker — a step that would make it harder for generic manufacturers to enter the pharmaceutical market and could raise drug prices.

While the critics pounced, defenders defended.  Here’s the MPAA:

What the text does show … is that despite much hyperbole from free trade opponents, the U.S. has put forth no proposals that are inconsistent with U.S. law.

In response to this statement, it is worth noting two things.  First, many of the critics of this IP text are not “free trade opponents.”  They simply oppose overly strong IP protections.  Many of them are actually for free trade, or at least not actively against it.   Second, while these proposals may not be inconsistent with U.S. law, that doesn’t make them good policy.

I have a feeling that the IP aspect of the TPP talks is going to be very important for the future of IP in trade agreements.  IP was kind of slipped into trade agreements quietly back in the early 1990s.  But the recent backlash has been strong.  How the TPP fares politically here in the U.S. – if and when negotiations are completed – could tell us a lot about what the future holds for IP in trade agreements.

We’re On Instagram!

Are you on Instagram? The Cato Institute is!

We joined the popular image-sharing site in late October. Follow us at http://instagram.com/catoinstitute.

Wondering how YOU can spread the message of liberty on Instagram? Make sure to come to this month’s New Media Lunch. Join the Cato Institute this Thursday at noon for a lunchtime presentation, followed by a roundtable discussion. Allen Gannett of TrackMaven will highlight some interesting discoveries from TrackMaven’s recently released study of Fortune 500 companies on Instagram and share tips for translating their success to the nonprofit world. Make sure to register, as space is limited.

Not in D.C.? We will be livestreaming Allen’s presentation. Just navigate to http://www.cato.org/live at noon Eastern Time this Thursday, November 21st. You can also join the conversation on Twitter using #NewMediaLunch.

With or Without a “Pause,” Climate Models Still Project Too Much Warming

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

A new paper that just hit the scientific literature argues that the apparent pause in the rise of global average surface temperatures during the past 16 years was really just a slowdown.

As you may imagine, this paper, by Kevin Cowtan and Robert Way, is being hotly discussed on the global warming blogs, with reactions ranging from a warm embrace by the global-warming-is-going-to-be-bad-for-us crowd to revulsion from the human-activities-have-no-effect-on-the-climate claque.

The lukewarmers (a school we take some credit for establishing) seem to be taking the results in stride.  After all, the “pause,” curious as it is (or was), is not central to the primary argument: yes, human activities are pressuring the planet to warm, but the rate of warming is going to be much slower than that projected by the collection of global climate models upon which mainstream projections of future climate change – and the resulting climate alarm (i.e., calls for emission regulations, etc.) – are based.

Under the adjustments to the observed global temperature history put together by Cowtan and Way, the models fare a bit better than they do with the unadjusted temperature record. That is, the observed temperature trend over the past 34 years (the period of record analyzed by Cowtan and Way) is a tiny bit closer to the average trend from the collection of climate models used in the new report from the U.N.’s Intergovernmental Panel on Climate Change (IPCC) than is the old temperature record.

Specifically, while the trend in observed global temperatures from 1979-2012 as calculated by Cowtan and Way is 0.17°C/decade, it is 0.16°C/decade in the temperature record compiled by the U.K. Hadley Center (the record that Cowtan and Way adjusted).  Because of the sampling errors associated with trend estimation, these values are not significantly different from one another.  Whether the 0.17°C/decade is significantly different from the climate model average simulated trend during that period of 0.23°C/decade is discussed extensively below.
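For readers who want to see what such a trend-plus-sampling-error calculation looks like, here is a minimal sketch in Python. The annual anomalies are invented for illustration; they are not the HadCRUT4 or Cowtan and Way data, and the calculation ignores autocorrelation in the residuals, which in practice widens the uncertainty further.

import numpy as np

# Illustrative only: synthetic annual global temperature anomalies (degrees C),
# NOT the HadCRUT4 or Cowtan and Way series.
rng = np.random.default_rng(0)
years = np.arange(1979, 2013)                     # 1979-2012, 34 annual values
anoms = 0.017 * (years - years[0]) + rng.normal(0, 0.1, years.size)

# Ordinary least squares fit of anomaly against (centered) year.
x = years - years.mean()
X = np.column_stack([np.ones_like(x), x])
beta, _, _, _ = np.linalg.lstsq(X, anoms, rcond=None)
slope = beta[1]                                   # degrees C per year

# Standard error of the slope from the residual variance.
resid = anoms - X @ beta
sigma2 = resid @ resid / (years.size - 2)
se_slope = np.sqrt(sigma2 / np.sum(x ** 2))

print(f"trend: {10 * slope:.3f} +/- {10 * 1.96 * se_slope:.3f} C/decade (95% CI)")

The point is simply that a slope estimated from roughly 34 noisy annual values carries an uncertainty comfortably larger than a 0.01°C/decade difference between two datasets.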

But, suffice it to say that an insignificant difference of 0.01°C/decade in the global trend measured over more than 30 years is pretty small beer and doesn’t give model apologists very much to get happy over.

Instead, the attention is being deflected to “The Pause” – the leveling off of global surface temperatures during the past 16 years (give or take). Here, the new results from Cowtan and Way show that during the period 1997-2012, instead of a statistically insignificant rise at a rate of 0.05°C/decade, as in the “old” temperature record, the rise becomes a statistically significant 0.12°C/decade. “The Pause” is transformed into “The Slowdown,” and alarmists rejoice because global warming hasn’t stopped after all. (If the logic sounds backwards, it does to us as well: if you were worried about catastrophic global warming, wouldn’t you rejoice at findings indicating that future climate change is going to be only modest, more than at results to the contrary?)

The science behind the new Cowtan and Way research is still being digested by the community of climate scientists and other interested parties alike. The main idea is that the existing compilations of the global average temperature are very data-sparse in the high latitudes. And since the Arctic (more so than the Antarctic) is warming faster than the global average, the lack of data there may mean that the global average temperature trend is being underestimated. Cowtan and Way developed a methodology that relies on other, more limited sources of temperature information from the Arctic (such as floating buoys and satellite observations) to estimate how the surface temperature was behaving in regions lacking more traditional temperature observations (the authors released an informative video explaining their research, which may help you better understand what they did).

They found that the warming in the data-sparse regions was progressing faster than the global average (especially during the past couple of years) and that when they included the data they derived for these regions in the computation of the global average temperature, the global trend was higher than previously reported – just how much higher depended on the period over which the trend was calculated. As noted above, the trend more than doubled over the period 1997-2012, but barely increased at all over the longer period 1979-2012.
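The coverage-bias argument itself is easy to illustrate with a toy calculation. The sketch below uses an invented two-band “globe” – a well-sampled band covering most of the area and a fast-warming, poorly sampled polar band – to show how omitting the polar band from an area-weighted average pulls the apparent global trend down. Cowtan and Way’s actual method (kriging and satellite-aided infilling of a gridded dataset) is far more sophisticated than this.

import numpy as np

# Toy two-band globe: a well-sampled band covering 95% of the area warming at
# 0.16 C/decade, and a poorly sampled polar band (5% of the area) warming at
# 0.5 C/decade. All numbers are invented for illustration.
years = np.arange(1997, 2013)
t = (years - years[0]) / 10.0                     # time in decades

trend_main, trend_polar = 0.16, 0.50              # C/decade (illustrative)
area_main, area_polar = 0.95, 0.05                # area weights

band_main = trend_main * t
band_polar = trend_polar * t

# Global mean with full coverage vs. with the polar band simply omitted
# (i.e., averaging only over the sampled area).
full = area_main * band_main + area_polar * band_polar
no_polar = band_main

def decadal_trend(y):
    return 10 * np.polyfit(years, y, 1)[0]

print(f"trend with polar band included: {decadal_trend(full):.3f} C/decade")
print(f"trend with polar band omitted:  {decadal_trend(no_polar):.3f} C/decade")

With these invented numbers, leaving the polar band out shaves about 0.02°C/decade off the toy global trend.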

Figure 1 shows the impact on the global average temperature trend for all trend lengths between 10 and 35 years (incorporating our educated guess as to what the 2013 temperature anomaly will be), and compares that to the distribution of climate model simulations of the same periods. Statistically speaking, instead of there being a clear inconsistency (i.e., the observed trend value falls outside of the range which encompasses 95% of all modeled trends) between the observations and the climate model simulations for lengths ranging generally from 11 to 28 years, and a marginal inconsistency (i.e., the observed trend value falls outside of the range which encompasses 90% of all modeled trends) for most of the other lengths, the observations now track the marginal-inconsistency line closely, although trends of 17, 19, 20, and 21 years in length remain clearly inconsistent with the collection of modeled trends. Still, throughout the entirety of the 35-yr period (ending in 2013), the observed trend lies far below the model average simulated trend (additional information on the impact of the new Cowtan and Way adjustments on the modeled/observed temperature comparison can be found here).


Figure 1. Temperature trends ranging in length from 10 to 35 years (ending in a preliminary 2013) calculated using the data from the U.K. Hadley Center (blue dots), the adjustments to the U.K. Hadley Center data made by Cowtan and Way (red dots) extrapolated through 2013, and the average of climate model simulations (black dots). The range that encompasses 90% (light grey lines) and 95% (dotted black lines) of climate model trends is also included.
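For those curious how a comparison like Figure 1 is put together, here is a rough sketch of the bookkeeping: for each trend length, compute the trailing observed trend ending in the final year and ask where it falls within the distribution of trends from an ensemble of model runs. Both the “observed” series and the 100 “model” runs below are randomly generated stand-ins, not the actual Hadley Center record or the model archive used in the figure.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2014)                     # 35 years ending in 2013, as in Figure 1

# Stand-in data (invented): one "observed" series and 100 "model" runs.
obs = 0.016 * (years - years[0]) + rng.normal(0, 0.09, years.size)
models = (0.023 * (years - years[0])[None, :]
          + rng.normal(0, 0.09, (100, years.size)))

def trailing_trend(series, length):
    """Least-squares trend (C/decade) over the final `length` years."""
    y, x = series[-length:], years[-length:]
    return 10 * np.polyfit(x, y, 1)[0]

for length in range(10, 36):
    obs_trend = trailing_trend(obs, length)
    model_trends = np.array([trailing_trend(m, length) for m in models])
    lo2_5, lo5 = np.percentile(model_trends, [2.5, 5])
    flag = ("below the 95% model range" if obs_trend < lo2_5
            else "below the 90% model range" if obs_trend < lo5
            else "within the model range")
    print(f"{length:2d}-yr trend: {obs_trend:5.2f} C/decade ({flag})")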

The Cowtan and Way analysis is an attempt at using additional types of temperature information, or extracting “information” from records that have already told their stories, to fill in the missing data in the Arctic.  There are concerns about the appropriateness of both the data sources and the methodologies applied to them.  

A major one concerns the applicability of satellite data at such high latitudes.  The nature of the satellite’s orbit forces it to look “sideways” in order to sample polar regions.  In fact, the orbit is such that the highest-latitude areas cannot be seen at all.  This is compounded by the fact that cold regions can develop substantial “inversions” of near-ground temperature, in which temperature actually rises with height, so that there is no straightforward relationship between the surface temperature and the temperature of the lower atmosphere, which is what the satellites actually measure. If the nature of this complex relationship is not constant in time, an error is introduced into the Cowtan and Way analysis.

Another unresolved problem comes up when extrapolating land-based weather station data far into the Arctic Ocean.  While land temperatures can bounce around a lot, much of the ocean is partially ice-covered for many months.  Under “well-mixed” conditions, this constrains the near-surface temperature to values near the freezing point of salt water, whether or not the associated land station is much warmer or colder.

You can run this experiment yourself by filling a glass with a mix of ice and water and then making sure it is well mixed.  The water surface temperature must hover around 32°F until all the ice melts.  Given that the near-surface air temperature is close to the water temperature, the limitations of the land data become obvious.
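The physics behind that kitchen experiment is just latent heat: as long as any ice remains, the energy entering the glass goes into melting it rather than raising the temperature. A back-of-the-envelope sketch (with an assumed, made-up heating rate) shows the thermometer staying pinned at the freezing point until the ice is gone.

# Toy energy balance for a well-mixed glass of ice water (illustrative numbers).
LATENT_HEAT_FUSION = 334.0   # kJ per kg of ice melted
SPECIFIC_HEAT_WATER = 4.186  # kJ per kg per degree C

ice_kg, water_kg, temp_c = 0.10, 0.30, 0.0   # start: 100 g ice, 300 g water at 0 C
heat_per_minute_kj = 2.0                     # assumed steady heat input from the room

for minute in range(1, 41):
    q = heat_per_minute_kj
    if ice_kg > 0:
        melted = min(ice_kg, q / LATENT_HEAT_FUSION)
        ice_kg -= melted
        water_kg += melted
        q -= melted * LATENT_HEAT_FUSION     # leftover heat only if the ice just ran out
    if q > 0:                                # the water warms only once the ice is gone
        temp_c += q / (water_kg * SPECIFIC_HEAT_WATER)
    if minute % 10 == 0:
        print(f"min {minute:2d}: ice {ice_kg*1000:5.1f} g, water temp {temp_c:4.1f} C")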

Considering all of the above, we advise caution with regard to Cowtan and Way’s findings.  While adding high-Arctic data should increase the observed trend, the nature of the data means that the amount of additional rise is subject to further revision.  As they themselves note, there’s quite a bit more work to be done in this area.

In the meantime, their results have tentatively breathed a small hint of life back into the climate models, basically buying them a bit more time – time for either the observed temperatures to start rising rapidly, as current models expect, or for the modelers to try to fix/improve the cloud processes, oceanic processes, and other processes of variability (both natural and anthropogenic) that lie behind the models’ otherwise clearly overheated projections.

We’ve also taken a look at how “sensitive” the results are to the length of the ongoing pause/slowdown.  Our educated guess is that the “bit” of time that the Cowtan and Way findings bought the models is only a few years long, and it is a fact, not a guess, that each additional year at the current rate of lukewarming increases the disconnection between the models and reality.


Reference:

Cowtan, K., and R. G. Way, 2013. Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quarterly Journal of the Royal Meteorological Society, doi: 10.1002/qj.2297.


The Aftermath of Chile’s Election

Chile went to the polls yesterday in what was perhaps the most important presidential election since the return of democracy in 1990. Many foreign observers focused on the curiosity that the two leading candidates were both daughters of Air Force generals who chose opposing sides during the military coup that toppled socialist president Salvador Allende in 1973. But what was at stake in this election wasn’t Chile’s past, but its future.

Let’s first recapitulate where Chile stands today. Thanks to the free-market reforms implemented since 1975 by the military government of Augusto Pinochet – reforms that were subsequently deepened by the democratic center-left governments that have ruled the country since 1990 – Chile can boast the following accomplishments:

  • It’s the freest economy in Latin America and it stands 11th in the world (ahead of the United States) in the Economic Freedom of the World report.
  • It has more than tripled its income per capita since 1990 to $19,100 (PPP), which is the highest in Latin America.
  • According to the IMF, by 2017 Chile will reach an income per capita of $23,800, which is the official threshold to become a developed country.
  • According to the UN Economic Commission on Latin America and the Caribbean (ECLAC), Chile has the most impressive poverty reduction record in Latin America in the last two decades. The poverty rate went down from 45% in the mid-1980s to 11% in 2011, the lowest in the region.
  • It has the strongest democratic institutions of Latin America according to the Rule of Law Index of the World Justice Project.
  • It’s the least corrupt country in Latin America according to Transparency International.
  • Along with Costa Rica and Uruguay, it has the best record in Latin America on political rights and civil liberties, according to Freedom House.
  • High income inequality, which has long been a sore point for many, has decreased in the last decade.

With such an impressive record, it’s quite puzzling that the leading candidate, former president Michelle Bachelet, is running again on a platform calling for changes that would significantly alter the Chilean model by increasing the role of the government in the economy. In particular, Bachelet is proposing free higher education for everyone, the abolition of for-profit private schools and universities, the introduction of a state-owned pension fund into the country’s private pension system, higher taxes on businesses and professionals, and even a new constitution.

Bans on Child Labor

Only a heartless libertarian could possibly object to bans on child labor, right? After all, no one wants to live in some Dickensian dystopia in which children toil endlessly under brutal conditions.

Unless, of course, bans harm, rather than help, both children and their families. And in a new working paper, economists Prashant Bharadwaj (UCSD), Leah Lakdawala (Michigan State), and Nicholas Li (Toronto) find just that.  They

… examine the consequences of India’s landmark legislation against child labor, the Child Labor (Prohibition and Regulation) Act of 1986. … [and] show that child wages decrease and child labor increases after the ban. These results are consistent with a theoretical model … in which families use child labor to reach subsistence constraints and where child wages decrease in response to bans, leading poor families to utilize more child labor. The increase in child labor comes at the expense of reduced school enrollment.
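The subsistence mechanism described in that passage can be seen in a stylized numerical sketch – our own illustration of the logic, not the authors’ actual model. If a family must reach a fixed subsistence income and fills the gap between adult earnings and that target with child labor, then anything that pushes the child wage down (for example, employers demanding a discount to compensate for the legal risk of hiring children after a ban) forces more hours of child work, not fewer.

# Stylized illustration of a subsistence constraint (not the paper's model).
SUBSISTENCE = 100.0      # income the family must reach (arbitrary units)
ADULT_INCOME = 70.0      # income earned by the adults

def child_hours_needed(child_wage):
    """Hours of child labor needed to close the gap to subsistence."""
    gap = max(0.0, SUBSISTENCE - ADULT_INCOME)
    return gap / child_wage

# Before the ban, and after the ban depresses the child wage.
for wage in (2.0, 1.5, 1.0):
    print(f"child wage {wage:.1f} -> hours of child labor {child_hours_needed(wage):.1f}")

With these invented numbers, cutting the child wage in half doubles the hours of child labor required to reach subsistence.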

And it gets worse.  The authors

… also examine the effects of the ban at the household level. Using linked consumption and expenditure data, [they] find that along various margins of household expenditure, consumption, calorie intake and asset holdings, households are worse off after the ban.

Good intentions are just that: intentions, not results.  The law of unintended consequences should never be ignored.

Supreme Court Should End Advertiser’s Kafkaesque Nightmare

Douglas Walburg faces potential liability of $16-48 million. What heinous acts caused such astronomical damages? A violation of 47 C.F.R. § 64.1200(a)(3)(iv), an FCC regulation that enables lawsuits against senders of unsolicited faxes.

Walburg, however, never sent any unsolicited faxes; he was sued under the regulation by a class of plaintiffs for failing to include opt-out language in faxes sent to those who expressly authorized Walburg to send them the faxes. 

The district court ruled for Walburg, holding that the regulation should be narrowly interpreted so as to require opt-out notices only for unsolicited faxes. But on appeal, the Federal Communications Commission, not previously party to the case, filed an amicus brief explaining that its regulation applies to previously authorized faxes too. Walburg argued that the FCC lacked statutory authority to regulate authorized advertisements. In response, the FCC filed another brief, arguing that the Hobbs Act prevents federal courts from considering challenges to the validity of FCC regulations when raised as a defense in a private lawsuit. Although the U.S. Court of Appeals for the Eighth Circuit recognized that Walburg’s argument may have merit, it declined to hear it and ruled that the Hobbs Act indeed prevents judicial review of administrative regulations except on appeal from prior agency review. 

In this case, however, Walburg couldn’t have raised his challenge in an administrative setting because the regulation at issue outsources enforcement to private parties in civil suits! Moreover, because he wasn’t sued until after the period for agency review had lapsed, he has no plausible way to defend himself from the ruinous liability he will face if he isn’t permitted to challenge the regulation’s validity. Rather than accept those odds, Walburg has petitioned the Supreme Court to hear his case, arguing that the Eighth Circuit was wrong to deny him the right to judicial review without first having to initiate a separate (and impossible) administrative review.

Cato agrees, and has joined the National Federation of Independent Business on an amicus brief supporting Walburg’s petition. We argue that the Supreme Court should take the case because the Eighth Circuit’s ruling permits administrative agencies to insulate themselves from judicial review while denying those harmed by their regulations the basic due-process right to meaningfully defend themselves. The case also offers the Court an opportunity to resolve lower-court disputes about when the right to judicial review arises and whether a defendant can be forced to bear the burden of establishing a court’s jurisdiction.

This case raises important due-process concerns, and the Court would do well to adopt a rule consistent with the Eleventh Circuit’s holding on this issue – one that protects the right to immediately and meaningfully defend oneself against unlawful regulations. Otherwise, more and more Americans will find themselves on the wrong end of obscene regulatory penalties imposed by unaccountable government agencies, with no real means of defending themselves.

The Court will decide whether to take Walburg v. Nack early in the new year.