Tag: Research

Repeat after Me: “We Are All Individuals”

A millennium or so ago, Steve Martin played a stadium with his stand-up act. He got the crowd of tens of thousands to repeat a series of statements in unison. My favorite, for sheer irony: “We are all individuals.”

But, the thing is, we are.

This is why I never cease to be amazed by disagreements like the one currently playing out between the curriculum groups “Common Core” and “Partnership for 21st Century Skills.”

Is there really one curriculum that is right for every child in this nation of 300 million people? Really?

Rather than fighting a winner-take-all Shootout at the O.K. Curriculum, which is what our illustrious leaders seem to want, how about this peace-loving alternative: we let teachers teach whatever and however they want, and we let families choose and pay for whichever schools they think are best for their kids (with financial aid for those who need it).

‘Cause the thing is, a quarter century of econometric research is repeating, in Steve-Martin-like unison: educational freedom works.

A Picture Is Worth $300 Billion

I blogged this morning that the research shows higher public school spending slows the economy, and explained that this is because spending more on public schools doesn’t increase students’ academic performance. Some readers no doubt find that hard to accept. With them in mind, I present the following chart:

Spending vs. Achievement

If public schools had merely maintained the level of productivity they exhibited in 1970, Americans would enjoy a permanent $300 billion annual tax cut. Now THAT would stimulate economic growth.

In Praise of the Brain Drain

The standard view in policy discussions is that emigration of skilled workers from poor countries to rich countries is bad for development because it deprives poor countries of much-needed human capital and reduces growth.

A new study by Michael Clemens at the Center for Global Development challenges this view. Clemens shows that efforts to slow the so-called brain drain “generally brings few benefits to others, and often brings diverse unintended harm.” There is little evidence that limiting skilled migration improves growth or public finances in poor countries, while such a policy may reduce the demand for education, international trade and capital flows, and the diffusion of ideas and norms. There is also a gap between the policy discussion (which takes the negative aspects of the brain drain for granted) and the research literature (which reaches much more ambiguous conclusions). Clemens also rightly stresses choice and freedom as central factors to consider when formulating policy, an element so far missing from the policy discussions.

The study was first released this spring as a background paper to the UN’s forthcoming 2009 Human Development Report, which will focus on migration and incorporate much of Clemens’ work.

Obama to Seek Cap on Federal Pay Raises

USA Today reports that President Obama is seeking a cap on federal pay raises:

President Obama urged Congress Monday to limit cost-of-living pay raises to 2% for 1.3 million federal employees in 2010, extending an income squeeze that has hit private workers and threatens Social Security recipients and even 401(k) investors.

…The president’s action comes when consumer prices have fallen 2.1% in the 12 months ending in July, because of a massive drop in energy prices. The recession has taken an even tougher toll on private-sector wages, which rose only 1.5% for the year ended in June — the lowest increase since the government started keeping track in 1980. Private-sector workers also have been subject to widespread layoffs and furloughs.

Last week, economist Chris Edwards discussed data from the Bureau of Economic Analysis that revealed the large gap between the average pay of federal employees and private workers. His call to freeze federal pay “for a year or two” received attention and criticism (FedSmith, GovExec, Federal Times, Matt Yglesias, Conor Clarke), to which he has responded.

As explained on CNN earlier this year, the pay gap between federal and private workers has been widening for some time now.

Evidence-based for Thee, But Not for Me

One of the things that strikes me as curious about supporters of the No Child Left Behind Act is that they talk regularly about “evidence” and having everything be “research-based,” yet they often ignore or distort evidence in order to portray NCLB as a success. Case in point: an op-ed in today’s New York Times by the Brookings Institution’s Tom Loveless and the Fordham Foundation’s Michael Petrilli.

Truth be told, the piece doesn’t lionize NCLB, criticizing the law for encouraging schools to neglect high-performing students because its primary goal is to improve the performance of low achievers. Fair enough. The problem is, Loveless and Petrilli assert with great confidence that the law is definitely doing the job it was intended to do. “It is clear,” they write, “that No Child Left Behind is helping low-achieving students.”

As you shall see in a moment, that is an utterly unsustainable assertion according to the best available evidence we have: results from the National Assessment of Educational Progress, which carries no consequences for schools or states and, hence, is subject to very little gaming. Ironically, Loveless and Petrilli make their indefensible pronouncement while criticizing a study for failing to use NAEP in reaching its own conclusions about NCLB.

So what’s wrong with stating that NCLB is clearly helping low-achieving students? Let me count the ways (as I have done before):

  1. Numerous reforms, ranging from class-size reduction to school choice to new nutritional standards, have been occurring at the same time as NCLB. It is impossible to isolate which achievement changes are attributable to NCLB and which to the myriad other reforms.
  2. As you will see in a moment, few NAEP score intervals start cleanly at the beginning of NCLB (which is itself a difficult thing to pinpoint), making it impossible to definitively attribute trends to the law.
  3. Gains on NAEP in many periods before NCLB were greater on a per-year basis than gains during NCLB. That means other things going on in education before NCLB were working as well as, or better than, what has been done since the law’s enactment.

So let’s go to the scores. Below I have reproduced score trends for both the long-term and regular NAEP mathematics and reading exams. (The former is supposed to be an unchanging test and the latter subject to revision, though in practice both have been pretty consistent measures.) I have posted the per-year score increases or decreases above the segments that include NCLB (but that might also include years without NCLB). I have also posted score increases in pre-NCLB segments that saw greater improvements than segments including NCLB. (Note that on 8th-grade reading I didn’t highlight pre-NCLB segments with smaller score decreases than seen under NCLB. I didn’t want to celebrate backward movement in any era.)

For context, NCLB was signed into law in January 2002 but it took at least a year to get all the regulations written and more than that for the law to be fully implemented. As a result, I’ll leave it to the reader to decide whether 2002, 2003, or even 2004 should be the law’s starting point, noting only that this problem alone makes it impossible to say that NCLB clearly caused anything. In addition, notice that some of the biggest gains under NCLB are in periods that also include many non-NCLB years, making it impossible to confidently attribute those gains to NCLB.

Please note that I calculated per-year changes based on having data collected in the same way from start to end. So some lines are dashed and others solid (denoting changes in how some students were counted); I calculated changes based on start and end points for the type of line used for the period. I also rounded to one decimal place to save space. Finally, I apologize if this is hard to read—I’m no computer graphics wizard—and would direct you to NAEP’s website to check out the data for yourself.
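The per-year figures described above boil down to simple arithmetic: the score change over a segment divided by the segment’s length in years, rounded to one decimal place. A minimal sketch (the scores below are hypothetical placeholders, not actual NAEP values):

```python
def per_year_change(start_score, end_score, start_year, end_year):
    """Per-year NAEP scale-score change over a segment,
    rounded to one decimal place."""
    return round((end_score - start_score) / (end_year - start_year), 1)

# Hypothetical illustration: a segment from 2000 to 2003 in which
# the scale score rises from 224 to 234 works out to +3.3 points/year.
print(per_year_change(224, 234, 2000, 2003))  # -> 3.3
```

Comparing these per-year rates, rather than raw gains, is what lets segments of different lengths (some spanning NCLB years, some not) be set side by side.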

4th Grade Regular Math

8th Grade Regular Math

4th Grade Regular Reading

8th Grade Regular Reading

Age 9 Long-term Math

Age 13 Long-term Math

Age 17 Long-term Math

Age 9 Long-term Reading

Age 13 Long-term Reading

Age 17 Long-term Reading

So what do the data show us? First, numerous periods that didn’t include NCLB saw growth for low-achieving students equal to or greater than that of periods with NCLB. That means much of what we were doing before NCLB was apparently more effective than what we’ve been doing under NCLB, though it is impossible to tell from the data what any of those things are. In addition, it is notable that the periods with the greatest gains that include NCLB are typically the ones that also include non-NCLB years, such as 2000 to 2003 for 4th- and 8th-grade math. That means there is inescapable doubt about what caused the gains in those periods most favorable to NCLB. And, let’s not forget, 4th-grade reading saw a downward trend from 2002 to 2003, and 8th-grade reading dropped from 2002 to 2005. That suggests that NCLB was actually decreasing scores for low achievers, and one would have to acknowledge as much if one were also inclined to give NCLB credit for all gains.

And so, the evidence is absolutely clear in one regard, but in the opposite direction of what Loveless and Petrilli suggest: One thing you definitely cannot say about NCLB is that it has clearly helped low achievers. And yet, they said it anyway!

Response to Conor Clarke, Part I

Last week Conor Clarke at The Atlantic blog, apparently as part of a running argument with Jim Manzi, raised four substantive issues with my study, “What to Do About Climate Change,” which Cato published last year. Mr. Clarke deserves a response, and I apologize for not getting to this sooner. Today, I’ll address the first part of his first comment. I’ll address the rest of his comments over the next few days.

Conor Clarke: 

(1) Goklany’s analysis does not extend beyond the 21st century. This is a problem for two reasons. First, climate change has no plans to close shop in 2100. Even if you believe GDP will be higher in 2100 with unfettered global warming than without, it’s not obvious that GDP would be higher in the year 2200 or 2300 or 3758. (This depends crucially on the rate of technological progress, and as Goklany’s paper acknowledges, that’s difficult to model.) Second, the possibility of “catastrophic” climate change events – those with low probability but extremely high cost – becomes real after 2100.

Response:  First, I wouldn’t put too much stock in analyses purporting to extend out to the end of the 21st century, let alone beyond that, for numerous reasons, some of which are laid out on pp. 2-3 of the Cato study. As noted there, according to a paper commissioned for the Stern Review, “changes in socioeconomic systems cannot be projected semi-realistically for more than 5–10 years at a time.”

Second, regarding Mr. Clarke’s statement that, “Even if you believe GDP will be higher in 2100 with unfettered global warming than without, it’s not obvious that GDP would be higher in the year 2200 or 2300 or 3758,” I should note that the conclusion that net welfare in 2100 (measured by net GDP per capita) would be higher in the warmest world is not based on a belief. It follows inexorably from Stern’s own analysis.

Third, despite my skepticism of long-term estimates, I have, for the sake of argument, extended the calculation to 2200. See here. Once again, I used the Stern Review’s estimates, not because I think they are particularly credible (see below), but for the sake of argument. Specifically, I assumed that losses in welfare due to climate change under the IPCC’s warmest scenario would, per the Stern Review’s 95th percentile estimate, be equivalent to 35.2 percent of GDP in 2200. [Recall that Stern’s estimates account for losses due to market impacts, non-market (i.e., environmental and public health) impacts, and the risk of catastrophe, so one can’t argue that only market impacts were considered.]
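The arithmetic behind this calculation is straightforward: deduct the assumed climate loss share from gross GDP per capita. A minimal sketch follows; the 35.2 percent figure is the Stern Review estimate cited above, while the gross growth factor in the example is a hypothetical placeholder, not a number from the study:

```python
# Stern Review 95th-percentile estimate: climate losses equivalent
# to 35.2% of GDP in 2200 under the IPCC's warmest scenario.
STERN_LOSS_SHARE = 0.352

def net_gdp_factor(gross_growth_factor, loss_share=STERN_LOSS_SHARE):
    """Net GDP per capita in 2200, expressed as a multiple of the
    baseline year, after deducting climate-change losses."""
    return gross_growth_factor * (1 - loss_share)

# Hypothetical illustration: if gross GDP per capita in 2200 were
# 50x the 1990 baseline, net GDP per capita would still be ~32x.
print(round(net_gdp_factor(50), 1))  # -> 32.4
```

The point of the exercise is that even a loss this large only scales growth down by about a third, so strongly growing scenarios remain far richer than the baseline after the deduction.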

The results, summarized in the following figure, indicate that even if one uses the Stern Review’s inflated impact estimates under the warmest IPCC scenario, net GDP in 2200 ought to be higher in the warmest world than in cooler worlds for both developing and industrialized countries.

Source: Indur M. Goklany, “Discounting the Future,” Regulation 32: 36-40 (Spring 2009).

The costs of climate change used to develop the above figure are, most likely, overestimated because they do not properly account for increases in future adaptive capacity consistent with the level of net economic development resulting from Stern’s own estimates (as shown in the above figure). This figure shows that even after accounting for losses in GDP per capita due to climate change – and inflating these losses – net GDP per capita in 2200 would be between 16 and 85 times higher than it was in the baseline year (1990). No less important, Stern’s estimate of the costs of climate change neglects the secular technological change that ought to occur during the 210-year period extending from the base year (1990) to 2200. In fact, as shown here, empirical data show that for most environmental indicators that have a critical effect on human well-being, technology has, over decades-long time frames, reduced impacts by one or more orders of magnitude.

As a gedanken experiment, compare technology (and civilization’s adaptive capacity) in 1799 versus 2009. How credible would a projection for 2009 have been if it didn’t account for technological change from 1799 to 2009?

I should note that some people tend to dismiss the above estimates of GDP on the grounds that it is unlikely that economic development, particularly in today’s developing countries, will be as high as indicated in the figure.  My response to this is that they are based on the very assumptions that drive the IPCC and the Stern Review’s emissions and climate change scenarios. So if one disbelieves the above GDP estimates, then one should also disbelieve the IPCC and the Stern Review’s projection for the future.

Fourth, even if analysis that appropriately accounted for increases in adaptive capacity had shown that in 2200 people would be worse off in the richest-but-warmest world than in cooler worlds, I wouldn’t get too excited just yet. Even assuming a 100-year lag time between the initiation of emission reductions and a reduction in global temperature – because of a combination of the inertia of the climate system and the turnover time for the energy infrastructure – we don’t need to do anything drastic till after 2100 (= 2200 minus 100 years), unless monitoring shows before then that matters are actually becoming worse (as opposed to merely changing), in which case we should certainly mobilize our responses. [Note that change doesn’t necessarily equate to worsening. One has to show that a change would be for the worse. Unfortunately, much of the climate change literature skips this crucial step.]

In fact, waiting-and-preparing-while-we-watch (AKA watch-and-wait) makes the most sense, just as it does for many problems (e.g., some cancers) where the cost of action is currently high relative to its benefit, benefits are uncertain, and technological change could relatively rapidly improve the cost-benefit ratio of controls. Within the next few decades, we should have a much better understanding of climate change and its impacts, and the cost of controls ought to decline in the future, particularly if we invest in research and development for mitigation. In the meantime we should spend our resources on solving today’s first-order problems – and climate change simply doesn’t make that list, as shown by the only exercises that have ever bothered to compare the importance of climate change relative to other global problems. See here and here. As is shown in the Cato paper (and elsewhere), this also ought to reduce vulnerability and increase resiliency to climate change.

In the next installment, I’ll address the second point in Mr. Clarke’s first point, namely, the fear that “the possibility of ‘catastrophic’ climate change events – those with low probability but extremely high cost – becomes real after 2100.”

Week in Review: The War on Drugs, SCOTUS Prospects and Credit Card Regulation

White House Official Says Government Will Stop Using Term ‘War on Drugs’

The Wall Street Journal reports that White House Drug Czar Gil Kerlikowske is calling for a new strategy on federal drug policy and is putting a stop to the term “War on Drugs.”

The Obama administration’s new drug czar says he wants to banish the idea that the U.S. is fighting ‘a war on drugs,’ a move that would underscore a shift favoring treatment over incarceration in trying to reduce illicit drug use…. The Obama administration is likely to deal with drugs as a matter of public health rather than criminal justice alone, with treatment’s role growing relative to incarceration, Mr. Kerlikowske said.

Will Kerlikowske’s words translate into an actual shift in policy? Cato scholar Ted Galen Carpenter calls it a step in the right direction, but remains skeptical about a true change in direction. “A change in terminology won’t mean much if the authorities still routinely throw people in jail for violating drug laws,” he says.

Cato scholar Tim Lynch channels Nike and says when it comes to ending the drug war, “Let’s just do it.” In a Cato Daily Podcast, Lynch explained why the war on drugs should end:

Cato scholars have long argued that our current drug policies have failed, and that Congress should deal with drug prohibition the way it dealt with alcohol prohibition. With the door seemingly open for change, Cato research shows the best way to proceed.

In a recent Cato study, Glenn Greenwald examined Portugal’s successful implementation of a drug decriminalization program, in which drug users are offered treatment instead of jail time. Drug use has actually dropped since the program began in 2001.

In the 2009 Cato Handbook for Policymakers, David Boaz and Tim Lynch outline a clear plan for ending the drug war once and for all in the United States.

Help Wanted: Supreme Court Justice

Justice David Souter announced his retirement from the Supreme Court at the end of last month, sparking national speculation about his replacement.

Calling Souter’s retirement “the end of an error,” Cato senior fellow Ilya Shapiro makes some early predictions as to whom President Obama will choose to fill the seat in October. Naturally, there will be pushback regardless of whom he picks. Shapiro and Cato scholar Roger Pilon weigh in on how the opposition should react to the appointment.

Shapiro: “Instead of shrilly opposing whomever Obama nominates on partisan grounds, now is the time to show the American people the stark differences between the two parties on one of the few issues on which the stated Republican view continues to command strong and steady support nationwide. If the party is serious about constitutionalism and the rule of law, it should use this opportunity for education, not grandstanding.”

Obama Pushing for Credit Card Regulation

President Obama has called for tighter regulation of credit card companies, a move that “would prohibit so-called double-cycle billing and retroactive rate hikes and would prevent companies from giving credit cards to anyone under 18,” according to CBSNews.com.

But Cato analyst Mark Calabria argues that this is no time to be reducing access to credit:

We are in the midst of a recession, which will not turn around until consumer spending turns around — so why reduce the availability of consumer credit now?

Congress should keep in mind that credit cards have been a significant source of consumer liquidity during this downturn. While few of us want to have to cover our basic living expenses on our credit card, that option is certainly better than going without those basic needs. The wide availability of credit cards has helped to significantly maintain some level of consumer purchasing, even while confidence and other indicators have nosedived.

In a Cato Daily Podcast, Calabria explains how credit card companies have been a major source of liquidity for a population that is strapped for cash to pay for everyday goods.