Last week, the Cato Institute held a policy forum on school choice regulations. Two of our panelists, Dr. Patrick Wolf and Dr. Douglas Harris, were part of a team that authored one of the recent studies finding that Louisiana’s voucher program had a negative impact on participating students’ test scores. Why that was the case – especially given the nearly unanimously positive previous findings – was the main topic of our discussion. Wolf and I argued that there is reason to believe that the voucher program’s regulations might have played a role in causing the negative results, while Harris and Michael Petrilli of the Fordham Institute pointed to other factors.
The debate continued after the forum, including a blog post in which Harris raises four “problems” with my arguments. I respond to his criticisms below.
The Infamous Education Productivity Chart
Problem #1: Trying to discredit traditional public schools by placing test score trends and expenditure changes on one graph. These graphs have been floating around for years. They purport to show that spending has increased much faster than expenditures [sic], but it’s obvious that these comparisons make no sense. The two things are on different scales. Bedrick tried to solve this problem by putting everything in percentage terms, but this only gives the appearance of a common scale, not the reality. You simply can’t talk about test scores in terms of percentage changes.
The more reasonable question is this: Have we gotten as much from this spending as we could have? This one we can actually answer and I think libertarians and I would probably agree: No, we could be doing much better than we are with current spending. But let’s be clear about what we can and cannot say with these data.
Harris offers a reasonable objection to the late, great Andrew Coulson’s infamous chart (shown below). Coulson already addressed critics of his chart at length, but Harris is correct that the test scores and expenditures do not really have a common scale. That said, the most important test of a visual representation of data is whether the story it tells is accurate. In this case, it is, as even Harris seems to agree. Adjusted for inflation, spending per pupil in public schools has nearly tripled in the last four decades while the performance of 17-year-olds on the NAEP has been flat.
Producing a similar chart using the NAEP scores of younger students would be misleading because the scale would mask their improvement. But for 17-year-olds, whose performance has been flat on both the NAEP and the SAT, the story the chart tells is accurate.