It looks like we have another case of cherry-picking the evidence – but this time it is shockingly misleading. Instead of simply pretending that the evidence on school choice is “mixed,” the Center for American Progress went a step further and declared the voucher evidence “highly negative.” They are absolutely wrong. Here’s why.
The Four Evaluations
Their review of the research relies on only four voucher studies – Indiana, Ohio, Louisiana, and D.C. Two of these studies – Indiana and Ohio – are non-experimental, meaning that the researchers could not establish definitive causal relationships. But let’s go ahead and entertain them anyway.
The Ohio study used an econometric technique called regression discontinuity design, which can replicate experimental results only when a large number of students sit right around a treatment cutoff point. The intuition behind the method is that whether a student falls just above or just below the cut point is essentially random chance, so students near the cutoff are as good as randomly assigned to the voucher treatment.
The Ohio program used a cutoff variable – the performance of the child’s public school – to determine program eligibility. However, the researchers used student observations that were not close to the cut point, and even removed the observations nearest the discontinuity. In other words, the authors could not establish causality, and the children eligible for the voucher program were likely less advantaged than those who were ineligible. After all, only students in lower-performing public schools were eligible for the choice program.
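To see why this matters, here is a small simulation of the intuition. All the numbers are invented for illustration (a hypothetical cutoff of 50, a made-up +2-point effect) – this is a sketch of the method, not the actual Ohio data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a school-performance score determines voucher
# eligibility (score below 50 -> eligible). All values are invented.
n = 50_000
score = rng.uniform(0, 100, n)
eligible = score < 50.0

# Outcomes improve smoothly with school performance, and eligibility
# adds a hypothetical +2 points on top of that trend.
outcome = 0.3 * score + 2.0 * eligible + rng.normal(0.0, 2.0, n)

# Naive full-sample comparison is badly biased: eligible students come
# from lower-performing schools, so they look far worse overall.
naive = outcome[eligible].mean() - outcome[~eligible].mean()

# RDD comparison: only students within a narrow band of the cutoff,
# where landing on one side or the other is essentially chance.
bw = 1.0
lo = (score >= 50 - bw) & (score < 50)
hi = (score >= 50) & (score < 50 + bw)
rdd = outcome[lo].mean() - outcome[hi].mean()

print(f"naive estimate: {naive:+.1f}")  # strongly negative: trend bias dominates
print(f"RDD estimate:   {rdd:+.1f}")    # near the true +2 effect
```

The design only works inside that narrow band; once the researchers compare students far from the cut point, the smooth background trend (here, the `0.3 * score` term) contaminates the estimate just as it does in the naive comparison.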
Even then, the model with the largest sample size actually found that program eligibility led to positive test score impacts – a result the CAP authors never mentioned.
The Indiana study was also non-experimental: it compared voucher students to those remaining in traditional public schools. But let’s look at it anyway. The authors did find small negative effects of the program on test scores initially, but voucher students caught up to public school students in math and performed better in reading after four years. How in the world can a positive result like this be “highly negative”? Weird.
The Louisiana experiment did find large negative effects on test scores in the first two years. However, voucher students caught up to their public school peers in both math and reading after three years. The CAP authors argue that the main model – though clearly preferred by the Louisiana research team – is less “accurate” because of its “restricted sample size.” That is odd: adding control variables (on a consistent sample) usually makes econometric models more accurate, not less. Odder still, the CAP authors chose not to report the positive Ohio results – which came from the larger sample of students – and instead reported the negative results, which came from a sample less than a tenth the size. Why the change in criteria?
The CAP review relies heavily on the most recent experimental evaluation of the D.C. voucher program. It just so happens to be one of only two voucher experiments in the world to find negative effects on student test scores.