Now You Can Draw Meaningful Time Trends from the SAT. Here’s How…

Over the years, countless reporters and even policy analysts have attempted to draw conclusions from changes in state SAT scores over time. That’s a mistake. Fluctuations in the SAT participation rate (the percentage of students actually taking the test), and in other state and student factors, are known to affect the scores.

But what if we could control for those confounding factors? As it happens, a pair of very sharp education statisticians (Mark Dynarski and Philip Gleason) revealed a way of doing just this—and of validating their results—back in 1993. In a new technical paper I’ve released this week, I extend and improve on their methods and apply them to a much larger range of years. The result is a set of adjusted SAT scores for every state reaching back to 1972. Vetted against scores from NAEP tests that are representative of the entire student populations of each state (but that only reach back to the 1990s), these adjusted SAT scores offer reasonable estimates of actual changes in states’ average level of SAT performance.
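The core idea of such an adjustment can be sketched in a few lines. To be clear, the snippet below is not the method from the paper or from Dynarski and Gleason; it is a simplified illustration of the general approach (remove the estimated effect of participation rate from each state's raw mean), and every number in it is invented.

```python
# Toy illustration of participation-rate adjustment (NOT the actual
# method from the paper): fit a simple regression of state mean SAT
# on participation rate, then report each state's score with the
# participation effect removed. All data below are hypothetical.

def ols(xs, ys):
    """Slope and intercept of a simple least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

participation = [0.05, 0.10, 0.60, 0.75]   # share of seniors taking the SAT
mean_score    = [1100, 1080, 980, 960]     # hypothetical state mean scores

b, a = ols(participation, mean_score)
grand_mean = sum(mean_score) / len(mean_score)

# Adjusted score: the state's residual from the fitted line, added
# back to the overall mean -- i.e., roughly what the state's mean
# would look like at an average participation rate.
adjusted = [y - (a + b * x) + grand_mean
            for x, y in zip(participation, mean_score)]
```

Low-participation states tend to post inflated raw means (only the strongest students sit for the test), so the fitted slope is negative and the adjustment pulls the states much closer together than their raw scores suggest.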

The paper linked above presents only the methods by which these adjusted SAT scores can be computed, but next week Cato will publish a new policy paper and Web page presenting 100 charts—two for each state—illustrating the results. How has your state’s academic performance changed over the past two generations? Stay tuned to find out…

Update: Here’s the new paper and charts!

College Board’s SAT Drop Spin Doesn’t Hold Up

Nationwide verbal SAT scores fell to their lowest level in years on the most recent administration of the test, and the College Board, which administers the SAT, has an explanation:

Average SAT scores fell slightly for 2011 high-school graduates, as the number of test takers and the proportion of minority students grew, according to a report released on Wednesday by the College Board, which owns the test.

The idea, which has been offered to explain earlier declines as well, is that the overall average score can fall even if the performance of every participating group is stable or improving, so long as the lower-scoring groups make up a larger share of the total test-taking population than they did in the past. And, indeed, minority students (who often score below white students) now comprise a larger share of the test-taking population than ever before.
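This composition effect is easy to see with a worked example. The numbers below are made up purely for illustration: two groups whose scores do not move at all, yet whose changing shares drag the overall average down.

```python
# Illustrative (invented) numbers: each group's score holds steady,
# but the overall average falls because the lower-scoring group's
# share of test takers grows between the two years.

def overall_mean(groups):
    """Weighted average score, given [(population share, mean score), ...]."""
    return sum(share * score for share, score in groups)

year1 = [(0.80, 530), (0.20, 460)]  # group A is 80% of test takers
year2 = [(0.60, 530), (0.40, 460)]  # group B's share doubles; scores unchanged

print(overall_mean(year1))  # about 516
print(overall_mean(year2))  # about 502
```

Both groups score exactly the same in both years, yet the overall mean drops 14 points. The College Board's argument is that something like this, not declining group performance, explains the national slide — which is why the group-level breakdown matters.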

So: case closed? Nope. If you actually look at the score breakdown for the major race/ethnicity groups (see chart), you’ll notice that only white students’ scores held constant from last year. The scores of all the minority groups declined. And, since 1996, white students’ scores have been flat, those of Asian students have risen appreciably, and those of Hispanic and African American students have declined.

Since there has not been any government program targeted exclusively at improving the achievement of Asian students, these data don’t exactly bolster confidence in the effectiveness of either state or federal education policy. If we want to see improved educational productivity, we might want to look to free-enterprise education systems that offer schools the freedoms and incentives that actually make it happen.