Now You Can Draw Meaningful Time Trends from the SAT. Here’s How…

Over the years, countless reporters and even policy analysts have attempted to draw conclusions from changes in state SAT scores over time. That’s a mistake. Fluctuations in the SAT participation rate (the percentage of students actually taking the test), and in other state and student factors, are known to affect the scores.

But what if we could control for those confounding factors? As it happens, a pair of very sharp education statisticians (Mark Dynarski and Philip Gleason) revealed a way of doing just this—and of validating their results—back in 1993. In a new technical paper I’ve released this week, I extend and improve on their methods and apply them to a much larger range of years. The result is a set of adjusted SAT scores for every state reaching back to 1972. Vetted against scores from NAEP tests that are representative of the entire student populations of each state (but that only reach back to the 1990s), these adjusted SAT scores offer reasonable estimates of actual changes in states’ average level of SAT performance.
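The core idea of a participation adjustment can be sketched in a few lines. The snippet below is a toy illustration only, not the model from the paper: it uses made-up state-year data and a simple one-variable regression, whereas the actual method controls for multiple state and student factors. It fits the relationship between mean score and participation rate, then re-expresses each observation at a common participation rate.

```python
import numpy as np

# Hypothetical data (illustrative only, not the paper's figures):
# mean SAT score and participation rate (fraction of students tested).
scores = np.array([1010., 980., 945., 920., 900.])
participation = np.array([0.05, 0.20, 0.45, 0.65, 0.80])

# Fit score = a + b * participation by ordinary least squares.
b, a = np.polyfit(participation, scores, 1)

# Adjusted score: the residual (what the fit can't explain) plus the
# fitted value at a common participation rate -- here, the sample mean.
common = participation.mean()
adjusted = scores - (a + b * participation) + (a + b * common)
```

After adjustment, the scores vary far less, because the component driven by who takes the test has been held constant; what remains is a cleaner signal of performance.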

The paper linked above reveals only the methods by which these adjusted SAT scores can be computed, but next week Cato will publish a new policy paper and Web page presenting 100 charts—two for each state—illustrating the results. How has your state’s academic performance changed over the past two generations? Stay tuned to find out…

Update: Here’s the new paper and charts!

The SAT Commits Suicide

The College Board announced this week that it is dropping the more arcane words and more advanced mathematics from its SAT test, among other changes. This, however noble its intentions, seems counterproductive and institutionally suicidal.

The purpose of the SAT is to help predict success in college. It does this the same way every other test does: by distinguishing between those who know the tested content and those who do not. Not surprisingly, most modern tests are designed using something called “Item Discrimination Analysis.” That unfortunately named technique has nothing to do with racism or classism; it is simply a mathematical formula. For every question, it measures the difference between the percentage of high performers who answered it correctly and the percentage of low performers who did. In general, the higher this “Discrimination Index” (DI), the more useful the question is, and therefore the more likely it is to be retained.
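The formula itself is simple arithmetic. The sketch below, with invented example numbers, shows the basic difference-of-proportions calculation described above:

```python
def discrimination_index(high_correct, high_total, low_correct, low_total):
    """DI: proportion of high performers answering correctly, minus the
    proportion of low performers answering correctly. Ranges from -1 to 1;
    higher values mean the question better separates the two groups."""
    return high_correct / high_total - low_correct / low_total

# Hypothetical question: 45 of 50 high performers answer correctly,
# but only 15 of 50 low performers do.
di = discrimination_index(45, 50, 15, 50)  # 0.90 - 0.30 = 0.60
```

A question that both groups answer correctly at similar rates would score near zero and, by this logic, contribute little to the test's predictive power.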

The problem with the College Board’s announced revisions is that they seem likely to eliminate questions with high DI values in favor of others with lower DI values. You might guess that reducing the SAT’s ability to distinguish between high and low performers would inhibit its ability to predict college success. But you don’t have to guess, because there’s already at least one recent study that looked at this question. What the authors found is that the DI value of SAT mathematics questions is usually the strongest contributor to the test’s ability to predict college success—by a wide margin.

There’s a good chance that the College Board is aware of this study since two of its three authors work for the College Board and the Board hosts a presentation about the study on its own website.

The Board’s changes are intended to make the SAT more fair. In practice, they seem likely to make it less useful. And as its usefulness diminishes, so will the number of colleges using it. If this proves to be the case—and we’ll know for sure in just a few years—the College Board will have succeeded in doing something that its critics have been unable to accomplish despite decades of effort: killing the SAT.