
Now You Can Draw Meaningful Time Trends from the SAT. Here’s How…

Over the years, countless reporters and even policy analysts have attempted to draw conclusions from changes in state SAT scores over time. That’s a mistake. Fluctuations in the SAT participation rate (the percentage of students actually taking the test), and in other state and student factors, are known to affect the scores.

But what if we could control for those confounding factors? As it happens, a pair of very sharp education statisticians (Mark Dynarski and Philip Gleason) revealed a way of doing just this—and of validating their results—back in 1993. In a new technical paper I’ve released this week, I extend and improve on their methods and apply them to a much larger range of years. The result is a set of adjusted SAT scores for every state reaching back to 1972. Vetted against scores from NAEP tests that are representative of the entire student populations of each state (but that only reach back to the 1990s), these adjusted SAT scores offer reasonable estimates of actual changes in states’ average level of SAT performance.
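Though the actual model in the paper is more sophisticated, the basic idea of a participation-rate adjustment can be sketched in a few lines. The numbers and the quadratic functional form below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

# Hypothetical data (illustrative only -- not actual state results):
# state mean SAT scores tend to fall as the share of students tested rises,
# because low-participation states draw mostly their strongest students.
participation = np.array([0.05, 0.10, 0.30, 0.55, 0.70, 0.80])  # share taking SAT
mean_score    = np.array([1180, 1150, 1080, 1030, 1010, 1000])  # state mean score

# Fit score as a function of participation (quadratic, on the assumption
# that the selection effect flattens as participation approaches 100%).
coeffs = np.polyfit(participation, mean_score, deg=2)
predicted = np.polyval(coeffs, participation)

# "Adjusted" score: strip out the participation effect and re-center every
# state at a common reference rate (here 50%) so states compare on equal footing.
reference = np.polyval(coeffs, 0.50)
adjusted = mean_score - predicted + reference

print(np.round(adjusted, 1))
```

Once the participation effect is removed, what remains of each state's score can be compared across states and years; validating those residual differences against NAEP is what tells you whether the adjustment captured the selection effect.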

The paper linked above presents only the methods by which these adjusted SAT scores can be computed, but next week Cato will publish a new policy paper and Web page presenting 100 charts—two for each state—illustrating the results. How has your state’s academic performance changed over the past two generations? Stay tuned to find out…

Update: Here’s the new paper and charts!

The SAT Commits Suicide

The College Board announced this week that it is dropping the more arcane words and more advanced mathematics from its SAT test, among other changes. This, however noble its intentions, seems counterproductive and institutionally suicidal.

The purpose of the SAT is to help predict success in college. It does this in the same way as every other test: by distinguishing between those who know the tested content and those who do not. Not surprisingly, most modern tests are designed using something called “Item Discrimination Analysis.” That unfortunately-named technique has nothing to do with racism or classism. It is simply a mathematical formula. What it does is measure, for every question, the difference between the percentage of high-performers who got the question right and the percentage of low-performers who got it right. In general, the higher this “Discrimination Index” (DI) rises, the more useful the question is and therefore the more likely it is to be retained.
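Despite its forbidding name, the index is simple arithmetic. Here is a minimal sketch, using made-up item statistics rather than real SAT data:

```python
def discrimination_index(high_correct, high_total, low_correct, low_total):
    """DI for one question: the share of high-performers answering correctly
    minus the share of low-performers answering correctly (ranges -1 to 1)."""
    return high_correct / high_total - low_correct / low_total

# Hypothetical item statistics, 100 students in each performance group.
# An arcane-vocabulary item that high scorers tend to get and low scorers miss...
hard_item = discrimination_index(85, 100, 25, 100)   # DI of 0.60
# ...versus an easy item that nearly everyone answers correctly.
easy_item = discrimination_index(95, 100, 90, 100)   # DI of 0.05

print(hard_item > easy_item)  # the first question discriminates far better
```

A question everyone gets right (or everyone gets wrong) has a DI near zero and tells the test nothing; it is exactly the high-DI items that do the predictive work.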

The problem with the College Board’s announced revisions is that they seem likely to eliminate questions with high DI values in favor of others with lower DI values. You might guess that reducing the SAT’s ability to distinguish between high and low performers would inhibit its ability to predict college success. But you don’t have to guess, because there’s already at least one recent study that looked at this question. What the authors found is that the DI value of SAT mathematics questions is usually the strongest contributor to the test’s ability to predict college success—by a wide margin.

There’s a good chance that the College Board is aware of this study since two of its three authors work for the College Board and the Board hosts a presentation about the study on its own website.

The Board’s changes are intended to make the SAT more fair. In practice, they seem likely to make it less useful. And as its usefulness diminishes, so will the number of colleges using it. If this proves to be the case—and we’ll know for sure in just a few years—the College Board will have succeeded in doing something that its critics have been unable to accomplish despite decades of effort: killing the SAT.

SAT Changes = Bad News for Common Core?

Working on education every day, you get used to your subject rarely making major national news, probably because the troubles are constant and sudden crises rare.  But change the SAT – once known as the Scholastic Aptitude Test – and all heck breaks loose. Of course, something else has been springing heck all over the country, too  – the Common Core – but because that fight has been taking place mainly at the state level, the nation’s collective attention has never been turned to it all at once. The SAT brouhaha might, however, change that, likely to the chagrin of Core defenders.

What’s the connection between the Core and the SAT? A big one: David Coleman, who is both a chief architect of the Core and president of the SAT-owning College Board. Coleman announced when he took over the Board that he would align the SAT with the Core, and it was clear in the Board’s SAT press release that that is what’s happening. Employing Common Core code, the Board announced that the new SAT will focus on “college and career readiness.”

Why is this potentially bad news for Core supporters? Because the SAT changes are widely being criticized as dumbing-down the test – good-bye words like “prevaricator,” hello toughies like “synthesis” – and that may drive attention to people who are questioning the quality of the Core. Illustrating unhappiness with the changes, in the Washington Post yesterday both a house editorial and a column by Kathleen Parker dumped on the coming SAT reforms, with the editorial stating:

It sounds as though students could conceivably get a perfect score on the new exam and yet struggle to fully comprehend some of the articles in this newspaper. Colleges should want to know if their would-be English majors are conversant in words more challenging than “synthesis,” or that their scores reflect more than lucky bubble guesses…

Maybe even more troubling than losing an outlet like the Post, if you’re a Core supporter, is possibly losing a guy like Andy Smarick at the pro-Core Thomas B. Fordham Institute. Last week Smarick defended knowledge of words that SAT bosses now deem too “obscure.” To be sure, Smarick didn’t “decimate” the new SAT (see the post), but his critique was enough to elicit a response from Coleman himself.

From a Core opposition perspective, it is crucial that people make the connection between the SAT and the Core, and that may be happening. The Post noted that “it’s no accident that this push comes from a College Board president who helped produce the K-12 Common Core standards.” Similarly, the New York Times report on the changes identified Coleman as “an architect of the Common Core curriculum standards.”

Making this connection is important because Core supporters’ major pro-Core (as opposed to anti-Core-opponent) argument is that the standards are highly “rigorous.” That claim has taken heat from several subject-matter experts, but they have struggled to be heard amidst pro-Core rhetoric.  Sudden and intense national scrutiny of the SAT, if directly connected to the Core, might help doubters of Core excellence get more attention.

Of course, the primary reason to object to the Core is not that it may or may not be high-quality – though that is certainly an important concern – but that it is being foisted on the nation through federal power, and a monopoly over what schools teach is a huge problem. It kills competition among differing ideas and models of education, stifles innovation, and severely limits the ability of children – who are all unique individuals – to access education tailored to their specific needs, abilities, and dreams.

Common Core opponents should be encouraged by a national critique of coming SAT changes not, ultimately, because the changes are good or bad, but because serious scrutiny could well bolster resistance to the federally driven Core. In so doing, it could help to preserve some of the freedom necessary to ensure that standards have to earn their business rather than having children handed to them by Washington.

Squandering Assessment Test

Yesterday the annual summary of SAT—formerly Scholastic Aptitude Test—scores came out, and the news was once again disheartening. Indeed, average reading scores hit a record low, and math remained stagnant. Writing scores also dipped, but that part of the test has only existed since 2006.

There are important provisos that go with drawing conclusions about the nation’s education system using the SAT. Most notably, who takes it is largely self-selected, and growing numbers of people sitting for it—some of whom might not have bothered in the past—could lower scores without indicating the system is getting worse. That said, as the chart below shows, no likely amount of self-selection or changing test-takers can account for the overwhelming lack of correlation between spending and scores. Per-pupil outlays have taken off like a moonshot while scores have either sat on the runway, or even burrowed down a bit.

Sadly, this corresponds to the results from long-term National Assessment of Educational Progress exams—which are nationally representative—for 17-year-olds. Again, as the following chart reveals, spending has skyrocketed while scores have, um, decidedly not skyrocketed.

There are factors that make comparing year-to-year SAT scores imprecise. But the trend clearly reinforces what we should already know: we get almost no return for our education “investment.”

College Board’s SAT Drop Spin Doesn’t Hold Up

Nationwide verbal SAT scores fell to their lowest level in years on the most recent administration of the test, and the College Board, which administers the SAT, has an explanation:

Average SAT scores fell slightly for 2011 high-school graduates, as the number of test takers and the proportion of minority students grew, according to a report released on Wednesday by the College Board, which owns the test.

The idea—which has been offered to explain earlier declines as well—is that the overall average score can fall even when every participating group’s performance is stable or improving, if the groups that tend to score lower make up a larger share of the test-taking population than they did in the past. And, indeed, minority students (who often score below white students) now comprise a larger share of the test-taking population than ever before.
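The arithmetic behind that claim is easy to verify with made-up numbers: hold each group’s average fixed, shift the shares, and the overall mean falls anyway:

```python
# Toy illustration (made-up numbers, not actual College Board data):
# each group's average holds perfectly steady, yet the overall mean
# declines because the lower-scoring group's share of test takers grows.
def overall_mean(groups):
    """groups: list of (share_of_test_takers, group_mean_score) pairs."""
    return sum(share * score for share, score in groups)

year1 = [(0.70, 530), (0.30, 460)]  # 70% group A, 30% group B
year2 = [(0.55, 530), (0.45, 460)]  # group B's share grows; group means unchanged

print(round(overall_mean(year1), 1))  # 509.0
print(round(overall_mean(year2), 1))  # 498.5
```

The point of the post, of course, is that this explanation only works if the group-level scores really did hold steady, and the actual breakdown shows they did not.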

So: case closed? Nope. If you actually look at the score breakdown for the major race/ethnicity groups (see chart) you’ll notice that only white students’ scores held constant from last year. The scores of all the minority groups declined. And, since 1996, white students’ scores have been flat, those of Asian students have risen appreciably, and those of Hispanic and African American students have declined.

Since there has not been any government program targeted exclusively at improving the achievement of Asian students, these data don’t exactly bolster confidence in the effectiveness of either state or federal education policy. If we want to see improved educational productivity, we might just want to look at more free enterprise education systems that offer schools the freedoms and incentives that actually make it happen.

SAT/ACT Factoid Debunked

There’s been a factoid making the rounds during the Wisconsin union standoff that you may have seen. I’m not sure what the ultimate source of the factoid is, but here’s the meat of it as reiterated by a blogger for The Economist:

Only 5 states do not have collective bargaining for educators and have deemed it illegal. Those states and their ranking on ACT/SAT scores are as follows:

South Carolina – 50th
North Carolina – 49th
Georgia – 48th
Texas – 47th
Virginia – 44th

If you are wondering, Wisconsin, with its collective bargaining for teachers, is ranked 2nd in the country.

Now, even if true, the factoid provides no real insight into whether collective bargaining is good or bad for education – there are myriad variables at work other than collective bargaining, and this comparison controls for none of them – but beyond that, the factoid itself is highly dubious. Again, it is hard to find the original source for this, but I looked up 2009 ACT and SAT state rankings, and at the very least it seems highly unlikely that Virginia ranks 44th out of all states. According to the ACT ranking, for instance, Virginia places 22nd, and on the SAT (assuming the linked-to list is accurate – I’m doing this fast), it ranked 33rd. It’s hard to see how those would be combined for a 44th-place overall finish.

How about Wisconsin’s second-place finish? Well, that is accurate for the SAT, but notably only 5 percent of Wisconsin students took the SAT – a negligible rate. On the ACT, which is the main test taken in the Badger State, Wisconsin finished 13th – not bad, but hardly great.

So what does this tell you? Not that collective bargaining is educationally good or bad – like I said, you just can’t get there from here – but that you have to be very careful about your sources of information. Unfortunately, that seems especially true when you’re dealing with education.

Future Teachers Most Likely to Cheat in College?

The current issue of the Chronicle of Higher Education features a story by a professional ghost-writer of college student papers. One passage in particular caught my eye:

it’s hard to determine which course of study is most infested with cheating. But I’d say education is the worst. I’ve written papers for students in elementary-education programs, special-education majors, and ESL-training courses. I’ve written lesson plans for aspiring high-school teachers, and I’ve synthesized reports from notes that customers have taken during classroom observations. I’ve written essays for those studying to become school administrators, and I’ve completed theses for those on course to become principals….

This is of course the weakest of anecdotal evidence and no one should take it as gospel (particularly the seminary students who apparently also contract out papers to the same ghost writer). But let’s say, for the sake of argument, that it’s true—that ed school students are the most common consumers of fraudulent papers. How could we explain that?

There’s no reason to believe that future teachers are any more ethically deficient than their peers in other fields, so that’s an unlikely explanation. Could it be that ed school students are less well prepared for college? Certainly it’s an uncomfortable truth that the SAT scores of those applying to ed school (both undergraduate and graduate) consistently rank below those of applicants to most other college programs. But it is also widely acknowledged that the academic standards of ed schools are commensurately below those of other college disciplines, so future teachers shouldn’t have any more difficulty completing their assignments than students in other fields.

But there is one way in which education is fundamentally different from every other college discipline: it’s the only one whose students will go on to work in a government monopoly industry. Not only is the hiring process of public school systems less focused on identifying candidates’ academic excellence, there is evidence that it is actively hostile to excellence (e.g., that principals are less likely to hire top-scoring candidates from elite colleges than candidates from less rarefied institutions). What’s more, compensation for public school teachers is generally a function of time served (over which teachers have no control) and degrees conferred (over which they do). This has created demand on the part of teachers for graduate degrees—not necessarily for the acquisition of advanced skills, but for the diplomas themselves, which  amount to valuable cash prizes.

Again, we can’t know from a single ghost-writer’s experience whether ed school students systematically cheat more in college than their peers in other fields, but we certainly shouldn’t be surprised if they do. We’ve organized education in this country in a way that decouples skill and performance from compensation, and instead couples compensation to the mere trappings of higher learning (e.g., master’s degrees). We’ve created a powerful financial incentive for existing and future teachers to cheat. Maybe not such a good idea.

Hat tip: Bill Evers.