Tag: test scores

New National Test Results Released Today

The U.S. Department of Education has just released 2013 results for the National Assessment of Educational Progress—aka, “The Nation’s Report Card.” The scores are for 12th grade reading and mathematics, and neither has changed since the last time they were administered a few years ago. But of course what we really want to know is how well students are performing today compared to those of a generation or two ago. That would tell us if our education system were improving, staying the same, or declining in performance.

The trouble is, “The Nation’s Report Card” doesn’t go back very far. The reading results reach back to 1992 (since which time there has been a slight but statistically significant decline), but the math results only reach back to 2005 (since which time there has been a slight but statistically significant increase). That’s not a very long period over which to assess trends.

Wouldn’t it be great if there were a different set of NAEP tests, called the “Long Term Trends” series, that reached back all the way to the early 1970s! And wouldn’t it be even better if we could find out how much we’ve spent per pupil over that same time period, so that we could figure out if our schools are getting more or less efficient with our dollars? Well, what do you know, there is, and we can!

But here’s the thing. Some people look at that national trend chart and think: but my state is doing much better than that! Is it? Is it really? I decided to find out, for all fifty states. The result is my recent, mysteriously titled paper: State Education Trends. Drop by and check out how your state has done over the past 40 years.

[Note to readers: The state charts look at changes in annual per pupil spending over time, whereas the national chart above looks at the change over time in the total cost of a full K-through-12 education, so the spending trend lines are not directly comparable].

Highlights of the New PISA International Test Results

The latest (2012) PISA results are out! PISA is a test of fairly basic, practical skills given to 15-year-olds around the world. Here are some of the highlights:

  • U.S. performance is essentially flat across subjects since 2003
  • Finland’s performance has declined substantially since 2003
  • Korea is continuing to improve, solidifying its position as one of the highest-performing nations
  • Already the highest-performing Latin American country, Chile has continued to improve, leaving the regional average further behind

The U.S. story needs little elaboration. Neither the structure nor the content of American schooling has changed in educationally meaningful ways since 2003. We still have 50 state education monopolies, with a growing but still relatively small homogenizing federal presence.

The “Replicate Finland!” bandwagon was always misguided. It is simply not sensible to take a nation’s performance on a single test, in isolation, as evidence for the merits (or demerits) of its national education policies. There are too many other factors that affect outcomes, and there are too many important outcomes for a single test to measure. For those who nevertheless championed Finland as a model, the latest PISA results are a bit awkward (see, for instance, the book: The Smartest Kids in the World).

Though the Chilean student protests of 2011 and 2012 focused on the desire for free, universal college, the leaders of that movement also harshly criticized that nation’s universal K-12 private school choice program. About 60 percent of children in Chile attend private schools, most of them fully or substantially funded by the national government. One of the most famous protest leaders, Camila Vallejo, was recently elected to the Chilean congress as a member of the Communist party. The influence of Vallejo and her compatriots has shifted public sentiment against crucial aspects of the nation’s private school choice program, despite the fact that private schools themselves remain extremely popular with parents. It is quite possible that, in the coming years, Chile will unravel the very policies that have made it one of the fastest improving countries in the world and the top performer in Latin America.

The NEA has called for higher U.S. teachers’ salaries based on the PISA results, arguing that some of the top performing countries pay their teachers more relative to people in other careers. This is self-serving and scientifically dubious. The NEA presents no evidence for a causal link between overall teacher salaries and student performance, just a bit of random cherry picking that ignores countless confounding factors. To find the real link between average salaries and performance, we can look at domestic U.S. research on the subject. Hanushek and Rivkin, for instance, find that “overall salary increases for teachers would be both expensive and ineffective.” Not surprisingly, a recent review of Ohio’s data on teacher “value-added” and teacher pay finds an inverse relationship:

in Cleveland… teachers deemed “Least Effective” by the new state evaluation system earned, on average, about $3,000 more than the teachers deemed “Most Effective.”

There’s some evidence that tying teacher pay to student performance helps to improve learning, but that’s about it.

Finally, it’s important to remember that PISA is a test of everyday “literacy” in the three subjects it covers (math, reading, and science). If you want to know how well students are learning the specific academic content needed for continuing study at the college level, PISA isn’t your best choice. For that, take a look at TIMSS.

Rick Perry, Arne Duncan, and Michael Jackson

To my astonishment, Arne Duncan went after Republican presidential candidate Rick Perry yesterday on the grounds that Perry hasn’t done enough to improve the schools under his jurisdiction. According to Bloomberg News, Duncan said public schools have “really struggled” under Perry and that “Far too few of [the state’s] high school graduates are actually prepared to go on to college.”

I was never a huge Michael Jackson fan, but for some reason his “Man in the Mirror” track just popped into my head as I read this. You see, once upon a time, Arne Duncan was “CEO” of the Chicago Public Schools. During and for some time after his tenure, he was celebrated as having presided over “The Chicago Miracle,” in which local students’ test results had improved dramatically. That claim turns out to have been, as the saying goes, fake but accurate. The state test results did improve, but not because students had learned more; they appear to have improved because the tests were dumbed-down.

When this charge was first leveled, I decided to look into it myself, and found that it was indeed justified. There was no “Chicago Miracle.” Arne Duncan ascended to the throne of U.S. secretary of education, at least in part, on a myth. The academic achievement of the children under his care stagnated at or slightly below the level of students in other large central cities during his time at the helm. Seems an opportune occasion for someone to “start with the man in the mirror, asking him to change his ways.”

Fordham Institute Reviews ‘The Other Lottery’

Gerilyn Slicker, of the Fordham Institute, offers a brief review of my study of charter school philanthropy in the latest “Education Gadfly” mailing, including the following observation:

Note, though, that this analysis is not without fault. The report doesn’t break down spending by pupil (only reporting aggregate grant-giving), nor does it account for student growth over time or for how long the charter networks have been operational.

All three of these concerns are worth raising, and the first two of them were actually addressed in the paper itself. The aggregate vs. per-pupil grant funding question is discussed in endnote 15:

Note that total grant funding, rather than grant funding per pupil, is the correct measure. That is because enrollment is endogenous—it is a product, in part, of earlier grant funding. So, controlling for enrollment (which dividing by enrollment would do) would control away some of the very characteristics we are trying to measure: the charter network’s ability to attract funding.

Student growth over time, as noted on page 5, cannot be measured using the California Standards Tests, because the CST reports results as averages of subgroups of students at the classroom level, not as individual student scores. And since the CST is the only source that has broad-based performance data for all charter and traditional public schools in the state, it is the only dataset that can be used to measure the performance of all California charter school networks. Fortunately, good controls for both student factors and school-wide peer effects are available, and the study’s results are consistent with earlier research where the two overlap.

The final concern, network age, is not one that I directly addressed in the study. There are a couple of reasons to expect it would not have much of an impact on the findings, however. First, a cursory look at the age of some of the top networks shows no particular pattern. American Indian and KIPP are both a decade old, and rank #1 and #7, respectively, out of 68 networks. Oakland Charter Academies and Rocketship are just a couple of years old, and rank #2 and #4, respectively. Similarly, some of the low-performing networks are brand new, while others, like GreenDot (ranked 42nd), are over a decade old.

Second, in Appendix E, I show that network size and network academic performance are not significantly linked to one another. And since network age and network enrollment are likely to be strongly positively correlated with one another, it would be surprising if network age were correlated with performance when enrollment is not. That said, I’d probably include network age as a control in future, if I repeat the study, just to be on the safe side.

Ranking the Charter School Networks

Much of the response to the study I released last week has focused on the relative academic performance rankings of California’s charter school networks. That wasn’t the point of the study, which focuses on whether or not philanthropy + charter schooling can replace venture capital and competitive markets as a mechanism for scaling-up the best education services. Rather than try to fight the tide, I thought I’d just share the relevant rankings in an easy-to-link form, and once the debate about them dies down we can return to the larger policy point.

With that in mind, the first table below lists the top 15 charter school networks in terms of performance on the California Standards Tests, adjusted for student factors and peer effects. For comparison, two non-charter schools are included: the academically selective elite public prep schools Gretchen Whitney and Lowell, both of which feature in most lists of the top public schools in the country. There are 68 networks with the necessary data, but the lowest grant rank is 61 because eight of the networks received no philanthropic funding at all.

Next is a list of the charter networks that philanthropists have invested in most heavily, with a view to replicating their models. Notice the minimal overlap? I repeat this comparison in the study with Advanced Placement test performance, and find the same pattern (it’s just slightly worse).

Every one of the above networks received substantially more grant funding individually than the top three highest achieving networks… combined.

Reply to Samuelson: It Is an Engineering Problem

In today’s Washington Post, Robert Samuelson argues that the performance of U.S. public schools is at least adequate, and that the relatively low achievement of black and Hispanic students is to be attributed to history and culture rather than to our education system. These claims are not new, and I might well have ignored them if he hadn’t got my Irish up with the off-hand comment that “what we face is not an engineering problem.” (More on that in a second.)

First, let’s dispatch the claim that public schooling is off the hook for the poor performance of low-income minority children. I’m currently undertaking a statistical study of the performance of 78 separate charter school networks in California, relative to one another and to the state’s traditional public schools. To foreshadow the results, the performance differences within socioeconomic groups are enormous even after controlling for school-wide peer effects. Among low-income Hispanic students, across grades, schools and subjects, average scores at two of the top charter networks (American Indian Public Schools and Oakland Charter Academies) are roughly 4 standard deviations above the statewide traditional public school mean. Quatre. Quattro. FOUR.

To put that in perspective, effect sizes in social science research are normally evaluated based on Jacob Cohen’s rule of thumb that 0.2 standard deviations is “small”, 0.5 is “moderate”, and anything bigger than 0.8 is “large.” To put it further in perspective, the low-income Hispanic effect sizes of two of California’s most elite and academically selective public schools are closer to 2 S.D. So the top charter networks, which accept every student who applies, massively outperform elite public schools that actively select their students based on prior test scores. Consistently. Across grades and subjects. [Note that there’s also wide variation in performance among charter school networks, with many performing below the mean of traditional public schools. Further details when the paper is published in a few months].
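For readers who want to see the arithmetic behind these comparisons, here is a minimal sketch (using made-up numbers, not the study’s actual data) of how a standardized effect size like Cohen’s d is computed, and what a 4 standard deviation gap implies about where a group’s average student would fall in the statewide distribution:

```python
from math import erf, sqrt

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: the difference in means divided by the pooled standard deviation."""
    pooled_sd = sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def normal_percentile(z):
    """Percentile rank of a z-score under a standard normal distribution."""
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical numbers: a network averaging 400 on a test where the
# statewide mean is 300 and the standard deviation is 25.
d = cohens_d(400, 300, 25, 25, 500, 500)
print(f"effect size d = {d:.1f}")  # 4.0 -- "large" by Cohen's rule starts at 0.8
print(f"that group's mean student sits at the {normal_percentile(d):.3f}th percentile")
```

Assuming roughly normal score distributions, a mean 4 standard deviations above the statewide mean would place the group’s average student above the 99.99th percentile of the comparison distribution, which is why an effect of that size dwarfs Cohen’s 0.8 threshold for “large.”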

So, no, public schooling is not off the hook. We know it is possible to dramatically raise the achievement of low-income minority students above the current public school level. The problem is that we lack a system for reliably replicating the good schools and crowding out the rest. And what kind of problem is that? Even Wikipedia knows the answer:

Engineering is the discipline, art and profession of acquiring and applying scientific, mathematical, economic, social, and practical knowledge to design and build… systems… and processes that safely realize solutions to the needs of society.

Engineering is just a broad set of tools for finding practical solutions to complex problems. One of the most useful of those tools is an aversion to reinventing the wheel, so engineers always ask how the kind of problem they’re addressing has been approached previously, in other places, even in other fields. When possible, they adapt proven solutions to the problem at hand.

So let’s all be engineers for a day on January 28th and hear what education experts from Sweden and Chile have to say about how their nations have been encouraging the replication of good schools. You can register for this unique lunchtime event here.