Pre‑K: Not So ‘Empirically Validated’

Today the unenviable task of opposing publicly funded schooling for the littlest Americans falls to me. Worse, I have to disagree with Peter Salins, whose past work I’ve greatly enjoyed. Yet oppose and disagree I shall, especially with Salins’s basic contention that positive effects of publicly funded, “high-quality preschool” are “empirically validated.”

As the Brookings Institution’s Grover “Russ” Whitehurst has been working feverishly to communicate, we simply do not have a good base of top-flight research — studies in which children are randomly assigned to large preschool programs — on which to conclude that public pre‑K works. Most assertions about its effectiveness, such as President Obama’s 2013 State of the Union claim that “every dollar we invest in high-quality early education can save more than seven dollars later on,” are based primarily on two programs: Perry Preschool of the 1960s, and Abecedarian of the 1970s. Both treated fewer than 60 children, were very expensive, and were staffed by people highly motivated to prove their program’s worth.
Those programs did undergo random-assignment evaluations — though with some important randomization problems — and have shown lasting benefits. But taking such tiny efforts to a much greater scale would be highly treacherous. As California discovered when it tried, and failed, to replicate class-size reduction results from the relatively small Tennessee STAR program, scaling presents big challenges such as getting enough good teachers to staff all the new positions.
Or consider Head Start and Early Head Start, federal early childhood programs that have undergone random-assignment scrutiny. They have demonstrated very few lasting benefits, and some negative effects.
To be fair, Salins argues that Head Start “was never designed to be a true preschool program,” but is instead “a well-meaning daycare program.”
Preschool supporters argue that, unlike Head Start, undertakings such as the Abbott program in New Jersey, or Oklahoma’s pre‑K program, have demonstrated success. But in terms of methodology, research on these programs has often employed “regression discontinuity design” instead of random assignment. RDD tries to control for differences among children by comparing test scores of similarly aged kids who just missed, and just made, the age cutoff for pre‑K. There are several problems with this approach, including that a child who won’t enter pre‑K for another year will naturally be treated differently by his or her parents than a child in pre‑K, and that it is hard to adjust for kids who dropped out of preschool.
A recent study of Tennessee’s pre‑K program did use random assignment, and the program includes many “high-quality” hallmarks, such as small student-adult ratios and a state-approved curriculum. What did the study find? No cognitive benefits by the end of first grade, and fairly small non-cognitive benefits.
Perhaps because of these weak findings, Salins doesn’t spend much time on the U.S. research, focusing more on the conclusion of Core Knowledge founder E.D. Hirsch that, as demonstrated by preschool in France, broad programs can work.
Salins does not provide a citation in his piece, but the relevant paragraphs from his book (83–84) are available online. He writes that “numerous evaluations” have verified the value of French preschool, but the only research directly cited is a 1992 survey by the French government — presumably this is the same study that Hirsch himself summarizes in English here. It’s a non-random-assignment study that examined the effect of starting preschool at age two rather than three. If that is the main French support for pre‑K, it is weak sauce.
Neither Salins nor Hirsch, however, seems to endorse pre‑K in general; instead, they advocate pre‑K with a particular curricular emphasis. As Salins makes clear, France’s program is thought to be effective because, in addition to having “well-trained teachers” and “good facilities,” it offers “rigorous cultural-literacy content.”
Which brings me to perhaps the primary reason preschool programs don’t seem to deliver the goods: Government can’t make providers furnish “high quality.” Unlike the accountability that comes when customers use their own money to pay for a service, government provision often ends up working for service providers, not supposed beneficiaries. It is the providers who get the most direct benefit — a livelihood — from pre‑K programs, so they are the most involved in pre‑K politics. And like anyone, their natural incentive is not to be held accountable for their performance.
This has been a serious problem in Head Start, which for years suffered poor oversight of centers that kept their grants come hell or high water. There are efforts underway to fix that, but as Salins himself points out, in education we have spent “billions of dollars on a broad array of hopeful sounding initiatives, [but] they have had little in the way of academic gains to show for it.”
It is likely that the same impotence would be demonstrated in efforts along the lines that Salins proposes: school districts extending “their educational ladder by two years below kindergarten.” While people can be too quick to blame schools for results that are largely a function of outside influences, it seems a triumph of hope over experience to think that districts that have shown no ability to help kids at the K‑12 level will succeed with even younger children. And Hirsch himself has deemed the education establishment a “thoughtworld” that freezes out cultural literacy programs such as Core Knowledge.
That said, Salins’s ultimate recommendation is, rightly, somewhat modest: run a pilot program. Unfortunately, he suggests emulating the federal Race to the Top, for which we have no meaningful evidence of success as grant-winning states continue to trudge through implementation. Rather than federal, the pilot should be private, heading off the huge problem of creating lobbies like those we’ve seen with Head Start. Still, sticking to a pilot tacitly acknowledges what we actually know about public pre‑K programs: there is little evidence they work.