The best studies randomly assign children to “pre‐k” (treatment) and “no pre‐k” (control) groups, and then follow them for several years to see if the pre‐k children show greater achievement gains than children without pre‐k. This is the “gold standard” in education research and the same type of study used for testing prescription drugs.
A national study of the federal Head Start program followed these rigorous protocols and found no lasting results. Children in Head Start show immediate (although modest) gains during preschool, but during kindergarten and first grade the differences disappear because children without preschool quickly catch up. This is called the “fade‐out” problem.
A recent randomized study of Tennessee's high‐quality state program showed the same result.
Though these results are well known, pre‐k evaluators often explain them away, arguing that “fade‐out” is caused by low‐quality pre‐k instruction. They point to state‐developed programs in Oklahoma, New Jersey, Georgia and Boston, all of which showed much larger gains during the preschool year than Head Start programs.
These state programs also earn the “high‐quality” label because they embed pre‐k in the regular school system with certified teachers, all of whom hold at least a bachelor’s degree. (The federal Head Start program does not require BAs.)
The problem is that all of these state pre‐k studies relied upon a special non‐random design that compares kindergarten children who finished pre‐k the previous year (the treatment group) to children who are just starting preschool (the control group).
The design (called “regression discontinuity design,” or RDD for short) requires that the school system impose a strict age cutoff so that the treatment group is one year older, and it relies on statistical methods to adjust for the age difference between the two groups of children. Testing is done at the beginning of the school year. Any difference between the two groups that remains after adjusting for the age difference is assumed to be the result of pre‐k.
This non‐random study design has two major flaws that impair the tests’ reliability and prevent definitive conclusions. First and foremost, because both the treatment and control groups have had preschool, these studies can’t examine the critical “fade‐out” problem. Randomized studies can follow the pre‐k and no‐pre‐k children into grade school, where they can see whether the pre‐k gains are lasting or not. So far, no randomized study has found lasting effects, and the non‐random state studies provide no clarification.
The second flaw is a problem called “attrition,” meaning children who drop out of the treatment group. The control group students who are just starting pre‐k can’t have attrition by definition. According to Department of Education standards for RDD designs, valid inferences require that attrition be documented and results adjusted. The reason is that program dropouts are more likely to be disadvantaged children with lower skills and more social problems, and their test scores are inevitably lower than non‐dropouts.
Oklahoma, New Jersey and Georgia did not report attrition rates, even though attrition from the treatment group clearly occurred. For example, in the Tulsa, Okla., study, 26 percent of the control group mothers were high school dropouts, compared to only 16 percent for the treatment group. In Georgia, 26 percent of the control group were limited English speakers compared to only 8 percent of the treatment group. Attrition from the treatment group can explain why Tulsa and Georgia reported such high test scores for children completing preschool.
Two additional “high‐quality” programs have garnered much attention, and both used randomized designs. One is the Perry Preschool cost‐benefit evaluation by Nobel laureate economist James Heckman, which demonstrated significant success, and the other is the Abecedarian Project, which demonstrated significant long‐term IQ gains.
There are several problems with relying on these studies to support expanding universal preschool. Study participants were all disadvantaged African American children; the programs were far more intensive — and costly — than the type of pre‐k in contemporary state programs; and the programs educated two small groups of children in two communities more than 40 years ago. Moreover, a national experiment to replicate the Abecedarian concept, Early Head Start, has found few significant long‐term benefits, especially for the most disadvantaged children.
The reality is that the research on state preschool programs does not yet demonstrate effectiveness for the type of universal preschool programs being promoted today. It certainly does not support expensive government expansions of preschool education as currently envisioned, particularly for middle‐class children with no demonstrable need for a “head start.” We need many more high‐quality randomized studies showing large and long‐term benefits, at reasonable costs, before any expansion of pre‐k can be justified.