Message from IES Director:

A More Systematic Approach to Replicating Research

IES’ two research centers, the National Center for Education Research (NCER) and the National Center for Special Education Research (NCSER), have funded around 450 projects testing whether interventions improve student outcomes. Most of the roughly 300 completed projects have found no impact, conforming to Peter Rossi’s “Iron Law of Evaluation”: the expected value of any impact assessment of any large-scale social program is zero. Despite this iron law, about one-third of these projects show some evidence of success. While we certainly will work to improve that success rate, it is what we do with the projects that have evidence of impact that is a growing concern for us.

A central goal of IES’ mission is to identify what works for whom under what conditions. Unfortunately, most of the studies that have found an impact contribute little to helping us meet that goal. Many, if not the great majority, of these projects were carried out in a single location and/or tested with a relatively small number of settings, teachers, and learners. Given this limited scope, it is usually impossible to judge whether the tested interventions would work with different types of students or in different education venues.

In the last few years, IES has explicitly called for and supported replications, but these far too often replicate the problems of the original study: too few settings, too few subjects, too little variation. In short, our current approach to replication does not systematically test the conditions that shape an intervention’s impact, and it accumulates knowledge very slowly, if at all.

We are considering a different approach to replication that, hopefully, will accelerate the accumulation of knowledge about which interventions might work for whom under what conditions. This approach revolves around the systematic replication of interventions that already have strong evidence of impact. We envision supporting sets of replications that will implement and evaluate interventions in carefully chosen venues that systematically vary in student demographics, geographic locations, implementation, or technology.

We envision that this research can better address questions frequently asked by schools and teachers looking to adopt evidence-based interventions:

  • Will an intervention likely show positive outcomes for students like the ones in my class or in my school?
  • Which outcomes will likely be affected?
  • How large are the gains likely to be?
  • Will these gains likely last?
  • What resources are necessary for sustained implementation?
  • How much will this intervention likely cost if implemented?

We are considering a grants competition that would support a maximum of five replication studies per intervention, with each grant recipient carrying out one or more replications. Each replication would be fully powered and would systematically vary at least one aspect of the prior efficacy study’s research methods or procedures. For example, studies could vary the geographic location; the population of students (e.g., students of different socioeconomic status, race/ethnicity, or achievement levels); the teachers (e.g., special education teachers vs. general education teachers); and/or how the intervention is delivered (e.g., using technology to substitute for some functions performed by school personnel).

Further, we envision replications that:

  • Evaluate IES-identified interventions that fall within one of four broad topic areas in which IES has already invested heavily: (1) literacy; (2) STEM (science, technology, engineering, and mathematics); (3) social skills and behavior that support learning; and (4) college and career readiness.
  • Focus on interventions that were developed and/or tested with IES funding; that have demonstrated positive impacts on education outcomes in a rigorous impact study; and that are ready to be implemented, either because they are available commercially or the developers are willing and able to make their interventions available for testing.
  • Use rigorous research designs that will meet What Works Clearinghouse standards with or without reservations, as well as the new IES-wide Standards for Excellence in Education Research. (SEER calls for pre-registering studies, focusing on outcomes meaningful to student success, identifying core intervention components, analyzing costs, supporting the scaling of interventions, and documenting implementation to inform use in other settings.)
  • Include, to the appropriate extent, education agencies as partners on the project.

While the goal and general structure of the research program we are considering are (hopefully) clear, we are still working on the details. We hope ultimately to release an RFA that will, by more carefully structuring and supporting replications of interventions that have evidence of impact, help all of us provide better answers to the central question of our field: what works for whom and under what conditions?

As always, we appreciate comments.

Mark Schneider
Director, IES