IES Blog

Building Evidence: What Comes After an Efficacy Study?

Over the years, the Institute of Education Sciences (IES) has funded more than 300 studies across its research programs that evaluate the efficacy of specific programs, policies, or practices. This work has contributed significantly to our understanding of which interventions improve outcomes for students under tightly controlled or ideal conditions. But is this information enough to inform policymakers’ and practitioners’ decisions about whether to adopt an intervention? If not, what should come after an efficacy study?

In October 2016, IES convened a group of experts for a Technical Working Group (TWG) meeting to discuss next steps in building the evidence base after an initial efficacy study and the specific challenges associated with this work. TWGs are meant to encourage stakeholders to discuss the state of research on a topic and to identify gaps in that research.

Part of this discussion focused on replication studies and the critical role they play in the evidence-building process. Replication studies are essential for verifying the results of a previous efficacy study and for determining whether interventions are effective when certain aspects of the original study design are altered (for example, testing an intervention with a different population of students). IES has supported replication research since its inception, but there was general consensus that more replications are needed.

TWG participants discussed some of the barriers that may be discouraging researchers from doing this work. One major obstacle is the perception that replication research is somehow less valuable than novel research, a bias that could be limiting the number of replication studies that are funded and published. A related concern is that the field of education lacks a clear framework for conceptualizing and conducting replication studies in ways that advance evidence about beneficial programs, policies, and practices (see another recent IES blog post on the topic).

IES provides support for studies to examine the effectiveness of interventions that have prior evidence of efficacy and that are implemented as part of the routine and everyday practice occurring in schools without special support from researchers. However, IES has funded a relatively small number of these studies (14 across both Research Centers). TWG participants discussed possible reasons for this and pointed out several challenges related to replicating interventions under routine conditions in authentic education settings. For instance, certain school-level decisions can pose challenges for conducting high-quality effectiveness studies, such as restricting the length of time that interventions or professional development can be provided and choosing to offer the intervention to students in the comparison condition. These challenges can result in findings that are influenced more by contextual factors than by the intervention itself. TWG participants also noted that there is not much demand for this level of evidence, as the distinction between evidence of effectiveness and evidence of efficacy may not be recognized as important by decision-makers in schools and districts.

In light of these challenges, TWG participants offered suggestions for what IES could do to further support the advancement of evidence beyond an efficacy study. Some of these recommendations were more technical and focused on changes or clarifications to IES requirements and guidance for specific types of research grants. Other suggestions included:

  • Prioritizing and increasing funding for replication research;
  • Clearly identifying on the IES website which IES-funded evaluations are replication studies;
  • Encouraging communication and partnerships between researchers and education leaders to increase the appreciation and demand for evidence of effectiveness for important programs, practices, and policies; and
  • Supporting researchers in conducting effectiveness studies to better understand what works for whom and under what conditions, by offering incentives to conduct this work and encouraging continuous improvement.

TWG participants also recommended ways IES could leverage its training programs to promote the knowledge, skills, and habits that researchers need to build an evidence base. For example, IES could emphasize the importance of training in designing and implementing studies to develop and test interventions; create opportunities for postdoctoral fellows and early career researchers to conduct replications; and develop consortia of institutions to train doctoral students to conduct efficacy, replication, and effectiveness research in ways that will build the evidence base on education interventions that improve student outcomes.

To read a full summary of this TWG discussion, visit the Technical Working Group website or go directly to the report (PDF).

Written by Katie Taylor, National Center for Special Education Research, and Emily Doolittle, National Center for Education Research

The Scoop on Replication Research in Special Education

Replication research may not grab the headlines, but reproducing findings from previous studies is critical for advancing scientific knowledge. Some have raised concerns about whether we conduct a sufficient number of replication studies. This concern has drawn increased attention from scholars in a variety of fields, including special education.

Several special education researchers explored this issue in a recent Special Series on Replication Research in Special Education in the journal Remedial and Special Education. The articles describe replication concepts and issues, systematically review the state of replication research in special education, and provide recommendations for the field. One finding is that there may be more replication studies than it seems, but authors don’t call them replications.

Contributors to the special issue include Bryan Cook from the University of Hawaii, Michael Coyne from the University of Connecticut, and Bill Therrien from the University of Virginia, who served as guest editors, as well as Chris Lemons from Peabody College of Vanderbilt University. They shared more about the special issue and their collective insights into replication in special education research.

(In photo array, top left going clockwise: Therrien, Lemons, Coyne, and Cook)

How did you become interested in replication work?

Replication is a core component of the scientific method. Despite this basic fact that we all learned in Research 101, it is pretty apparent that in practice, replication is often ignored. We noticed how much attention the lack of replication was starting to get in other fields and in the press and were particularly alarmed by recent work showing that replications often fail to reproduce original findings. This made us curious about the state and nature of replication in the field of special education.

What is the state of replication research in special education?

It depends on how you define replication and how you search for replication articles. When a narrow definition is used and you require the term “replication” to appear in the article, the rate of replication doesn’t look too good. Using this method, Lemons et al. (2016) and Makel et al. (2016) reported that the rate of replication in special education is between 0.4 and 0.5%, meaning that fewer than 1 in 200 articles published in our field are replications. We suspected that, for a number of reasons (e.g., perceptions that replications are difficult to publish, are less prestigious than novel studies, and are hostile attempts to disprove a colleague’s work), researchers might be conducting replication studies but not referring to them as such. And, indeed, it’s a different story when you use a broad definition and do not require the term replication to appear in the article. Cook et al. (2016) found that of 83 intervention studies published in six non-categorical special education journals in 2013 and 2014, 26 (31%) could be considered replications, though few authors described their studies that way. Therrien et al. (2016) selected eight intervention studies published from 1999 to 2001 and determined whether subsequently published studies that cited the original investigations had replicated them. They found that six of the eight original studies had been replicated by a total of 39 different studies (though few of the replications identified themselves as such).

What were some other key findings across the review articles?

Additional findings indicated that: (a) most replications conducted in special education are conceptual (i.e., some aspects are the same as the original study, but some are different) as opposed to direct (i.e., as similar to the original study as possible), (b) the findings of the majority of replications in special education agreed with the findings of the original studies, and (c) most replications in the field are conducted by one or more authors involved in the original studies. In three of the four reviews, we found it was more likely for a replication to produce the same outcome if there was author overlap between the original and replication studies. This may be due to the challenges of replicating a study with the somewhat limited information provided in a manuscript. It also emphasizes the importance of having more than one research team independently replicate study findings.  

What are your recommendations for the field around replicating special education interventions?

The article by Coyne et al. (2016) describes initial recommendations for how to conceptualize and carry out replication research in a way that contributes to the evidence about effective practices for students with disabilities and the conditions under which they are more or less effective:

  • Recognize that many studies evaluate an approach that has previously been studied under different conditions, and specify which aspects of the new study replicate previous research;
  • Conceptualize and report intervention research within a framework of systematic replications, or a continuum of conceptual replications ranging from those that are more closely aligned to the original study to those that are less aligned;
  • Design and conduct closely aligned replications that duplicate, as faithfully as possible, the features of previous studies;
  • Design and conduct less closely aligned replications that intentionally vary essential components of earlier studies (e.g., participants, setting, intervention features, outcome measures, and analyses); and
  • Interpret findings using a variety of methods, including statistical significance, direction of effects, and effect sizes. We also encourage the use of meta-analytic aggregation of effects across studies (a sketch of this follows the list).
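To make that last recommendation concrete, here is a minimal sketch in Python of one common approach, a fixed-effect inverse-variance weighted average, assuming each study reports a standardized mean difference (d) and its sampling variance. The studies and numbers below are hypothetical and for illustration only; they are not drawn from the special series.

    import math

    def pooled_effect(effects, variances):
        # Fixed-effect model: weight each study's effect size by the inverse
        # of its sampling variance, then take the weighted average.
        weights = [1.0 / v for v in variances]
        pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
        pooled_variance = 1.0 / sum(weights)
        return pooled, pooled_variance

    # Hypothetical original study plus two replications (illustrative values):
    effects = [0.45, 0.38, 0.22]        # standardized mean differences (d)
    variances = [0.010, 0.015, 0.020]   # sampling variance of each d

    d_bar, var_bar = pooled_effect(effects, variances)
    se = math.sqrt(var_bar)
    print(f"pooled d = {d_bar:.2f}, "
          f"95% CI [{d_bar - 1.96 * se:.2f}, {d_bar + 1.96 * se:.2f}]")

Aggregating this way lets a set of replications speak with one voice about the magnitude of an effect, rather than relying on each study’s significance test alone.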

One example of a high-quality replication study is by Doabler et al. The authors conducted a closely aligned replication study of a Tier 2 kindergarten math intervention. In the design of their IES-funded project, the authors planned a priori to conduct a replication study that would vary on several dimensions, including geographical location, participant characteristics, and instructional context. We believe this is a nice model of designing, conducting, and reporting a replication study.

Ultimately, we need to conduct more replication studies, we need to call them replications, we need to better describe how they are similar to and different from the original studies, and we need to strive for replication by researchers not involved in the original study. It is this type of work that may increase the impact research has on practice, because it strengthens our understanding of whether, when, and where an intervention works.

By Katie Taylor, Program Officer, National Center for Special Education Research