IES Blog

Institute of Education Sciences

Celebrating the Launch of the Registry of Efficacy and Effectiveness Studies (REES)

The Registry of Efficacy and Effectiveness Studies (REES) is now ready for use! REES is a registry of causal impact studies in education developed with a grant from the National Center for Education Research (NCER). REES will increase transparency, improve the replicability of studies, and provide easy access to information about completed and ongoing studies.

The release of REES aligns with recent IES efforts to promote study registration. In the FY 2019 Requests for Applications (RFAs) for the Education Research and Special Education Research Grants Programs, IES recommended that applicants describe a plan for pre-registering their studies both in the project narrative (as part of the research) and in the data management plan. IES is also developing the Standards for Excellence in Education Research (SEER) and has identified study registration as an important dimension of high-value education research.

We asked the REES team to tell us more about how the registry works.

How can researchers access REES? REES can be accessed through the SREE website at www.sreereg.org. Over the next year, REES will transition to a permanent home at the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan, but it will still be accessible through the SREE link.

What kinds of studies can be registered? REES is a registry of causal impact studies. It accommodates a range of study designs, including randomized controlled trials, quasi-experimental designs, regression discontinuity designs, and single-case designs.

What information should be included in a study entry? A REES entry includes basic study information and a pre-analysis plan. The checklist of required information for a registry entry provides detailed information for each of the different design options. All of the information for a REES entry should be easily found in a grant application.

How long does it take to register a study? For a study with a complete grant application, completing a REES entry should be straightforward and take approximately one hour.

What if a study entry needs to be changed? Principal investigators (PIs) or other authorized research team members should update a REES entry as changes occur. All updates to an entry will be time-stamped, and both original and updated entries will be publicly available.

Are registered studies searchable by the public? Yes! When a PI or authorized research team member is ready to make the study available in the public domain, they click the publish option. This time-stamps the entry and makes it publicly available. Published REES entries appear on the search page, where a PDF of an individual entry can be downloaded or an Excel file of multiple entries can be exported.

What will happen to studies that were entered in the pilot phase of REES? A REES entry that was started and/or completed during the pilot phase is part of the REES database. To make the study publicly available and part of the searchable database, the PI or other authorized research team member needs to click the publish option for the entry.

Over the next two years, the REES team will be working to ensure the sustainability and visibility of REES with a grant from the National Center for Special Education Research (NCSER). To do this, the team will transfer REES to its permanent location on the ICPSR website and disseminate information about REES within the education research community, as well as with funders, publishers, and users of education research, through meetings, conferences, websites, social media, and targeted outreach.

So, what are you waiting for? Go check it out!

If you have questions about REES, please email contact@sreereg.org.


Building Evidence: Changes to the IES Goal Structure for FY 2019

The IES Goal Structure was created to support a continuum of education research that divides the research process into stages for both theoretical and practical purposes. Individually, the five goals – Exploration (Goal 1), Development and Innovation (Goal 2), Efficacy and Replication (Goal 3), Effectiveness (Goal 4), and Measurement (Goal 5) – were intended to help focus the work of researchers, while collectively they were intended to cover the range of activities needed to build evidence-based solutions to the most pressing education problems in our nation. Implicit in the goal structure is the idea that over time, researchers will identify possible strategies to improve student outcomes (Goal 1), develop and pilot-test interventions (Goal 2), and evaluate the effects of interventions with increasing rigor (Goals 3 and 4).

Over the years, IES has received many applications and funded a large number of projects under Goals 1-3. In contrast, IES has received relatively few applications and awarded only a small number of grants under Goal 4. To find out why, and to see whether there were steps IES could take to move more intervention studies through the evaluation pipeline, IES hosted a Technical Working Group (TWG) meeting in 2016 to hear views from experts on what should come after an efficacy study (see the relevant summary and blog post). IES also issued a request for public comment on this question in July 2017 (see summary).

The feedback we received was wide-ranging, but there was general agreement that IES could do more to encourage high-quality replications of interventions that show prior evidence of efficacy. One recommendation was to place more emphasis on understanding "what works for whom" under various conditions. Another was that IES could support a continuum of replication studies. In particular, some commenters felt that the Goal 4 requirements to use an independent evaluator and to carry out an evaluation under routine conditions may not be practical or feasible in all cases, and may discourage some researchers from going beyond Goal 3.

In response to this feedback, IES revised its FY 2019 RFAs for Education Research Grants (84.305A) and Special Education Research Grants (84.324A) to make clear its interest in building more and better evidence on the efficacy and effectiveness of interventions. Among the major changes are the following:

  • Starting in FY 2019, Goal 3 will continue to support initial efficacy evaluations of interventions that have not been rigorously tested before, in addition to follow-up and retrospective studies.
  • Goal 4 will now support all replication studies of interventions that show prior evidence of efficacy, including but not limited to effectiveness studies.
  • The maximum amount of funding that may be requested under Goal 4 is higher to support more in-depth work on implementation and analysis of factors that moderate or mediate program effects.

The table below summarizes the major changes. We strongly encourage potential applicants to carefully read the RFAs (Education Research, 84.305A and Special Education Research, 84.324A) for more details and guidance, and to contact the relevant program officers with questions (contact information is in the RFA).

Applications are due August 23, 2018 by 4:30:00 pm Washington DC time.

 

Goal 3
  • Name change: formerly "Efficacy and Replication"; in FY 2019, "Efficacy and Follow-Up."
  • Focus change: will continue to support initial efficacy evaluations of interventions in addition to follow-up and retrospective studies.
  • Requirements change: no new requirements.
  • Award amount change: no change.

Goal 4
  • Name change: formerly "Effectiveness"; in FY 2019, "Replication: Efficacy and Effectiveness."
  • Focus change: will now support all replications evaluating the impact of an intervention, including Efficacy Replication studies and Re-analysis studies.
  • Requirements change: now requires a description of planned analyses of implementation and of key moderators and/or mediators (these were previously recommended).
  • Award amount change: Efficacy Replication studies, maximum $3,600,000; Effectiveness studies, maximum $4,000,000; Re-analysis studies, maximum $700,000.


By Thomas Brock (NCER Commissioner) and Joan McLaughlin (NCSER Commissioner)

Building Evidence: What Comes After an Efficacy Study?

Over the years, the Institute of Education Sciences (IES) has funded over 300 studies across its research programs that evaluate the efficacy of specific programs, policies, or practices. This work has contributed significantly to our understanding of the interventions that improve outcomes for students under tightly controlled or ideal conditions. But is this information enough to inform policymakers’ and practitioners’ decisions about whether to adopt an intervention? If not, what should come after an efficacy study?

In October 2016, IES convened a group of experts for a Technical Working Group (TWG) meeting to discuss next steps in building the evidence base after an initial efficacy study, and the specific challenges that are associated with this work. TWGs are meant to encourage stakeholders to discuss the state of research on a topic and/or to identify gaps in research.  

Part of this discussion focused on replication studies and the critical role they play in the evidence-building process. Replication studies are essential for verifying the results of a previous efficacy study and for determining whether interventions are effective when certain aspects of the original study design are altered (for example, testing an intervention with a different population of students). IES has supported replication research since its inception, but there was general consensus that more replications are needed.

TWG participants discussed some of the barriers that may be discouraging researchers from doing this work. One major obstacle is the idea that replication research is somehow less valuable than novel research—a bias that could be limiting the number of replication studies that are funded and published. A related concern is that the field of education lacks a clear framework for conceptualizing and conducting replication studies in ways that advance evidence about beneficial programs, policies and practices (see another recent IES blog post on the topic).

IES provides support for studies that examine the effectiveness of interventions that have prior evidence of efficacy and that are implemented as part of routine, everyday practice in schools without special support from researchers. However, IES has funded a relatively small number of these studies (14 across both Research Centers). TWG participants discussed possible reasons for this and pointed out several challenges related to replicating interventions under routine conditions in authentic education settings. For instance, certain school-level decisions can pose challenges for conducting high-quality effectiveness studies, such as restricting the length of time that interventions or professional development can be provided, or choosing to offer the intervention to students in the comparison condition. These challenges can result in findings that reflect contextual factors more than the intervention itself. TWG participants also noted that there is not much demand for this level of evidence, as decision-makers in schools and districts may not recognize the distinction between evidence of effectiveness and evidence of efficacy as important.

In light of these challenges, TWG participants offered suggestions for what IES could do to further support the advancement of evidence beyond an efficacy study. Some of these recommendations were more technical and focused on changes or clarifications to IES requirements and guidance for specific types of research grants. Other suggestions included:

  • Prioritizing and increasing funding for replication research;
  • Making it clear which IES-funded evaluations are replication studies on the IES website;
  • Encouraging communication and partnerships between researchers and education leaders to increase the appreciation and demand for evidence of effectiveness for important programs, practices, and policies; and
  • Supporting researchers in conducting effectiveness studies to better understand what works for whom and under what conditions, by offering incentives to conduct this work and encouraging continuous improvement.

TWG participants also recommended ways IES could leverage its training programs to promote the knowledge, skills, and habits that researchers need to build an evidence base. For example, IES could emphasize the importance of training in designing and implementing studies to develop and test interventions; create opportunities for postdoctoral fellows and early career researchers to conduct replications; and develop consortiums of institutions to train doctoral students to conduct efficacy, replication, and effectiveness research in ways that will build the evidence base on education interventions that improve student outcomes.

To read a full summary of this TWG discussion, visit the Technical Working Group website or click here to go directly to the report (PDF).

Written by Katie Taylor, National Center for Special Education Research, and Emily Doolittle, National Center for Education Research

The Scoop on Replication Research in Special Education

Replication research may not grab the headlines, but reproducing findings from previous studies is critical for advancing scientific knowledge. Some have raised concerns about whether we conduct a sufficient number of replication studies. This concern has drawn increased attention from scholars in a variety of fields, including special education.


Several special education researchers explored this issue in a recent Special Series on Replication Research in Special Education in the journal Remedial and Special Education. The articles describe replication concepts and issues, systematically review the state of replication research in special education, and provide recommendations for the field. One finding is that there may be more replication studies than it seems, but authors don't call them replications.

Contributors to the special issue include Bryan Cook from the University of Hawaii, Michael Coyne from the University of Connecticut, and Bill Therrien from the University of Virginia, who served as guest editors, and Chris Lemons, from Peabody College of Vanderbilt University. They shared more about the special issue and their collective insights into replications in special education research.

(In photo array, top left going clockwise: Therrien, Lemons, Coyne, and Cook)

How did you become interested in replication work?

Replication is a core component of the scientific method. Despite this basic fact that we all learned in Research 101, it is pretty apparent that in practice, replication is often ignored. We noticed how much attention the lack of replication was starting to get in other fields and in the press and were particularly alarmed by recent work showing that replications often fail to reproduce original findings. This made us curious about the state and nature of replication in the field of special education.

What is the state of replication research in special education?

It depends on how you define replication and how you search for replication articles. When a narrow definition is used and you require the term "replication" to appear in the article, the rate of replication doesn't look too good. Using this method, Lemons et al. (2016) and Makel et al. (2016) reported that the rate of replication in special education is between 0.4% and 0.5%, meaning that of all the articles published in our field, less than 1% are replications. We suspected that, for a number of reasons (e.g., perceptions that replications are difficult to publish, are less prestigious than novel studies, and are hostile attempts to disprove a colleague's work), researchers might be conducting replication studies but not referring to them as such. And indeed, it's a different story when you use a broad definition and do not require the term replication to appear in the article. Cook et al. (2016) found that of 83 intervention studies published in six non-categorical special education journals in 2013-2014, 26 (31%) could be considered replications, though few authors described their studies that way. Therrien et al. (2016) selected eight intervention studies from 1999-2001 and determined whether subsequently published studies that cited the original investigations had replicated them. They found that six of the eight original studies had been replicated by a total of 39 different studies (though few of the replications identified themselves as such).
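As a quick arithmetic check on the figures above (the counts come from the cited reviews; the rounding is ours), the difference between the narrow and broad definitions is easy to see:

```python
# Replication counts reported in the special series reviews.
cook_replications, cook_total = 26, 83      # Cook et al. (2016), broad definition
therrien_replicated, therrien_total = 6, 8  # Therrien et al. (2016), original studies replicated

cook_rate = 100 * cook_replications / cook_total
therrien_rate = 100 * therrien_replicated / therrien_total

print(f"Cook et al.: {cook_rate:.0f}% of intervention studies were replications")
print(f"Therrien et al.: {therrien_rate:.0f}% of original studies were later replicated")
```

Under the narrow definition, by contrast, the reported rate is under 1% of all published articles, roughly a thirtyfold gap.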

What were some other key findings across the review articles?

Additional findings indicated that: (a) most replications conducted in special education are conceptual (i.e., some aspects are the same as the original study, but some are different) as opposed to direct (i.e., as similar to the original study as possible), (b) the findings of the majority of replications in special education agreed with the findings of the original studies, and (c) most replications in the field are conducted by one or more authors involved in the original studies. In three of the four reviews, we found it was more likely for a replication to produce the same outcome if there was author overlap between the original and replication studies. This may be due to the challenges of replicating a study with the somewhat limited information provided in a manuscript. It also emphasizes the importance of having more than one research team independently replicate study findings.  

What are your recommendations for the field around replicating special education interventions?

The article by Coyne et al. (2016) describes initial recommendations for how to conceptualize and carry out replication research in a way that contributes to the evidence about effective practices for students with disabilities and the conditions under which they are more or less effective:

  • When a study evaluates an approach that has previously been studied under different conditions, specify which aspects replicate previous research;
  • Conceptualize and report intervention research within a framework of systematic replications, or a continuum of conceptual replications ranging from those that are more closely aligned to the original study to those that are less aligned;
  • Design and conduct closely aligned replications that duplicate, as faithfully as possible, the features of previous studies;
  • Design and conduct less closely aligned replications that intentionally vary essential components of earlier studies (e.g., participants, setting, intervention features, outcome measures, and analyses); and
  • Interpret findings using a variety of methods, including statistical significance, direction of effects, and effect sizes. We also encourage meta-analytic aggregation of effects across studies.

One example of a high-quality replication study is by Doabler et al. The authors conducted a closely aligned replication study of a Tier 2 kindergarten math intervention. In the design of their IES-funded project, the authors planned a priori to conduct a replication study that would vary on several dimensions, including geographical location, participant characteristics, and instructional context. We believe this is a nice model of designing, conducting, and reporting a replication study.

Ultimately, we need to conduct more replication studies, we need to call them replications, we need to better describe how they are alike and different from the original study, and we need to strive for replication by researchers not involved in the original study. It is this type of work that may increase the impact research has on practice, because it strengthens our understanding of whether, when, and where an intervention works.

By Katie Taylor, Program Officer, National Center for Special Education Research