
IES Grant

Title: Site Selection When Participation is Voluntary: Improving the External Validity of Randomized Trials
Center: NCER Year: 2019
Principal Investigator: Olsen, Robert Awardee: Westat
Program: Statistical and Research Methodology in Education
Award Period: 3 Years (07/01/19–06/30/22) Award Amount: $899,034
Type: Methodological Innovation Award Number: R305D190020
Description:

Co-Principal Investigator: Bell, Stephen

Purpose: Randomized controlled trials (RCTs) in education are typically conducted in districts and schools that were not selected to represent broader populations of policy interest (e.g., all schools nationwide that could have implemented the intervention). As a result, findings from these studies may not generalize to those populations. In this project, the research team developed products to help education researchers decide whether to use random or balanced sampling when selecting sites for RCTs and to implement their preferred site selection method.

Project Activities: The project team carried out two sets of activities. First, the team compared the performance of different methods for selecting sites, and for replacing sites that refuse to participate, in controlling external validity bias in randomized trials of educational interventions. Second, the team developed and disseminated computer code and other resources that other researchers can use to implement different sampling and analysis methods for improving the generalizability of randomized trials and other impact studies.

Key Outcomes:

  • The research team published a peer-reviewed article detailing the external validity bias that results from different approaches to selecting sites (e.g., districts and schools) and to replacing sites that decline to participate. The study found that both random and balanced site selection yield impact estimates with less external validity bias and lower mean squared error than purposive site selection, even when a large share of sites decline to participate (Litwok, Nichols, Shivji, and Olsen, 2023).
  • Software (an R package and two SAS macros) to implement different approaches (random and balanced site selection) to selecting and replacing sites was developed and released (https://www.generalizability.org/).

Structured Abstract

Statistical/Methodological Product: The research team provided results from testing the performance of different methods for selecting more representative samples of districts and schools for impact studies, with a focus on (1) random site selection and (2) balanced site selection designed to minimize differences in characteristics between the sample and the population. They also developed and released software to support each type of site selection.
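
As an illustration of what balanced site selection might look like in practice, the minimal Python sketch below picks, from many candidate random samples, the one whose covariate means are closest to the population means. This is not the project's released R package or SAS macros; the toy data, function names, and distance criterion are assumptions made for illustration only.

    # Illustrative sketch only: a simple "balanced" selection rule that keeps,
    # out of many candidate random samples of sites, the one whose covariate
    # means are closest to the population means. The toy data and the
    # standardized-distance criterion are assumptions, not the project's code.
    import numpy as np

    rng = np.random.default_rng(0)

    def balanced_sample(pop_covariates, n_sites, n_candidates=1000):
        """Return indices of the candidate sample that best matches the population."""
        pop_means = pop_covariates.mean(axis=0)
        pop_sds = pop_covariates.std(axis=0)
        best_idx, best_dist = None, np.inf
        for _ in range(n_candidates):
            idx = rng.choice(len(pop_covariates), size=n_sites, replace=False)
            # Squared standardized distance between sample and population means
            diff = (pop_covariates[idx].mean(axis=0) - pop_means) / pop_sds
            dist = float(np.sum(diff ** 2))
            if dist < best_dist:
                best_idx, best_dist = idx, dist
        return best_idx

    # Toy population: 500 districts described by 3 covariates
    population = rng.normal(size=(500, 3))
    selected = balanced_sample(population, n_sites=30)
    print("Selected site indices:", sorted(selected.tolist())[:10], "...")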

Development/Refinement Process: The study involved simulation research using data from the Common Core of Data and the Head Start Impact Study. From these data, repeated samples were selected using various site selection and replacement methods. The external validity bias of each site selection method was calculated by subtracting the population average treatment effect from the mean sample average treatment effect across the simulated samples.
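
A minimal Python sketch of that simulation logic follows, using made-up site-level treatment effects and a stand-in purposive rule (these are assumptions, not the study's actual data or selection methods). It draws repeated samples, computes the sample average treatment effect each time, and reports bias and mean squared error relative to the population average treatment effect.

    # Illustrative sketch of the simulation logic described above. Site-level
    # treatment effects and the "purposive" rule are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pop, n_sample, n_reps = 500, 30, 2000

    # Hypothetical site effects that vary with a site covariate, so that
    # non-representative samples produce biased estimates
    covariate = rng.normal(size=n_pop)
    site_effects = 0.20 + 0.10 * covariate + rng.normal(scale=0.05, size=n_pop)
    population_ate = site_effects.mean()

    def random_selection():
        return rng.choice(n_pop, size=n_sample, replace=False)

    def purposive_selection():
        # Stand-in for convenience sampling: over-select high-covariate sites
        weights = np.exp(covariate)
        return rng.choice(n_pop, size=n_sample, replace=False, p=weights / weights.sum())

    for name, select in [("random", random_selection), ("purposive", purposive_selection)]:
        sample_ates = np.array([site_effects[select()].mean() for _ in range(n_reps)])
        bias = sample_ates.mean() - population_ate          # mean sample ATE minus population ATE
        mse = ((sample_ates - population_ate) ** 2).mean()  # squared bias plus sampling variance
        print(f"{name:9s} bias={bias:+.4f}  MSE={mse:.5f}")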

Related IES Projects: Testing Different Methods of Improving the External Validity of Impact Evaluations in Education (R305D100041)

Products and Publications

ERIC Citations: Find available citations in ERIC for this award here.

Project Website: https://www.generalizability.org/

Select Publications:

Litwok, D., Nichols, A., Shivji, A., & Olsen, R. B. (2023). Selecting districts and schools for impact studies in education: A simulation study of different strategies. Journal of Research on Educational Effectiveness, 16(3), 501–531.

