Title: Robustness of Comparative Interrupted Time Series Designs in Practice
Principal Investigator: Hallberg, Kelly
Awardee: American Institutes for Research (AIR)
Program: Statistical and Research Methodology in Education–Early Career
Award Period: 1½ years (9/1/14–2/29/16)
Award Amount: $196,968
Type: Methodological Innovation
Award Number: R305D140030
Co-Principal Investigator: Jared Eno
Much research in education is designed to address causal questions. By randomly assigning schools, classrooms, or students to treatment conditions, randomized controlled trials (RCTs) ensure that the treatment and control groups are equivalent in expectation, but RCTs are not always feasible to implement. When RCTs are not possible, education researchers rely on quasi-experimental designs to address causal questions, but doing so is appropriate only when these designs produce trustworthy estimates of causal effects. The purpose of this project is to examine the conditions under which comparative interrupted time series (CITS) designs can yield trustworthy estimates of causal program effects. The research team will also provide concrete guidance to education researchers on the best procedures for selecting comparison groups and modeling approaches when using this design.
This study will use multiple methods to examine the robustness of CITS designs in practice. First, the researchers will assess the validity of CITS estimates under different conditions by conducting five within-study comparisons (WSCs) of school-level cluster RCTs; each WSC will compare the experimental and CITS estimates of an intervention's impact to measure the bias of the CITS estimate. Second, statewide data sets will be used to study aspects of school performance over time that are important for the power and validity of CITS designs, such as the intraclass correlation and functional form. Finally, the researchers will use simulation studies to evaluate the performance of decision rules for selecting CITS models and to explore the impact of internal validity threats on the amount of bias in CITS estimates.
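To make the CITS logic concrete, the sketch below simulates a small school-level panel and fits one common CITS specification by ordinary least squares: outcome regressed on time, a treatment-group indicator, their interaction, a post-interruption indicator, and the post-by-group interaction, whose coefficient is the impact estimate. This is an illustrative toy, not the project's actual model; all numbers (40 schools, 8 periods, a true effect of 5.0) and the simple shared-pre-trend setup are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated panel: 40 schools (half treated), 8 periods,
# interruption after period 4, true treatment effect of 5.0 (all assumed).
n_schools, n_periods, cutoff, true_effect = 40, 8, 4, 5.0

rows = []
for s in range(n_schools):
    g = 1 if s < n_schools // 2 else 0      # treatment-group indicator
    level = 50.0 + 2.0 * g                  # groups differ in baseline level...
    for t in range(n_periods):
        post = 1 if t >= cutoff else 0
        y = level + 1.5 * t                 # ...but share a linear pre-trend
        y += true_effect * post * g         # effect only for treated, post-cutoff
        y += rng.normal(0.0, 0.5)           # school-by-period noise
        rows.append([1.0, t, g, g * t, post, post * g, y])

data = np.array(rows)
X, y = data[:, :6], data[:, 6]

# OLS fit: the coefficient on the post*group column (beta[5]) is the
# CITS impact estimate, net of the comparison group's pre-post change.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cits_estimate = beta[5]
```

Because the simulated groups share a pre-trend, `cits_estimate` should land near the true effect; a WSC in the project plays the analogous role with real data, using the experimental estimate rather than a known truth as the benchmark.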