IES Grant

Title: Robustness of Comparative Interrupted Time Series Designs in Practice
Center: NCER
Year: 2014
Principal Investigator: Hallberg, Kelly
Awardee: American Institutes for Research (AIR)
Program: Statistical and Research Methodology in Education–Early Career
Award Period: 1½ years (9/1/14–2/29/16)
Award Amount: $196,968
Type: Methodological Innovation
Award Number: R305D140030
Description:

Co-Principal Investigator: Jared Eno

Purpose: Much research in education is designed to address causal questions. By randomly assigning schools, classrooms, or students to treatment conditions, randomized controlled trials (RCTs) ensure that the treatment and control groups are equivalent in expectation, but RCTs are not always feasible to implement. When RCTs are not possible, education researchers rely on quasi-experimental research designs to address causal questions, but doing so is only appropriate when these designs produce trustworthy estimates of causal effects. The project team examined the conditions under which comparative interrupted time series (CITS) study designs can yield trustworthy estimates of causal program effects. In addition, the team provided concrete guidance to education researchers using this design on how best to select comparison groups and modeling approaches.

Project Activities: This study used multiple methods to examine the robustness of CITS designs in practice. First, the researchers examined the validity of CITS estimates under different conditions by conducting five within-study comparisons (WSCs) of school-level cluster RCTs; the WSCs compared experimental and CITS estimates of interventions’ impacts to assess the bias of the CITS estimates. Second, statewide data sets were used to study aspects of school performance over time that are important for the power and validity of CITS designs, such as the intraclass correlation and functional form. Finally, the researchers used simulation studies to evaluate the performance of decision rules for selecting CITS models and to explore the impact of internal validity threats on the amount of bias in CITS estimates.
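
To make the design concrete, the sketch below (not part of the grant materials) fits a basic CITS specification to simulated aggregate school-level data using Python and statsmodels: outcomes follow a baseline trend, and the intervention effect is read off as the treated schools’ post-intervention change in level and slope relative to the comparison schools. The data-generating process, variable names, and model are illustrative assumptions, not the project’s actual data or models.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Simulated aggregate panel: 40 schools observed for 8 years (year 0 = first
    # post-intervention year); half of the schools receive the intervention.
    rows = []
    for school in range(40):
        treated = int(school < 20)
        school_effect = rng.normal(0, 0.3)  # between-school variation (drives the intraclass correlation)
        for year in range(-4, 4):
            post = int(year >= 0)
            # Assumed data-generating process: common linear trend, school-specific
            # intercepts, and a +0.25 level shift for treated schools after the interruption.
            score = 0.05 * year + school_effect + 0.25 * treated * post + rng.normal(0, 0.2)
            rows.append({"school": school, "year": year, "treated": treated,
                         "post": post, "score": score})
    df = pd.DataFrame(rows)

    # CITS regression: 'treated:post' estimates the post-intervention change in level and
    # 'year:treated:post' the change in slope for treated schools, net of the comparison
    # schools' own deviation from their baseline trend.
    fit = smf.ols("score ~ year * treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["school"]}
    )
    print(fit.params[["treated:post", "year:treated:post"]])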

Products and Publications

Hallberg, K., Williams, R., & Swanlund, A. (2020). Improving the use of aggregate longitudinal data on school performance to assess program effectiveness: Evidence from three within study comparisons. Journal of Research on Educational Effectiveness, 13(3), 518–545.

Hallberg, K., Williams, R., Swanlund, A., & Eno, J. (2018). Short comparative interrupted time series using aggregate school-level data in education research. Educational Researcher, 47(5), 295–306.
