Grant Closed

Robustness of Comparative Interrupted Time Series Designs in Practice

NCER
Program: Statistical and Research Methodology in Education
Program topic(s): Early Career
Award amount: $196,968
Principal investigator: Kelly Hallberg
Awardee: American Institutes for Research (AIR)
Year: 2014
Project type: Methodological Innovation
Award number: R305D140030

Purpose

Much research in education is designed to address causal questions. By randomly assigning schools, classrooms, or students to treatment conditions, randomized controlled trials (RCTs) ensure that the treatment and control groups are equivalent in expectation, but RCTs are not always feasible to implement. When an RCT is not possible, education researchers rely on quasi-experimental designs to address causal questions, which is appropriate only when those designs produce trustworthy estimates of causal effects. The project team examined the conditions under which comparative interrupted time series (CITS) designs can yield trustworthy estimates of causal program effects, and provided concrete guidance to education researchers on how best to select comparison groups and modeling approaches when using this design.

Project Activities

This study used multiple methods to examine the robustness of CITS designs in practice. First, the researchers assessed the validity of CITS estimates under different conditions by conducting five within-study comparisons (WSCs) of school-level cluster RCTs; each WSC compared the experimental estimate of an intervention's impact with the corresponding CITS estimate to measure the bias of the CITS estimate. Second, the researchers used statewide data sets to study aspects of school performance over time that matter for the power and validity of CITS designs, such as the intraclass correlation and the functional form of performance trends. Third, they used simulation studies to evaluate the performance of decision rules for selecting CITS models and to explore how threats to internal validity affect the amount of bias in CITS estimates. A minimal sketch of the kind of CITS regression these designs rely on appears below.
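
The sketch below illustrates the kind of regression a CITS analysis of aggregate school-by-year data might use. It is an illustration only, not the project's actual models: the input file, column names (year, score, treated, school_id), and cutoff year are hypothetical assumptions, and the specification is a common minimal CITS form fit with Python's statsmodels.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical aggregate data: one row per school per year, with an outcome
# ("score") and an indicator for schools that adopted the program ("treated").
df = pd.read_csv("school_scores.csv")
cutoff_year = 2010                                    # hypothetical first post-intervention year
df["time"] = df["year"] - cutoff_year                 # years relative to the intervention
df["post"] = (df["year"] >= cutoff_year).astype(int)  # 1 in post-intervention years

# Treated and comparison schools each get a baseline level, a baseline trend,
# a post-intervention level shift, and a post-intervention trend shift.
# The treated:post coefficient is the CITS estimate of the immediate (level)
# effect; treated:post:time captures the effect on the post-intervention trend.
model = smf.ols("score ~ treated * (time + post + post:time)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(result.summary())

Cluster-robust standard errors grouped by school account for the repeated yearly observations within each school; the within-study comparisons described above assess how far such CITS estimates fall from the benchmark experimental estimates.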

People and institutions involved

IES program contact(s)

Allen Ruby

Associate Commissioner for Policy and Systems
NCER

Products and publications

Hallberg, K., Williams, R., & Swanlund, A. (2020). Improving the use of aggregate longitudinal data on school performance to assess program effectiveness: Evidence from three within study comparisons. Journal of Research on Educational Effectiveness, 13(3), 518-545.

Hallberg, K., Williams, R., Swanlund, A., & Eno, J. (2018). Short comparative interrupted time series using aggregate school-level data in education research. Educational Researcher, 47(5), 295-306.

Supplemental information

Co-Principal Investigator: Jared Eno

Questions about this project?

If you have additional questions about this project or would like to provide feedback, please contact the program officer.

Tags

Mathematics; Data and Assessments
