
IES Grant

Title: Statistical Properties of Regression Discontinuity Analysis and Comparative Interrupted Time Series Analysis for Estimating Impacts
Center: NCER
Year: 2009
Principal Investigator: Bloom, Howard
Awardee: MDRC
Program: Statistical and Research Methodology in Education
Award Period: 2 years
Award Amount: $446,205
Type: Methodological Innovation
Award Number: R305D090008
Description:

Purpose: This project explored the circumstances under which two promising quasi-experimental designs can produce estimates of program impacts that are valid, precise and generalizable.

  • Part I of the project focused on the regression discontinuity design (RDD), which estimates program impacts by taking advantage of situations in which applicants are assigned to an education program based on whether they fall above or below some exogenous cutoff on a pre-determined assignment variable. Under certain conditions, a comparison of average outcomes for applicants just above and below the cutoff can be used to estimate the program's impact. The primary purpose of Part I was to explore how far the sample can be expanded beyond the cutoff and still obtain internally valid results and the implications that such expansion has for statistical power and generalizability.
  • Part II of the project focused on the comparative interrupted time series (CITS) design, which estimates the effect of a program by comparing the treatment group's deviation from its historical trend on the key outcome to a comparison group's corresponding deviation from its trend. The primary purpose of Part II was to examine whether the CITS research design can be used to obtain an unbiased estimate of program impacts and, if so, under what conditions this can be achieved. (An illustrative sketch of how both the RDD and CITS impact estimates might be formed appears after this list.)
  • Part III of the project examined the properties of a "combined approach" that integrated the elements of both the RDD and CITS research designs and that should in principle yield the benefits of both methods.
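For illustration only, here is a minimal sketch of how the two impact estimates might be formed in Python with pandas and statsmodels. The column names (rating, treated, outcome, year) and the model specifications are assumptions made for the sketch, not the project's actual variables or estimating models.

    # Illustrative sketch only; column names and specifications are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    def rdd_estimate(df, cutoff, bandwidth):
        """Local-linear RDD: compare schools just above and below the cutoff.

        Expects columns 'rating' (assignment variable), 'treated' (1 on the
        funded side of the cutoff), and 'outcome' (e.g., grade 3 reading score).
        """
        centered = df.assign(r=df["rating"] - cutoff)
        window = centered[centered["r"].abs() <= bandwidth]
        # Separate slopes on each side of the cutoff; 'treated' is the impact.
        fit = smf.ols("outcome ~ treated + r + treated:r", data=window).fit()
        return fit.params["treated"]

    def cits_estimate(df, first_treated_year):
        """CITS: each group's post-period deviation from its own baseline
        trend, then the difference between the two deviations.

        Expects columns 'outcome', 'year', and 'treated' (treatment vs.
        comparison group), observed for the baseline and follow-up years.
        """
        d = df.assign(post=(df["year"] >= first_treated_year).astype(int),
                      t=df["year"] - first_treated_year)
        # Group-specific baseline trends; 'treated:post' is the CITS impact.
        fit = smf.ols("outcome ~ treated * t + post + treated:post", data=d).fit()
        return fit.params["treated:post"]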

Project Activities: The properties of these designs will be explored in the context of estimating the impact of the federal Reading First program. The analyses will be based on a time-series cross-sectional dataset of one large state's 766 elementary schools, for the 1998–99 to 2005–06 school years. This school-level dataset will include third grade reading achievement scores on state tests (the target outcome of Reading First), the rating scores of the 199 schools that applied for Reading First funding in the state's first year of allocation decisions, and information on school characteristics and their local context. School year 2004–2005 was the first year in which Reading First funds were used by schools. Therefore, the dataset will include 6 years of baseline pre-intervention data and 2 years of follow-up post-intervention data.
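As a purely illustrative sketch (the file name and column names below are assumptions, not the project's actual data), the school-by-year panel described above might be organized and split into its six baseline years and two follow-up years as follows:

    # Illustrative sketch only; file and column names are hypothetical.
    import pandas as pd

    FIRST_RF_YEAR = 2005   # spring of school year 2004-05, first year of Reading First funds

    # One row per school per year, e.g.: school_id, year (spring of the school
    # year), grade3_reading, rf_rating (for the 199 applicants), rf_funded (0/1).
    panel = pd.read_csv("school_year_panel.csv")

    panel["rel_year"] = panel["year"] - FIRST_RF_YEAR   # ..., -2, -1 baseline; 0, 1 follow-up
    panel["post"] = (panel["rel_year"] >= 0).astype(int)

    baseline = panel[panel["post"] == 0]   # 1998-99 through 2003-04 (6 years)
    followup = panel[panel["post"] == 1]   # 2004-05 and 2005-06 (2 years)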

Part I of the study will use the 199 schools in the state that received Reading First ratings to study the statistical trade-offs of expanding the sample in an RDD study. The research will start with a subset of schools whose ratings cluster around the cutoff point and then gradually expand the sample to include more schools on both sides of the cutoff. This gradual expansion will allow the researchers to observe how the functional form of the relationship between the rating variable and the outcome, the precision of the estimates, and the magnitude of the estimated impacts of Reading First change as the sample widens (a schematic sketch of this bandwidth expansion appears below). Part II of the study will use the estimated impact from the RDD analysis in Part I as an unbiased benchmark against which to evaluate the validity and precision of the CITS results. The impact of Reading First will be estimated using various approaches for identifying matched comparison groups (different matching covariates, different lengths of baseline period, and different selection methods). Part III of the project will examine what can be gained by conducting an RDD analysis of deviations from expected trends during the intervention period, such as whether the sample can be expanded further from the cutoff point when deviations from trends in reading scores are used rather than observed levels.
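A minimal sketch of the Part I bandwidth expansion, reusing the hypothetical rdd_estimate function and panel from the sketches above; the cutoff value and the grid of bandwidths are placeholders, not figures from the study:

    # Illustrative sketch only; the cutoff and bandwidths are placeholders.
    CUTOFF = 0.0
    bandwidths = [5, 10, 20, 40, 80]   # progressively wider windows around the cutoff

    applicants = followup[followup["rf_rating"].notna()]   # rated schools, follow-up years
    rdd_data = applicants.rename(columns={"rf_rating": "rating",
                                          "rf_funded": "treated",
                                          "grade3_reading": "outcome"})
    for bw in bandwidths:
        impact = rdd_estimate(rdd_data, cutoff=CUTOFF, bandwidth=bw)
        print(f"bandwidth={bw:>3}: estimated Reading First impact = {impact:.3f}")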

Products and Publications

Journal article, monograph, or newsletter

Jacob, R., Somers, M. A., Zhu, P., and Bloom, H. (2016). The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions. Evaluation Review, 40(3), 167–198.

Nongovernment report, issue brief, or practice guide

Jacob, R. T., Zhu, P., Somers, M. A., and Bloom, H. S. (2012). A Practical Guide to Regression Discontinuity. New York: MDRC.

Somers, M. A., Zhu, P., Jacob, R., and Bloom, H. (2013). The Validity and Precision of the Comparative Interrupted Time Series Design and the Difference-in-Difference Design in Educational Evaluation. MDRC.

