Project Activities
The properties of these designs will be explored in the context of estimating the impact of the federal Reading First program. The analyses will be based on a time-series cross-sectional dataset covering 766 elementary schools in one large state for the 1998–99 through 2005–06 school years. This school-level dataset will include third-grade reading achievement scores on state tests (the target outcome of Reading First), the rating scores of the 199 schools that applied for Reading First funding in the state's first year of allocation decisions, and information on school characteristics and local context. School year 2004–05 was the first in which schools used Reading First funds, so the dataset will include 6 years of baseline (pre-intervention) data and 2 years of follow-up (post-intervention) data.
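As a concrete illustration of this panel structure, the following is a minimal sketch in Python; the file name and column names (school_panel.csv, school_id, year, grade3_read) are hypothetical placeholders, not the actual variables in the state dataset.

```python
# Minimal sketch of the school-by-year panel described above. All file and
# column names are hypothetical placeholders.
import pandas as pd

# One row per school per year; 'year' codes a school year by its spring
# calendar year (e.g., 2005 stands for 2004-05).
panel = pd.read_csv("school_panel.csv")

# 1998-99 through 2003-04 form the 6 baseline years; 2004-05 and 2005-06
# form the 2 follow-up years.
panel["post"] = (panel["year"] >= 2005).astype(int)

# Illustrative check of the 6-year baseline / 2-year follow-up structure.
assert panel.loc[panel["post"] == 0, "year"].nunique() == 6
assert panel.loc[panel["post"] == 1, "year"].nunique() == 2
```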
Part I of the study will use the 199 schools in the state that received Reading First ratings to study the statistical trade-offs of expanding the sample in a regression discontinuity design (RDD) study. The research will start with a subset of schools whose ratings cluster around the cutoff point and then gradually expand the sample to include more schools on both sides of the cutoff. This gradual expansion will allow the observation of changes in the functional form of the relationship between the rating variable and the outcome, in the precision of the estimates, and in the magnitude of the estimated impacts of Reading First (a sketch of this bandwidth expansion appears below).

Part II of the study will use the estimated impact from the RDD analysis in Part I as an unbiased benchmark against which to evaluate the validity and precision of results from the comparative interrupted time series (CITS) design. The impact of Reading First will be estimated using various approaches to identifying matched comparison groups: different matching covariates, different lengths of baseline period, and different selection methods (a sketch of a basic CITS specification also appears below).

Part III of the project will examine what can be gained by conducting an RDD analysis of deviations from expected trend during the intervention, such as whether the sample can be expanded further from the cutoff point when deviations from trends in reading scores, rather than observed levels, are used as the outcome.
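The bandwidth expansion in Part I can be illustrated with a short sketch. This is a minimal illustration under assumed names (an rf_applicants.csv file, a rating variable, a grade3_read outcome, a known cutoff), with a simple local linear specification standing in for the richer functional forms the project will compare; it is not the project's actual estimation code.

```python
# Sketch: local linear RDD estimated over progressively wider bandwidths
# around the Reading First rating cutoff. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rf_applicants.csv")        # hypothetical: the 199 rated schools
CUTOFF = 0.0                                  # assumed: rating recentered at the cutoff
df["r"] = df["rating"] - CUTOFF
df["treated"] = (df["r"] >= 0).astype(int)    # assumed: funding goes to ratings at or above the cutoff

for bw in [5, 10, 20, 40]:                    # expanding symmetric bandwidths (illustrative units)
    window = df[df["r"].abs() <= bw]
    # Local linear model with separate slopes on each side of the cutoff;
    # the coefficient on 'treated' is the estimated impact at the cutoff.
    fit = smf.ols("grade3_read ~ treated + r + treated:r", data=window).fit()
    est, se = fit.params["treated"], fit.bse["treated"]
    print(f"bandwidth={bw:>3}: impact={est:.2f} (SE={se:.2f}), n={len(window)}")
```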
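Similarly, the CITS analysis in Part II can be sketched as a regression on the pooled panel of Reading First and matched comparison schools. The file and variable names (matched_panel.csv, rf_school, post, school_id) are again assumed for illustration, and this basic specification stands in for the matching and baseline-length variants the study will actually compare.

```python
# Sketch: a basic comparative interrupted time series (CITS) specification.
# 'rf_school' flags Reading First schools, 'post' flags the 2004-05 and
# 2005-06 follow-up years, and 'trend' counts school years from the start
# of the baseline period. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("matched_panel.csv")  # hypothetical: program + matched comparison schools
panel["trend"] = panel["year"] - panel["year"].min()

# Each group gets its own baseline level and trend; the impact estimate is the
# program group's post-intervention deviation from its projected baseline
# trend, net of the comparison group's deviation (coefficient on rf_school:post).
cits = smf.ols(
    "grade3_read ~ rf_school * (trend + post)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["school_id"]})

print(cits.params["rf_school:post"], cits.bse["rf_school:post"])
```

For the Part III variant, the outcome grade3_read in the RDD sketch above would be replaced by each school's post-intervention deviation from its projected baseline trend.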
Products and publications
Journal article, monograph, or newsletter
Jacob, R., Somers, M. A., Zhu, P., and Bloom, H. (2016). The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions. Evaluation Review, 40(3), 167–198.
Nongovernment report, issue brief, or practice guide
Jacob, R. T., Zhu, P., Somers, M. A., and Bloom, H. S. (2012). A Practical Guide to Regression Discontinuity. New York: MDRC.
Somers, M. A., Zhu, P., Jacob, R., and Bloom, H. (2013). The Validity and Precision of the Comparative Interrupted Time Series Design and the Difference-in-Difference Design in Educational Evaluation. New York: MDRC.