The Enhanced Reading Opportunities Study: Findings from the Second Year of Implementation

NCES 2009-4036
November 2008

The Relationship Between Impacts and Second-Year Implementation

This report also includes an exploratory analysis of the relationship between school-level impacts and various aspects of implementation in the second year of the study. Specifically, the analysis examines whether impacts differ between subgroups of schools defined by three implementation characteristics: teachers’ experience with the ERO program (schools whose ERO teacher taught two full years of the program versus schools whose teacher did not); overall implementation fidelity observed during the spring site visit (very well-aligned, well-aligned, moderately aligned, or poorly aligned implementation); and the timing of program startup (schools that began operating their ERO program within two weeks of the start of the school year versus those whose startup was delayed by two weeks or more). The analysis also examines whether impacts differ between schools whose implementation was particularly exemplary (schools that started their programs within two weeks and whose implementation was very well aligned to the program models) and schools that did not meet both of these criteria.15 Based on these exploratory analyses, one cannot conclude that the programs were more effective in schools with more experienced ERO teachers, with implementation better aligned to the program models, or with earlier program startup. Because the difference in impacts between the groups of schools within each of the three implementation categories (teacher experience teaching the ERO classes, alignment of the implemented programs with the program models, and timeliness of program startup) is not statistically significant, one cannot infer with certainty that these particular implementation characteristics are related to program impacts.
Impacts are positive and statistically significant for the groups of schools with the most promising implementation characteristics: the 25 schools whose ERO teacher returned in the second year after teaching the entire first year of the program; the 13 schools where the ERO programs were rated as very well aligned to the program models; and the 23 schools where the ERO programs began within the first two weeks of school.16 Impacts for the corresponding groups of schools with less promising implementation characteristics are smaller and not statistically significant: the 9 schools whose teachers taught ERO for less than two full years, the 21 schools with weaker implementation fidelity, and the 11 schools where program startup took longer than two weeks. The difference in impacts between the groups of schools within each of the three categories of implementation is not statistically significant.
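To illustrate the kind of comparison described above, the sketch below applies a simple z-test to the difference between two subgroup impact estimates expressed in effect-size units. The numerical values and standard errors are hypothetical, and the sketch assumes independent, normally distributed estimates; the report's actual estimation and testing procedures may differ.

```python
from math import sqrt
from statistics import NormalDist  # Python 3.8+

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

def subgroup_difference_test(est1, se1, est2, se2):
    """z-test for the difference between two independent subgroup
    impact estimates (estimates and standard errors in effect-size
    units). Returns the z statistic and its two-sided p-value."""
    z = (est1 - est2) / sqrt(se1**2 + se2**2)
    return z, two_sided_p(z)

# Hypothetical numbers for illustration only: the report does not
# publish standard errors or point estimates for every subgroup.
z, p = subgroup_difference_test(0.13, 0.065, 0.04, 0.080)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that even when one subgroup's impact is statistically significant and the other's is not, the difference between the two estimates can still fail to reach significance, which is the pattern reported for all three implementation categories here.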



15 It is important to note that these analyses are exploratory and are not able to establish causal links between these aspects of implementation and variation in program impacts across sites, because other school characteristics and implementation factors may confound the association between school-level impacts and the implementation factors included in the exploratory analysis.
16 The impacts on reading comprehension test scores for each of these three groups of schools are as follows: in the 25 schools whose ERO teacher returned after teaching all of the first year of the program, the effect size is 0.09 standard deviation (p-value = 0.050); in the 13 schools where implementation was rated as very well aligned to the program models, the effect size is 0.13 standard deviation (p-value = 0.047); and in the 23 schools where the programs began within the first two weeks of school, the effect size is 0.10 standard deviation (p-value = 0.048).