The basic strategy for the impact analysis was to estimate the difference in outcomes between the treatment and control groups, adjusting for the blocking used in random assignment and for teacher- and student-level covariates. Because random assignment was conducted separately within each of the six school districts participating in the second year of the study, the study comprised six separate random assignment experiments. To obtain the impact estimates, we pooled the data for all six study districts in a single analysis, treating the districts as fixed effects.
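The pooled fixed-effects estimation described above can be sketched as a single regression of the outcome on a treatment indicator plus one dummy variable per district, so that the district dummies absorb between-district differences. The sketch below uses simulated data with hypothetical sample sizes and a hypothetical true effect of 0.3; it illustrates the estimator, not the study's actual model, which also included blocking and covariate adjustments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: outcomes in 6 districts, with random
# assignment to treatment within each district.
n_per_district, n_districts = 500, 6
district = np.repeat(np.arange(n_districts), n_per_district)
treat = rng.integers(0, 2, size=district.size)
true_effect = 0.3  # hypothetical, for illustration only
y = 0.5 * district + true_effect * treat + rng.normal(size=district.size)

# Design matrix: treatment indicator plus one dummy per district
# (the district dummies are the fixed effects).
X = np.column_stack(
    [treat] + [(district == d).astype(float) for d in range(n_districts)]
)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
impact_estimate = beta[0]  # coefficient on the treatment indicator
```

Because assignment is random within each district, the treatment coefficient from this pooled regression recovers the within-district treatment-control contrast while the fixed effects soak up district-level differences in outcome levels.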
The impact estimates provide an "intent to treat" analysis of the impact of the program; that is, the estimates reflect the program impact on all teachers and students in the targeted classrooms in the study schools, including teachers and students who were not present for the full duration of the study and teachers who, though present, did not take full advantage of the opportunity to participate in the study-provided PD. Separate program impact estimates were obtained for each district and then averaged across the six districts, weighting each district's estimate in proportion to the number of treatment schools the district contributed to the study sample. Findings in this report therefore represent the impact on the performance of teachers and students in the average treatment school in the six two-year study districts. The results do not necessarily reflect what the treatment effect would be in the wider population of districts from which the study districts were selected.
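The weighting scheme described above amounts to a weighted average of the district-level estimates, with each weight equal to the district's share of treatment schools in the sample. The numbers below are hypothetical placeholders, not the study's actual district estimates or school counts:

```python
# Hypothetical district-level impact estimates and treatment-school counts.
district_impacts = [0.10, 0.25, 0.05, 0.30, 0.15, 0.20]
n_treatment_schools = [4, 6, 3, 5, 4, 8]

# Weight each district by its share of treatment schools in the sample.
total_schools = sum(n_treatment_schools)
weights = [n / total_schools for n in n_treatment_schools]

pooled_impact = sum(w * d for w, d in zip(weights, district_impacts))
```

With school-count weights, larger districts pull the pooled estimate toward their own district-specific effect, which is what makes the result interpretable as the impact in the average treatment school rather than the average district.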
A common way to represent statistical precision is as a minimum detectable effect size (MDES), which is the smallest true effect that an estimator has a good chance of detecting (Bloom 1995). The second year of the study was powered to detect an effect size of 0.59 for teacher knowledge and 0.20 for student achievement.
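Under the large-sample normal approximation in Bloom (1995), the MDES is the standard error of the impact estimate (in effect-size units) multiplied by a factor of roughly 2.8 for a two-tailed test at the 5 percent significance level with 80 percent power. A minimal sketch of that calculation, with an illustrative standard error chosen so the result lands near the study's 0.20 student-achievement MDES:

```python
from statistics import NormalDist

def mdes(se: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Minimum detectable effect size for a two-tailed test,
    large-sample normal approximation (Bloom 1995):
    multiplier = z_{1 - alpha/2} + z_{power}, roughly 2.8
    at alpha = 0.05 and power = 0.80."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    return multiplier * se

# An illustrative standard error of about 0.071 effect-size units
# yields an MDES of about 0.20.
example = mdes(0.0714)
```

The 0.0714 standard error here is purely illustrative; the study's actual precision depended on its sample sizes, blocking, and covariate adjustment.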