The key implementation events in the evaluation of each curriculum included randomization of classrooms or programs, consent gathering, teacher training in the use of a treatment curriculum, implementation of the curriculum in the classroom, training of the assessors, and collection of the baseline student and classroom measures and of the post-intervention measures in preschool and kindergarten. Because the research teams implemented the curricula independently and the schools followed different calendars, the dates, and sometimes the order, of these events differed across teams and across sites within teams.
Randomization for the seven teams working with RTI occurred in the pilot year (starting in the fall of 2002) and mostly carried over into the 2003-04 evaluation year. For the five teams working with MPR, there was no pilot year, and randomization occurred between July and September 2003.
The consent process followed randomization, except for two teams, for which the two occurred concurrently. Implementation of the curricula in the classroom began between August and October 2003. The RTI and MPR data collection teams attempted to collect baseline data close to the beginning of school to avoid exposing students to the treatment curricula before pre-testing. Twelve teams began implementation before baseline data collection, and two teams began implementation concurrently with it. The lag between the start of implementation and the collection of baseline data ranged from 8 to 49 days (appendix A discusses additional analyses to adjust for possible early treatment effects in these cases). Baseline data collection followed the consent process for the teams working with MPR and ran concurrently with it for the teams working with RTI. Baseline data collection took 6 to 8 weeks between September and November 2003. Assessors were trained the week of August 4, 2003, for the teams working with RTI and the week of September 8, 2003, for the teams working with MPR.
The amount and timing of teacher training varied by team. The teams working with RTI provided most of the training during the 2002 pilot year and gave refresher training during the 2003 evaluation year. The teams working with MPR provided initial training at the beginning of the evaluation year and follow-up training throughout the year. The students’ exposure to the treatment curriculum and their teachers’ training in its use were confined to preschool for all teams except the Success for All (SFA) team, for which some children entered SFA kindergarten classrooms where the SFA Kinder Corners curriculum was in use.
Pre-kindergarten post-test data were collected in the spring, from April to June 2004, depending on school calendars. Student assessments, teacher interviews, teacher reports on behavior, and classroom observations were completed over a 6- to 8-week period; parent interviews were completed over a 12-week period. Kindergarten post-test data (student assessments, teacher reports, teacher surveys, and parent interviews, but no classroom observations) were collected between March and July 2005.
The research teams collected data on the fidelity of implementation for the treatment and control curricula using both a team-specific measure and a global implementation rating that can be used for between-curricula comparisons. The global ratings use a four-point scale representing High, Medium, Low, or No Implementation. The fidelity of implementation for both the treatment and control curricula was rated as Medium.
The research teams monitored treatment and control classrooms to ensure that treatment group teachers were not sharing curriculum information or materials with teachers in the control group. At research sites with classroom-level random assignment to the treatment and control groups (treatment and control classrooms in the same school or center), the teams’ classroom observations indicated that there was little or no evidence of contamination. There was minimal risk of contamination at sites where pre-kindergarten programs (child care, Head Start centers, or all pre-kindergarten classrooms in an elementary school) were randomly assigned to the treatment or control condition.
The baseline data were collected in fall 2003 from the original sample, with an average response rate of 98 percent for the child assessments, 97 percent for the teacher reports, and 84 percent for the parent interviews. For the first follow-up data collection in spring 2004, attrition reduced coverage to 93 percent of students completing the child assessments, 90 percent having a teacher report, and 79 percent having a parent interview. Further attrition led to an additional decline in the second follow-up data collection in spring 2005, with 85 percent of the original sample completing the child assessments, 72 percent having a teacher report, and 75 percent having a parent interview. Overall, 15 percent of all the students sampled (426 students) were not included in the analyses: 2 percent were non-responders at baseline, and 13 percent were lost through later attrition. Across the individual research teams, the percentage of sampled students excluded from the analysis ranged from 3 to 34 percent. There was no evidence of differential sample attrition between the treatment and control groups at any research site.
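The exclusion figures above imply an approximate original sample size. The following back-of-the-envelope check is a sketch based only on the numbers reported here (the percentages are rounded, so the totals it recovers are approximate, not figures from the study):

```python
# Back out an approximate original sample size from the reported
# exclusion figures: 2 percent baseline non-response plus 13 percent
# later attrition = 15 percent excluded, corresponding to 426 students.
baseline_nonresponse = 0.02
later_attrition = 0.13
excluded_students = 426

excluded_share = baseline_nonresponse + later_attrition   # 0.15
approx_original_sample = excluded_students / excluded_share
approx_analyzed = approx_original_sample - excluded_students

print(round(approx_original_sample))  # roughly 2,840 students sampled
print(round(approx_analyzed))         # roughly 2,414 included in the analyses
```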