
An Evaluation of Number Rockets: A Tier 2 Intervention for Grade 1 Students At Risk for Difficulties in Mathematics

Study design

For this effectiveness study, one district was selected from each of four Southwest Region states (Arkansas, Louisiana, New Mexico, and Texas) to evaluate the Fuchs et al. (2005) intervention in multiple populations. The study was conducted in 76 schools (38 intervention, 38 control) in four urban districts during the 2008/09 academic year. All grade 1 students receiving core math instruction in English in the general classroom were eligible. Parental consent was obtained for approximately 75 percent of eligible students, who were then enrolled in the study. Of the nearly 3,000 students screened, 994 were identified as at risk (615 intervention, 379 control).7

Before random assignment, schools in districts were matched—based on percentage of students eligible for free or reduced-price lunch (a proxy for family income) and overall school achievement (as measured by state assessments averaged across the last three years)—to ensure that schools with similar student achievement and family incomes were compared. One school in each pair was then randomly assigned to the intervention condition and the other to the control condition.
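The matched-pair random assignment described above can be sketched in code. The school records below and the greedy sort-based pairing are illustrative assumptions, not the study's actual matching algorithm, which was carried out within districts:

```python
import random

# Hypothetical school records: free/reduced-price lunch rate (a proxy for
# family income) and a three-year average of school-level state assessment
# scores, the two matching variables named in the study.
schools = [
    {"name": "School A", "frl": 0.62, "achievement": 71.2},
    {"name": "School B", "frl": 0.60, "achievement": 70.8},
    {"name": "School C", "frl": 0.35, "achievement": 82.5},
    {"name": "School D", "frl": 0.33, "achievement": 83.1},
]

def match_and_assign(schools, seed=0):
    """Pair schools with similar characteristics, then randomly assign one
    school in each pair to intervention and the other to control."""
    rng = random.Random(seed)
    # Sort on the matching variables so adjacent schools are most similar
    # (a simple greedy stand-in for the study's district-level matching).
    ordered = sorted(schools, key=lambda s: (s["frl"], s["achievement"]))
    assignments = {}
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)  # coin flip within the matched pair
        assignments[pair[0]["name"]] = "intervention"
        assignments[pair[1]["name"]] = "control"
    return assignments

print(match_and_assign(schools))
```

Because assignment is randomized only within matched pairs, each pair contributes exactly one intervention and one control school, which is what keeps the two conditions balanced on the matching variables.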

Intervention schools provided tutoring sessions in addition to the regular core math instruction. Control schools followed business as usual, conducting their regular core math curriculum and classroom activities. District leaders agreed that no formal supplemental math programs other than the small group tutoring intervention would be used outside the classroom in study schools. Beginning in December 2008, at-risk students in intervention schools met in groups of two or three students for three or four 40-minute sessions a week, for a total of 48 lessons over approximately 17 weeks of instruction.

In mid-November 2008, before the intervention began, tutors received training from staff experienced in curriculum development and in training others to deliver tutoring services. In the day-long training session tutors received an overview of the program and its key components, particularly the need to adhere strictly to the scripted lessons. Training consisted of explicit instruction followed by iterative role play and peer review. Follow-up coaching was provided by the initial trainers in two subsequent meetings in January and February and by phone and email throughout the intervention period. Tutors were licensed elementary school teachers, often retired teachers or substitute teachers from the district. Tutors met with multiple student groups, but group composition and tutor assignments did not change over the course of the study. Tutored students did not miss their regular classroom instruction in math, but did miss instruction in other disciplines.

This study employs a pre-post design. Students were screened before the tutoring began and will be assessed when it ends to determine whether the intervention group outperformed the control group on a measure of math achievement (the Test of Early Mathematics Ability–Third Edition; TEMA-3).

In October and November 2008 all students in grade 1 who were receiving core math instruction in English and whose parents provided signed consent were individually administered a math screening test. The screening test included six subtests drawn from four sources.

Students were ranked using the composite score from the six subtests, and the lowest performing 35 percent were identified as at risk; the remaining students did not participate further in the study. In the intervention schools, students identified as at risk received the small group tutoring intervention beginning in December 2008 and ending in April or May 2009.
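The at-risk identification step, ranking students on the composite score and flagging the bottom 35 percent, can be sketched as follows; the student IDs and scores are made up for illustration:

```python
import math

def identify_at_risk(composite_scores, cutoff=0.35):
    """Rank students by composite screening score and flag the lowest
    `cutoff` fraction as at risk (lower score = weaker performance)."""
    ranked = sorted(composite_scores, key=composite_scores.get)
    n_at_risk = math.floor(len(ranked) * cutoff)
    return set(ranked[:n_at_risk])

# Hypothetical composite scores (sum of the six subtests) keyed by student ID.
scores = {"s01": 48, "s02": 12, "s03": 30, "s04": 55, "s05": 22,
          "s06": 41, "s07": 18, "s08": 36, "s09": 50, "s10": 27}
print(identify_at_risk(scores))  # the lowest 35 percent of 10 students: 3 students
```

Note that a percentile cutoff like this is norm-referenced within the screened sample, so the number of at-risk students identified in each school depends on how many students were screened there, not on a fixed score threshold.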

In empirical studies such as this one, the smaller the sample size, the larger the difference between the intervention and control groups on the final assessment must be before it can be attributed to the intervention rather than to chance. Sixty schools were initially targeted for participation; another 14 schools were then added to ensure that enough data would be collected to detect even smaller impacts of the tutoring. The additional schools also provided a wider regional variety of participating schools, reduced the risk that school attrition would jeopardize the successful completion of the study, and allowed for the possibility that some tutors would not implement the intervention with fidelity. With 60 schools, the study would have an 80 percent probability (power) of detecting an effect of 0.27 standard deviation or larger, if such an effect exists; this threshold is the minimal detectable effect size. Adding 14 schools lowers the minimal detectable effect size to 0.23. The lower the minimal detectable effect size, the smaller the impact the study can reliably detect. The sample size of the current study is therefore large enough to detect a smaller effect than could be detected in the Fuchs et al. (2005) study.

7 The imbalance in sample sizes between the intervention and control groups was unexpected. Although schools were not matched on enrollment, the expectation, all else being equal, was that approximately equal numbers of at-risk students would be identified in each experimental condition. Consent forms were identical for intervention and control schools, and parents were not informed of the assignment status of their child's school. School-level random assignment was conducted before the consent process was initiated, however, so school personnel knew the assignment status of their school. Because students in control schools would not receive tutoring, the study team believes school staff had less incentive to ensure that consent forms were returned, leading to lower return rates in control schools. Despite the difference in sample size by condition, no differences between the two groups were found on the baseline screening test scores. In addition, logistic regression analyses indicated that school assignment did not predict whether a student was identified as at risk. Both findings suggest that differential consent form return rates by condition did not introduce differences between the groups in mathematics ability before the start of the intervention.
