Setting
This study took place in a school district in a metropolitan city in the southeastern United States.
Study sample
The Super Solvers group was 52% female; 43% of students were African American, 23% White, 25% Hispanic, and 9% another race/ethnicity; 9% received special education services; 27% were classified as English learners; and 52% qualified for subsidized lunch.
The Super Solvers + Error Analysis group was 50% female; 46% of students were African American, 20% White, 24% Hispanic, and 10% another race/ethnicity; 14% received special education services; 28% were classified as English learners; and 52% qualified for subsidized lunch.
The comparison group was 47% female; 43% of students were African American, 29% White, 21% Hispanic, and 7% another race/ethnicity; 13% received special education services; 23% were classified as English learners; and 58% qualified for subsidized lunch.
Intervention group
For this aggregated contrast, the Super Solvers and the Super Solvers + Error Analysis (EA) conditions were combined. Students in both conditions received intervention in dyads in 40-minute sessions, three times a week for 13 weeks. The intervention focused on two components of the Super Solvers program: Fraction Action (focused on fraction magnitude understanding) and Calculations Quest (focused on fraction calculation).

Fraction Action comprises 17 minutes of each of the first 21 lessons; beginning with Lesson 22, it decreases to 10 minutes of review. Calculations Quest comprises only five minutes of Lessons 13 to 21 but increases to 15 minutes beginning with Lesson 22. Lessons 4 to 13 also include five minutes focused on whole-number multiplication, and Lesson 22 includes a brief two-minute activity focused on multiplication fluency. Each lesson includes three minutes of discussion on self-regulated learning and three minutes focused on building magnitude fluency. Lessons 4 to 39 conclude with seven minutes of independent practice, though during some weeks, progress monitoring in the form of curriculum-based measurement tasks replaces the practice.

The Super Solvers + EA group (45 students) received the same instruction as the Super Solvers-only group, except that the EA condition added an instructional strategy focused on the conceptual and strategic error analysis of fraction calculations. These error analysis activities, which involved checking fraction calculations, occurred during Calculations Quest in Lessons 13 to 39 for this group.
Comparison group
The comparison condition for this contrast was business-as-usual math instruction, which involved regular mathematics instruction and, for some students, supplemental math intervention. Regular mathematics instruction occurred during a 60- to 90-minute block five days per week. Approximately 15% of comparison group students received the school's regular supplemental math intervention (M = 149.38 minutes per week, SD = 91.20). To characterize fourth- and fifth-grade fraction instruction, the researchers analyzed the fraction components of the math standards and surveyed the 36 teachers who taught math in the 49 participating classrooms. They concluded that the intervention conditions placed greater emphasis on fraction magnitude using number lines, benchmarking fractions, and understanding the meaning of the numerator and denominator, while the comparison condition placed greater emphasis on shaded pictures, procedural methods, and pictorial representations.
Support for implementation
Training of tutors (most of whom were graduate students) involved two stages. During the first stage, tutors participated in 20 hours of overview, demonstration, and practice in pairs; once they achieved 95 percent implementation accuracy in practice, they began tutoring students. The second stage involved weekly meetings with research staff to solve any problems that had arisen and to train on upcoming content. All implementation sessions were audio-recorded, and researchers listened to the recordings and conducted live observations to provide feedback on fidelity of implementation.

Test administrators were graduate student research assistants who received training on testing procedures and passed fidelity checks. Two research assistants, blind to study condition, scored each test, and any discrepancies were discussed and resolved. Testing sessions were audio-recorded, and 20 percent of the recordings were randomly selected for accuracy checks.