
Wang, H., & Woodworth, K. (2011). Evaluation of Rocketship Education’s use of DreamBox Learning’s online mathematics program. Menlo Park, CA: SRI International. Retrieved from http://www.dreambox.com/
- Examining 557 students, grades K-1
DreamBox Learning Intervention Report - Elementary School Mathematics
Review Details
Reviewed: December 2013
- Randomized Controlled Trial
- Meets WWC standards without reservations
This review may not reflect the full body of research evidence for this intervention.
Evidence Tier rating based solely on this study. This intervention may achieve a higher tier when combined with the full body of evidence.
Please see the WWC summary of evidence for DreamBox Learning.
Findings
Outcome measure | Comparison | Period | Sample | Intervention mean | Comparison mean | Significant? | Improvement index | Evidence tier
---|---|---|---|---|---|---|---|---
Northwest Evaluation Association (NWEA) Measures of Academic Progress (MAP): Mathematics | DreamBox Learning vs. None | Posttest | Grades K and 1 | 159.00 | 156.20 | Yes | |
Sample Characteristics
Characteristics of study sample as reported by study author.
- 81% English language learners
- Female: 53%; Male: 47%
- Urban
- California
- Ethnicity: Hispanic 87%; Not Hispanic or Latino 13%
Study Details
Setting
The study was conducted in three Rocketship Education charter schools located in San Jose, California.
Study sample
The study sample included all kindergarten and first-grade students at the three schools that participated in the study, a total of 557 students after attrition from a sample of 583 who were randomly assigned. The number of classrooms included in the study is not specified. Within grade levels, students were randomly assigned to either the intervention or comparison groups at a 4 to 1 ratio. In the baseline sample, 53% of students were female, 87% were Hispanic, 81% were English language learners, 88% were eligible for free or reduced-price meals, 4% were classified as special education, and 10% participated in Response to Intervention (RtI) services.
Intervention Group
The experiment was conducted from mid-October through mid-February during the 2010–11 school year. Intervention students were scheduled to receive 20 to 40 minutes of DreamBox Learning mathematics instruction per day; usage statistics show that students averaged 21.8 hours of usage over the course of the study, or approximately 16 minutes per day. Instructional sessions were conducted in a computer lab. The authors noted that the low-achieving students who were assigned to receive RtI services were scheduled to receive 45 minutes of DreamBox Learning instruction in their after-school RtI programming, regardless of intervention status. For the 42 intervention group students who were assigned to RtI services, this 45 minutes was in addition to the DreamBox Learning instruction provided during the school day, for a total of 26.5 hours of usage over the course of the study on average. Progress and use information provided by the DreamBox Learning software was not used to modify face-to-face mathematics instruction for either the intervention or comparison group.
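As a back-of-the-envelope consistency check on the reported usage figures, the average of 21.8 hours at roughly 16 minutes per day implies about 82 instructional days, which is plausible for a mid-October to mid-February window. The day count below is inferred, not reported by the study:

```python
# Sanity check of the reported DreamBox usage figures.
total_hours = 21.8       # average usage reported for intervention students
minutes_per_day = 16     # approximate daily usage reported in the study

# Implied number of instructional days: 21.8 h * 60 min/h / 16 min/day
implied_days = total_hours * 60 / minutes_per_day
print(round(implied_days))  # -> 82, consistent with mid-October to mid-February
```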
Comparison Group
Students in the comparison condition received no additional mathematics instruction. However, they received additional literacy instruction via an online program during the same time and in the same location as intervention group students using the DreamBox Learning software. The 11 students in the comparison condition who were assigned to RtI services were scheduled to receive 45 minutes of DreamBox Learning instruction in their after-school RtI programming; the authors found that these comparison condition students averaged 5.1 hours of program usage over the course of the study.
Outcome descriptions
The study used math test scores from the MAP assessment developed by the Northwest Evaluation Association (NWEA). The study reports the overall math score as well as five subtest scores: problem solving, number sense, computation, measurement and geometry, and statistics and probability. Scores were scaled using the RIT scale, “which is scaled using the Item Response Theory (IRT) and has the same meaning regardless of the grade of the student” (as cited in Wang & Woodworth, 2011, p. 3). The schools administered the assessment in September 2010 (pretest) and January/February 2011 (posttest). For a more detailed description of this outcome measure, see Appendix B.
Support for implementation
DreamBox Learning “does not prescribe a specific role for teachers” (Wang & Woodworth, 2011, p. 3). The computer labs in which students received DreamBox Learning instruction were run by lab coordinators, noncredentialed hourly staff who played a minimal role in instruction. The authors noted that lab coordinators sometimes may have been out of the computer lab, at which times the students would be supervised by support staff.
Additional Sources
In the case of multiple manuscripts that report on one study, the WWC selects one manuscript as the primary citation and lists other manuscripts that describe the study as additional sources.
-
Wang, H., & Woodworth, K. (2011). A randomized controlled trial of two online mathematics curricula. Evanston, IL: Society for Research on Educational Effectiveness.
An indicator of the effect of the intervention, the improvement index can be interpreted as the expected change in percentile rank for an average comparison group student if that student had received the intervention.
For more, please see the WWC Glossary entry for improvement index.
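The improvement index described above can be illustrated with a short sketch. The WWC computes it as the normal-CDF percentile of the effect size (Hedges' g) minus the 50th percentile; the effect size used below is hypothetical, since this page does not report the study's effect size:

```python
from statistics import NormalDist

def improvement_index(effect_size: float) -> float:
    """Expected percentile-rank change for an average comparison-group
    student: the normal CDF of the effect size, minus the 50th percentile,
    expressed in percentile points."""
    return (NormalDist().cdf(effect_size) - 0.5) * 100

# Hypothetical effect size of 0.25 (not the value from this study):
print(round(improvement_index(0.25), 1))  # -> 9.9 percentile points
```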
An outcome is the knowledge, skills, and attitudes that are attained as a result of an activity. An outcome measure is an instrument, device, or method that provides data on the outcome.
A finding that is included in the effectiveness rating. Excluded findings may include subgroups and subscales.
The sample on which the analysis was conducted.
The group to which the intervention group is compared, which may include a different intervention, business as usual, or no services.
The timing of the post-intervention outcome measure.
The number of students included in the analysis.
The mean score of students in the intervention group.
The mean score of students in the comparison group.
The WWC considers a finding to be statistically significant if the likelihood that the finding is due to chance alone, rather than a real difference, is less than five percent.
The WWC reviews studies for WWC products, Department of Education grant competitions, and IES performance measures.
The name and version of the document used to guide the review of the study.
The version of the WWC design standards used to guide the review of the study.
The result of the WWC assessment of the study. The rating is based on the strength of evidence of the effectiveness of the intervention. Studies are given a rating of Meets WWC Design Standards without Reservations, Meets WWC Design Standards with Reservations, or Does Not Meet WWC Design Standards.
A related publication that was reviewed alongside the main study of interest.
Study findings for this report.
Based on the direction, magnitude, and statistical significance of the findings within a domain, the WWC characterizes the findings from a study as one of the following: statistically significant positive effects, substantively important positive effects, indeterminate effects, substantively important negative effects, and statistically significant negative effects. For more, please see the WWC Handbook.
The WWC may review studies for multiple purposes, including different reports and re-reviews using updated standards. Each WWC review of this study is listed in the drop-down list; details on any review may be accessed by making a selection from it.
Tier 1 (Strong) indicates strong evidence of effectiveness,
Tier 2 (Moderate) indicates moderate evidence of effectiveness, and
Tier 3 (Promising) indicates promising evidence of effectiveness,
as defined in the non-regulatory guidance for ESSA and the regulations for ED discretionary grants (EDGAR Part 77).