Effectiveness of Reading and Mathematics Software Products

NCEE 2009-4041
February 2009

Effects of Individual Products

Another objective of the study’s second year was to report the effects of the software products separately. As in the analysis of experience effects, the study used statistical models to estimate product effects on student test scores, accounting for student fall test scores, age, and gender, and for teacher experience and education. Data for all students, teachers, and schools that participated in the study in either the first or second year were used in the analysis. Models were estimated separately for each of the 10 products.
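
The covariate-adjusted estimation described above can be illustrated with a simple sketch. This is not the study's actual model (which used multilevel methods and the full set of covariates); it is a minimal ordinary least squares example with synthetic data, adjusting only for a fall pretest score, where all variable names and values are illustrative assumptions.

```python
import random

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    b = [0.0] * k
    for i in range(k - 1, -1, -1):
        b[i] = (xty[i] - sum(xtx[i][j] * b[j] for j in range(i + 1, k))) / xtx[i][i]
    return b

# Synthetic data (illustrative only): a treatment indicator for classrooms
# using a product, a fall pretest score, and a spring score built with a
# hypothetical true effect of 1.91 NCE units plus noise.
random.seed(0)
n = 400
treated = [i % 2 for i in range(n)]                  # 1 = classroom used the product
fall = [random.gauss(50, 21.06) for _ in range(n)]   # fall NCE score
true_effect = 1.91
spring = [0.8 * f + true_effect * t + random.gauss(0, 5)
          for f, t in zip(fall, treated)]

# Design matrix: intercept, treatment indicator, fall score.
# The coefficient on the treatment indicator is the adjusted product effect.
X = [[1.0, t, f] for t, f in zip(treated, fall)]
b = ols(X, spring)
print(round(b[1], 2))  # estimated product effect, close to the true 1.91
```

The estimated coefficient on the treatment indicator recovers the simulated effect up to sampling noise, which is the logic behind reporting an adjusted difference between product and non-product classrooms.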

Figure 3 presents the results for the six reading products, with each product effect displayed at the midpoint of its 95 percent confidence interval. The product effect in Figure 3 is the estimated difference in student test scores between classrooms using products and classrooms not using products over the two years of the study. For example, the effect shown for Destination Reading means that an average first grade student in a classroom that used Destination Reading is estimated to have a spring test score 1.91 Normal Curve Equivalent (NCE) units higher than if the student were in a classroom not using that product. This effect is equivalent to moving an average student from the 50th percentile on the test to the 54th percentile. A positive and statistically significant effect was found for one of the six reading products (Leap Track, fourth grade). The remaining five product effects were not statistically significant; of these, four were positive and one was negative.
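
The percentile translation in the Destination Reading example follows from the definition of the NCE scale, which is a normal transform of percentile ranks with mean 50 and standard deviation 21.06. A short sketch of that conversion (the function name is illustrative):

```python
from math import erf, sqrt

def nce_to_percentile(nce):
    """Convert an NCE score to a percentile rank via the normal CDF.

    NCE scores are normally distributed by construction with mean 50
    and standard deviation 21.06.
    """
    z = (nce - 50.0) / 21.06
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

baseline = nce_to_percentile(50.0)         # average student: 50th percentile
after = nce_to_percentile(50.0 + 1.91)     # plus the estimated 1.91 NCE gain
print(round(baseline), round(after))       # 50 54
```

A gain of 1.91 NCE units from the mean therefore corresponds to roughly a four-percentile move, matching the report's 50th-to-54th-percentile statement.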

Figure 4 presents analogous results for the four math products. None of the effects was statistically significant; three of the estimated effects were negative and one was positive.

Presenting product effects on test scores in this way does not imply that products with larger estimated effects are more desirable than products with smaller estimated effects. The districts and schools that volunteered to implement each product differ in their characteristics, and these differences may relate to product effects in important ways. The findings adjust only for measured characteristics and do not account for unmeasured differences between schools and districts that may be related to outcomes.