Effectiveness of Reading and Mathematics Software Products

NCEE 2009-4041
February 2009


The second year of the study examined whether an additional year of experience teaching with the software products increased their estimated effects on student test scores. The evidence for this hypothesis is mixed. For reading, effects on standardized test scores did not differ significantly between the first and second years. For sixth-grade math, product effects on test scores were significantly lower (more negative) in the second year than in the first; for algebra I, effects were significantly higher in the second year than in the first.

The study also tested whether each of the 10 software products increased student test scores. One product had a positive and statistically significant effect. The other nine did not have statistically significant effects on test scores; five of these insignificant effects were negative and four were positive.

The study’s findings should be interpreted in the context of its design and objectives. It examined a range of reading and math software products across diverse school districts and schools. But it did not study many other forms of educational technology, and it did not include many types of software products. How much the findings reveal about the effectiveness of products outside the study is an open question. The products studied were also implemented in a specific set of districts and schools, and other districts and schools may have different experiences with them. The findings should be viewed as one element within a larger body of research on the effectiveness of software products.