Effectiveness of Reading and Mathematics Software Products

NCEE 2009-4041
February 2009

Does Experience Increase Product Effects?

The first hypothesis addressed in the second year of the study is whether product effects on student test scores are larger in the second year than in the first, after teachers have had a year of experience using the products in their classrooms. To test the hypothesis, the study created a merged data file restricted to the 115 teachers who remained in the study for a second year (27 percent of the number that participated in the first year). Teachers who moved to other schools or grade levels, or who left teaching, did not continue with the study. The merged file included 5,345 students, combined across the first and second years, for the 115 teachers.

The study estimated statistical models in which student test scores were related to treatment status (whether the teacher was assigned to use a product). To test the effect of experience, the models estimated product effects on student test scores in each of the two years and then tested whether the two effects differed by more than would be expected from sampling variance. The models also controlled for student fall test scores, age, and gender, and for teacher experience and education level. Effects of individual products are not reported.
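The report does not publish its model specification in code, but the comparison it describes can be sketched as a regression with a treatment-by-year interaction, where the interaction coefficient is the "experience effect" (the second-year effect minus the first-year effect). The sketch below uses synthetic data and ordinary least squares via statsmodels; all variable names are illustrative assumptions, and the actual study would likely account for clustering of students within teachers, which this minimal version omits.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the merged two-year student file.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),     # 1 = teacher assigned to use a product
    "year2": rng.integers(0, 2, n),         # 1 = observation from the second year
    "fall_score": rng.normal(50.0, 10.0, n),
    "age": rng.integers(6, 9, n),
    "female": rng.integers(0, 2, n),
    "teacher_exp": rng.integers(1, 30, n),  # teacher years of experience
})
# Simulated spring score: the product effect is 2 points larger in year 2.
df["spring_score"] = (
    0.8 * df["fall_score"]
    + 1.0 * df["treatment"]
    + 2.0 * df["treatment"] * df["year2"]
    + rng.normal(0.0, 5.0, n)
)

# "treatment * year2" expands to treatment + year2 + treatment:year2;
# the treatment:year2 coefficient estimates the experience effect.
model = smf.ols(
    "spring_score ~ treatment * year2 + fall_score + age + female + teacher_exp",
    data=df,
).fit()
experience_effect = model.params["treatment:year2"]
experience_pvalue = model.pvalues["treatment:year2"]
print(experience_effect, experience_pvalue)
```

The p-value on the interaction term corresponds to the study's test of whether the first- and second-year effects differ by more than sampling variance would produce.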

Figure 1 shows experience effects, which are the difference between the second-year effect of products on test scores and the first-year effect, for the reading products used in first and fourth grades. Figure 2 shows the experience effects for the math products used in sixth grade and algebra I. These figures show product effects in each of the two years, and the arrow between the product effects represents the experience effect (the difference between second-year and first-year effects).

Evidence is mixed for the hypothesis that an additional year of experience using the software products improves product effects on test scores. In first grade, the measured product effect in the second year is not statistically significantly different from the product effect in the first year. Similarly, in fourth grade, the measured product effect in the second year is not statistically significantly larger than the effect in the first year. In sixth grade, the product effect in the second year is more negative than in the first year (the effect is negative in both years) and the difference between the two negative effects is statistically significant. In algebra I, the product effect in the second year is larger than in the first year and the difference is statistically significant.

The study investigated the relationship between product usage and product effects in the two years. Usage data were gathered from product records and are accurate to the extent that students' logged-in time represents product usage. (If students used other materials related to the product while not logged in, that additional time is not reflected in the usage data.) Average per-student usage changed between years: in first grade it fell from 2,556 minutes to 1,182 minutes; in fourth grade it rose from 720 minutes to 936 minutes; in sixth grade it fell from 852 minutes to 678 minutes; and in algebra I it rose from 1,308 minutes to 1,452 minutes. All differences between years were statistically significant. However, the relationship between changes in effects and changes in usage between the two years was not statistically significant.

Because the study did not observe classrooms or interview teachers in the second year, it has no information about how teachers may have modified their use of products from one year to the next, beyond the usage times captured by the products being studied. For the same reason, the study has no information about whether control group teachers modified their use of other software products in their classrooms.
