Search Results: (1-6 of 6 records)
|The Effectiveness of the Alabama Math, Science, and Technology Initiative (AMSTI)
For report NCEE 2012-4008 Evaluation of the Effectiveness of the Alabama Math, Science, and Technology Initiative (AMSTI) http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=69
This data file contains data from a cluster randomized trial that examined the impact of the Alabama Math, Science, and Technology Initiative (AMSTI) on students' mathematical problem solving and science achievement. The study also examined effects on teachers' classroom practice and active learning instructional strategies. The study found that AMSTI had a positive and statistically significant effect on classroom practices in mathematics and science after one year. The study found small but statistically significant gains in student achievement in mathematics, but no effect on science achievement. The sample includes 82 schools, with about 780 teachers and 30,000 students in grades 4–8.
|Effectiveness of a Program to Accelerate Vocabulary Development in Kindergarten (VOCAB)
For report NCEE 2012-4005 Effectiveness of a Program to Accelerate Vocabulary Development in Kindergarten (VOCAB) http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=67 and NCEE 2012-4009 Effectiveness of a Program to Accelerate Vocabulary Development in Kindergarten (VOCAB): First Grade Follow-up Impact Report and Exploratory Analyses of Kindergarten Impacts http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=289
This data file contains data from a cluster randomized trial that examined the K-PAVE program's effectiveness in supporting vocabulary acquisition in young students. The study found that students who received the K-PAVE intervention were one month ahead of students in the control group in academic knowledge at the end of kindergarten, but it did not find any statistically significant impacts of K-PAVE at the end of grade 1 on expressive vocabulary, academic knowledge, or passage comprehension. The final sample included 65 schools: the 33 schools with complete written consent as of July 24, 2008, and the 32 schools with complete written consent as of August 7, 2008. The sample included 31 intervention and 34 control schools.
|Impact of the Thinking Reader Software Program on Grade 6 Reading Vocabulary, Comprehension, Strategies, and Motivation
For report NCEE 2010-4035 Impact of the Thinking Reader Software Program on Grade 6 Reading Vocabulary, Comprehension, Strategies, and Motivation http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=REL20104035
This data file contains data from a cluster randomized trial that examined the impact of the Thinking Reader program on student vocabulary and reading comprehension. The study found no direct causal evidence supporting Thinking Reader's effectiveness. The final analysis included 90 teachers and a minimum of 2,140 students (89% of the overall baseline sample, 90% of the intervention group, and 88% of the control group).
|The Effects of Connected Mathematics 2 on Math Achievement in Grade 6 in the Mid-Atlantic Region
For report NCEE 2012-4017 The Effects of Connected Mathematics 2 on Math Achievement in Grade 6 in the Mid-Atlantic Region http://ies.ed.gov/ncee/edlabs/projects/project.asp?ProjectID=25
This data file contains data from a cluster randomized trial that evaluated the effect of CMP2 on the mathematics achievement of grade 6 students. The study found no statistically significant impact on TerraNova posttest scores. The final analysis included 65 schools, with 5,677 students for the TerraNova and 5,584 for the PTV. This was 82 percent of the eligible students (students enrolled in a regular grade 6 mathematics class in a study school at the time of pretest) for the TerraNova at posttest and 80 percent of the eligible students for the PTV at pretest.
|Do Typical RCTs of Education Interventions Have Sufficient Statistical Power for Linking Impacts on Teacher Practice and Student Achievement Outcomes
For RCTs of education interventions, it is often of interest to estimate associations between student outcomes and mediating teacher practice outcomes, both to examine the extent to which the study's conceptual model is supported by the data and to identify the specific mediators most associated with student learning. This paper develops statistical power formulas for such exploratory analyses under clustered school-based RCTs using ordinary least squares (OLS) and instrumental variable (IV) estimators, and uses these formulas to conduct a simulated power analysis. The power analysis finds that, for currently available mediators, the OLS approach will yield precise estimates of associations between teacher practice measures and student test score gains only if the sample contains about 150 to 200 study schools. The IV approach, which can adjust for potential omitted variable and simultaneity biases, has very little statistical power for mediator analyses. For typical RCT evaluations, these results may have design implications for the scope of the data collection effort for obtaining costly teacher practice mediators.
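To illustrate why mediator analyses demand so many schools, the following is a minimal sketch of a school-level power calculation, not the paper's exact OLS or IV formulas: it approximates the two-sided power to detect a correlation between a school-mean teacher practice measure and school-mean student gains via the Fisher z-transform. The assumed true correlation (rho = 0.2) and the school counts in the usage note are hypothetical illustration values, not figures from the paper.

```python
import math

def norm_cdf(x):
    """Standard normal CDF computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def school_level_corr_power(n_schools, rho=0.2, z_crit=1.96):
    """Approximate two-sided power to detect a correlation of `rho`
    between a school-mean mediator and school-mean student gains.
    Uses the Fisher z-transform normal approximation; illustrative
    only (rho = 0.2 is a hypothetical effect size)."""
    z_rho = math.atanh(rho)                  # Fisher z of the true correlation
    se = 1.0 / math.sqrt(n_schools - 3)      # approximate SE of Fisher z
    ncp = z_rho / se                         # noncentrality parameter
    # Two-sided rejection probability under the alternative
    return norm_cdf(ncp - z_crit) + norm_cdf(-ncp - z_crit)
```

Under these assumptions, power is only about one third with 60 schools but rises to roughly three quarters with 175 schools, consistent with the qualitative message that samples of about 150 to 200 schools are needed for precise mediator estimates.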
|The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions
Reports in this series are designed for use by researchers, methodologists, and evaluation specialists to provide guidance in resolving or advancing challenges to evaluation methods. This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered fixed for the study population (the finite-population model) or randomly selected from a vaguely defined universe (the super-population model). Appropriate estimators are derived and discussed for each model. Using data from five large-scale clustered RCTs in education, the empirical analysis estimates impacts and their standard errors using the considered estimators. For all studies, the estimators yield identical findings concerning statistical significance. However, standard errors sometimes differ, suggesting that policy conclusions from RCTs could be sensitive to the choice of estimator. Thus, a key recommendation is that analysts test the sensitivity of their impact findings using different estimation methods and cluster-level weighting schemes.
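As a concrete illustration of the estimation setting described above, the following is a minimal sketch of a cluster-level difference-in-means with the conservative Neyman standard error (the finite-population variance in this framework contains an unobservable covariance term, so the estimator below is an upper bound). It is not the paper's full set of estimators or weighting schemes, and the data shapes are hypothetical.

```python
import statistics

def cluster_ate(treat_clusters, control_clusters):
    """Difference-in-means of cluster-level averages with the
    conservative Neyman standard error.  Each argument is a list of
    per-cluster outcome lists (equal cluster weights assumed);
    illustrative sketch only."""
    t_means = [statistics.fmean(c) for c in treat_clusters]
    c_means = [statistics.fmean(c) for c in control_clusters]
    ate = statistics.fmean(t_means) - statistics.fmean(c_means)
    # Between-cluster sample variances of the cluster means
    var_t = statistics.variance(t_means)
    var_c = statistics.variance(c_means)
    se = (var_t / len(t_means) + var_c / len(c_means)) ** 0.5
    return ate, se
```

A super-population analysis would interpret `se` as the sampling variability of the estimator over repeated draws of schools; under the finite-population model the same quantity is a conservative estimate, which is one way the two frameworks can yield different standard errors for the same impact estimate.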