
Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests

by David Stuit, Megan Austin, Mark Berends and R. Dean Gerdeman

Recent changes to state accountability laws have prompted school districts to design teacher performance evaluation systems that incorporate student achievement growth as a major component. As a consequence, some states and districts are considering teacher value-added models as part of teacher performance evaluations. Value-added models use statistical techniques to estimate teachers' (or schools') contributions to growth in student achievement over time. Designers of new evaluation systems need to understand the factors that can affect the validity and reliability of value-added results, or of other measures based on student assessment data, used to evaluate teacher performance.

This study provides new information on the degree to which value-added estimates for the same teachers differ by the assessment used to measure their students' achievement growth: the Indiana Statewide Testing for Educational Progress Plus (ISTEP+) and the Measures of Academic Progress (MAP). The study used three analytic strategies to quantify the similarities and differences between value-added estimates based on the two assessments: correlations of the estimates, comparisons of their quintile rankings, and comparisons of their classifications according to whether their 95 percent confidence intervals fell above, below, or overlapped the sample average.

Overall, the findings indicate variability between estimates of teacher value-added derived from two different tests administered to the same students in the same years. Specific sources of this variability could not be isolated because of limitations in the data and research design; however, the research literature points to measurement error as an important contributor. The findings also indicate that incorporating confidence intervals for value-added estimates reduces the likelihood that teachers' performance will be misclassified because of measurement error.
Three appendices present: (1) Literature review; (2) About the data and the value-added model; and (3) Supplemental analysis of correlations of students' scores on the Indiana Statewide Testing for Educational Progress Plus and Measures of Academic Progress. (Contains 13 notes, 2 boxes, and 11 tables.) [For the summary of this report, see ED544673.]
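The three comparison strategies described in the abstract can be illustrated in a few lines of code. The sketch below uses synthetic data: the number of teachers, the effect sizes, and the standard errors are invented for demonstration and are not the study's data, and the quintile and confidence-interval rules are one plausible reading of the methods described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical value-added estimates for the same 200 teachers on two
# assessments (illustrative numbers only, not the study's data).
n = 200
true_effect = rng.normal(0.0, 1.0, n)
va_test_a = true_effect + rng.normal(0.0, 0.5, n)  # e.g., ISTEP+-based
va_test_b = true_effect + rng.normal(0.0, 0.5, n)  # e.g., MAP-based
se = np.full(n, 0.4)  # assumed common standard error for each estimate

# Strategy 1: correlation of the two sets of value-added estimates.
r = np.corrcoef(va_test_a, va_test_b)[0, 1]

def quintile(x):
    # Assign each estimate a quintile rank 0-4 based on its own distribution.
    cutpoints = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
    return np.searchsorted(cutpoints, x)

# Strategy 2: share of teachers placed in the same quintile on both tests.
same_quintile = np.mean(quintile(va_test_a) == quintile(va_test_b))

def classify(est, se):
    # Label each estimate by whether its 95% confidence interval lies
    # above, below, or overlaps the sample average.
    lo, hi = est - 1.96 * se, est + 1.96 * se
    mean = est.mean()
    return np.where(lo > mean, "above", np.where(hi < mean, "below", "overlap"))

# Strategy 3: share of teachers given the same CI-based classification.
same_class = np.mean(classify(va_test_a, se) == classify(va_test_b, se))

print(f"correlation of estimates:  {r:.2f}")
print(f"same quintile on both:     {same_quintile:.0%}")
print(f"same CI classification:    {same_class:.0%}")
```

With wider confidence intervals (larger assumed standard errors), more teachers fall into the "overlap" category on both tests, which is one way to see the report's point that confidence intervals reduce misclassification due to measurement error.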
