Report: Descriptive Study

Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests

REL Midwest
Author(s):
David Stuit,
Megan Austin,
Mark Berends,
Dean Gerdeman
Publication date:
January 2014

Summary

Recent changes to state accountability laws have prompted school districts to design teacher performance evaluation systems that incorporate student achievement (student growth) as a major component. As a consequence, some states and districts are considering teacher value-added models as part of teacher performance evaluations. Value-added models use statistical techniques to estimate teachers' (or schools') contributions to growth in student achievement over time. Designers of new performance evaluation systems need to understand the factors that can affect the validity and reliability of value-added results, or of other measures based on student assessment data, used to evaluate teacher performance.

This study provides new information on the degree to which teachers' value-added estimates differ depending on the assessment used to measure their students' achievement growth. The study used three analytic strategies to quantify the similarities and differences in estimates of teacher value-added from the Indiana Statewide Testing for Educational Progress Plus (ISTEP+) and the Measures of Academic Progress (MAP): (1) correlations of value-added estimates based on the two assessments; (2) comparisons of the quintile rankings of value-added estimates on the two assessments; and (3) comparisons of the classifications of value-added estimates on the two assessments according to whether their 95 percent confidence intervals were above, below, or overlapping the sample average.

Overall, the findings indicate variability between the estimates of teacher value-added from two different tests administered to the same students in the same years. Specific sources of the variability across assessments could not be isolated because of limitations in the data and research design, but the research literature points to measurement error as an important contributor. The findings also indicate that incorporating confidence intervals for value-added estimates reduces the likelihood that teachers' performance will be misclassified because of measurement error.
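The three analytic strategies described above can be sketched in code. The sketch below uses simulated, illustrative data (not the study's actual ISTEP+/MAP estimates) and simplifies by assuming a common standard error for every teacher; the variable names and the normal-approximation 95 percent confidence interval are assumptions for illustration, not the study's value-added model.

```python
import numpy as np

# Illustrative value-added estimates for the same teachers on two
# assessments: a shared "true" component plus assessment-specific
# measurement error (all parameters here are made up).
rng = np.random.default_rng(0)
n_teachers = 200
true_va = rng.normal(0.0, 1.0, n_teachers)
va_istep = true_va + rng.normal(0.0, 0.5, n_teachers)
va_map = true_va + rng.normal(0.0, 0.5, n_teachers)
se_istep = np.full(n_teachers, 0.5)  # simplified: one SE for all teachers

# Strategy 1: correlation of value-added estimates across assessments.
r = np.corrcoef(va_istep, va_map)[0, 1]

# Strategy 2: agreement of quintile rankings across assessments.
def quintile(estimates):
    """Assign each estimate to a quintile (0 = bottom, 4 = top)."""
    cuts = np.quantile(estimates, [0.2, 0.4, 0.6, 0.8])
    return np.searchsorted(cuts, estimates)

same_quintile_share = np.mean(quintile(va_istep) == quintile(va_map))

# Strategy 3: classify each estimate by whether its 95% confidence
# interval lies above, below, or overlaps the sample average.
def classify(estimates, standard_errors):
    lower = estimates - 1.96 * standard_errors
    upper = estimates + 1.96 * standard_errors
    sample_mean = estimates.mean()
    return np.where(lower > sample_mean, "above",
                    np.where(upper < sample_mean, "below", "overlap"))

istep_classes = classify(va_istep, se_istep)
```

Strategy 3 mirrors the summary's point about misclassification: a teacher is labeled above or below average only when the entire confidence interval clears the sample mean, so widening the interval moves more teachers into the "overlap" category rather than into a possibly erroneous above/below label.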
Three appendices present: (1) Literature review; (2) About the data and the value-added model; and (3) Supplemental analysis of correlations of students' scores on the Indiana Statewide Testing for Educational Progress Plus and Measures of Academic Progress. (Contains 13 notes, 2 boxes, and 11 tables.) [For the summary of this report, see ED544673.]



Tags

Academic Achievement, Educators
