
IES Grant

Title: Value-Added Models and the Measurement of Teacher Quality: Tracking or Causal Effects?
Center: NCER Year: 2008
Principal Investigator: Rothstein, Jesse Awardee: National Bureau of Economic Research (NBER)
Program: Teaching, Teachers, and the Education Workforce
Award Period: 2 years Award Amount: $294,295
Type: Measurement Award Number: R305A080560
Description:

Purpose: Teacher quality is increasingly recognized as a crucial component of the educational process. If schools are to hire, promote, or compensate on the basis of quality, they need accurate measures of teacher quality. One indicator of a high-quality teacher is that he or she improves students' achievement. However, this presumes that we know how to measure a teacher's effect on student achievement. What is needed is a measure of the causal effect, distinct from the potentially confounding influences of students, families, friends, neighborhoods, and individual abilities. Typical value-added models (VAMs) measure teacher effects by student gain scores (the change in students' achievement test scores from one academic year to the next). The purpose of this project is to validate the use of value-added models for the assessment of teachers' impacts on student achievement, with a particular focus on the bias in value-added measures that can be induced by non-random assignment of students to teachers, such as ability tracking. If high- and low-ability students are tracked into different classes, the teacher of the high-ability class may appear to have higher value-added than the teacher of the low-ability class even if the two teachers are of the same quality. In addition, the project will attempt to identify specific value-added models that are plausibly free of this bias.
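
For illustration, the short sketch below fits a basic gain-score VAM on simulated data: teacher effects are estimated as coefficients on teacher indicators in a regression of year-over-year score gains. The data-generating process, variable names, and software (Python with pandas and statsmodels) are assumptions made for this sketch, not the project's specification or the North Carolina data.

# Minimal gain-score VAM on simulated data (illustrative assumptions only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, class_size = 40, 25
true_effect = rng.normal(0, 0.2, n_teachers)   # true teacher quality

rows = []
for t in range(n_teachers):
    ability = rng.normal(0, 1, class_size)
    score_g4 = ability + rng.normal(0, 0.5, class_size)                    # prior-year score
    score_g5 = ability + true_effect[t] + rng.normal(0, 0.5, class_size)   # current-year score
    rows.append(pd.DataFrame({"teacher": t, "gain": score_g5 - score_g4}))
df = pd.concat(rows, ignore_index=True)

# With random assignment (as simulated here), the teacher coefficients recover
# quality well; the project's concern is that tracking breaks this property.
vam = smf.ols("gain ~ C(teacher) - 1", data=df).fit()
print(np.corrcoef(vam.params.to_numpy(), true_effect)[0, 1])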

Project Activities: Recent value-added models attempt to deal with the bias introduced by non-random student assignment by using "student fixed effects" models, but these avoid bias only if tracking is based solely on students' permanent characteristics. If classroom assignments are instead updated on the basis of each year's performance, student fixed effects value-added models are unable to eliminate the bias in value-added measures. This study will perform a formal statistical test that can distinguish these two forms of tracking. The first component of the proposed project will be to implement this test, using various samples and various measures of annual academic achievement. The project will carry out secondary analyses of data containing longitudinal records on all students attending North Carolina public schools, linked to teacher data. Since preliminary evidence suggests that the permanent ability tracking restriction is unlikely to hold, a second component of the project will involve the development of alternative models that can identify teachers' causal effects in the presence of plausible real-world tracking.
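
To illustrate why assignment based on each year's performance is problematic for gain-score measures, the simulation below (an illustrative sketch under assumed parameters, not the project's analysis) sorts students into classes by a noisy prior-year score. Because the transitory part of that score mean-reverts, class-average gains differ across classes even though every simulated teacher has zero true effect.

# Illustrative simulation: performance tracking plus mean reversion creates
# spurious "value-added" differences among identical teachers. All parameters
# and names are assumptions for this sketch.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_students, n_classes = 2000, 80

ability = rng.normal(0, 1, n_students)
score_g4 = ability + rng.normal(0, 0.7, n_students)   # observed 4th-grade score (noisy)

# Performance tracking: fill classes from lowest- to highest-scoring students.
class_id = pd.qcut(pd.Series(score_g4).rank(method="first"), n_classes, labels=False)

# All teachers have zero true effect; the 5th-grade score is ability plus fresh
# noise, so the transitory part of the 4th-grade score mean-reverts.
score_g5 = ability + rng.normal(0, 0.7, n_students)
gain = score_g5 - score_g4

avg_gain = pd.DataFrame({"class_id": class_id, "gain": gain}).groupby("class_id")["gain"].mean()
print(avg_gain.describe())   # wide spread in class-average "gains" despite identical teachers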

Products: The product of this project will be a fully tested and validated approach to value-added analysis that reduces the bias in existing models due to non-random assignment of students.

Structured Abstract

Setting: The study will analyze North Carolina state public school data.

Population: The sample includes all available 5th grade students in 2000–01 who could be matched to teachers and for whom reading test gain scores since fourth grade could be computed. After excluding students for whom a gain score could not be computed, students whose teacher could not be identified reliably, students whose class was a self-contained special education or honors class, grade repeaters, students in classes with twelve or fewer students, and students in schools with only one fifth grade teacher, the final sample is based on the scores of 59,104 students (about 60% of all fifth grade students) and data on their 3,013 teachers.

Dataset: The state data set for these analyses is assembled and distributed (in anonymous form) by the North Carolina Education Research Data Center (NCERDC).

Research Design and Methods: The data analyses will test several value-added statistical models using varying assumptions and corresponding variables. The possibility that ability tracking confounds typical value-added analyses will be demonstrated by regressing 4th grade student gain scores on 5th grade and 6th grade teacher characteristics. Because future teachers cannot have causal effects on past student achievement, such regressions should show no relationship if the models are estimating true teacher effects; any relationship that does appear signals non-random assignment of students to teachers. Additional analyses will test whether tracking is based on more permanent student characteristics ("ability tracking") or on more transient changes ("performance tracking"). The study will then test value-added models with alternative assumptions that more correctly specify the causal model. Finally, the study will attempt to show how these alternative models can be used for the various kinds of policy decisions about teacher quality to which value-added models are being applied.
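
A sketch of this falsification idea is shown below: it regresses prior-grade gains on indicators for the future (5th grade) teacher and tests their joint significance. Under true causal teacher effects and random assignment the test should not reject; here assignment is simulated to track student ability, so it does. The data, variable names, and software are assumptions for this sketch, not the project's actual analysis.

# Falsification-style check: do *future* teacher assignments "predict" *past*
# gains? A small p-value signals non-random assignment (tracking), since future
# teachers cannot cause past achievement. Simulated, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_students, n_teachers = 3000, 100

ability = rng.normal(0, 1, n_students)
gain_g4 = 0.5 * ability + rng.normal(0, 1, n_students)   # 4th-grade gain, related to ability

# Simulated tracking: 5th-grade teacher assignment follows student ability, so
# future-teacher indicators pick up pre-existing differences in past gains.
teacher_g5 = pd.qcut(pd.Series(ability).rank(method="first"), n_teachers, labels=False)

df = pd.DataFrame({"gain_g4": gain_g4, "teacher_g5": teacher_g5})
fit = smf.ols("gain_g4 ~ C(teacher_g5)", data=df).fit()

# The regression F-statistic jointly tests that all future-teacher coefficients
# are zero; rejection indicates sorting rather than causal teacher effects.
print(f"F = {fit.fvalue:.2f}, p = {fit.f_pvalue:.4g}")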

Key Measures: Key measures include end-of-year tests in reading, student demographic characteristics (e.g., race, free lunch), and teacher demographic and professional background characteristics (e.g., gender, race, majors, degrees, certifications).

Data Analytic Strategy: The analyses will involve basic and alternative value-added modeling (linear regression).

Publications

Journal article, monograph, or newsletter

Rothstein, J. (2009). Student Sorting and Bias in Value-Added Estimation: Selection on Observables and Unobservables. Education Finance and Policy, 4(4): 537–571.

Rothstein, J. (2010). Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement. The Quarterly Journal of Economics, 125(1): 175–214.

** This project was submitted to and funded under Teacher Quality: Reading and Writing in FY 2008.

