
IES Grant

Title: Constructing Value-Added Indicators of Teacher and School Effectiveness that We Can Trust
Center: NCER Year: 2010
Principal Investigator: Guarino, Cassandra Awardee: Michigan State University
Program: Statistical and Research Methodology in Education
Award Period: 3 years Award Amount: $1,194,064
Goal: Methodological Innovation Award Number: R305D100028
Description:

Co-Principal Investigators: Mark Reckase (Michigan State University) and Jeffrey Wooldridge (Michigan State University)

This project seeks to improve value-added models by addressing two central issues involved in establishing the validity of inferences based on value-added models (VAMs). The first issue concerns the accuracy of measures of students' achievement growth. Educational tests that under-represent the full range of desired skills and knowledge, and that substantially shift emphasis on particular constructs from year to year, will underestimate students' growth in achievement and may lead to statistical bias in indicators estimated using VAMs. Moreover, different methodological approaches to scaling the responses to achievement tests can lead to different estimates of value-added outcomes for schools and teachers. The second issue is whether VAMs effectively isolate the true contribution of teachers and schools to measured growth in achievement, or instead confound these effects with the effects of other factors that may or may not be within the control of teachers and schools. Given that students are not randomly assigned to schools or to teachers within schools, disentangling the causal effects of schooling from other factors influencing achievement growth is not straightforward.
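The confounding issue can be made concrete with a small simulation (a hypothetical sketch, not part of the project's design; all parameter values are invented for illustration). When students are sorted to teachers on prior achievement, a naive gain-score comparison partly reflects the sorting, while a lag-score specification that conditions on prior achievement tracks the true teacher contributions more closely.

    import numpy as np

    rng = np.random.default_rng(0)
    n_teachers, n_per = 50, 40
    true_effect = rng.normal(0, 0.2, size=n_teachers)   # true teacher contributions

    # Non-random assignment: teachers differ systematically in the prior
    # achievement of the students they receive, independently of their
    # true effectiveness.
    intake = rng.normal(0, 1, size=n_teachers)
    teacher = np.repeat(np.arange(n_teachers), n_per)
    prior = intake[teacher] + rng.normal(0, 1, size=teacher.size)

    # Current-year score: partial persistence of prior achievement plus
    # the teacher effect and idiosyncratic noise.
    score = 0.7 * prior + true_effect[teacher] + rng.normal(0, 0.5, size=teacher.size)

    # Naive indicator: mean gain score by teacher (ignores sorting on prior).
    gain = score - prior
    naive = np.array([gain[teacher == j].mean() for j in range(n_teachers)])

    # Lag-score indicator: regress the current score on the prior score and
    # average the residuals by teacher.
    X = np.column_stack([np.ones_like(prior), prior])
    resid = score - X @ np.linalg.lstsq(X, score, rcond=None)[0]
    lag = np.array([resid[teacher == j].mean() for j in range(n_teachers)])

    print("corr(naive indicator, true effect):", round(np.corrcoef(naive, true_effect)[0, 1], 2))
    print("corr(lag-score indicator, true effect):", round(np.corrcoef(lag, true_effect)[0, 1], 2))

Which specification is preferable in practice depends on exactly the decay and exogeneity assumptions the project proposes to test.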

To address these two issues, the project will (a) develop statistical methods that can be used to test for violations of assumptions that threaten the validity of VAM-based inferences; (b) develop methods to improve the statistical characteristics of estimates obtained from VAMs; (c) investigate how conditions threatening the validity of inference using VAMs vary across different subpopulations of students; and (d) suggest effective ways to structure VAM-related policies.

The project will occur in three phases. Phase 1 includes diagnosis, development, and demonstration components. Under diagnosis, tests of assumptions needed to support inference using VAMs will be developed. These tests will address assumptions embedded in both scaling (for example, assumptions of unidimensionality or that year-to-year achievement can be measured through vertical scaling) and VAM estimation strategies (for example, assumptions regarding decay and exogeneity). Under development, the project will investigate the use of advanced multidimensional scaling approaches that may more accurately represent the growth in students' achievement and address problems of underestimation of student achievement and teacher effects that have been found with unidimensional item response theory (IRT) models. Under demonstration, evidence of the strengths and weaknesses of different approaches to scaling and VAM estimation will be gathered through a series of simulations and the application of the techniques to real data from school districts.
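As one concrete example of the kind of diagnostic described above, a commonly used check on the exogeneity assumption of simple lag-score VAMs is a falsification test: current-year teacher assignments should not "predict" students' prior-year growth. The sketch below illustrates this general idea only and is not the project's actual test battery; the function name and the simulated data are invented for the example.

    import numpy as np
    from scipy.stats import f as f_dist

    def falsification_test(prior_gain, teacher):
        """Joint F-test of current-year teacher indicators as 'predictors' of
        prior-year growth; a large statistic signals sorting of students on
        achievement dynamics, violating the exogeneity assumption."""
        teachers = np.unique(teacher)
        n, k = prior_gain.size, teachers.size
        # With only teacher dummies as regressors, the regression F-statistic
        # reduces to the one-way ANOVA F comparing between- and within-teacher
        # variation in prior gains.
        overall = prior_gain.mean()
        ss_total = np.sum((prior_gain - overall) ** 2)
        ss_between = sum(np.sum(teacher == j) * (prior_gain[teacher == j].mean() - overall) ** 2
                         for j in teachers)
        F = (ss_between / (k - 1)) / ((ss_total - ss_between) / (n - k))
        return F, f_dist.sf(F, k - 1, n - k)

    # Hypothetical usage: assignment that tracks prior growth is flagged.
    rng = np.random.default_rng(1)
    growth = rng.normal(0, 1, size=2000)
    ranks = np.argsort(np.argsort(growth))      # rank of each student's prior growth
    teacher = ranks // 40                       # 50 classes of 40, grouped by growth
    print(falsification_test(growth, teacher))  # very small p-value: exogeneity violated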

Phase 2 of the project will apply the findings from Phase 1 to a state-level data set, allowing an investigation of the sensitivity of scaling assumptions and estimates derived from VAMs to various contexts, that is, across different subpopulations of students. This part of the project will operate under the assumption that scaling methods and VAMs differ in their ability to produce causal estimates of performance for teachers and schools serving different types of students. Differences in performance indicators for teachers, schools, and programs will be examined across low versus high socioeconomic status students, minority versus white students, urban versus suburban and rural students, and special needs versus general student populations.
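One simple way to summarize such sensitivity (a hypothetical sketch that assumes residualized student scores are already in hand; it is not the project's specified analysis) is to compute each teacher's indicator separately within two subpopulations and compare the resulting rankings. A low rank correlation would suggest that the indicator depends on which students a teacher happens to serve.

    import numpy as np
    from scipy.stats import spearmanr

    def subgroup_rank_agreement(resid, teacher, subgroup):
        """Spearman rank correlation between teacher indicators computed
        separately within two subpopulations (e.g., low- vs. high-SES).
        Assumes every teacher serves students from both subpopulations."""
        groups = np.unique(subgroup)
        assert groups.size == 2, "expects exactly two subpopulations"
        teachers = np.unique(teacher)
        means = [np.array([resid[(teacher == j) & (subgroup == g)].mean()
                           for j in teachers])
                 for g in groups]
        return spearmanr(means[0], means[1])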

Phase 3 of the project will focus on the development and dissemination of policy guidelines and recommendations for testing regimes and scaling, data requirements and collection, and estimation methodologies. These recommendations will be adapted to whether teacher, school, or program effects are being considered.

