IES Grant

Title: Improving Education Policy Analysis and Practice by Addressing Test Score Measurement Error
Center: NCER Year: 2014
Principal Investigator: Lockwood, J. R. Awardee: Educational Testing Service (ETS)
Program: Statistical and Research Methodology in Education
Award Period: 3 years (7/1/14–6/30/17) Award Amount: $751,674
Type: Methodological Innovation Award Number: R305D140032
Description:

Co-Principal Investigator: Daniel McCaffrey

Test scores are instrumental in education research and decision making. They are used for a growing number of high-stakes decisions about students, teachers, and schools, and to measure the effectiveness of education policies and interventions. In observational studies, where treatment and control groups of students typically differ on attributes related to achievement, test scores are often the key control variables used to adjust for pre-existing differences among groups. When test scores are used as covariates, however, their measurement error can bias parameter estimates from statistical models. Observational analyses in education that use test scores as control variables, whether to estimate treatment effects, teacher value-added effects, student growth distributions, or peer effects, risk potentially large biases if measurement error is not addressed.
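As a simple illustration of this attenuation problem (a generic sketch, not material from the grant), the following R snippet regresses a simulated outcome on both an error-free and an error-prone test score; the variable names and the simulated error variance of 0.36 are assumptions chosen for the example.

    # Illustrative sketch only: classical measurement error in a covariate
    # attenuates its estimated regression coefficient toward zero.
    set.seed(1)
    n          <- 10000
    true_score <- rnorm(n)                                  # unobserved "true" achievement
    y          <- 1 + 0.5 * true_score + rnorm(n, 0, 0.5)   # outcome depends on true achievement
    obs_score  <- true_score + rnorm(n, 0, 0.6)             # observed score = true score + error

    coef(lm(y ~ true_score))["true_score"]   # recovers approximately 0.5
    coef(lm(y ~ obs_score))["obs_score"]     # attenuated to roughly 0.5 / (1 + 0.36), about 0.37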

Tools and guidance for properly adjusting for measurement error are lacking. This project will fill that gap in several ways. The research team will conduct case studies of teacher value-added estimation that apply different measurement error correction approaches, in order to understand how these approaches behave in real data and to answer outstanding questions about their properties and feasibility. The case studies will yield new knowledge and methodological recommendations that will allow analysts to make more informed decisions when analyzing observational data. The researchers will also build a library for the popular statistical software package R that provides functions supporting all common approaches to measurement error correction using test scores and conditional standard error of measurement (CSEM) information. Education researchers will thus have relatively user-friendly access to these models and to the recommendations derived from this project.
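One common correction strategy, simulation-extrapolation (SIMEX), which several of the publications listed below extend, can be sketched in a few lines of R. This is a minimal illustration under the assumption of a known, constant error variance; the function name simex_slope, its arguments, and the quadratic extrapolation step are choices made for this sketch, not the interface of the library described above.

    # Minimal SIMEX sketch under a known, constant error variance sigma2_u.
    simex_slope <- function(y, w, sigma2_u, lambdas = seq(0, 2, 0.5), B = 200) {
      # For each lambda, add extra simulated error with variance lambda * sigma2_u,
      # refit the naive regression, and average the slope over B replicates.
      mean_slopes <- sapply(lambdas, function(lam) {
        mean(replicate(B, {
          w_star <- w + rnorm(length(w), sd = sqrt(lam * sigma2_u))
          coef(lm(y ~ w_star))[2]
        }))
      })
      # Extrapolate the slope-versus-lambda trend back to lambda = -1 (no error).
      fit <- lm(mean_slopes ~ lambdas + I(lambdas^2))
      unname(predict(fit, newdata = data.frame(lambdas = -1)))
    }

Applied to the simulated example above with sigma2_u = 0.36, this kind of extrapolation moves the naive estimate of roughly 0.37 most of the way back toward the true coefficient of 0.5; handling score-specific CSEMs rather than a single known error variance is one of the harder problems the project addresses.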

Publications

Journal article, monograph, or newsletter

Lockwood, J.R., and Castellano, K.E. (2017). Estimating True Student Growth Percentile Distributions Using Latent Regression Multidimensional IRT Models. Educational and Psychological Measurement, 77(6), 917–944.

Lockwood, J.R., and Castellano, K.E. (2015). Alternative Statistical Frameworks for Student Growth Percentile Estimation. Statistics and Public Policy, 2(1), 1–9.

Lockwood, J.R., and McCaffrey, D.F. (2017). Simulation-Extrapolation With Latent Heteroskedastic Error Variance. Psychometrika, 82(3), 717–736.

Lockwood, J.R., and McCaffrey, D.F. (2016). Matching and Weighting With Functions of Error-Prone Covariates for Causal Inference. Journal of the American Statistical Association, 111(516), 1831–1839.

Lockwood, J.R., and McCaffrey, D.F. (2015). Simulation-Extrapolation for Estimating Means and Causal Effects With Mismeasured Covariates. Observational Studies, 1, 241–290.

Weiss, M.J., Lockwood, J.R., and McCaffrey, D.F. (2016). Estimating the Standard Error of the Impact Estimator in Individually Randomized Trials With Clustering. Journal of Research on Educational Effectiveness, 9(3), 421–444.

