
IES Grant

Title: Non-Linear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm
Center: NCER Year: 2010
Principal Investigator: Cai, Li Awardee: University of California, Los Angeles
Program: Statistical and Research Methodology in Education
Award Period: 3 years Award Amount: $994,000
Type: Methodological Innovation Award Number: R305D100039
Description:

Co-Principal Investigator: Seltzer, Michael

Purpose: The goal of this project was to bring together the benefits of multilevel modeling and latent variable modeling. To do so, the project proposed a flexible nonlinear multilevel latent variable modeling framework under which: (1) random effects and latent variables are treated synonymously because both represent unobserved heterogeneity; (2) a nonlinear random-effects regression model permits the specification and testing of important structural relations among latent variables (e.g., mediation or moderation effects); and (3) both the outcome variable and the predictors (at any level) can be latent variables measured with fallible indicators.
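For concreteness, a minimal two-level special case of such a framework might be written as follows; the notation and the particular quadratic structural term are illustrative choices for this description, not a specification taken from the project itself.

```latex
% Measurement model: a 2PL IRT model for binary indicator i of the
% student-level latent outcome \theta_{jk} (student j in school k).
\operatorname{logit}\,\Pr(y_{ijk}=1 \mid \theta_{jk}) = a_i\,\theta_{jk} + c_i

% Structural model: a nonlinear regression of the latent outcome on a latent
% predictor \xi_{jk} (itself measured by fallible indicators) plus a
% school-level random effect u_k.
\theta_{jk} = \beta_0 + \beta_1\,\xi_{jk} + \beta_2\,\xi_{jk}^{2} + u_k + \varepsilon_{jk},
\qquad u_k \sim N(0,\tau^2), \quad \varepsilon_{jk} \sim N(0,\sigma^2)
```

In this special case the school-level random effect and the student-level latent variables enter the model in the same way, and the quadratic term stands in for the more general nonlinear structural functions the framework allows.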

This nonlinear model provides flexibility in that: (1) the measurement models for latent variables are derived directly from multidimensional item response theory (IRT), allowing multiple types of observed variables (continuous, ordinal, nominal, count, etc.); and (2) a general nonlinear functional form is allowed at each level, including product interactions, polynomial effects, and nonlinear regression functions involving latent variables. The framework seeks to provide a systematic solution to measurement and modeling issues (e.g., attenuation problems caused by measurement error in predictors) routinely encountered in cluster-based experimental and quasi-experimental studies, as well as in studies of schooling based on large-scale longitudinal and cross-sectional data sets (e.g., studies in which cross-level interactions and contextual effects are of particular interest).
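As a quick, hypothetical illustration of the attenuation problem mentioned above (a toy simulation, not an analysis from the project), the sketch below regresses an outcome on an error-contaminated predictor and shows the estimated slope shrinking toward zero by roughly the predictor's reliability, which is the kind of bias that treating predictors as latent variables is meant to avoid.

```python
# Toy demonstration of attenuation from measurement error in a predictor.
# All quantities here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_x = rng.normal(size=n)                          # error-free predictor
y = 0.5 * true_x + rng.normal(scale=1.0, size=n)     # true slope = 0.5

reliability = 0.7                                    # share of observed variance that is true score
observed_x = true_x + rng.normal(scale=np.sqrt((1 - reliability) / reliability), size=n)

slope_true = np.cov(true_x, y)[0, 1] / np.var(true_x, ddof=1)
slope_observed = np.cov(observed_x, y)[0, 1] / np.var(observed_x, ddof=1)
print(round(slope_true, 3), round(slope_observed, 3))  # approximately 0.50 vs 0.35 (= 0.5 * reliability)
```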

The project also contributed to the speed of statistical computation in multilevel modeling through a computationally efficient Metropolis-Hastings Robbins-Monro (MH-RM) algorithm designed to tackle the high-dimensional integration problem inherent in likelihood-based estimation and inference for such a general model. The MH-RM algorithm combines elements of Markov chain Monte Carlo (MCMC), widely used in Bayesian statistics, with stochastic approximation, an optimization method used in engineering.
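The following is a minimal sketch, in Python rather than the project's C++, of how these two ingredients can be combined for a simple one-parameter (Rasch) IRT model: latent traits are imputed with a few Metropolis-Hastings steps, and item difficulties are then updated with a Robbins-Monro stochastic-approximation step. It illustrates the general idea only; the project's actual algorithm handles the far more general nonlinear multilevel latent variable model, and all names below are invented for this example.

```python
# Minimal MH-RM sketch for a Rasch (1PL) IRT model: P(y=1) = sigmoid(theta_i - b_j).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate data: 500 persons with N(0,1) latent traits, 10 items.
n_persons, n_items = 500, 10
true_b = np.linspace(-1.5, 1.5, n_items)
true_theta = rng.normal(size=n_persons)
Y = (rng.random((n_persons, n_items)) < sigmoid(true_theta[:, None] - true_b)).astype(float)

def complete_loglik_person(theta, b, y):
    """Complete-data log-likelihood of each person's responses plus the N(0,1) prior on theta."""
    p = sigmoid(theta[:, None] - b)
    return (y * np.log(p) + (1 - y) * np.log(1 - p)).sum(axis=1) - 0.5 * theta**2

b = np.zeros(n_items)        # starting values for item difficulties
theta = np.zeros(n_persons)  # current imputations of the latent traits
info = np.ones(n_items)      # recursive (diagonal) approximation of the complete-data information

for k in range(1, 501):
    # 1) Imputation: a few Metropolis-Hastings steps targeting p(theta | y, b).
    for _ in range(3):
        prop = theta + rng.normal(scale=1.0, size=n_persons)
        log_ratio = complete_loglik_person(prop, b, Y) - complete_loglik_person(theta, b, Y)
        accept = np.log(rng.random(n_persons)) < log_ratio
        theta = np.where(accept, prop, theta)

    # 2) Complete-data score and (diagonal) information for the item difficulties.
    p = sigmoid(theta[:, None] - b)
    score = (p - Y).sum(axis=0)        # d logL / d b_j at the imputed thetas
    hess = (p * (1 - p)).sum(axis=0)   # expected complete-data information

    # 3) Robbins-Monro update with decreasing gain gamma_k = 1/k.
    gamma = 1.0 / k
    info = info + gamma * (hess - info)
    b = b + gamma * score / info

print(np.round(true_b, 2))  # generating difficulties
print(np.round(b, 2))       # MH-RM estimates (close to the generating values)
```

In this sketch the Metropolis-Hastings step replaces the high-dimensional integration over the latent traits with a handful of posterior draws per iteration, and the decreasing Robbins-Monro gain averages out the resulting Monte Carlo noise so the item parameters still converge.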

Project Activities: To reach its goal, the project team carried out the following steps. First, the team derived theoretical properties of the proposed model, focusing on identification and the substantive interpretability of its parameters; the modeling framework was implemented in the C++ programming language. Second, they extended the MH-RM algorithm and optimized it for use with the nonlinear multilevel latent variable model; the algorithm was likewise implemented in C++. Third, the project team conducted simulation studies to test the performance of the algorithm and to define the conditions under which the model can be applied. Fourth, the team developed new model-checking diagnostic procedures targeted at model-data fit. Fifth, the developed software and methods were used to analyze large-scale educational data sets (e.g., ECLS-K, LSAY, and PISA) to illustrate them empirically and to contrast the results with those from analyses using observed predictors. In addition, the efficiency of the MH-RM-based program was compared with that of other available programs (e.g., WinBUGS, Mplus, and Gllamm).

Publications and Products

Book chapter

Cai, L. (2013). Factor Analysis of Tests and Items. In K.F. Geisinger, B.A. Bracken, J.F. Carlson, J.C. Hansen, N.R. Kuncel, S.P. Reise, and M.C. Rodriguez (Eds.), APA Handbook of Testing and Assessment in Psychology, Volume 1: Test Theory and Testing and Assessment in Industrial and Organizational Psychology (pp. 85–100). Washington, DC: American Psychological Association.

Cai, L. (2018). Two-Tier Item Factor Analysis Modeling. In W.J. van der Linden (Ed.), Handbook of Modern Item Response Theory (2nd ed.). New York: Chapman and Hall.

Gibbons, R., and Cai, L. (2018). Dimensionality Assessment. In W.J. van der Linden (Ed.), Handbook of Modern Item Response Theory (2nd ed.). New York: Chapman and Hall.

Thissen, D., and Cai, L. (2018). Nominal Categories Models. In W.J. van der Linden (Ed.), Handbook of Modern Item Response Theory. New York: Chapman and Hall.

Journal article, monograph, or newsletter

Cai, L. (2010). A Two-Tier Full-Information Item Factor Analysis Model With Applications. Psychometrika, 75(4): 581–612.

Cai, L. (2015). Lord–Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. Psychometrika, 80(2): 535–559.

Cai, L., and Hansen, M. (2013). Limited-Information Goodness-of-Fit Testing of Hierarchical Item Factor Models. British Journal of Mathematical and Statistical Psychology, 66(2): 245–276.

Cai, L., Yang, J., and Hansen, M. (2011). Generalized Full-Information Item Bifactor Analysis. Psychological Methods, 16(3): 221–248.

Cole, D.A., Cai, L., Martin, N.C., Findling, R.L., Youngstrom, E.A., Garber, J., Curry, J.F., Hyde, J.S., Essex, M.J., Compas, B.E., Goodyer, I.M., Rohde, P., Stark, K.D., Slattery, M.J., and Forehand, R. (2011). Structure and Measurement of Depression in Youths: Applying Item Response Theory to Clinical Data. Psychological Assessment, 23(4): 819–833.

Falk, C.F., and Cai, L. (2016). Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis. Psychometrika, 81(2): 434–460.

Lee, T., and Cai, L. (2012). Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling. Journal of Educational and Behavioral Statistics, 37(6): 675–702.

Preston, K., Reise, S., Cai, L., and Hays, R.D. (2011). Using the Nominal Response Model to Evaluate Response Category Discrimination in the PROMIS Emotional Distress Item Pools. Educational and Psychological Measurement, 71(3): 523–550.

Tian, W., Cai, L., and Thissen, D. (2013). Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal. Educational and Psychological Measurement, 73(3): 412–439.

Woods, C.M., Cai, L., and Wang, M. (2013). The Langer-Improved Wald Test for DIF Testing With Multiple Groups: Evaluation and Comparison to Two-Group IRT. Educational and Psychological Measurement, 73(3): 532–547.

Yang, J., Hansen, M., and Cai, L. (2012). Characterizing Sources of Uncertainty in Item Response Theory Scale Scores. Educational and Psychological Measurement, 72(2): 264–290.

