
IES Grant

Title: Non-Linear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm
Center: NCER
Year: 2010
Principal Investigator: Cai, Li
Awardee: University of California, Los Angeles
Program: Statistical and Research Methodology in Education
Award Period: 3 years
Award Amount: $994,000
Goal: Methodological Innovation
Award Number: R305D100039
Description:

Co-Principal Investigator: Michael Seltzer (UCLA)

The goal of this project is to bring together the benefits of multilevel modeling and latent variable modeling. To do so, the project proposes a flexible nonlinear multilevel latent variable modeling framework under which: (1) random effects and latent variables are treated synonymously because both represent unobserved heterogeneity; (2) a nonlinear random-effects regression model permits the specification and testing of important structural relations (e.g., mediation or moderation effects) among latent variables; and (3) both the outcome variable and the predictors (at any level) can be latent variables measured with fallible indicators.
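As a concrete illustration of points (1)-(3) (a minimal sketch with hypothetical notation, not a parameterization taken from the project), a two-level structural model in this framework might take the form

    \eta_{ij} = \beta_{0j} + \beta_1 \xi_{ij} + \beta_2 \zeta_{ij} + \beta_3 \xi_{ij}\zeta_{ij} + \varepsilon_{ij}    (within clusters)
    \beta_{0j} = \gamma_0 + \gamma_1 \omega_j + u_{0j}    (between clusters)

where \eta_{ij}, \xi_{ij}, and \zeta_{ij} are latent variables measured by fallible indicators, \omega_j is a cluster-level latent variable, and u_{0j} is a cluster-level random effect, so random effects and latent variables enter on the same footing. The product term \beta_3 \xi_{ij}\zeta_{ij} is exactly the kind of nonlinear structural relation (here, a moderation effect among latent variables) the framework is designed to specify and test.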

This nonlinear model provides flexibility in two ways: (1) the measurement models for the latent variables are derived directly from multidimensional item response theory (IRT), allowing multiple types of observed variables (continuous, ordinal, nominal, count, etc.); and (2) a general nonlinear functional form is allowed at each level, including product interactions, polynomial effects, and nonlinear regression functions involving latent variables. The framework seeks to provide a systematic solution to measurement and modeling issues (e.g., attenuation problems connected with measurement error in predictors) routinely encountered in cluster-based experimental and quasi-experimental studies, as well as in studies of schooling based on large-scale longitudinal and cross-sectional data sets (e.g., studies in which cross-level interactions and contextual effects are of particular interest).
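For example (one standard IRT choice, sketched here for illustration rather than drawn from the project's specification), an ordinal indicator x_{ij} with categories c = 1, \ldots, C loading on a latent variable \xi_{ij} could follow the graded response model, with cumulative category probabilities

    P(x_{ij} \ge c \mid \xi_{ij}) = \frac{1}{1 + \exp\{-(a\,\xi_{ij} - b_c)\}},

while a continuous indicator would use a linear factor model and a nominal or count indicator would swap in a different link function. Letting each indicator carry its own link is what allows a single framework to mix observed variable types.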

The project also seeks to improve the speed of statistical computation in multilevel modeling through a computationally efficient Metropolis-Hastings Robbins-Monro (MH-RM) algorithm that tackles the high-dimensional integration problem inherent in likelihood-based estimation and inference for such a general model. The MH-RM algorithm combines elements of Markov chain Monte Carlo (MCMC), widely used in Bayesian statistics, with stochastic approximation, an optimization method used in engineering.
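To show the mechanics, here is a minimal sketch in C++ (the project's stated implementation language) of MH-RM iterations on a deliberately simple random-intercept model, y_ij = mu + theta_j + e_ij with known variances. The toy model and all identifiers are hypothetical; the actual algorithm handles the full nonlinear latent variable model, with a matrix-valued Robbins-Monro information update rather than the scalar one used here.

    // mhrm_sketch.cpp -- illustrative MH-RM cycle on a toy random-intercept
    // model: y_ij = mu + theta_j + e_ij, theta_j ~ N(0, tau2),
    // e_ij ~ N(0, sig2), with tau2 and sig2 known and only mu estimated.
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> stdnorm(0.0, 1.0);
        std::uniform_real_distribution<double> unif(0.0, 1.0);

        // Simulate data from the toy model with true mu = 1.5.
        const int J = 50, n = 20;  // clusters, observations per cluster
        const double tau2 = 1.0, sig2 = 1.0, trueMu = 1.5;
        std::vector<std::vector<double>> y(J, std::vector<double>(n));
        for (int j = 0; j < J; ++j) {
            double theta = std::sqrt(tau2) * stdnorm(rng);
            for (int i = 0; i < n; ++i)
                y[j][i] = trueMu + theta + std::sqrt(sig2) * stdnorm(rng);
        }

        std::vector<double> theta(J, 0.0);  // current imputed random effects
        double mu = 0.0;                    // starting value for the parameter

        for (int k = 1; k <= 2000; ++k) {
            // MH step: one random-walk Metropolis sweep over the theta_j,
            // targeting p(theta_j | y_j, mu) up to a normalizing constant.
            for (int j = 0; j < J; ++j) {
                auto logpost = [&](double t) {
                    double lp = -0.5 * t * t / tau2;  // prior N(0, tau2)
                    for (int i = 0; i < n; ++i) {
                        double r = y[j][i] - mu - t;
                        lp += -0.5 * r * r / sig2;    // likelihood terms
                    }
                    return lp;
                };
                double prop = theta[j] + 0.5 * stdnorm(rng);
                if (std::log(unif(rng)) < logpost(prop) - logpost(theta[j]))
                    theta[j] = prop;
            }

            // RM step: stochastic-approximation update of mu using the
            // complete-data score and information, with decreasing gain 1/k.
            double score = 0.0;
            for (int j = 0; j < J; ++j)
                for (int i = 0; i < n; ++i)
                    score += (y[j][i] - mu - theta[j]) / sig2;
            double info = static_cast<double>(J) * n / sig2;
            mu += (1.0 / k) * score / info;
        }
        std::printf("estimated mu = %.3f (true 1.5)\n", mu);
        return 0;
    }

Because the Robbins-Monro gain 1/k shrinks across iterations, the Monte Carlo noise in the imputed random effects averages out and the parameter sequence settles down without the long chains full MCMC would need; this averaging is the intuition behind the algorithm's efficiency on high-dimensional integration problems.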

To reach its goal, the project will carry out the following steps. First, it will derive theoretical properties of the proposed model, with a focus on identification and the substantive interpretability of the parameters; the modeling framework will be implemented in the C++ programming language. Second, it will extend the MH-RM algorithm and optimize it for the nonlinear multilevel latent variable model, likewise implementing the algorithm in C++. Third, it will conduct simulation studies to test the performance of the algorithm and define the conditions under which the model can be applied. Fourth, it will develop new model-checking diagnostic procedures targeted at model-data fit. Fifth, the developed software and methods will be used to analyze large-scale educational data sets (e.g., ECLS-K, LSAY, and PISA) to illustrate them empirically and to contrast the results with those from analyses using observed predictors. In addition, the efficiency of the MH-RM-based program will be compared with that of other available programs (e.g., WinBUGS, Mplus, or Gllamm).

Publications from this project:

Cai, L. (2010). A Two-Tier Full-Information Item Factor Analysis Model with Applications. Psychometrika, 75(4): 581–612.

Cai, L. (2013). Factor Analysis of Tests and Items. In K. F. Geisinger, B. A. Bracken, J. F. Carlson, J. C. Hansen, N. R. Kuncel, S. P. Reise, and M. C. Rodriguez (Eds.), APA Handbook of Testing and Assessment in Psychology, Vol. 1: Test Theory and Testing and Assessment in Industrial and Organizational Psychology (pp. 85–100). Washington, DC: American Psychological Association.

Cai, L., and Hansen, M. (2013). Limited-Information Goodness-of-Fit Testing of Hierarchical Item Factor Models. British Journal of Mathematical and Statistical Psychology, 66(2): 245–276.

Cai, L., Yang, J., and Hansen, M. (2011). Generalized Full-Information Item Bifactor Analysis. Psychological Methods, 16(3): 221–248.

Cole, D. A., Cai, L., Martin, N. C., Findling, R. L., Youngstrom, E. A., Garber, J., … Forehand, R. (2011). Structure and Measurement of Depression in Youths: Applying Item Response Theory to Clinical Data. Psychological Assessment, 23(4): 819–833.

Gibbons, R., and Cai, L. (in press). Dimensionality Assessment. In W. J. van der Linden and R. K. Hambleton (Eds.), Handbook of Modern Item Response Theory (2nd ed.). New York, NY: Chapman and Hall.

Lee, T., and Cai, L. (2012). Alternative Multiple Imputation Inference for Mean and Covariance Structure Modeling. Journal of Educational and Behavioral Statistics, 37(6): 675–702.

Preston, K., Reise, S., Cai, L., and Hays, R. D. (2011). Using the Nominal Response Model to Evaluate Response Category Discrimination in the PROMIS Emotional Distress Item Pools. Educational and Psychological Measurement, 71(3): 523–550.

Thissen, D., and Cai, L. (in press). Nominal Categories Models. In W. J. van der Linden and R. K. Hambleton (Eds.), Handbook of Modern Item Response Theory (2nd ed.). New York, NY: Chapman and Hall.

Tian, W., Cai, L., Thissen, D., and Xin, T. (2013). Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal. Educational and Psychological Measurement, 73(3): 412–439.

Woods, C. M., Cai, L., and Wang, M. (2013). The Langer-Improved Wald Test for DIF Testing with Multiple Groups: Evaluation and Comparison to Two-Group IRT. Educational and Psychological Measurement, 73(3): 532–547.

Yang, J., Hansen, M., and Cai, L. (2012). Characterizing Sources of Uncertainty in Item Response Theory Scale Scores. Educational and Psychological Measurement, 72(2): 264–290.

