Title: Generalized Dimensionality Assessment for Multidimensional Psychometric Models
Principal Investigator: Levy, Roy
Awardee: Arizona State University
Program: Statistical and Research Methodology in Education
Award Period: 2 years
Award Amount: $251,476
Type: Methodological Innovation
Award Number: R305D100021
This project develops statistical procedures for conducting dimensionality analysis in the context of multidimensional item response theory that are generalizable to a broad class of related latent variable statistical models. Psychometric models are motivated by the fact that the constructs of greatest interest cannot be directly observed, and so they use latent variables to represent those constructs. Observed variables then serve as indicators of the underlying constructs. Popular item response theory (IRT) and related models typically structure the observable variables (i.e., scored item responses) as stochastically dependent on the latent variables.
Unidimensional models, in which a single latent variable underlies a set of observable variables, have predominantly been employed in practice. Multidimensional models, in which two or more latent variables may be related and may underlie some of the same observable variables, have been developed to address theories that multiple skills or proficiencies are brought to bear in responding to items. The combination of Bayesian modeling strategies and modern computational capabilities has facilitated the estimation of complex, multidimensional models, and with their emergence comes the need for appropriate model checking and model criticism procedures. Past work has shown that a model with an improperly specified dimensional structure may lead to (a) incorrect estimates of the values of variables; (b) incorrect estimates of the precision of those estimates; and (c) associated errors concerning (1) estimates of information and measurement precision; (2) linking of latent scales; (3) construction of measures and instruments; and (4) the interpretations of the latent variables and the inferences and decisions based on the model.
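To make the unidimensional versus multidimensional distinction concrete, here is a minimal sketch of a compensatory multidimensional two-parameter logistic (M2PL) item response function. The parameter values are illustrative assumptions, not values from the project:

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """Compensatory M2PL item response function: probability of a correct
    response given latent proficiency vector theta, discrimination vector a,
    and intercept d. With one-element vectors this reduces to the 2PL."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

# A unidimensional item depends on a single latent variable...
p_uni = m2pl_prob(np.array([1.0]), np.array([1.2]), -0.5)
# ...while a multidimensional item draws on two proficiencies at once.
p_multi = m2pl_prob(np.array([0.5, -0.3]), np.array([1.0, 0.8]), 0.2)
```

In the multidimensional case, a high standing on one latent variable can compensate for a low standing on another, which is what makes misspecifying the dimensional structure consequential for the inferences listed above.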
Standard dimensionality assessment techniques for IRT models are often applicable only to unidimensional models. This project develops statistical procedures for conducting dimensionality analysis for multidimensional IRT models. The goals of the project are to (1) develop a set of statistical procedures, including new discrepancy measures, for dimensionality analysis at the test, subtest, and item levels; (2) study and evaluate the performance of these procedures; and (3) create and freely distribute software for conducting such analyses in the R environment.
The development of new discrepancy measures will build on past work by using the model-based covariance (MBC) for item pairs as a building block to construct a generalized dimensionality discrepancy measure (GDDM). Further, both the MBC and the GDDM will be standardized to produce more interpretable discrepancy measures for assessing dimensionality. In addition, the MBC, the GDDM, and their standardized versions will be modified to accommodate missing data. The project aims to develop a comprehensive strategy for dimensionality assessment that combines results at the test, subtest, and item-pair levels of analysis.
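As a rough illustration of these building blocks, the sketch below computes a pairwise model-based covariance from residuals (observed responses minus model-implied expected responses, conditional on the latent variables) and averages the absolute off-diagonal values into a GDDM-style summary. The exact definitions used in the project may differ (e.g., in how expectations are averaged over posterior draws); the function names and this residual-based form are assumptions for illustration:

```python
import numpy as np

def mbc_matrix(x, expected):
    """Model-based covariance for each item pair: covariance of observed
    responses after subtracting the model-implied expected responses.
    x and expected are (N persons x J items) arrays."""
    resid = x - expected
    n = x.shape[0]
    return resid.T @ resid / n          # J x J matrix of pairwise MBCs

def gddm(x, expected):
    """GDDM-style summary: mean of the absolute MBC values over the
    unique (off-diagonal) item pairs."""
    mbc = mbc_matrix(x, expected)
    iu = np.triu_indices(mbc.shape[0], k=1)
    return float(np.mean(np.abs(mbc[iu])))
```

Under a correctly specified dimensional structure, residuals for distinct items are approximately uncorrelated, so the pairwise MBC values, and hence this summary, should be near zero; systematically large values flag dependence the model's latent variables fail to account for.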
The use of these discrepancy measures to support inferences requires a framework for evaluating the values of the statistics. Posterior predictive model checking (PPMC), a flexible Bayesian approach that can be employed in a wide variety of settings, will serve as the model-checking framework for this work.
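In PPMC, a discrepancy measure computed on the observed data is compared, draw by draw, with the same measure computed on data replicated from the fitted model; the posterior predictive p-value is the proportion of draws in which the replicated discrepancy is at least as large. A generic skeleton follows; the argument names are hypothetical placeholders, not the project's software interface:

```python
import numpy as np

def ppmc_p_value(discrepancy, draws, x_obs, simulate, rng):
    """Posterior predictive p-value for a discrepancy measure.
    For each posterior draw, compute the discrepancy on the observed data
    and on data replicated from that draw, and return the proportion of
    draws in which the replicated value is at least as extreme."""
    exceed = 0
    for params in draws:
        d_obs = discrepancy(x_obs, params)
        x_rep = simulate(params, rng)        # replicate data under the model
        d_rep = discrepancy(x_rep, params)
        exceed += (d_rep >= d_obs)
    return exceed / len(draws)
```

For a measure like the GDDM, a posterior predictive p-value near 0 indicates that the observed data exhibit more residual dependence than the fitted model can reproduce, evidence of a misspecified dimensional structure.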
A simulation study will be conducted to examine the proposed dimensionality analysis procedures in contexts with dichotomous and polytomous data, the possibility of guessing (warranted for multiple-choice formats), and missing data. The study will also provide evidence regarding the utility of PPMC with the discrepancy measures and will help prepare for an examination of the procedures using data from the National Assessment of Educational Progress Science Assessment.
Journal article, monograph, or newsletter
Levy, R., Xu, Y., Yel, N., and Svetina, D. (2015). A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models. Journal of Educational Measurement, 52(2), 144–158.
Svetina, D., and Levy, R. (2014). A Framework for Dimensionality Assessment for Multidimensional Item Response Models. Educational Assessment, 19(1), 35–57.
Svetina, D., and Levy, R. (2012). An Overview of Software for Conducting Dimensionality Assessment in Multidimensional Models. Applied Psychological Measurement, 36(8), 659–669.