
IES Grant

Title: Practical Tools for Multilevel Hierarchical Modeling in Education Research
Center: NCER
Year: 2010
Principal Investigator: Gelman, Andrew
Awardee: Columbia University
Program: Statistical and Research Methodology in Education
Award Period: 3 years
Award Amount: $1,125,301
Type: Methodological Innovation
Award Number: R305D100017
Description:

Co-Principal Investigators: Liu, Jingchen; Rabe-Hesketh, Sophia

Purpose: In this project, the researchers developed and tested new approaches to support applied researchers' use of Bayesian modal estimation (also called maximum a posteriori estimation) for multilevel models. The goal of this methodological innovation was to make it easier for applied researchers to avoid nonsensical results when they use multilevel models.

Multilevel data structures are common in education research, in studies that range from descriptive analyses of children nested within classrooms and schools to formal cluster randomized field experiments. A problem frequently encountered when fitting such models with standard software (such as HLM, Stata, R, or SPSS) is that the (restricted) maximum likelihood estimates lie on the boundary of the parameter space or that the algorithm fails to converge. Either way, the resulting estimates can fail to make sense. Examples include variance components estimated as zero, correlations between random intercepts and random slopes estimated as 1 or -1, and non-convergence due to sparseness or separation (when a linear combination of the predictors perfectly predicts the outcome). These problems are particularly common in small studies of educational interventions, which often involve fewer than 10 or 15 classrooms. In larger studies, the same concerns arise with grouping factors such as school district or study site that have only a small number of levels, or when specifying random slopes for variables that do not always vary substantially within groups. The consequences include understated standard errors of regression coefficients and unrealistic inferences about the random effects.
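
For illustration, the following R sketch (using the lme4 package on simulated, hypothetical data) shows how easily such boundary estimates arise when a random-slope model is fit to a study with only eight classrooms; the simulation settings are illustrative and not drawn from the project.

    library(lme4)

    # Hypothetical small study: 8 classrooms, 12 students each, with small true
    # variance components. REML fits of a random-intercept, random-slope model
    # in this setting often land on the boundary (zero variance, correlation +/-1).
    set.seed(2010)
    J <- 8; n <- 12
    id <- rep(1:J, each = n)
    classroom <- factor(id)
    x <- rnorm(J * n)                  # student-level predictor
    u <- rnorm(J, sd = 0.3)            # classroom intercept deviations
    b <- rnorm(J, sd = 0.1)            # small classroom slope deviations
    y <- 1 + (0.5 + b[id]) * x + u[id] + rnorm(J * n)

    fit <- lmer(y ~ x + (1 + x | classroom))
    VarCorr(fit)     # slope variance and/or correlation frequently sit on the boundary
    isSingular(fit)  # TRUE flags an estimate on the boundary of the parameter space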

In these situations, researchers can turn to Bayesian methods, which use prior distributions to guarantee sensible estimates. In this project, the researchers developed a Bayesian estimation algorithm together with weakly informative priors constructed so that the algorithm avoids estimating correlations of 1 or variances of 0. A "weakly informative" prior is one that rules out meaningless estimates when the data provide inadequate information on a parameter, but has essentially no effect on the estimates when the data provide sufficient information on that parameter. The prior distributions developed in this project were designed to meet three further conditions. First, they are computationally convenient: their density functions are easy to evaluate, and their functional forms (and derivatives) are smooth ("analytic") enough for iterative optimization algorithms to be applied. Second, they are parameterized with enough flexibility to be tuned to various a priori beliefs. Third, the tuning parameters have direct practical interpretations, so that it is clear to applied modelers that the priors will have minimal influence on parameters that are well identified from the data.
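
As a minimal illustration of the idea, the sketch below applies a prior of this type in the simplest relevant setting, a random-effects meta-analysis (the setting of Chung, Rabe-Hesketh, and Choi, 2013, listed under Select Publications): a gamma prior on the between-study standard deviation with shape 2 and rate near zero contributes approximately log(tau) to the log-likelihood, which keeps the posterior mode strictly above zero. The data and code are a hypothetical sketch, not the project's software.

    # Bayes modal (maximum a posteriori) estimation for a random-effects meta-analysis:
    # y_i ~ N(theta, tau^2 + s_i^2). A gamma prior on tau with shape 2 and rate near
    # zero has log-density approximately log(tau), so the penalized objective goes
    # to -Inf as tau -> 0 and the mode stays strictly positive.

    y <- c(0.12, -0.05, 0.30, 0.08, 0.02)   # hypothetical study effect estimates
    s <- c(0.15, 0.20, 0.25, 0.18, 0.22)    # their (known) standard errors

    neg_log_post <- function(par, penalize = TRUE) {
      theta <- par[1]
      tau   <- exp(par[2])                  # optimize on the log scale so tau > 0
      v     <- tau^2 + s^2
      loglik  <- -0.5 * sum(log(v) + (y - theta)^2 / v)
      penalty <- if (penalize) log(tau) else 0
      -(loglik + penalty)
    }

    ml  <- optim(c(0, log(0.1)), neg_log_post, penalize = FALSE)  # maximum likelihood
    map <- optim(c(0, log(0.1)), neg_log_post, penalize = TRUE)   # Bayes modal estimate

    c(tau_ML = exp(ml$par[2]), tau_mode = exp(map$par[2]))
    # The ML estimate of tau can collapse to (numerically) zero; the penalized
    # (Bayes modal) estimate cannot.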

Project Activities: A practical obstacle to using Bayesian methods is that current methods (Markov chain Monte Carlo) require considerable expertise, both for model specification and for convergence checking, and can be prohibitively slow. The project team therefore developed and implemented efficient algorithms for Bayes modal estimation of multilevel models with weakly informative priors in standard software (R and Stata). In doing so, they demonstrated how to reduce computation time by using starting values from related models and stopping the iterations sooner. In addition, they extended their approach to handle complex problems with standard multilevel models by supporting models that include high-order ("deep") interactions, for example allowing treatment effects to differ between subgroups of students classified by age, sex, and ethnicity.
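
The R sketch below illustrates the warm-start idea with lme4 on simulated, hypothetical data; the preliminary subset fit, the model, and the deep-interaction formula in the closing comment are assumptions for illustration rather than the project's implementation.

    library(lme4)

    # Hypothetical data: 40 schools, 25 students each, a randomized treatment,
    # and school-level intercept and treatment-effect deviations.
    set.seed(1)
    J <- 40; n <- 25
    id <- rep(1:J, each = n)
    treatment <- rbinom(J * n, 1, 0.5)
    u0 <- rnorm(J, sd = 0.5)             # school intercepts
    u1 <- rnorm(J, sd = 0.3)             # school-specific treatment effects
    score <- 1 + 0.4 * treatment + u0[id] + u1[id] * treatment + rnorm(J * n)
    d <- data.frame(score, treatment, school = factor(id))

    # Warm start: fit a related, cheaper model (here, a subset of schools) and
    # reuse its variance parameters ("theta") as starting values for the full fit.
    d_small <- droplevels(d[d$school %in% levels(d$school)[1:15], ])
    fit0 <- lmer(score ~ treatment + (1 + treatment | school), data = d_small)
    fit  <- lmer(score ~ treatment + (1 + treatment | school), data = d,
                 start = list(theta = getME(fit0, "theta")))

    # Deep interactions could be handled analogously by treating subgroup cells
    # as a grouping factor, e.g. (hypothetical variables):
    #   score ~ treatment + (1 + treatment | age:sex:ethnicity) + (1 | school)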

Products and Publications

ERIC Citations: Available citations for this award can be found in ERIC.

Select Publications:

Adler, R.J., Blanchet, J.H., & Liu, J. (2012). Efficient Monte Carlo for high excursions of Gaussian random fields. The Annals of Applied Probability, 22(3): 1167–1214.

Chung, Y., Gelman, A., Rabe-Hesketh, S., Liu, J., & Dorie, V. (2015). Weakly informative prior for point estimation of covariance matrices in hierarchical models. Journal of Educational and Behavioral Statistics, 40(2): 136–157.

Chung, Y., Rabe-Hesketh, S., & Choi, I.H. (2013). Avoiding zero between-study variance estimates in random-effects meta-analysis. Statistics in Medicine, 32: 4071–4089.

Chung, Y., Rabe-Hesketh, S., Dorie, V., Gelman, A., & Liu, J. (2013). A nondegenerate penalized likelihood estimator for variance parameters in multilevel models. Psychometrika, 78(4): 685–709.

Liu, J. (2012). Tail approximations of integrals of Gaussian random fields. The Annals of Probability, 40(3): 1069–1104.

Liu, J., & Xu, G. (2012a). Rare-event simulations for exponential integrals of smooth Gaussian processes. In Proceedings of the Winter Simulation Conference (pp. 36). Berlin, Germany: Institute of Electrical and Electronics Engineers (IEEE).

Liu, J., & Xu, G. (2012b). Some asymptotic results of Gaussian random fields with varying mean functions and the associated processes. The Annals of Statistics, 40(1): 262–293.

Liu, J., & Xu, G. (2013). On the density functions of integrals of Gaussian random fields. Advances in Applied Probability, 45(2): 398–424.

Liu, J., Xu, G., & Ying, Z. (2012). Data-driven learning of Q-matrix. Applied Psychological Measurement, 36(7): 548–564.

