
IES Grant

Title: Assessing Generalizability and Variability of Single-Case Design Effect Sizes Using Multilevel Modeling Including Moderators
Center: NCER
Year: 2019
Principal Investigator: Moeyaert, Mariola
Awardee: State University of New York (SUNY), Albany
Program: Statistical and Research Methodology in Education–Early Career
Award Period: 2 years (07/01/19–06/30/21)
Award Amount: $224,997
Type: Methodological Innovation
Award Number: R305D190022
Description:

Purpose: The growing number of published single-case experimental design (SCED) studies in the education sciences can inform policy, research, and practice decisions. Because such decisions carry substantial consequences, they should rest on scientific evidence, which requires summarizing large bodies of SCED literature in a standardized, objective, reliable, and valid manner. One technique developed for this purpose is multilevel meta-analysis. Because a single SCED study can involve multiple cases, with an effect size calculated for each case, a three-level meta-analysis is recommended: it takes into account the hierarchical structure of SCED data, in which effect sizes are clustered within cases and cases in turn are clustered within studies. The goal of this research project was to contribute to evidence-based decisions, research, and practice in education by further developing multilevel meta-analysis.
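To make the hierarchical structure concrete, here is a minimal sketch of the standard three-level meta-analytic model described above; the notation (ES_jk for the effect size of case j in study k, a single study-level moderator W_k) is generic illustration, not taken from the project's own reports:

```latex
\begin{align*}
ES_{jk}    &= \beta_{jk} + e_{jk},                   & e_{jk} &\sim N(0, \sigma^2_{e}) && \text{(level 1: within-case sampling error)} \\
\beta_{jk} &= \theta_{k} + u_{jk},                   & u_{jk} &\sim N(0, \sigma^2_{u}) && \text{(level 2: between cases within a study)} \\
\theta_{k} &= \gamma_{0} + \gamma_{1} W_{k} + v_{k}, & v_{k}  &\sim N(0, \sigma^2_{v}) && \text{(level 3: between studies)}
\end{align*}
```

Here gamma_0 is the overall average effect, gamma_1 is the moderator effect, and sigma^2_u and sigma^2_v are the case-level and study-level variance components that moderators help explain.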

Project Activities: The research team empirically validated the multilevel meta-analytic model, including moderators, in order to explain variability among effect sizes at the case level and at the study level. They also developed and empirically validated power calculations for detecting meaningful moderator effects. To accomplish these validation goals, the researchers conducted large-scale Monte Carlo simulation studies that provide guidance for study design. Finally, the researchers evaluated the current What Works Clearinghouse (WWC) standards for combining SCED studies (including moderators) and made recommendations based on the results of these empirical validations.
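As an illustration of the simulate-estimate-count structure of such a Monte Carlo power study, the following Python sketch estimates the power to detect a study-level moderator under deliberately simplified assumptions (equal numbers of cases per study, equal variances, and a collapsed OLS test on study-mean effect sizes rather than a full three-level fit); all parameter values and the function name simulate_power are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)

def simulate_power(n_studies=30, n_cases=4, gamma0=0.5, gamma1=0.3,
                   sd_study=0.2, sd_case=0.3, sd_error=0.4,
                   n_reps=2000, alpha=0.05):
    """Monte Carlo power to detect a study-level moderator (hypothetical setup).

    Data-generating model (three levels):
      ES_jk   = beta_jk + e_jk               (level 1: sampling error)
      beta_jk = theta_k + u_jk               (level 2: between cases)
      theta_k = gamma0 + gamma1*W_k + v_k    (level 3: between studies)

    For simplicity, the moderator effect is tested by regressing the
    study-mean effect sizes on W_k with OLS; this is a valid test here
    because every simulated study has the same number of cases and the
    same variance components.
    """
    rejections = 0
    for _ in range(n_reps):
        W = rng.binomial(1, 0.5, size=n_studies)                 # binary moderator
        theta = gamma0 + gamma1 * W + rng.normal(0, sd_study, n_studies)
        # case-level true effects and observed effect sizes, then study means
        cases = theta[:, None] + rng.normal(0, sd_case, (n_studies, n_cases))
        es = cases + rng.normal(0, sd_error, (n_studies, n_cases))
        y = es.mean(axis=1)
        # two-sided test of the moderator slope
        if stats.linregress(W, y).pvalue < alpha:
            rejections += 1
    return rejections / n_reps

print(f"Estimated power: {simulate_power():.3f}")
```

In the project's actual validations, the full three-level model would be estimated (e.g., via restricted maximum likelihood), but the overall loop is the same: generate data under known parameters, fit the model, and record how often the moderator test rejects.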

Related IES Projects: Multilevel Modeling of Single-subject Experimental Data: Handling Data and Design Complexities (R305D150007)

Products and Publications

ERIC Citations: Available citations for this award can be found in ERIC.

Select Publications:

Fingerhut, J., & Moeyaert, M. (2022). Training individuals to implement discrete trials with fidelity: A meta-analysis. Focus on Autism and Other Developmental Disabilities, 37(4), 239–250.

Fingerhut, J., Moeyaert, M., Manolov, R., Xu, X., & Park, K. H. (2023). Systematic review of descriptions and justifications provided for single-case quantification techniques. Behavior Modification, 01454455231178469.

Fingerhut, J., Xu, X., & Moeyaert, M. (2021a). Selecting the proper Tau-U measure for single-case experimental designs: Development and application of a decision flowchart. Evidence-Based Communication Assessment and Intervention, 15(3), 99–114.

Fingerhut, J., Xu, X., & Moeyaert, M. (2021b). Impact of within-case variability on Tau-U indices and the hierarchical linear modeling approach for multiple-baseline design data: A Monte Carlo simulation study. Evidence-Based Communication Assessment and Intervention, 15(3), 115–141.

Miocevic, M., Klaassen, F., Geuke, G., Moeyaert, M., & Maric, M. (2020). Using Bayesian methods to test mediators of intervention outcomes in single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 14(1–2), 52–68.

Miocevic, M., Klaassen, F., Moeyaert, M., & Geuke, G. G. (2023). Optimal practices for mediation analysis in AB single case experimental designs. The Journal of Experimental Education, 1–18.

Moeyaert, M., & Dehghan-Chaleshtori, M. (2022). Evaluating the design and evidence of single-case experimental designs using the What Works Clearinghouse standards. Evidence-Based Communication Assessment and Intervention, 1–19.

Moeyaert, M., & Fingerhut, J. (2022). Quantitative synthesis of personalized trials studies: Meta-analysis of aggregated data versus individual patient data. Harvard Data Science Review, 202212, 101162.

Moeyaert, M., & Yang, P. (2021). Assessing generalizability and variability of single-case design effect sizes using two-stage multilevel modeling including moderators. Behaviormetrika, 48(2), 207–229.

Moeyaert, M., Akhmedjanova, D., Ferron, J., Beretvas, S. N., & Van den Noortgate, W. (2020). Effect size estimation for combined single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 14(1–2), 28–51.

Moeyaert, M., Bursali, S., & Ferron, J. M. (2021). SCD-MVA: A mobile application for conducting single-case experimental design research during the pandemic. Human Behavior and Emerging Technologies, 3(1), 75–96.

Moeyaert, M., Yang, P., & Xue, Y. (2023). Individual participant data meta-analysis including moderators: Empirical validation. The Journal of Experimental Education, 1–18.

Moeyaert, M., Yang, P., & Xu, X. (2022). The power to explain variability in intervention effectiveness in single-case research using hierarchical linear modeling. Perspectives on Behavior Science, 45(1), 13–35.

Moeyaert, M., Yang, P., Xu, X., & Kim, E. (2021). Characteristics of moderators in meta-analyses of single-case experimental design studies. Behavior Modification, 01454455211002111.

