
Design Comparable Incidence Rate Ratio (IRR) Effect Sizes for Count Data in Single-Case Experimental Designs

NCER
Program: Statistical and Research Methodology in Education
Program topic(s): Core
Award amount: $891,958
Principal investigator: Wen Luo
Awardee: Texas A&M University
Year: 2024
Project type: Methodological Innovation
Award number: R305D240019

Purpose

The purpose of this project is to support research synthesis of single-case experimental designs (SCEDs) in which outcomes are measured as counts. The project addresses the need for design-comparable effect size measures in SCEDs, recognizing both the limitations of existing measures such as the between-case standardized mean difference (BC-SMD) and the absence of a suitable design-comparable effect size for count outcomes.

Project Activities

The primary objective is to develop and evaluate methods for estimating between-case incidence rate ratio (BC-IRR) effect sizes for count data in SCEDs. Additionally, the project seeks to establish domain-specific IRR benchmarks through meta-analyses of SCED studies and create user-friendly tools such as an R package and a web-based Shiny application to enhance accessibility and simplify the process of obtaining these effect sizes for researchers in the SCED field.

Structured Abstract

Research design and methods

To investigate the performance of the proposed BC-IRR method, a comprehensive Monte Carlo simulation study will be conducted. Count data will be generated based on a multiple baseline design, varying series length, number of cases, baseline trend, treatment effect, between-case variance, and degree of overdispersion. A BC-IRR will be estimated for each generated dataset using both the generalized linear mixed model (GLMM) and the generalized estimating equation (GEE) approaches, and statistical inferences will be conducted using a Wald test. The performance of the estimators and their inferences will be evaluated in terms of bias, mean squared error (MSE), coverage rate of the 95% confidence interval, and empirical Type I error rate.

To develop the IRR benchmarks, the project team will adopt the "across-studies comparison" approach, in which a distribution of empirical effect sizes is established through meta-analysis. The work will focus on two common outcome domains in SCEDs: reading and behavior. For the behavioral domain, the analysis will further distinguish between behaviors with positive and negative valence. In total, three empirical distributions of IRR effect sizes will be established, each consisting of 200 effect size estimates. For each study, the project team will estimate a within-case IRR and a between-case IRR effect size for the immediate treatment effect, adjusting for baseline trend. After the distributions of effect size estimates are obtained, ranges of effect size values will be derived from the quantiles (e.g., bottom 25%, 25–50%, 50–75%, and top 25%) to serve as empirical benchmarks.
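The evaluation criteria above (bias, coverage of the 95% confidence interval) can be illustrated with a minimal sketch. The project's actual estimators use GLMM and GEE; as a stand-in assumption, the sketch below uses a simple single-case Poisson rate-ratio estimator with a Wald interval on the log scale, and tallies bias and coverage across simulation replications the way a Monte Carlo study would:

```python
import numpy as np

def simulate_irr_study(lam_base=5.0, irr_true=0.5, n_base=10, n_trt=10, rng=None):
    """Simulate one case: baseline counts ~ Poisson(lam_base), treatment
    counts ~ Poisson(lam_base * irr_true); return the log-IRR estimate
    and its 95% Wald interval."""
    rng = rng if rng is not None else np.random.default_rng()
    y_base = rng.poisson(lam_base, n_base)
    y_trt = rng.poisson(lam_base * irr_true, n_trt)
    if y_base.sum() == 0 or y_trt.sum() == 0:
        return None  # log-IRR undefined for all-zero phases
    # For a Poisson rate, var(log rate) is approximately 1 / total count.
    log_irr = np.log(y_trt.mean()) - np.log(y_base.mean())
    se = np.sqrt(1.0 / y_trt.sum() + 1.0 / y_base.sum())
    return log_irr, (log_irr - 1.96 * se, log_irr + 1.96 * se)

def evaluate(reps=2000, irr_true=0.5, seed=1):
    """Monte Carlo evaluation: bias of the log-IRR and 95% CI coverage."""
    rng = np.random.default_rng(seed)
    ests, covered = [], 0
    for _ in range(reps):
        out = simulate_irr_study(irr_true=irr_true, rng=rng)
        if out is None:
            continue
        log_irr, (lo, hi) = out
        ests.append(log_irr)
        covered += lo <= np.log(irr_true) <= hi
    return np.mean(ests) - np.log(irr_true), covered / len(ests)

bias, coverage = evaluate()
```

A full study along the lines described above would additionally vary series length, number of cases, baseline trend, between-case variance, and overdispersion, and would report MSE and empirical Type I error rates alongside bias and coverage.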

Products and publications

Products: To make the products accessible to both methodologists and applied researchers, the products of this project will be disseminated through multiple venues, including academic journals, professional conferences, workshops, the Open Science Framework, and social media.

ERIC Citations: Available citations for this award can be found in ERIC.

Related projects

Improving the Accessibility of Effect Size and Synthesis Methods for Single-Case Research

R324U190002

Multilevel Modeling of Single-subject Experimental Data: Handling Data and Design Complexities

R305D150007

Supplemental information

Co-Principal Investigators: Li, Haoran; Baek, Eunkyeng

User Testing: The developed R package and web-based calculator will be piloted by the advisory board and by graduate students recruited from the Co-PIs' SCED courses. The advisory board will inform improvements to the functionality, utility, and overall user-friendliness of these tools. Students will actively use the web-based calculator throughout the semester, and their feedback will be gathered through surveys, focus groups, and individual interviews. Any issues with functionality, clarity, or missing information reported by the students will be investigated to identify the underlying causes. Based on this feedback, the initial versions of the R package and web-based calculator will be revised and updated as needed, and any improvements made to the R package will be integrated into the web-based calculator to ensure consistency.

Use in Applied Education Research: The design comparable BC-IRRs can be used in research syntheses and meta-analyses involving group designs and SCED studies that deal with count outcomes, such as the frequency counts of target behaviors in a fixed period of time and the number of words correct per minute in reading. Researchers can effectively use the R package and the web-based calculator developed by the research team to compute IRR effect sizes for their SCEDs. After obtaining the IRR effect sizes, researchers can utilize the empirical benchmarks to interpret the magnitude of the effects.
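As an illustration of how the quantile-based benchmarks described above might be applied, the sketch below builds quartile cut points from a hypothetical distribution of IRR estimates (the lognormal sample is an assumption standing in for the 200 meta-analytic estimates) and places a new estimate within them:

```python
import numpy as np

def irr_benchmarks(irr_estimates):
    """Quartile cut points of an empirical IRR distribution, serving as
    domain-specific benchmark ranges."""
    q25, q50, q75 = np.quantile(irr_estimates, [0.25, 0.50, 0.75])
    return q25, q50, q75

def interpret(irr, cuts):
    """Place a new IRR estimate within the benchmark quartile ranges."""
    q25, q50, q75 = cuts
    if irr < q25:
        return "bottom 25%"
    if irr < q50:
        return "25-50%"
    if irr < q75:
        return "50-75%"
    return "top 25%"

# Hypothetical sample standing in for 200 meta-analytic IRR estimates.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.8, sigma=0.4, size=200)
cuts = irr_benchmarks(sample)
label = interpret(2.0, cuts)
```

In practice the cut points would come from the project's published domain-specific distributions (reading, and behavior by valence), not from simulated data.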

Questions about this project?

To answer additional questions about this project or provide feedback, please contact the program officer.