
IES Grant

Title: A d-Estimator for Single Case Designs
Center: NCER
Year: 2010
Principal Investigator: Shadish, William
Awardee: University of California, Merced
Program: Statistical and Research Methodology in Education
Award Period: 3 years
Award Amount: $974,524
Type: Methodological Innovation
Award Number: R305D100046
Description:

Co-Principal Investigator: Rindskopf, David

Purpose: Although a number of methods exist for analyzing data from single-case designs, none yields an effect size estimator comparable to the effect size statistics commonly used in between-groups designs, such as the standardized mean difference. This project developed such a d-statistic (denoted d) for single-case designs that is comparable to, and in the same metric as, the d-statistic from a between-groups experiment. This statistic allows researchers to assess effects from both single-case and between-groups designs on a common metric in systematic reviews of effective educational interventions.
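
For reference, the between-groups d that the single-case estimator is designed to match is the familiar standardized mean difference. A minimal sketch follows; the function name is ours and the pooled-SD formula is the standard textbook one, not code from the project:

```python
import numpy as np

def between_groups_d(treatment, control):
    """Standardized mean difference (Cohen's d) for a between-groups
    design: mean difference divided by the pooled standard deviation."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    nt, nc = len(t), len(c)
    pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)
```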

Project Activities: The project had three goals. For the first goal, the project developed a firm statistical foundation for the new estimator, including its distribution theory and conditional variance. In the standard application of the d-statistic to between-groups research, the statistic is standardized by a between-subjects standard deviation. In a single-case design on a single individual, this cannot be done. The project capitalized on the fact that the vast majority of reports of single-case designs essentially include replications of the same study on different individuals, allowing computation of the needed between-subjects variation. The project took the following steps. First, for the case of a stationary time series with a simple first-order autocorrelation and equal numbers of observations in the baseline and treatment phases, a model was developed that links the effect size index computed in a single-subject design to the standard one computed in between-subjects designs. Second, the population effect size for that model was defined. Third, the process for estimating that effect size from sample data was demonstrated. Fourth, the sampling distribution (variance and bias) of the estimator was derived.
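
To illustrate how replication across individuals supplies the missing between-subjects variation, here is a deliberately simplified sketch. It omits the autocorrelation adjustment and small-sample bias correction that the project's estimator derives, and the function is hypothetical, not the published estimator:

```python
import numpy as np

def naive_single_case_d(cases):
    """Illustrative only: average baseline-to-treatment mean shift
    across replicated cases, standardized by an SD that combines
    within-case and between-case variance so it is on a metric
    comparable to a between-groups SD. Ignores autocorrelation and
    small-sample bias, which the project's estimator accounts for.

    cases: list of (baseline, treatment) observation-array pairs,
           one pair per individual.
    """
    shifts, base_means, within_vars = [], [], []
    for base, treat in cases:
        base = np.asarray(base, dtype=float)
        treat = np.asarray(treat, dtype=float)
        shifts.append(treat.mean() - base.mean())
        base_means.append(base.mean())
        within_vars.append(base.var(ddof=1))
    # Within-case variance plus between-case variance approximates the
    # total variance a between-groups study would see.
    total_sd = np.sqrt(np.mean(within_vars) + np.var(base_means, ddof=1))
    return np.mean(shifts) / total_sd
```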

For the second goal, the project turned the material developed under the first goal into a reliable body of methods for practical application. This work included: (a) developing empirical evidence on the range of autocorrelations, cases per study, baseline and treatment periods per case, and observations per period, based on a random sample of the published literature, which then guided both computer simulations and empirically grounded hypothesis tests of the effects of variability in these factors; (b) extending the results to cases where baseline and treatment phase sample sizes are unequal; (c) extending the results to time series with trends (which may itself require more than one effect size estimator, each with its own sampling distribution); (d) developing and checking the accuracy of approximations to these estimators when the model is known to be correct; (e) using simulations and analytic methods to test the robustness of results to violations of the model; (f) addressing the uncertainty in the estimate of the autocorrelation; and (g) extending the new effect size statistic to Bayesian models.
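
As a sketch of the kind of data-generating process such simulations might use, assuming a stationary AR(1) error structure with a level shift at treatment onset (all names and parameter values here are illustrative, not taken from the project):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

def simulate_case(n_base, n_treat, shift, phi, sd=1.0):
    """One simulated case: a stationary AR(1) series with lag-1
    autocorrelation phi and marginal SD sd, plus a level shift at
    treatment onset. Returns (baseline, treatment) arrays."""
    n = n_base + n_treat
    y = np.empty(n)
    y[0] = rng.normal(scale=sd)
    # Scale innovations so the marginal variance stays at sd**2.
    innovation_sd = sd * np.sqrt(1.0 - phi ** 2)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(scale=innovation_sd)
    y[n_base:] += shift
    return y[:n_base], y[n_base:]
```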

Under the third goal, the project translated these basic statistical developments into practical tools that single-case researchers at various levels of statistical sophistication can use. The primary focus of this work was the creation of an SPSS macro that estimates d and its standard error, along with a detailed manual for using the macro, presentations and workshops on these methods, and manuscripts showing how to use these developments in applied work (including meta-analyses that use d). In addition, the project developed a manual on how to produce d when estimating multilevel models using HLM and WinBUGS (Bayesian modeling software).
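
Tying the illustrative sketches above together (this stands in for no project tool; the actual deliverables are the SPSS macro and manuals), a quick simulated check of the naive estimator might look like:

```python
# Assumes simulate_case and naive_single_case_d from the sketches above.
# Six simulated cases, true shift of 1 SD, mild autocorrelation.
cases = [simulate_case(n_base=10, n_treat=10, shift=1.0, phi=0.2)
         for _ in range(6)]
print(f"naive d = {naive_single_case_d(cases):.2f}")  # roughly 1, but noisy
```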

Products and Publications

Journal article, monograph, or newsletter

Hedges, L.V., Pustejovsky, J.E., and Shadish, W.R. (2012). A Standardized Mean Difference Effect Size for Single Case Designs. Research Synthesis Methods, 3(3), 224–239.

Pustejovsky, J.E., Hedges, L.V., and Shadish, W.R. (2014). Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework. Journal of Educational and Behavioral Statistics, 39(5), 368–393.

Rindskopf, D. (2014). Bayesian Analysis of Data From Single Case Designs. Neuropsychological Rehabilitation, 24(3–4), 572–589.

Rindskopf, D. (2014). Nonlinear Bayesian Analysis for Single Case Designs. Journal of School Psychology, 52(2), 179–189.

Rindskopf, D., and Ferron, J. (2014). Using Multilevel Models to Analyze Single-Case Design Data. In T.R. Kratochwill and J.R. Levin (Eds.), Single-Case Intervention Research: Methodological and Statistical Advances (pp. 221–246). Washington, DC: American Psychological Association.

Shadish, W.R. (2014). Statistical Analyses of Single-Case Designs: The Shape of Things to Come. Current Directions in Psychological Science, 23(2), 139–146.

Shadish, W.R. (2014). Analysis and Meta-Analysis of Single-Case Designs: An Introduction. Journal of School Psychology, 52(2), 109–122.

Shadish, W.R. (2011). Randomized Controlled Studies and Alternative Designs in Outcome Studies: Challenges and Opportunities. Research on Social Work Practice, 21(6), 636–643.

Shadish, W.R., Hedges, L.V., and Pustejovsky, J.E. (2014). Analysis and Meta-Analysis of Single-Case Designs With a Standardized Mean Difference Statistic: A Primer and Applications. Journal of School Psychology, 52(2), 123–147.

Shadish, W.R., Hedges, L.V., Pustejovsky, J.E., Boyajian, J.G., Sullivan, K.J., Andrade, A., and Barrientos, J.L. (2014). A d-Statistic for Single-Case Designs That Is Equivalent to the Usual Between-Groups d-Statistic. Neuropsychological Rehabilitation, 24(3–4), 528–553.

Shadish, W.R., Kyse, E.N., and Rindskopf, D.M. (2013). Analyzing Data From Single-Case Designs Using Multilevel Models: New Applications and Some Agenda Items for Future Research. Psychological Methods, 18(3), 385–405.

Shadish, W.R., Rindskopf, D.M., Hedges, L.V., and Sullivan, K.J. (2013). Bayesian Estimates of Autocorrelations in Single-Case Designs. Behavior Research Methods, 45(3), 813–821.

Shadish, W.R., and Sullivan, K.J. (2011). Characteristics of Single-Case Designs Used to Assess Intervention Effects in 2008. Behavior Research Methods, 43(4), 971–980.

Shadish, W.R., Zelinsky, N.A., Vevea, J.L., and Kratochwill, T.R. (2016). A Survey of Publication Practices of Single-Case Design Researchers When Treatments Have Small or Large Effects. Journal of Applied Behavior Analysis, 49(3), 656–673.

Shadish, W.R., Zuur, A.F., and Sullivan, K.J. (2014). Using Generalized Additive (Mixed) Models to Analyze Single-Case Designs. Journal of School Psychology, 52(2), 149–178.

Sullivan, K.J., Shadish, W.R., and Steiner, P.M. (2015). An Introduction to Modeling Longitudinal Data With Generalized Additive Models: Applications to Single-Case Designs. Psychological Methods, 20(1), 26–42.

