Products and publications
ERIC Citations: Available citations for this award can be found in ERIC.
Journal article, monograph, or newsletter
Hedges, L.V., Pustejovsky, J.E., and Shadish, W.R. (2012). A Standardized Mean Difference Effect Size for Single Case Designs. Research Synthesis Methods, 3(3), 224-239.
Pustejovsky, J.E., Hedges, L.V., and Shadish, W.R. (2014). Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework. Journal of Educational and Behavioral Statistics, 39(5), 368-393.
Rindskopf, D. (2014). Bayesian Analysis of Data From Single Case Designs. Neuropsychological Rehabilitation, 24(3-4), 572-589.
Rindskopf, D. (2014). Nonlinear Bayesian Analysis for Single Case Designs. Journal of School Psychology, 52(2), 179-189.
Rindskopf, D., and Ferron, J. (2014). Using Multilevel Models to Analyze Single-Case Design Data. In Single-Case Intervention Research: Methodological and Statistical Advances (pp. 221-246).
Shadish, W.R. (2014). Statistical Analyses of Single-Case Designs: The Shape of Things to Come. Current Directions in Psychological Science, 23(2), 139-146.
Shadish, W.R. (2014). Analysis and Meta-Analysis of Single-Case Designs: An Introduction. Journal of School Psychology, 52(2), 109-122.
Shadish, W.R. (2011). Randomized Controlled Studies and Alternative Designs in Outcome Studies: Challenges and Opportunities. Research on Social Work Practice, 21(6), 636-643.
Shadish, W.R., Hedges, L.V., and Pustejovsky, J.E. (2014). Analysis and Meta-Analysis of Single-Case Designs With a Standardized Mean Difference Statistic: A Primer and Applications. Journal of School Psychology, 52(2), 123-147.
Shadish, W.R., Hedges, L.V., Pustejovsky, J.E., Boyajian, J.G., Sullivan, K.J., Andrade, A., and Barrientos, J.L. (2014). A D-Statistic for Single-Case Designs That Is Equivalent to the Usual Between-Groups D-Statistic. Neuropsychological Rehabilitation, 24(3-4), 528-553.
Shadish, W.R., Kyse, E.N., and Rindskopf, D.M. (2013). Analyzing Data From Single-Case Designs Using Multilevel Models: New Applications and Some Agenda Items for Future Research. Psychological Methods, 18(3), 385.
Shadish, W.R., Rindskopf, D.M., Hedges, L.V., and Sullivan, K.J. (2013). Bayesian Estimates of Autocorrelations in Single-Case Designs. Behavior Research Methods, 45(3), 813-821.
Shadish, W.R., and Sullivan, K.J. (2011). Characteristics of Single-Case Designs Used to Assess Intervention Effects in 2008. Behavior Research Methods, 43(4), 971-980.
Shadish, W.R., Zelinsky, N.A., Vevea, J.L., and Kratochwill, T.R. (2016). A Survey of Publication Practices of Single-Case Design Researchers When Treatments Have Small or Large Effects. Journal of Applied Behavior Analysis, 49(3), 656-673.
Shadish, W.R., Zuur, A.F., and Sullivan, K.J. (2014). Using Generalized Additive (Mixed) Models to Analyze Single Case Designs. Journal of School Psychology, 52(2), 149-178.
Sullivan, K.J., Shadish, W.R., and Steiner, P.M. (2015). An Introduction to Modeling Longitudinal Data With Generalized Additive Models: Applications to Single-Case Designs. Psychological Methods, 20(1), 26.
Supplemental information
Co-Principal Investigator: Rindskopf, David
For the second goal, the project turned the material developed under the first goal into a reliable body of methods for practical application. This work included: (a) developing empirical evidence on the range of autocorrelations, cases per study, the number of baseline and treatment periods per case, and observations per period, based on a random sample of the literature, which then guided both computer simulations and empirically grounded hypothesis tests of the effects of variability in these characteristics; (b) extending the results to cases where baseline and treatment phase sample sizes are unequal; (c) extending the results to time series with trends (which may itself require developing more than one effect size estimator, each with its own sampling distribution); (d) developing and checking the accuracy of approximations to these estimators when the model is known to be correct; (e) using simulations and analytic methods to test the robustness of the results to violations of the model; (f) addressing the uncertainty in the estimate of the autocorrelation; and (g) extending the new effect size statistic to Bayesian models.
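To make item (e) concrete, the following is a minimal simulation sketch in Python, not the project's software. The data model (phase means plus stationary AR(1) errors), the sample sizes, and the function names simulate_case and naive_d are all assumptions of this example; it simply shows how one might examine what autocorrelation does to the sampling distribution of a naive within-case standardized mean difference.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_case(n_base, n_treat, effect, rho, sigma=1.0):
    """One simulated case: phase means plus stationary AR(1) errors."""
    n = n_base + n_treat
    e = np.empty(n)
    e[0] = rng.normal(0.0, sigma)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2))
    means = np.concatenate([np.zeros(n_base), np.full(n_treat, effect)])
    return means + e

def naive_d(y, n_base):
    """Standardized mean difference that ignores autocorrelation."""
    base, treat = y[:n_base], y[n_base:]
    pooled = np.sqrt(((len(base) - 1) * base.var(ddof=1) +
                      (len(treat) - 1) * treat.var(ddof=1)) /
                     (len(base) + len(treat) - 2))
    return (treat.mean() - base.mean()) / pooled

# Sampling distribution of the naive d as autocorrelation grows.
for rho in (0.0, 0.3, 0.6):
    ds = [naive_d(simulate_case(8, 8, effect=1.0, rho=rho), 8)
          for _ in range(2000)]
    print(f"rho = {rho:.1f}: mean d = {np.mean(ds):.2f}, SD = {np.std(ds):.2f}")
```

In a sketch like this, increasing rho changes the spread (and potentially the bias) of the naive estimate even though the true effect is held fixed; characterizing and correcting such distortions is what the estimators and simulation studies described above address.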
Under the third goal, the project translated these basic statistical developments into practical tools that single-case researchers at various levels of statistical sophistication can use. The primary products of this work were an SPSS macro that estimates d and its standard error, a detailed manual for using the macro, presentations and workshops on these methods, and manuscripts showing how to apply these developments in practice (including meta-analyses that use d). In addition, the project developed a manual on how to produce d when estimating multilevel models using HLM and WinBUGS (software for Bayesian analysis).
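The macro itself is written for SPSS, and the estimator and its standard error are derived in the papers listed above. Purely as an illustration of the kind of quantity involved, here is a hypothetical Python sketch: the data are invented, and the simplified denominator (combining within-case and between-case variation so the result lands on a between-groups scale) is an assumption of this sketch, not the published estimator.

```python
import numpy as np

# Invented multiple-baseline data: one (baseline, treatment) pair of
# observation arrays per case.
cases = [
    (np.array([3.0, 4.0, 3.5, 4.2]), np.array([6.1, 6.8, 7.0, 6.5])),
    (np.array([2.8, 3.1, 3.0]),      np.array([5.9, 6.2, 6.0, 6.4])),
    (np.array([4.1, 3.9, 4.4, 4.0]), np.array([7.2, 7.5, 7.1])),
]

# Average baseline-to-treatment mean shift across cases.
shifts = np.array([t.mean() - b.mean() for b, t in cases])

# Simplified denominator: combine within-case and between-case variation
# so the ratio is on a scale comparable to a between-groups d.
# (An assumption of this sketch only.)
baseline_means = np.array([b.mean() for b, _ in cases])
within_var = np.mean([b.var(ddof=1) for b, _ in cases])
between_var = baseline_means.var(ddof=1)
d = shifts.mean() / np.sqrt(within_var + between_var)

print(f"illustrative design-comparable d: {d:.2f}")
```

A macro of the kind described above wraps this sort of computation (with the appropriate variance components, corrections, and a standard error) behind a single command, which is what makes it usable for researchers who do not program.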
Questions about this project?
For additional questions about this project or to provide feedback, please contact the program officer.