
IES Grant

Title: A User-Friendly Tool for Designing Cluster Randomized Trials with Power and Relevance
Center: NCER Year: 2017
Principal Investigator: Tipton, Elizabeth Awardee: Northwestern University
Program: Statistical and Research Methodology in Education
Award Period: 3 years (07/01/17 – 06/30/20) Award Amount: $893,737
Type: Methodological Innovation Award Number: R305D180002
Description:

Previous Award Number: R305D170024
Previous Awardee: Teachers College, Columbia University

Co-Principal Investigator: Spybrook, Jessaca

Purpose: The purpose of this project was to develop a user-friendly webtool for planning cluster randomized trials (CRTs) based on generalizability and on statistical power for testing moderator effects, not just average treatment effects. Funding institutions and statistical developments have begun to shift the focus of causal impact studies beyond detecting average treatment effects, toward understanding moderators of those effects and detecting effects for specific inference populations; yet the tools available to applied researchers for designing studies around these factors have been limited. The findings and products from this study will help improve the capacity of education researchers to design CRTs with both high internal and external validity, thereby increasing the relevance of applied education research results.
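To make the power side of this planning concrete, the sketch below computes the minimum detectable effect size (MDES) for a simple two-level CRT using the standard formula popularized by Bloom (1995) and implemented in tools such as PowerUp!. This is a minimal sketch, not the project's tool: all parameter values are illustrative assumptions.

    # MDES for a two-level CRT with treatment assigned at the school level.
    # Standard Bloom (1995) / PowerUp!-style formula; every parameter value
    # here is an illustrative assumption, not a figure from this project.
    from scipy.stats import t

    def mdes_crt2(J, n, rho, P=0.5, R2_2=0.0, R2_1=0.0, alpha=0.05, power=0.80):
        """J schools of n students; rho = intraclass correlation; P = treated
        share of schools; R2_2 / R2_1 = variance explained by school- and
        student-level covariates (df adjustment for covariates omitted)."""
        df = J - 2
        M = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)   # two-tailed multiplier
        var = (rho * (1 - R2_2) / (P * (1 - P) * J)
               + (1 - rho) * (1 - R2_1) / (P * (1 - P) * J * n))
        return M * var ** 0.5

    # e.g., 60 schools, 50 students per school, ICC = 0.20, no covariates:
    print(round(mdes_crt2(J=60, n=50, rho=0.20), 3))   # ~0.34 SD

The project's webtool is intended to handle calculations of this kind, alongside generalizability and moderator power, without requiring researchers to code them.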

Project Activities: The research team gathered data on design practices used in Efficacy, Replication, and Effectiveness studies funded through the Education Research Grants Program at the Institute of Education Sciences (IES). These data came from interviews with 37 principal investigators of grants funded by IES between 2010 and 2015. The researchers developed new methods to bridge gaps between what are typically treated as three separate design considerations: generalizability, power to detect the average treatment effect, and power to detect moderator effects. They also developed software so that applied researchers could address these considerations together. The software went through multiple stages of usability testing, and the final product is free and publicly available. The researchers disseminated their findings through peer-reviewed journal articles, conference presentations, and workshops.
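One ingredient of the generalizability side, in the spirit of Tipton's published stratified selection approach (Tipton, 2014), is to cluster the population of schools on covariates thought to drive treatment-effect heterogeneity and then recruit within strata in proportion to population shares. The sketch below uses synthetic data; the covariates, stratum count, and sample size are placeholder assumptions, not the project's actual procedure.

    # Stratified site selection for generalizability, in the spirit of Tipton
    # (2014): cluster the population of schools on relevant covariates, then
    # set recruitment targets proportional to each stratum's population share.
    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    pop = pd.DataFrame({                      # synthetic stand-in for a
        "enrollment": rng.lognormal(6.0, 0.5, 5000),   # population frame
        "pct_frpl":   rng.uniform(0, 100, 5000),
        "pct_ell":    rng.uniform(0, 40, 5000),
    })

    X = StandardScaler().fit_transform(pop)   # put covariates on a common scale
    pop["stratum"] = KMeans(n_clusters=6, n_init=10, random_state=1).fit_predict(X)

    J = 60                                    # schools the trial can recruit
    shares = pop["stratum"].value_counts(normalize=True).sort_index()
    targets = (shares * J).round().astype(int)  # may need +/-1 to sum to J
    print(targets)                              # recruitment target per stratum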

Key Outcomes: The main findings of this project are as follows:

  • The project team developed new methods for designing studies, including methods for planning for adequate power in tests of hypotheses about moderators and methods for planning studies with multiple target estimands, such as multiple populations or moderator effects (Dong, Kelcey, & Spybrook, 2018; Zhang, Spybrook, & Unlu, 2020; Tipton, 2022).
  • The study team reported the following results from information collected on common practices regarding recruitment, research designs, and moderator analyses in efficacy studies in education, drawn from a sample of projects funded by IES between 2010 and 2015 (Spybrook et al., 2020; Tipton et al., 2021):
    • Studies did not clearly report the population they aimed to represent or to whom their results would generalize.
    • Most study samples were not representative of school populations beyond the districts from which they were drawn.
    • Schools from large school districts were overrepresented relative to the population of public schools nationwide.
    • Study samples were more homogeneous than the populations they were drawn from.
    • The research designs had adequate power for tests of individual-level moderators but not school-level moderators (a rough numeric illustration follows this list).
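To see why school-level moderator tests demand so much more of a design, compare approximate standard errors under standard two-level variance expressions (in the style of PowerUp! and Dong, Kelcey, & Spybrook, 2018). This is a minimal sketch: the design parameters below are assumed for illustration and are not figures from the reviewed studies.

    # Why school-level moderator tests are harder to power than student-level
    # ones in a two-level CRT: compare approximate standard errors (SD units).
    # Standard PowerUp!-style variance expressions; all parameters assumed.
    J, n, rho = 60, 50, 0.20   # schools, students per school, intraclass corr.
    P, Q = 0.5, 0.5            # treated share of schools, moderator subgroup share

    # Average treatment effect: between- and within-school variance both enter.
    var_ate = rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n)

    # Binary school-level moderator: a between-school contrast, so the ATE
    # variance is inflated by 1 / (Q * (1 - Q)) -- a factor of 4 when Q = 0.5.
    var_school_mod = var_ate / (Q * (1 - Q))

    # Binary student-level moderator (non-randomly varying slope): the school
    # random effect cancels in the within-school contrast, leaving only the
    # student-level term.
    var_student_mod = (1 - rho) / (P * (1 - P) * Q * (1 - Q) * J * n)

    for label, v in [("ATE", var_ate),
                     ("school-level moderator", var_school_mod),
                     ("student-level moderator", var_student_mod)]:
        print(f"{label:23s} SE ~ {v ** 0.5:.3f}")

Under these assumed values, the school-level moderator's standard error is twice the ATE's, while the student-level moderator's is roughly half, which is consistent with the pattern the team observed.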

Products and Publications

ERIC Citations: Find available citations in ERIC for this project under award numbers R305D180002 and R305D170024.

Publicly Available Data:

  • The project team developed an R package, 'generalizeR', which is available through Northwestern University and will be submitted to CRAN.

Additional online resources and information:

  • The project team developed a user-friendly webtool for planning randomized trials in education that integrates planning for generalizability with planning for statistical power, supporting the design of RCTs with both high internal and external validity. This free webtool is available at The Generalizer (www.thegeneralizer.org).
  • Papers and resources related to the project can be found on the STEPP Center website at https://steppcenter.northwestern.edu/.

Select Publications:

Dong, N., Kelcey, B., & Spybrook, J. (2018). Power analyses for moderator effects in three-level cluster randomized trials. The Journal of Experimental Education, 86(3), 489–514.

Spybrook, J., Zhang, Q., Kelcey, B., & Dong, N. (2020). Learning from cluster randomized trials in education: An assessment of the capacity of studies to determine what works, for whom, and under what conditions. Educational Evaluation and Policy Analysis, 42(3), 354–374.

Tipton, E. (2021). Beyond generalization of the ATE: Designing randomized trials to understand treatment effect heterogeneity. Journal of the Royal Statistical Society: Series A (Statistics in Society), 184(2), 504–521.

Tipton, E., Spybrook, J., Fitzgerald, K. G., Wang, Q., & Davidson, C. (2021). Toward a system of evidence for all: Current practices and future opportunities in 37 randomized trials. Educational Researcher, 50(3), 145–156.

Tipton, E. (2022). Sample selection in randomized trials with multiple target populations. American Journal of Evaluation, 43(1), 70–89.

Zhang, Q., Spybrook, J., & Unlu, F. (2020). Examining design and statistical power for planning cluster randomized trials aimed at improving student science achievement and science teacher outcomes. AERA Open, 6(3).

