Project Activities
Key outcomes
People and institutions involved
IES program contact(s)
Products and publications
ERIC Citations: Available citations for this award can be found in ERIC.
Project Website: https://www.edreplication.org/
Additional Online Resources and Information:
- Scurlock, A., McCharen, K., Hasan, L., & Congxin, X. (2021). TranscriptSim: Automated NLP Document Similarity. (Version 1.1). https://github.com/congxinxu0116/TranscriptSim
- Machita, J., Rohrich, T., Jiang, Y., & Zhen, Y. (2021). Designing a Replicable Data Infrastructure for Education Research. https://github.com/taylorrohrich/MSDS_SERA_capstone
- Erickson, S. (2023). Replication Code for Experimental Evidence on Robustness of Coaching Supports in Teacher Education. https://github.com/steffenerickson/collab_rep_lab/tree/a3a9112cdf3e6b7e9cfbd419150a29b3857b8963/nsf_v2
- Erickson, S. (2023). Correspondence Test in STATA. https://github.com/steffenerickson/correspondence_test
Select Publications:
Anglin, K. L., Wong, V. C., & Boguslav, A. (2021). A natural language processing approach to measuring treatment adherence and consistency using semantic similarity. AERA Open, 7, 23328584211028615.
Boguslav, A., & Cohen, J. (2024). Different Methods for Assessing Preservice Teachers' Instruction: Why Measures Matter. Journal of Teacher Education, 75(2), 168-185.
Cohen, J., Wong, V. C., Krishnamachari, A., & Erickson, S. (2024). Experimental evidence on the robustness of coaching supports in teacher education. Educational Researcher, 53(1), 19-35.
Steiner, P. M., Wong, V. C., & Anglin, K. (2019). A causal replication framework for designing and assessing replication efforts. Zeitschrift für Psychologie, 226(3).
Steiner, P. M., Sheehan, P., & Wong, V. C. (2023). Correspondence measures for assessing replication success. Psychological Methods. Advance online publication.
Wong, V. C., Anglin, K., & Steiner, P. M. (2022). Design-based approaches to causal replication studies. Prevention Science, 23(5), 723-738.
Related projects
Supplemental information
Co-Principal Investigator(s): Steiner, Peter
- Demonstrated how multiple research designs for replication can be combined to identify systematic sources of effect variation
- Examined methods for assessing replication design assumptions and developed approaches researchers can use to demonstrate the extent to which their studies meet those assumptions (for example, using balance tables) and maintain treatment stability across studies replicating a treatment
- Examined the most commonly used statistical methods for assessing replication success, compared their performance in different contexts, and introduced a new measure of replication success (the correspondence test)
- Provided closed-form formulas for power calculations that can be used to determine the minimum detectable effect size (and sample sizes) for each study, helping researchers plan prospective replication efforts (a rough illustrative power calculation appears after this list)
- Many existing replication efforts yield uninterpretable results because multiple replication assumptions are violated simultaneously and because most replication efforts are underpowered for detecting replication success (Steiner, Wong, & Anglin, 2019; Wong, Anglin, & Steiner, 2021; Steiner, Sheehan, & Wong, 2023).
- Systematic replication designs can be used to identify sources of effect heterogeneity (Steiner, Wong, & Anglin, 2019; Wong, Anglin, & Steiner, 2021). These approaches are ethical, feasible, and desirable in field settings.
- Evaluating replication success requires well-defined metrics that are specified in advance through pre-registration; these metrics differ in their statistical properties and in the power required to demonstrate replication success. The correspondence test provides a unified framework for testing both the difference and the equivalence of results (a simplified sketch follows this list). Most existing replication efforts are underpowered for detecting the replicability of results (Steiner, Sheehan, & Wong, 2023).
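The correspondence test combines a conventional difference test with an equivalence test so that both can be reported within a single framework. As a rough illustration only, the sketch below shows in Python how such a combined assessment could be computed from two studies' effect estimates and standard errors, using a normal approximation and a TOST-style equivalence test. The function name, the equivalence margin `delta`, and the four-way verdict labels are illustrative assumptions rather than the project's published implementation (a Stata implementation is linked under Additional Online Resources above).

```python
# Minimal sketch (not the project's published implementation): a combined
# difference-and-equivalence assessment of two effect estimates, following
# the general logic of a difference test plus a TOST equivalence test.
# The equivalence margin `delta` and the alpha level are illustrative choices.
from math import sqrt
from scipy.stats import norm

def correspondence_sketch(b1, se1, b2, se2, delta=0.10, alpha=0.05):
    """Compare two study effects: test for difference and for equivalence."""
    diff = b1 - b2
    se_diff = sqrt(se1**2 + se2**2)

    # Difference test: H0 is that the two effects are equal.
    z_diff = diff / se_diff
    p_diff = 2 * (1 - norm.cdf(abs(z_diff)))

    # Equivalence test (TOST): H0 is that the effects differ by at least delta.
    p_lower = 1 - norm.cdf((diff + delta) / se_diff)   # tests true diff <= -delta
    p_upper = norm.cdf((diff - delta) / se_diff)       # tests true diff >= +delta
    p_equiv = max(p_lower, p_upper)

    differ = p_diff < alpha
    equivalent = p_equiv < alpha
    if equivalent and not differ:
        verdict = "correspondence: effects are equivalent within the margin"
    elif differ and not equivalent:
        verdict = "difference: effects are statistically different"
    elif differ and equivalent:
        verdict = "trivial difference: different, but within the margin"
    else:
        verdict = "inconclusive: likely underpowered"
    return {"p_difference": p_diff, "p_equivalence": p_equiv, "verdict": verdict}

# Hypothetical example: two studies with similar estimated effects (standardized units).
print(correspondence_sketch(b1=0.22, se1=0.05, b2=0.18, se2=0.06))
```

With these hypothetical numbers, neither the difference test nor the equivalence test is significant, so the verdict is inconclusive, which mirrors the underpowering point above; tighter standard errors or a wider margin would change the verdict.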
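The exact closed-form power formulas are given in the project's publications. Purely as a hypothetical illustration of the kind of planning calculation they support, the sketch below applies the standard minimum detectable effect size (MDES) approximation for a single two-arm randomized trial, MDES ≈ (z_{1-α/2} + z_{1-β}) × SE of the impact estimate; the sample sizes and the no-covariates assumption are illustrative, and this is not the replication-specific formula from the project.

```python
# Illustrative only: standard MDES approximation for one two-arm randomized
# trial with individual assignment and no covariates. The project's
# publications give the exact closed-form formulas for replication designs.
from math import sqrt
from scipy.stats import norm

def mdes_two_arm(n, p_treat=0.5, alpha=0.05, power=0.80):
    """Standardized minimum detectable effect size for a single study."""
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)    # about 2.80
    se_impact = sqrt(1.0 / (p_treat * (1 - p_treat) * n))     # SE of a standardized impact
    return multiplier * se_impact

# Hypothetical planning check at several total sample sizes.
for n in (200, 400, 800):
    print(n, round(mdes_two_arm(n), 3))
```

For example, with equal-sized groups and 400 participants, the multiplier of about 2.80 gives an MDES of roughly 0.28 standard deviations; a replication-specific calculation would additionally account for the power needed to detect correspondence between studies.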
Statistical/Methodological Product: The project team introduced methodological theory (the Causal Replication Framework); case study examples; code for designing, implementing, and analyzing systematic replication designs; and metrics for assessing replication success (including the correspondence test).
Development/Refinement Process: The project team refined the replication designs primarily through case studies, in which the team designed, conducted, and analyzed systematic replication studies, and through simulation studies that examined the statistical properties of different metrics for assessing replication success (a schematic simulation sketch appears below).
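The simulation studies themselves are reported in the publications above; the following is only a schematic sketch of the general approach, using a Monte Carlo loop to check how often a difference test and a TOST-style equivalence test reach each conclusion when two studies estimate the same true effect. Every parameter value (true effect, standard errors, margin, number of replications) is a hypothetical choice for illustration.

```python
# Schematic Monte Carlo sketch (hypothetical parameters): estimate how often
# a difference test falsely flags a difference, and how often a TOST
# equivalence test detects equivalence, when two studies share one true effect.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate(true_effect=0.20, se1=0.02, se2=0.02, delta=0.10,
             alpha=0.05, reps=10_000):
    # Draw effect estimates for two studies that share the same true effect.
    b1 = rng.normal(true_effect, se1, reps)
    b2 = rng.normal(true_effect, se2, reps)
    diff = b1 - b2
    se_diff = np.sqrt(se1**2 + se2**2)

    # Difference test (two-sided) and TOST equivalence test p-values.
    p_diff = 2 * (1 - norm.cdf(np.abs(diff) / se_diff))
    p_equiv = np.maximum(1 - norm.cdf((diff + delta) / se_diff),
                         norm.cdf((diff - delta) / se_diff))

    return {"false_difference_rate": float(np.mean(p_diff < alpha)),
            "equivalence_detection_rate": float(np.mean(p_equiv < alpha))}

print(simulate())                       # precise studies: equivalence usually detected
print(simulate(se1=0.04, se2=0.04))     # noisier studies: equivalence power drops sharply
```

Under the noisier scenario the equivalence test's detection rate drops sharply even though the two studies share the same true effect, which is the kind of power problem the metrics work documents.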
User Testing: Substantive researchers in the fields of teacher preparation, reading, and special education collaborated with the project team to design the studies used in the replication efforts, to carry out those studies, and to interpret the results.
Questions about this project?
If you have additional questions about this project or would like to provide feedback, please contact the program officer.