NCEE Blog

National Center for Education Evaluation and Regional Assistance

The WWC Evidence Standards: A Valuable and Accessible Resource for Teaching Validity Assessment of Causal Inferences to Identify What Works

by Herbert Turner, Ph.D., President and Principal Scientist, ANALYTICA, Inc.


The WWC Evidence Standards (hereafter, the Standards) provide a detailed description of the criteria the WWC uses to review studies. The Standards were first developed in 2002 by leading methodological researchers using initial concepts from the Study Design and Implementation Assessment Device (Study DIAD), an instrument for assessing how well the methodological characteristics and implementation of social science research support inferences about causal relationships (Boruch, 1997; Valentine & Cooper, 2008). Over the past 16 years, the Standards have gone through four iterations of improvement to keep pace with advances in methodological practice, each subjected to rigorous peer review. The most recent iteration is codified in the WWC Standards Handbook 4.0 (hereafter, the Handbook).


Across the different versions of the Handbook, the methodological characteristics of an internally valid study, one designed to support causal inferences about the effect of an intervention on an outcome, have stood the test of time. These characteristics can be summarized as follows: a strong design starts with how the study groups are formed; it uses reliable and valid measures of outcomes; it has low attrition if it is a randomized controlled trial (RCT); it demonstrates baseline equivalence in the analysis sample if it is a quasi-experimental design (QED); and it has no confounds. The sketch below illustrates two of these criteria.
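To make two of these criteria concrete, here is a minimal sketch, in Python, of the kind of arithmetic a reviewer performs: overall and differential attrition for an RCT, and a baseline effect size for a QED. The function names are my own, and the thresholds (a baseline difference of at most 0.05 standard deviations satisfying equivalence, differences between 0.05 and 0.25 requiring statistical adjustment) reflect my reading of the Handbook's group design standards; treat this as an illustration, not the WWC's official review tool.

```python
# Illustrative sketch of two WWC-style checks, not the WWC's official tool.
# Thresholds follow my reading of Standards Handbook 4.0; verify against
# the Handbook before relying on them.

def attrition_rates(n_randomized_t, n_analyzed_t, n_randomized_c, n_analyzed_c):
    """Overall and differential attrition for an RCT, in percentage points."""
    overall = 100 * (1 - (n_analyzed_t + n_analyzed_c)
                     / (n_randomized_t + n_randomized_c))
    loss_t = 100 * (1 - n_analyzed_t / n_randomized_t)
    loss_c = 100 * (1 - n_analyzed_c / n_randomized_c)
    differential = abs(loss_t - loss_c)
    return overall, differential

def baseline_equivalence(mean_t, mean_c, pooled_sd):
    """Classify a baseline group difference, expressed in standard deviation units."""
    effect_size = abs(mean_t - mean_c) / pooled_sd
    if effect_size <= 0.05:
        return effect_size, "satisfies baseline equivalence"
    elif effect_size <= 0.25:
        return effect_size, "requires statistical adjustment"
    return effect_size, "does not satisfy baseline equivalence"

# Example: 500 students randomized per arm; 460 and 430 remain in the analysis.
print(attrition_rates(500, 460, 500, 430))       # approx. (11.0, 6.0)
print(baseline_equivalence(102.3, 100.8, 15.0))  # approx. (0.1, 'requires statistical adjustment')
```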


These elements are the critical components of any strong research design, and they are the cornerstones of every version of the WWC's standards. That fact, along with the transparent description of their logical underpinnings, is what motivated me to use Standards 4.0 (for group designs) as the organizing framework for understanding study validity in a graduate-level Program Evaluation II course I taught at Boston College's Lynch School of Education.


In spring 2017, nine master's and four doctoral students participated in this semester-long course. The primary goal was to teach students how to organize their thinking and logically derive internal validity criteria using Standards 4.0, augmented with additional readings from the methodological literature. Students used the Standards (along with the supplemental readings) to design, implement, analyze, and report impact evaluations that determine which interventions work, which harm, and which have no discernible effect (Mosteller & Boruch, 2002). The Standards Handbook 4.0, together with online course modules, was an excellent resource for augmenting the lectures and giving Lynch School students hands-on learning.


At the end of the course, students could choose to complete the WWC Certification Exam for Group Design or to take a final exam developed by the instructor. All thirteen students chose the WWC Certification Exam, and approximately half became certified. Many emailed me personally to express their appreciation for (1) the opportunity to learn a systematic approach to organizing their thinking about assessing the validity of causal inferences from data generated by RCTs and QEDs, and (2) the design skills they developed, which they can use in other graduate courses and beyond. The WWC Evidence Standards and related online resources are a valuable, accessible, and free resource that has been rigorously vetted for close to two decades. The Standards have few equals as a resource for helping students think systematically, logically, and clearly about designing (and evaluating) a valid research study to make causal inferences about which interventions work in education and related fields.


References

Boruch, R. F. (1997). Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage Publications.

Mosteller, F., & Boruch, R. F. (2002). Evidence matters: Randomized trials in education research. Washington, DC: Brookings Institution Press.

Valentine, J. C., & Cooper, H. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13(2), 130-149.