Technical Assistance Materials for Conducting Rigorous Impact Evaluations

Conducting strong impact studies requires an understanding of numerous technical topics. The materials on this page are designed for evaluators who want basic resources on the design, implementation, analysis, and reporting of findings from impact studies.

Most of these resources were developed under technical assistance contracts that provided support for required evaluations under the U.S. Department of Education's Investing in Innovation (i3) or First in the World programs. As a courtesy, IES makes these resources available for wide benefit. However, these materials do not necessarily represent IES' views about best scientific practices.

Tools for Drafting an Evaluation Plan and Contrast Model

Evaluation Plan Template: The Evaluation Plan Template identifies the key components of an evaluation plan and provides guidance about the information typically included in each section of a plan for evaluating both the effectiveness and implementation of an intervention. Evaluators can use this tool to help develop their plan for a rigorous evaluation, with a focus on meeting What Works Clearinghouse™ (WWC) evidence standards. The template can be used in combination with the Contrast Tool, a tool for documenting each impact that the evaluation will estimate to test program effectiveness.

Contrast Tool: The Contrast Tool is an Excel spreadsheet designed to support and supplement the development of an evaluation plan. The spreadsheet is a tool for documenting impacts that the evaluation will estimate to test program effectiveness. Evaluators can use this tool to align planned analyses with research questions and to outline key information about the plan's outcome measures, baseline measures, and analytic samples. The Contrast Tool can be used in combination with the Evaluation Plan Template, a Microsoft Word template for developing a detailed plan for evaluating the effectiveness and implementation of an intervention.

Completed Examples

Student-Level Randomized Controlled Trials

Example Evaluation Plan for a Student-Level Randomized Controlled Trial: This document provides an example of a detailed evaluation plan for evaluating the effectiveness of an intervention. Developed using the Evaluation Plan Template, the plan is for a randomized controlled trial (RCT) in which students are randomly assigned to an intervention or a control condition. This example illustrates the information that an evaluator should include in each section of an evaluation plan, provides tips, and highlights key information to consider when writing an evaluation plan for a student-level RCT.

Example Contrast Tool for a Student-Level Randomized Controlled Trial: This document provides an example of a completed Contrast Tool to accompany the Example Evaluation Plan for a Student-Level Randomized Controlled Trial. This example illustrates how the Contrast Tool can be used in combination with the Evaluation Plan Template to list each impact that will be estimated to test program effectiveness, using details from the example evaluation. The Example Contrast Tool and the Example Evaluation Plan should be reviewed side-by-side.

Cluster Randomized Controlled Trials

Example Evaluation Plan for a Cluster Randomized Controlled Trial: This document provides an example of a detailed evaluation plan for evaluating the effectiveness of an intervention. Developed using the Evaluation Plan Template, the plan is for a randomized controlled trial (RCT) in which clusters are randomly assigned to an intervention or a control condition. This example illustrates the information that an evaluator should include in each section of an evaluation plan, provides tips, and highlights key information to consider when writing an evaluation plan for a cluster RCT. Accompanying this example evaluation plan is the Example Contrast Tool for a Cluster RCT, which lists each impact that the example evaluation will estimate to test program effectiveness. The example Evaluation Plan and the example Contrast Tool can be reviewed side-by-side.

Example Contrast Tool for a Cluster Randomized Controlled Trial: This document provides an example of a completed Contrast Tool to accompany the Example Evaluation Plan for a Cluster Randomized Controlled Trial. This example illustrates how the Contrast Tool can be used in combination with the Evaluation Plan Template to list each impact that will be estimated to test program effectiveness, using details from the example evaluation. The Example Contrast Tool and the Example Evaluation Plan should be reviewed side-by-side.

Quasi-Experiments

Example Evaluation Plan for a Quasi-Experimental Design: This document provides an example of a detailed evaluation plan for a quasi-experiment. The example illustrates the information that an evaluator should include in each section of an evaluation plan, provides tips, and highlights key information to consider. Accompanying this example plan is the Example Contrast Tool for a QED, which lists each impact that the example evaluation will estimate to test program effectiveness. The example Evaluation Plan and the example Contrast Tool can be reviewed side-by-side.

Example Contrast Tool for a Quasi-Experimental Design: This document provides an example of a completed Contrast Tool to accompany the Example Evaluation Plan for a Quasi-Experimental Design. This example illustrates how the Contrast Tool can be used in combination with the Evaluation Plan Template to list each impact that will be estimated to test program effectiveness, using details from the example evaluation. The Example Contrast Tool and the Example Evaluation Plan should be reviewed side-by-side.

Articulating the Intervention Model

Logic models for program design, implementation, and evaluation: Workshop toolkit: This Logic Model Workshop Toolkit is designed to help practitioners learn the overall purpose of a logic model, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. This toolkit includes a facilitator workbook, a participant workbook, and a slide deck.

Specifying the Assignment/Selection Procedure

Random Assignment Designs

Designing Strong Studies: Developing Studies Consistent with What Works Clearinghouse (WWC) Evidence Standards: This WWC webinar shows how to design strong studies that test the impact of interventions in schools and classrooms, with specific examples for studying interventions targeting teachers.

First in the World Project Directors Meeting: Understanding Challenges in Random Assignment: For projects that are using or considering a random assignment design, this presentation discusses how to implement a random assignment design (including building relationships and getting buy-in, integrating randomization into existing processes, and a suggested sequence of steps) and how to maintain the integrity of random assignment throughout the evaluation.
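
To make "integrating randomization into existing processes" concrete, the sketch below shows one minimal, reproducible way to assign students to conditions within blocks (here, schools) using a fixed seed so the assignment can be audited later. The roster fields, the even split, and the rule for odd-sized blocks are illustrative assumptions, not prescriptions from the presentation.

```python
import random

def assign_within_blocks(roster, block_key, seed=20240101):
    """Randomly assign units to treatment or control within blocks.

    roster    : list of dicts, one per unit, each with an "id" field
    block_key : dict key defining the blocking variable (e.g., school)
    seed      : fixed seed so the assignment is reproducible and auditable
    """
    rng = random.Random(seed)

    # Group units by block (e.g., by school).
    blocks = {}
    for unit in roster:
        blocks.setdefault(unit[block_key], []).append(unit)

    # Shuffle within each block and split in half; the extra unit in an
    # odd-sized block goes to control (an arbitrary illustrative rule
    # that a real study would fix in advance).
    assignments = {}
    for _, units in sorted(blocks.items()):
        shuffled = units[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for unit in shuffled[:half]:
            assignments[unit["id"]] = "treatment"
        for unit in shuffled[half:]:
            assignments[unit["id"]] = "control"
    return assignments

# Hypothetical roster: four students in two schools.
roster = [
    {"id": "s1", "school": "A"}, {"id": "s2", "school": "A"},
    {"id": "s3", "school": "B"}, {"id": "s4", "school": "B"},
]
print(assign_within_blocks(roster, block_key="school"))
```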

Quasi-Experimental Designs

Designing Quasi-Experiments: Meeting What Works Clearinghouse (WWC) Standards Without Random Assignment: This webinar provides an overview of the WWC and its standards, quasi-experimental designs (QEDs), related WWC resources, and tips for staying informed.

Methods for Minimizing Differences Between Groups in Quasi-Experimental Designs: This webinar discusses approaches that can improve causal inference in quasi-experimental evaluations, including (1) methods for forming similar treatment and comparison groups, (2) deciding on the best matching approach for your study, (3) selecting sample characteristics for matching, and (4) meeting WWC standards.
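
As a concrete illustration of one matching approach in this family, the sketch below pairs each treated unit with its nearest comparison unit on an estimated propensity score, using a caliper of 0.2 standard deviations of the logit of the score (a common convention, not a WWC requirement). This is a minimal sketch on simulated data, not the method the webinar recommends.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nearest_neighbor_match(X, treated, caliper=0.2):
    """1:1 nearest-neighbor matching on the propensity score,
    without replacement.

    X       : (n, k) array of baseline covariates used for matching
    treated : (n,) boolean array, True for intervention-group units
    caliper : maximum allowed distance, in SDs of the logit of the score
    """
    # Estimate propensity scores and work on the logit scale, a common
    # choice for caliper matching.
    scores = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    logit = np.log(scores / (1 - scores))
    max_dist = caliper * logit.std()

    pool = list(np.flatnonzero(~treated))
    pairs = []
    for t in np.flatnonzero(treated):
        if not pool:
            break
        # Closest remaining comparison unit on the logit of the score.
        j = min(pool, key=lambda c: abs(logit[t] - logit[c]))
        if abs(logit[t] - logit[j]) <= max_dist:
            pairs.append((int(t), int(j)))
            pool.remove(j)
    return pairs  # list of (treated_index, comparison_index) matches

# Toy example with two baseline covariates (e.g., pretest, attendance).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
treated = rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))
print(len(nearest_neighbor_match(X, treated)), "matched pairs")
```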

First in the World Project Directors Meeting: Implementing a Successful QED: Using a quasi-experimental design? This presentation provides information on how to select a comparison sample, including considerations for minimizing selection bias, establishing baseline equivalence, and avoiding pitfalls, in order to meet WWC standards with reservations.

Sample Size, Expected Size of Effect, and Minimum Detectable Effect Size

Statistical Power Analysis in Education Research: This paper provides a guide to calculating statistical power for the complex multilevel designs that are used in most field studies in education research.
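
To make the mechanics concrete, the sketch below computes the minimum detectable effect size (MDES) for a simple two-level design in which whole clusters (e.g., schools) are randomized, using the standard closed-form approximation from the multilevel power-analysis literature. The example parameter values are illustrative assumptions, not recommendations from the paper.

```python
from scipy import stats

def mdes_cluster_rct(J, n, icc, power=0.80, alpha=0.05,
                     P=0.5, R2_between=0.0, R2_within=0.0, K=0):
    """Minimum detectable effect size (in standard deviation units)
    for a two-level cluster RCT.

    J          : number of clusters (e.g., schools)
    n          : units per cluster (e.g., students per school)
    icc        : intraclass correlation (share of variance between clusters)
    P          : proportion of clusters assigned to treatment
    R2_between : outcome variance explained by cluster-level covariates
    R2_within  : outcome variance explained by unit-level covariates
    K          : number of cluster-level covariates (reduces df)
    """
    df = J - K - 2
    # Multiplier for a two-tailed test: the critical t value plus the
    # t quantile corresponding to the desired power.
    M = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    variance = (icc * (1 - R2_between) / (P * (1 - P) * J)
                + (1 - icc) * (1 - R2_within) / (P * (1 - P) * J * n))
    return M * variance ** 0.5

# Illustrative example: 40 schools of 60 students, ICC of 0.15, and a
# pretest that explains half of the between-school outcome variance.
print(round(mdes_cluster_rct(J=40, n=60, icc=0.15, R2_between=0.5, K=1), 3))
```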

Analysis and Reporting

Testing Baseline Equivalence

WWC Standards Brief: Baseline Equivalence: What Works Clearinghouse (WWC) Standards Briefs explain, for practitioners, researchers, and policymakers, the rules the WWC uses to evaluate the quality of studies. As part of the WWC review process for certain types of studies, reviewers assess whether the intervention group (those that received the intervention of interest) and the comparison group (those that did not) were similar at the start of the study.
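
As a concrete illustration, the sketch below computes a standardized baseline difference (Hedges' g) for a continuous pretest and maps it onto the WWC's categories: an absolute difference of at most 0.05 standard deviations satisfies baseline equivalence, a difference between 0.05 and 0.25 requires statistical adjustment for the baseline measure, and a larger difference does not satisfy the requirement. The pretest data here are simulated purely for illustration.

```python
import numpy as np

def baseline_difference(treat, comp):
    """Standardized baseline difference (Hedges' g with the
    small-sample correction) for a continuous measure."""
    nt, nc = len(treat), len(comp)
    pooled_sd = np.sqrt(((nt - 1) * np.var(treat, ddof=1)
                         + (nc - 1) * np.var(comp, ddof=1)) / (nt + nc - 2))
    g = (np.mean(treat) - np.mean(comp)) / pooled_sd
    omega = 1 - 3 / (4 * (nt + nc) - 9)  # small-sample correction factor
    return omega * g

def wwc_equivalence_category(g):
    """Map an absolute baseline difference onto the WWC categories."""
    a = abs(g)
    if a <= 0.05:
        return "satisfies baseline equivalence"
    if a <= 0.25:
        return "requires statistical adjustment for the baseline measure"
    return "does not satisfy baseline equivalence"

# Simulated pretest scores for the two groups.
rng = np.random.default_rng(1)
pretest_t = rng.normal(0.10, 1.0, size=150)  # intervention group
pretest_c = rng.normal(0.00, 1.0, size=150)  # comparison group
g = baseline_difference(pretest_t, pretest_c)
print(round(g, 3), "->", wwc_equivalence_category(g))
```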

Assessing Attrition

WWC Standards Brief: Attrition: What Works Clearinghouse (WWC) Standards Briefs explain, for practitioners, researchers, and policymakers, the rules the WWC uses to evaluate the quality of studies. Attrition (loss of sample) occurs when individuals initially included in a study are not included in the final analysis. Attrition is a common issue in education research and can occur for many reasons.
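
The WWC summarizes attrition with two numbers: overall attrition (sample loss across both groups combined) and differential attrition (the gap in loss between the two groups). The sketch below computes both from hypothetical counts; the WWC boundary that determines whether a given combination of the two rates is acceptable is not reproduced here.

```python
def attrition_rates(randomized_t, analyzed_t, randomized_c, analyzed_c):
    """Overall and differential attrition for a two-arm RCT, computed
    from counts of randomized and analyzed units in each arm."""
    overall = 1 - (analyzed_t + analyzed_c) / (randomized_t + randomized_c)
    rate_t = 1 - analyzed_t / randomized_t
    rate_c = 1 - analyzed_c / randomized_c
    differential = abs(rate_t - rate_c)
    return overall, differential

# Hypothetical example: 500 students randomized per arm; 430 treatment
# and 450 control students remain in the final analytic sample.
overall, diff = attrition_rates(500, 430, 500, 450)
print(f"overall attrition: {overall:.1%}, differential: {diff:.1%}")
```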

Reporting Findings

Translating the Statistical Representation of the Effects of Education Interventions into More Readily Interpretable Forms: The purpose of this guide is to encourage researchers to go a step beyond reporting the statistics that represent group differences. With minimal effort, those statistical representations can often be translated into forms that make their magnitude and practical significance easier to understand.
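
One such translation is the WWC's improvement index, which restates a standardized effect size as the expected percentile gain of an average comparison-group member. A minimal sketch:

```python
from scipy.stats import norm

def improvement_index(g):
    """Expected percentile gain of an average comparison-group member:
    Phi(g) * 100 - 50, where Phi is the standard normal CDF."""
    return norm.cdf(g) * 100 - 50

# An effect size of g = 0.25 moves the average comparison-group member
# from the 50th to roughly the 60th percentile.
print(round(improvement_index(0.25), 1))
```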

What Works Clearinghouse™ Reporting Guide for Study Authors: This document provides support for reporting findings in a way that is clear, complete, and transparent.