Identifying and Implementing Educational Practices Supported By Rigorous Evidence: A User Friendly Guide
December 2003

Purpose and Executive Summary

This Guide seeks to provide educational practitioners with user-friendly tools to distinguish practices supported by rigorous evidence from those that are not.

The field of K-12 education contains a vast array of educational interventions - such as reading and math curricula, schoolwide reform programs, after-school programs, and new educational technologies - that claim to be able to improve educational outcomes and, in many cases, to be supported by evidence. This evidence often consists of poorly-designed and/or advocacy-driven studies. State and local education officials and educators must sort through a myriad of such claims to decide which interventions merit consideration for their schools and classrooms. Many of these practitioners have seen interventions, introduced with great fanfare as being able to produce dramatic gains, come and go over the years, yielding little in the way of positive and lasting change - a perception confirmed by the flat achievement results over the past 30 years in the National Assessment of Educational Progress long-term trend.

The federal No Child Left Behind Act of 2001, and many federal K-12 grant programs, call on educational practitioners to use "scientifically-based research" to guide their decisions about which interventions to implement. As discussed below, we believe this approach can produce major advances in the effectiveness of American education. Yet many practitioners have not been given the tools to distinguish interventions supported by scientifically-rigorous evidence from those that are not. This Guide is intended to serve as a user-friendly resource that education practitioners can use to identify and implement evidence-based interventions, so as to improve educational and life outcomes for the children they serve.

If practitioners have the tools to identify evidence-based interventions, they may be able to spark major improvements in their schools and, collectively, in American education.

As illustrative examples of the potential impact of evidence-based interventions on educational outcomes, the following have been found to be effective in randomized controlled trials - research's "gold standard" for establishing what works:

  • One-on-one tutoring by qualified tutors for at-risk readers in grades 1-3 (the average tutored student reads more proficiently than approximately 75% of the untutored students in the control group).1
  • Life-Skills Training for junior high students (low-cost, replicable program reduces smoking by 20% and serious levels of substance abuse by about 30% by the end of high school, compared to the control group).2
  • Reducing class size in grades K-3 (the average student in small classes scores higher on the Stanford Achievement Test in reading/math than about 60% of students in regular-sized classes).3
  • Instruction for early readers in phonemic awareness and phonics (the average student in these interventions reads more proficiently than approximately 70% of students in the control group).4
In addition, preliminary evidence from randomized controlled trials suggests the effectiveness of:

  • High-quality, educational child care and preschool for low-income children (by age 15, reduces special education placements and grade retentions by nearly 50% compared to controls; by age 21, more than doubles the proportion attending four-year college and reduces the percentage of teenage parents by 44%).5
    Further research is needed to translate this finding into broadly-replicable programs shown effective in typical classroom or community settings.

The fields of medicine and welfare policy show that practice guided by rigorous evidence can produce remarkable advances.

Life and health in America have been profoundly improved over the past 50 years by the use of medical practices demonstrated effective in randomized controlled trials. These research-proven practices include: (i) vaccines for polio, measles, and hepatitis B; (ii) interventions for hypertension and high cholesterol, which have helped bring about a decrease in coronary heart disease and stroke of more than 50 percent over the past half-century; and (iii) cancer treatments that have dramatically improved survival rates from leukemia, Hodgkin's disease, and many other types of cancer.

Similarly, welfare policy, which since the mid-1990s has been remarkably successful in moving people from welfare into the workforce, has been guided to a large extent by scientifically-valid knowledge about "what works" generated in randomized controlled trials.6

Our hope is that this Guide, by enabling educational practitioners to draw effectively on rigorous evidence, can help spark similar evidence-driven progress in the field of education.

The diagram on the next page summarizes the process we recommend for evaluating whether an educational intervention is supported by rigorous evidence.

In addition, appendix B contains a checklist to use in this process.

How to evaluate whether an educational intervention is supported by rigorous evidence: An overview


Step 1. Is the intervention backed by "strong" evidence of effectiveness?

Quality of studies needed to establish "strong" evidence:
  • Randomized controlled trials (defined on page 1) that are well-designed and implemented (see pages 5-9).
Quantity of evidence needed:
  • Trials showing effectiveness in two or more typical school settings,
  • including a setting similar to that of your schools/classrooms.
    (see page 10)


Step 2. If the intervention is not backed by "strong" evidence, is it backed by "possible" evidence of effectiveness?

Types of studies that can comprise "possible" evidence:
  • Randomized controlled trials whose quality/quantity are good but fall short of "strong" evidence (see page 11); and/or
  • Comparison-group studies (defined on page 3) in which the intervention and comparison groups are very closely matched in academic achievement, demographics, and other characteristics (see pages 11-12).
Types of studies that do not comprise "possible" evidence:
  • Pre-post studies (defined on page 2).
  • Comparison-group studies in which the intervention and comparison groups are not closely matched
    (see pages 12-13).
  • "Meta-analyses" that include the results of such lower-quality studies (see page 13).

Step 3. If the answers to both questions above are "no," one may conclude that the intervention is not supported by meaningful evidence.

1 Evidence from randomized controlled trials, discussed in the following journal articles, suggests that one-on-one tutoring of at-risk readers by a well-trained tutor yields an effect size of about 0.7. This means that the average tutored student reads more proficiently than approximately 75 percent of the untutored students in the control group. Barbara A. Wasik and Robert E. Slavin, "Preventing Early Reading Failure With One-To-One Tutoring: A Review of Five Programs," Reading Research Quarterly, vol. 28, no. 2, April/May/June 1993, pp. 178-200 (the three programs evaluated in randomized controlled trials produced effect sizes falling mostly between 0.5 and 1.0). Barbara A. Wasik, "Volunteer Tutoring Programs in Reading: A Review," Reading Research Quarterly, vol. 33, no. 3, July/August/September 1998, pp. 266-292 (the two programs using well-trained volunteer tutors that were evaluated in randomized controlled trials produced effect sizes of 0.5 to 1.0 and 0.50, respectively). Patricia F. Vadasy, Joseph R. Jenkins, and Kathleen Pool, "Effects of Tutoring in Phonological and Early Reading Skills on Students at Risk for Reading Disabilities," Journal of Learning Disabilities, vol. 33, no. 4, July/August 2000, pp. 579-590 (randomized controlled trial of a program using well-trained nonprofessional tutors showed effect sizes of 0.4 to 1.2).
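The footnote's conversion from an effect size to a percentile can be sketched as follows. This is an illustrative calculation, not part of the Guide itself: it assumes normally distributed scores with equal variance in both groups (the standard interpretation of effect sizes, sometimes called Cohen's U3), under which an effect size d places the average treated student above the fraction Φ(d) of the control group, where Φ is the standard normal cumulative distribution function.

```python
import math

def effect_size_to_percentile(d: float) -> float:
    """Fraction of the control group scoring below the average treated
    student, assuming normal distributions with equal variance (Cohen's U3).
    Computed as the standard normal CDF evaluated at d."""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

for d in (0.5, 0.7, 1.0):
    print(f"effect size {d:.1f} -> above {effect_size_to_percentile(d):.0%} of controls")
```

For d = 0.7 this gives about 76 percent, consistent with the footnote's statement that the average tutored student reads more proficiently than approximately 75 percent of the untutored control group.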

2 Gilbert J. Botvin et al., "Long-Term Follow-up Results of a Randomized Drug Abuse Prevention Trial in a White, Middle-class Population," Journal of the American Medical Association, vol. 273, no. 14, April 12, 1995, pp. 1106-1112. Gilbert J. Botvin with Lori Wolfgang Kantor, "Preventing Alcohol and Tobacco Use Through Life Skills Training: Theory, Methods, and Empirical Findings," Alcohol Research and Health, vol. 24, no. 4, 2000, pp. 250-257.

3 Frederick Mosteller, Richard J. Light, and Jason A. Sachs, "Sustained Inquiry in Education: Lessons from Skill Grouping and Class Size," Harvard Educational Review, vol. 66, no. 4, winter 1996, pp. 797-842. The small classes averaged 15 students; the regular-sized classes averaged 23 students.

4 These are the findings specifically of the randomized controlled trials reviewed in "Teaching Children To Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction," Report of the National Reading Panel, 2000.

5 Frances A. Campbell et al., "Early Childhood Education: Young Adult Outcomes From the Abecedarian Project," Applied Developmental Science, vol. 6, no. 1, 2002, pp. 42-57. Craig T. Ramey, Frances A. Campbell, and Clancy Blair, "Enhancing the Life Course for High-Risk Children: Results from the Abecedarian Project," in Social Programs That Work, edited by Jonathan Crane (Russell Sage Foundation, 1998), pp. 163-183.

6 For example, randomized controlled trials showed that (i) welfare reform programs that emphasized short-term job-search assistance and encouraged participants to find work quickly had larger effects on employment, earnings, and welfare dependence than programs that emphasized basic education; (ii) the work-focused programs were also much less costly to operate; and (iii) welfare-to-work programs often reduced net government expenditures. The trials also identified a few approaches that were particularly successful. See, for example, Manpower Demonstration Research Corporation, National Evaluation of Welfare-to-Work Strategies: How Effective Are Different Welfare-to-Work Approaches? Five-Year Adult and Child Impacts for Eleven Programs (U.S. Department of Health and Human Services and U.S. Department of Education, November 2001). These valuable findings were a key to the political consensus behind the 1996 federal welfare reform legislation and its strong work requirements, according to leading policymakers - including Ron Haskins, who in 1996 was the staff director of the House Ways and Means Subcommittee with jurisdiction over the bill.
