
Standards for Excellence in Education Research

Rigorous education research that is transparent, actionable, and focused on consequential outcomes has the potential to dramatically improve student achievement. Since 2002, the Institute of Education Sciences (IES) has supported rigorous evidence-building about education policy, programs, and practices. The Standards for Excellence in Education Research (SEER) Principles are meant to complement the What Works Clearinghouse's (WWC) focus on internal validity by supporting a distinct, IES-wide effort that emphasizes additional factors that can make research transformative.

The SEER Principles were first introduced by IES Director Mark Schneider in two blog posts. IES has iterated frequently on SEER and expects to continue to do so as we receive feedback from researchers, practitioners, and policymakers. To provide feedback, please contact NCEE.Feedback@ed.gov.

The SEER Principles encourage researchers to:

  • Pre-register studies
    • Did the researcher register the study, prior to the execution of research and analysis activities, in a recognized study registry?
    • Did the researcher describe key elements of the study protocol, including a limited number of primary outcomes and plans for their analysis, in their registration?
    • Did the researcher clearly explain any deviations from those plans and offer a reasonable rationale for doing so?
    • Did the researcher report on each of the primary outcomes registered at the study's outset?
  • Make findings, methods, and data open
    • Did the researcher make research publications freely available to others via ERIC?
    • Did the researcher provide access to the final research data of all publications, while protecting the rights and privacy of human subjects?
  • Identify interventions' core components
    • Did the researcher document the core components of an intervention, including its essential practices, structural elements, and the contexts in which it was implemented and tested?
    • Did the researcher offer a clear description of how the core components of an intervention are hypothesized to affect outcomes?
    • Did the researcher's analysis help us understand which components are most important in achieving impact?

      What We're Reading
      Blase, K. & Fixsen, D. (2013). Core intervention components: Identifying and operationalizing what makes programs work. ASPE Research Brief. Washington, DC: Office of the Assistant Secretary for Planning and Evaluation, Office of Human Services Policy, U.S. Department of Health and Human Services. Retrieved from https://aspe.hhs.gov/report/core-intervention-components-identifying-and-operationalizing-what-makes-programs-work

      Ferber, T., Wiggins, M. E., & Sileo, A. (2019). Advancing the use of core components of effective programs. Washington, DC: The Forum for Youth Investment. Retrieved from https://forumfyi.org/knowledge-center/advancing-core-components/
  • Document treatment implementation and contrast
    • Did the researcher document the process by which the treatment was implemented, including:
      • change management approaches used to deploy the treatment;
      • how the treatment was integrated into related programs and practices; and
      • potential accelerants of, or barriers to, instantiating the treatment by researchers and practitioners?
    • Did the researcher measure the fidelity of an intervention's implementation?
    • Did the researcher document, and identify opportunities to learn from, adaptations of the intervention that were observed during implementation?
    • Did the researcher document the counterfactual condition(s), including measures of essential elements of the treatment contrast between participants in the treatment and control conditions as identified by an intervention's core components? (A brief numerical sketch follows the readings below.)

      What We're Reading
      Hamilton, G. & Scrivener, S. (2018). Measuring treatment contrast in randomized controlled trials. MDRC Working Paper. Retrieved from https://www.mdrc.org/sites/default/files/MTC_Paper_MDRC_WEBSITE_VERSION.pdf

      Weiss, M., Bloom, H. S., & Brock, T. (2014). A conceptual framework for studying the sources of variation in program effects. Journal of Policy Analysis and Management, 33(3), 778-808. https://doi.org/10.1002/pam.21760
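
      As an illustration of the treatment-contrast question above, the sketch below summarizes hypothetical classroom ratings of an intervention's core components by condition. It is a minimal example only; the component names and ratings are invented for illustration and do not come from SEER guidance or any study.

        # Hypothetical sketch: summarizing treatment contrast on measures of
        # an intervention's core components. All names and ratings are invented.
        def treatment_contrast(treatment_scores, control_scores):
            """Mean treatment-minus-control difference for each measured component."""
            contrast = {}
            for component, t_values in treatment_scores.items():
                c_values = control_scores[component]
                contrast[component] = (sum(t_values) / len(t_values)
                                       - sum(c_values) / len(c_values))
            return contrast

        # Classroom-level ratings (0-4) of how often each core practice was observed.
        treatment = {"small_group_tutoring": [3, 4, 3, 4], "progress_monitoring": [2, 3, 3, 2]}
        control = {"small_group_tutoring": [1, 0, 1, 2], "progress_monitoring": [2, 2, 3, 2]}

        for component, diff in treatment_contrast(treatment, control).items():
            print(f"{component}: treatment contrast = {diff:+.2f}")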
  • Analyze interventions' costs
    • Did the researcher measure the cost of components of the intervention relative to the control or comparison condition? (A worked sketch follows the readings below.)

      What We're Reading
      Hollands, F. M., Kieffer, M. J., Shand, R., Pan, Y., Cheng, H., & Levin, H. M. (2016). Cost-effectiveness analysis of early reading programs: A demonstration with recommendations for future research. Journal of Research on Educational Effectiveness, 9, 30-53. https://doi.org/10.1080/19345747.2015.1055639

      Levin, H. M., & Belfield, C. (2015). Guiding the development and use of cost-effectiveness analysis in education. Journal of Research on Educational Effectiveness, 8, 400-418. https://doi.org/10.1080/19345747.2014.915604
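
      The cost question above can be made concrete with a simple incremental cost-effectiveness calculation in the spirit of the readings above. This is a minimal sketch; the per-student costs and effect sizes are hypothetical placeholders, not figures from any study.

        # Minimal sketch of an incremental cost-effectiveness ratio:
        # added cost per added unit of effect, relative to the comparison condition.
        def cost_effectiveness_ratio(cost_treatment, cost_control,
                                     effect_treatment, effect_control):
            incremental_cost = cost_treatment - cost_control
            incremental_effect = effect_treatment - effect_control
            return incremental_cost / incremental_effect

        # Hypothetical per-student costs and effect sizes on a reading assessment.
        ratio = cost_effectiveness_ratio(cost_treatment=650.0, cost_control=200.0,
                                         effect_treatment=0.25, effect_control=0.10)
        print(f"Incremental cost per unit of effect: ${ratio:,.0f} per student")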
  • Focus on meaningful outcomes
    • Did the researcher explore outcomes that are broadly recognized as useful for measuring students' learning, opportunities in education, or success from education, and that are appropriate to the goals and audiences of the research?
    • Did the researcher explore variation in outcomes by subpopulations of interest?
    • Did the researcher explore whether treatment effects were observed over time, particularly on distal measures an intervention might be thought to address?
  • Facilitate generalization of study findings
    • Did the researcher, through intentional sampling or other means, design the study to permit ready generalization of its findings to populations of interest?
    • If not done a priori, did the researcher make statistical adjustments to their analytic sample to support generalizing study findings to populations of interest? (A brief reweighting sketch follows the readings below.)

      What We're Reading
      Stuart, E. A., Bradshaw, C. P., & Leaf, P. J. (2015). Assessing the generalizability of randomized trial results to target populations. Prevention Science, 16, 475-485. https://doi.org/10.1007/s11121-014-0513-z

      Tipton, E. & Olsen, R. B. (2018). A review of statistical methods for generalizing from evaluations of educational interventions. Educational Researcher, 47, 516-524. https://doi.org/10.3102/0013189x18781522
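
      As a rough illustration of the post hoc adjustment question above, the sketch below reweights stratum-level impact estimates to match a target population's composition, in the spirit of the methods reviewed in the readings above. The strata, shares, and effects are invented for this example and are not drawn from any study.

        # Hedged sketch of a post-stratification adjustment: stratum-level impact
        # estimates are reweighted by the target population's composition.
        def reweighted_effect(stratum_effects, population_shares):
            return sum(population_shares[s] * stratum_effects[s] for s in stratum_effects)

        # Hypothetical stratum-level impact estimates from the study sample.
        stratum_effects = {"urban": 0.20, "suburban": 0.12, "rural": 0.05}
        # Composition of the analytic sample vs. the target population of interest.
        sample_shares = {"urban": 0.60, "suburban": 0.30, "rural": 0.10}
        population_shares = {"urban": 0.30, "suburban": 0.40, "rural": 0.30}

        unadjusted = sum(sample_shares[s] * stratum_effects[s] for s in stratum_effects)
        adjusted = reweighted_effect(stratum_effects, population_shares)
        print(f"Sample-weighted effect: {unadjusted:.3f}; population-weighted effect: {adjusted:.3f}")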
  • Support scaling of promising results
    • Did the researcher execute the research in settings and with student populations such that it can inform extending the reach of promising interventions?
    • Did the researcher explore factors that can inform the efficacy and sustainability of the intervention at scale?
    • Did the researcher develop materials that could support the replication and/or scaling of an intervention by others, such as manuals, toolkits, or implementation guides?

      What We're Reading
      Coburn, C. E. (2003). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32, 3-12. https://doi.org/10.3102/0013189x032006003

      McDonald, S., Keesler, V. A., Kauffman, N. J., & Schneider, B. (2006). Scaling-up exemplary interventions. Educational Researcher, 35, 15-24. https://doi.org/10.3102/0013189x035003015