Grant Closed

Statistical Methods for Using Rigorous Evaluation Results to Improve Local Education Policy Decisions

NCER
Program: Statistical and Research Methodology in Education
Program topic(s): Core
Award amount: $896,361
Principal investigator: Elizabeth Stuart
Awardee: Johns Hopkins University
Year: 2015
Award period: 3 years (07/01/2015 - 06/30/2018)
Project type: Methodological Innovation
Award number: R305D150003

Purpose

The purpose of this project is to investigate methods for testing the generalizability of the results of a multi-site study to sites that were not in the study's sample. Results of a well-designed and well-implemented cluster-randomized trial of a new curriculum, for example, may accurately reflect the average effect of the tested curriculum in the evaluated sites and even the average impact it would have in the population. Any individual school district, however, is likely to differ materially from the average evaluation site, possibly in ways that would affect the impact of the new curriculum. The average results of the evaluation might therefore be misleading for any given site.

Using four real data sets and simulations, the research team will test multiple approaches to predicting site-specific impacts from full-study results. The approaches they will investigate are: using the overall average impact; using subgroup-specific impact estimates; response surface modeling; and two ways of reweighting the original sample (propensity score reweighting and kernel weighting) so that it resembles the target site. After comparing the performance of these approaches, the team will disseminate the results in seminars, conference presentations, and peer-reviewed journal manuscripts. Depending on the computational complexity of the favored approaches, the researchers may also create and disseminate software to run the analyses.
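To illustrate one of the reweighting approaches mentioned above, the sketch below shows propensity-score reweighting of a study sample toward a target site. All data, variable names, and effect sizes here are hypothetical inventions for illustration, not the project's actual data or code: a "membership" model estimates each sample unit's probability of belonging to the target site, and odds-based weights then up-weight sample units that resemble the target when estimating the impact.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical multi-site study sample: one covariate x (e.g., a site
# characteristic), a randomized treatment indicator t, and an outcome y
# whose treatment effect varies with x (true effect = 2 + x).
n = 2000
x = rng.normal(0.0, 1.0, n)
t = rng.integers(0, 2, n)
y = 1.0 + 0.5 * x + t * (2.0 + 1.0 * x) + rng.normal(0.0, 1.0, n)

# Hypothetical target site whose covariate distribution differs from the
# study sample (mean 1.0 instead of 0.0), so the site-specific effect
# differs from the sample-average effect.
m = 500
x_target = rng.normal(1.0, 0.5, m)

# Fit a membership model: probability that a unit with covariate x
# belongs to the target site rather than the study sample.
X = np.concatenate([x, x_target]).reshape(-1, 1)
z = np.concatenate([np.zeros(n), np.ones(m)])
ps = LogisticRegression().fit(X, z).predict_proba(x.reshape(-1, 1))[:, 1]

# Odds-based weights: sample units that look like the target site get
# larger weight in the impact estimate.
w = ps / (1.0 - ps)

# Unweighted (sample-average) vs. reweighted (target-site) estimates.
naive = y[t == 1].mean() - y[t == 0].mean()
reweighted = (np.average(y[t == 1], weights=w[t == 1])
              - np.average(y[t == 0], weights=w[t == 0]))
```

Under these simulated conditions the sample-average estimate sits near 2, while the reweighted estimate moves toward the target site's larger true effect (about 3), illustrating why the average result of an evaluation can mislead for an atypical site.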

People and institutions involved

IES program contact(s)

Allen Ruby

Project contributors

Robert Olsen, co-principal investigator

Products and publications

Book chapter

Stuart, E. A. (2017). Generalizability of Clinical Trials Results. In S.C. Morton and C. Gatsonis (Eds.), Methods in Comparative Effectiveness Research (pp. 197-222). Chapman and Hall/CRC.

Journal article, monograph, or newsletter

Mercer, A.W., Kreuter, F., Keeter, S., and Stuart, E. A. (2017). Theory and Practice in Nonprobability Surveys: Parallels Between Causal Inference and Survey Inference. Public Opinion Quarterly, 81(S1), 250-271.

Stuart, E.A., Ackerman, B., and Westreich, D. (2017). Generalizability of Randomized Trial Results to Target Populations: Design and Analysis Possibilities. Research on Social Work Practice, 1049731517720730.

Stuart, E.A., and Rhodes, A. (2017). Generalizing Treatment Effect Estimates From Sample to Population: A Case Study in the Difficulties of Finding Sufficient Data. Evaluation Review, 41(4), 357-388.

Questions about this project?

To ask additional questions about this project or provide feedback, please contact the program officer.


Tags

Data and Assessments; Mathematics

