Information on IES-Funded Research
Grant Closed

Statistical Methods for Using Rigorous Evaluation Results to Improve Local Education Policy Decisions

NCER
Program: Statistical and Research Methodology in Education
Program topic(s): Core
Award amount: $896,361
Principal investigator: Elizabeth Stuart
Awardee: Johns Hopkins University
Year: 2015
Project type: Methodological Innovation
Award number: R305D150003

People and institutions involved

IES program contact(s)

Allen Ruby

Associate Commissioner for Policy and Systems
NCER

Products and publications

Book chapter

Stuart, E. A. (2017). Generalizability of Clinical Trials Results. In S. C. Morton and C. Gatsonis (Eds.), Methods in Comparative Effectiveness Research (pp. 197-222). Chapman and Hall/CRC.

Journal article, monograph, or newsletter

Mercer, A. W., Kreuter, F., Keeter, S., and Stuart, E. A. (2017). Theory and Practice in Nonprobability Surveys: Parallels Between Causal Inference and Survey Inference. Public Opinion Quarterly, 81(S1), 250-271.

Stuart, E. A., Ackerman, B., and Westreich, D. (2017). Generalizability of Randomized Trial Results to Target Populations: Design and Analysis Possibilities. Research on Social Work Practice. Advance online publication. https://doi.org/10.1177/1049731517720730

Stuart, E. A., and Rhodes, A. (2017). Generalizing Treatment Effect Estimates From Sample to Population: A Case Study in the Difficulties of Finding Sufficient Data. Evaluation Review, 41(4), 357-388.

Supplemental information

Co-Principal Investigator: Robert Olsen (Abt Associates)

The purpose of this project is to investigate methods for testing the generalizability of the results of a multi-site study to sites that were not in the study's sample. Results of a well-designed and well-implemented cluster-randomized trial of a new curriculum, for example, may accurately reflect the average effect of the tested curriculum in the evaluated sites and even the average impact it would have in the population. Any individual school district, however, is likely to differ materially from the average evaluation site, possibly in ways that would affect the impact of the new curriculum. The average results of the evaluation might therefore be misleading for any given site.

Using four real data sets and simulations, the research team will test multiple approaches to predicting site-specific impacts from full-study results: using the overall average impact; using subgroup-specific impact estimates; response surface modeling; and two ways of reweighting the original sample (propensity score reweighting and kernel weighting) so that it resembles the target site. After comparing the performance of these approaches, the team will disseminate the results through seminars, conference presentations, and peer-reviewed journal articles. Depending on the computational complexity of the favored approaches, the researchers may also create and disseminate software to run the analyses.
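To make the reweighting idea concrete, the sketch below uses inverse-odds propensity score weights to project a trial's average impact onto a hypothetical target site. Everything in it is illustrative: the simulated data, covariate model, and sample sizes are invented for this example and are not the project's data sets or its specific estimators.

# Illustrative sketch (not the project's code): propensity-score
# reweighting of a trial sample so that it resembles a non-study
# target site, then re-estimating the average impact.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated covariates (e.g., scaled poverty rate and enrollment) for
# 500 trial units and 200 units from a hypothetical target site whose
# covariate distribution differs from the trial's.
X_trial = rng.normal(0.5, 1.0, size=(500, 2))
X_target = rng.normal(0.0, 1.0, size=(200, 2))
X = np.vstack([X_trial, X_target])
in_trial = np.r_[np.ones(len(X_trial)), np.zeros(len(X_target))]

# Simulated unit-level impact estimates that vary with the first
# covariate, so the trial average and the target average truly differ.
impact = 0.3 + 0.2 * X_trial[:, 0] + rng.normal(0.0, 0.1, len(X_trial))

# Model the probability of trial membership given covariates, then
# weight each trial unit by the odds of belonging to the target
# population; the reweighted trial sample mimics the target site.
p = LogisticRegression().fit(X, in_trial).predict_proba(X_trial)[:, 1]
weights = (1.0 - p) / p

print(f"Unweighted trial-average impact: {impact.mean():.3f}")
print(f"Impact reweighted to target:     {np.average(impact, weights=weights):.3f}")

Kernel weighting, the other reweighting approach named above, replaces the inverse-odds weights with weights based on the similarity (kernel distance) between each trial unit's estimated propensity score and those of the target site's units; in both cases the reweighted trial average stands in for the impact the target site could expect.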

Questions about this project?

For additional questions about this project or to provide feedback, please contact the program officer.


Tags

Mathematics, Data and Assessments
