Title: Statistical Methods for Using Rigorous Evaluation Results to Improve Local Education Policy Decisions
Principal Investigator: Stuart, Elizabeth
Awardee: Johns Hopkins University
Program: Statistical and Research Methodology in Education
Award Period: 3 years (07/01/15 – 06/30/18)
Award Amount: $896,361
Type: Methodological Innovation
Award Number: R305D150003
Co-Principal Investigator: Robert Olsen (ABT)
The purpose of this project is to investigate methods for testing the generalizability of the results of a multi-site study to sites that were not in the study's sample. Results of a well-designed and well-implemented cluster-randomized trial of a new curriculum, for example, may accurately reflect the average effect of the tested curriculum in the evaluated sites and even the average impact it would have in the population. Any individual school district, however, is likely to differ materially from the average evaluation site, possibly in ways that would affect the impact of the new curriculum. The average results of the evaluation might therefore be misleading for any given site.
Using four real data sets and simulations, the research team will test multiple approaches to predicting site-specific impacts from full-study results: using the overall average impact; using subgroup-specific impact estimates; response surface modeling; and two ways of reweighting the original sample to resemble the target site (propensity score reweighting and kernel weighting). After comparing the performance of these approaches, the team will disseminate the results through seminars, conference presentations, and peer-reviewed journal articles. Depending on the computational complexity of the favored approaches, the researchers may also create and disseminate software to run the analyses.
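To illustrate the reweighting idea, the following is a minimal sketch of propensity-score reweighting, with entirely hypothetical data and variable names (the project's own data sets, covariates, and software are not specified here). Trial units are weighted by the odds of membership in the target site, estimated from a logistic regression, so that the reweighted trial sample resembles the target site on observed covariates.

```python
# Hypothetical sketch of propensity-score reweighting for generalizing a
# trial's average impact to a target site; all data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates for trial schools and one target district's schools.
X_trial = rng.normal(0.0, 1.0, size=(200, 2))
X_target = rng.normal(0.5, 1.0, size=(50, 2))

# Hypothetical unit-level impact estimates that vary with the first covariate.
impacts = 1.0 + 0.5 * X_trial[:, 0] + rng.normal(0.0, 0.2, size=200)

# Model membership in the target site (1) versus the trial sample (0).
X = np.vstack([X_trial, X_target])
z = np.concatenate([np.zeros(len(X_trial)), np.ones(len(X_target))])
model = LogisticRegression().fit(X, z)
p = model.predict_proba(X_trial)[:, 1]

# Odds weights up-weight trial units that resemble the target site.
w = p / (1.0 - p)

unweighted = impacts.mean()
reweighted = np.average(impacts, weights=w)
print(f"unweighted impact: {unweighted:.2f}, reweighted: {reweighted:.2f}")
```

If the target site differs from the trial sample on covariates that moderate the impact, the reweighted estimate will differ from the simple trial average, which is exactly the discrepancy the project aims to quantify.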
Stuart, E. A. (2017). Generalizability of Clinical Trials Results. In S.C. Morton and C. Gatsonis (Eds.), Methods in Comparative Effectiveness Research (pp. 197–222). Chapman and Hall/CRC.
Mercer, A.W., Kreuter, F., Keeter, S., and Stuart, E. A. (2017). Theory and Practice in Nonprobability Surveys: Parallels Between Causal Inference and Survey Inference. Public Opinion Quarterly, 81(S1), 250–271.
Stuart, E.A., Ackerman, B., and Westreich, D. (2017). Generalizability of Randomized Trial Results to Target Populations: Design and Analysis Possibilities. Research on Social Work Practice, 1049731517720730.
Stuart, E.A., and Rhodes, A. (2017). Generalizing Treatment Effect Estimates From Sample to Population: A Case Study in the Difficulties of Finding Sufficient Data. Evaluation Review, 41(4), 357–388.