
IES Grant

Title: Understanding and Measuring Treatment Effect Heterogeneity in Large Scale Experiments and Pseudo-Experiments in Education
Center: NCER
Year: 2015
Principal Investigator: Miratrix, Luke
Awardee: Harvard University
Program: Statistical and Research Methodology in Education
Award Period: 3 years (7/1/15–6/30/18)
Award Amount: $803,246
Type: Methodological Innovation
Award Number: R305D150040
Description:

The goal of this project is to create a framework for assessing treatment effect variation that provides applied researchers with a set of practical tools and clearly lays out all the relevant assumptions. When testing the impact of an intervention or treatment, education researchers are increasingly asking questions such as: (1) To what extent does the effect vary across units? (2) What accounts for this variation? These questions are closely related to understanding how treatments work and, in particular, to which aspects of a treatment's implementation are most strongly connected to its efficacy.
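For readers less familiar with the terminology, these questions are commonly formalized in the potential outcomes framework; the notation below is standard in the causal inference literature but is not taken from the abstract itself:

\[
\tau_i = Y_i(1) - Y_i(0),
\qquad
\mathrm{Var}(\tau_i) = \mathrm{Var}\big(E[\tau_i \mid X_i]\big) + E\big[\mathrm{Var}(\tau_i \mid X_i)\big],
\]

where \(\tau_i\) is the treatment effect for unit \(i\) and \(X_i\) denotes its observed covariates. Question (1) asks whether \(\mathrm{Var}(\tau_i)\) is nonzero at all; question (2) targets the first term of the decomposition, the share of the variation explained by observed characteristics.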

This project has three major goals. First, the team will develop the framework and accompanying theory in order to build accessible tools for applied researchers. Second, the researchers will use Monte Carlo simulation studies to investigate the performance of these tools across a wide range of practical settings. Third, the project team will apply these tools to real datasets in order to gauge their usefulness to applied researchers, both in working with the software and in interpreting the analytical results. To disseminate the results, the researchers will prepare peer-reviewed conference presentations and journal manuscripts, and deliver a workshop and a short course for practitioners. The researchers will also make the user-friendly software available online.
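As a purely illustrative sketch of what one replication of such a Monte Carlo study might look like, the following Python snippet simulates a randomized experiment whose treatment effect varies with an observed covariate, then checks that a difference in means recovers the average effect and that a treatment-by-covariate interaction recovers the systematic effect variation. All data-generating values, sample sizes, and estimators here are hypothetical choices for the sketch, not the project's actual designs:

import numpy as np

rng = np.random.default_rng(0)

def one_replication(n=1000):
    x = rng.normal(size=n)               # observed covariate
    tau = 0.5 + 0.3 * x                  # heterogeneous effect: varies with x
    y0 = x + rng.normal(size=n)          # control potential outcome
    y1 = y0 + tau                        # treated potential outcome
    w = rng.binomial(1, 0.5, size=n)     # Bernoulli-randomized treatment
    y = np.where(w == 1, y1, y0)         # observed outcome
    # Difference in means estimates the average treatment effect.
    ate_hat = y[w == 1].mean() - y[w == 0].mean()
    # OLS with a treatment-by-covariate interaction estimates the
    # systematic component of the effect variation (the 0.3 slope).
    design = np.column_stack([np.ones(n), w, x, w * x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return ate_hat, beta[3]

results = np.array([one_replication() for _ in range(2000)])
print("mean ATE estimate:", results[:, 0].mean())          # close to 0.5
print("mean interaction estimate:", results[:, 1].mean())  # close to 0.3

Repeating this across many data-generating processes, sample sizes, and designs is what lets a simulation study map out where a given tool performs well and where it breaks down.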

Products and Publications

Journal article, monograph, or newsletter

Ding, P., and Lu, J. (2017). Principal Stratification Analysis Using Principal Scores. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3), 757–777.

Feller, A., Mealli, F., and Miratrix, L. (2017). Principal Score Methods: Assumptions, Extensions, and Practical Considerations. Journal of Educational and Behavioral Statistics, 42(6), 726–758.

Working paper

Ding, P., and Li, F. (2017). Causal Inference: A Missing Data Perspective. arXiv preprint arXiv:1712.06170.

Feller, A., Greif, E., Miratrix, L., and Pillai, N. (2016). Principal Stratification in the Twilight Zone: Weakly Separated Components in Finite Mixture Models. arXiv preprint arXiv:1602.06595.

Miratrix, L.W., Sekhon, J.S., Theodoridis, A.G., and Campos, L.F. (2017). Worth Weighting? How to Think About and Use Sample Weights in Survey Experiments. arXiv preprint arXiv:1703.06808.

Pashley, N.E., and Miratrix, L.W. (2017). Insights on Variance Estimation for Blocked and Matched Pairs Designs. arXiv preprint arXiv:1710.10342.

Yang, F., and Ding, P. (2018). Using Survival Information in Truncation by Death Problems Without the Monotonicity Assumption. arXiv preprint arXiv:1803.02024.

Yuan, L.H., Feller, A., and Miratrix, L.W. (2018). Identifying and Estimating Principal Causal Effects in Multi-site Trials. arXiv preprint arXiv:1803.06048.
