
IES Grant

Title: Sensitivity Analysis—If We're Wrong, How Far Are We from Being Right?
Center: NCER
Year: 2011
Principal Investigator: Hill, Jennifer
Awardee: New York University
Program: Statistical and Research Methodology in Education
Award Period: 3 years
Award Amount: $928,537
Type: Methodological Innovation
Award Number: R305D110037
Description:

Co-Principal Investigator: Scott, Marc

Purpose: This project extended existing methods and developed new methods for sensitivity analysis that can be used to quantify the uncertainty in causal inferences that rest on strong and often untestable assumptions. Such assumptions are required, for example, when observational data are used or when randomized experiments suffer from missing data or non-compliance with assignment. Although reasonable sensitivity analysis strategies already exist for some of these assumptions, most remain severely underdeveloped and, in some cases, difficult for education researchers to use. This project provided education researchers with accessible, user-friendly methods for testing the key assumptions underlying their causal inferences. With these methods, researchers can gauge how far their estimates might be from the truth.

Sensitivity analysis is an umbrella term for a variety of methods that assess the degree to which inferences might be altered by changes in structural or parametric assumptions. These strategies allow researchers to quantify the uncertainty surrounding both the assumptions and the resulting inferences. Even when an analysis relies on assumptions that are untestable or difficult to assess, information is still available, both in the data and in substantive knowledge of the area. That information can be used to gauge how reasonable the assumptions are and to estimate how inferences would change if they were violated in specific ways.
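To make the idea concrete, here is a minimal sketch in Python (an illustration only, not the project's methods or software) using the classical omitted-variable-bias adjustment for a linear model: it simulates data with an unobserved confounder, computes the naive estimate that ignores it, and reports how that estimate would shift under hypothetical values of the confounder's associations with the outcome (beta_u) and with treatment (delta_u), both assumed names.

```python
# A minimal sketch (assumed setup, not the project's methods or software):
# omitted-variable-bias style sensitivity analysis for a linear model.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data with a confounder u that the analyst does not observe.
n = 5000
u = rng.normal(size=n)                            # unmeasured confounder
z = (u + rng.normal(size=n) > 0).astype(float)    # treatment, related to u
y = 0.5 * z + 1.0 * u + rng.normal(size=n)        # outcome; true effect = 0.5

# Naive estimate that ignores u: regress y on an intercept and z.
X = np.column_stack([np.ones(n), z])
naive = np.linalg.lstsq(X, y, rcond=None)[0][1]
print(f"naive treatment effect estimate: {naive:.3f}")   # biased upward here

# Sensitivity analysis: for hypothetical values of the confounder's
# association with the outcome (beta_u) and with treatment (delta_u),
# the omitted-variable-bias formula gives an adjusted estimate.
for beta_u in (0.0, 0.5, 1.0):
    for delta_u in (0.0, 0.4, 0.8):
        adjusted = naive - beta_u * delta_u
        print(f"beta_u={beta_u}, delta_u={delta_u} -> adjusted {adjusted:.3f}")
```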

Project Activities: The project extended and developed methods to explore the sensitivity of inferences to deviations from the required assumptions in observational studies and randomized experiments. For observational studies, the project focused on methods to explore the sensitivity of inferences to: (1) one or more omitted confounders; (2) misspecification of the model for the response surface; and (3) lack of overlap of covariate distributions across treatment and comparison groups. For randomized experiments or natural experiments, the project focused on methods to assess the sensitivity of inferences to: (1) deviations from pure randomization, and (2) departures from the exclusion restriction when attempting to correct for non-compliance using instrumental variables methods.
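As a concrete illustration of item (3) for observational studies, the following sketch (hypothetical data and variable names, not the project's software) estimates propensity scores with a logistic regression and counts units whose scores fall outside the range observed in the other group, a simple diagnostic for lack of common support.

```python
# Illustrative sketch (hypothetical variable names): a simple check of
# covariate overlap across treatment and comparison groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated covariates and a treatment whose probability depends on them.
n = 2000
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(1.5 * x[:, 0] - 1.0 * x[:, 1])))
z = rng.binomial(1, p_true)

# Estimated propensity scores from a logistic regression of z on x.
pscore = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

# Common-support check: flag treated units outside the comparison group's
# propensity-score range, and vice versa.
lo, hi = pscore[z == 0].min(), pscore[z == 0].max()
treated_off = np.sum((z == 1) & ((pscore < lo) | (pscore > hi)))
lo, hi = pscore[z == 1].min(), pscore[z == 1].max()
control_off = np.sum((z == 0) & ((pscore < lo) | (pscore > hi)))
print(f"treated units outside comparison-group support: {treated_off}")
print(f"comparison units outside treated-group support: {control_off}")
```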

In addition, the project developed practical guidelines for using sensitivity analyses in applied settings by testing the efficacy of competing methods in empirical settings with known answers and by identifying better benchmarks for the plausibility of sensitivity analysis parameters. User-friendly software implementing these strategies was developed to make them accessible to education researchers. The project also developed procedures for representing results graphically to facilitate interpretation and built these procedures into the software.
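A common graphical summary of such analyses is a contour plot of the adjusted estimate over a grid of sensitivity parameters. The sketch below (placeholder naive estimate and assumed parameter names beta_u and delta_u; not the project's software) draws one such display and highlights the contour along which the adjusted estimate would equal zero.

```python
# Illustrative sketch (assumed names and values, not the project's software):
# contour display of how an estimate changes across hypothetical
# sensitivity parameters, using the linear omitted-variable-bias adjustment.
import numpy as np
import matplotlib.pyplot as plt

naive_estimate = 0.9                 # placeholder naive effect estimate
beta_u = np.linspace(0, 1.5, 60)     # hypothetical confounder-outcome association
delta_u = np.linspace(0, 1.0, 60)    # hypothetical confounder-treatment association
B, D = np.meshgrid(beta_u, delta_u)
adjusted = naive_estimate - B * D    # adjusted estimate at each parameter pair

fig, ax = plt.subplots()
cs = ax.contour(B, D, adjusted, levels=10)
ax.clabel(cs, inline=True, fontsize=8)
# Highlight the contour along which the adjusted estimate is exactly zero.
ax.contour(B, D, adjusted, levels=[0.0], colors="red", linewidths=2)
ax.set_xlabel("confounder-outcome association (beta_u)")
ax.set_ylabel("confounder-treatment association (delta_u)")
ax.set_title("Adjusted effect estimate across sensitivity parameters")
plt.show()
```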

Products and Publications

Journal article, monograph, or newsletter

Carnegie, N.B., Harada, M., and Hill, J.L. (2016). Assessing Sensitivity to Unmeasured Confounding Using a Simulated Potential Confounder. Journal of Research on Educational Effectiveness, 9(3), 395–420.

Dorie, V., Harada, M., Carnegie, N.B., and Hill, J. (2016). A Flexible, Interpretable Framework for Assessing Sensitivity to Unmeasured Confounding. Statistics in Medicine, 35(20), 3453–3470.

Dorie, V., Hill, J., Shalit, U., Scott, M., and Cervone, D. (2017). Automated Versus Do-It-Yourself Methods for Causal Inference: Lessons Learned From a Data Analysis Competition. arXiv preprint arXiv:1707.02641.

Hill, J., and Hoggatt, K.J. (2018). The Tenability of Counterhypotheses: A Comment on Bross' Discussion of Statistical Criticism. Observational Studies, 4(2), 34–41.

Hill, J., and Su, Y.-S. (2013). Assessing Lack of Common Support in Causal Inference Using Bayesian Nonparametrics: Implications for Evaluating the Effect of Breastfeeding on Children's Cognitive Outcomes. Annals of Applied Statistics, 7(3), 1386–1420.

Hill, J., Linero, A., and Murray, J. (2020). Bayesian Additive Regression Trees: A Review and Look Forward. Annual Review of Statistics and Its Application, 7, 251–278.

Kern, H.L., Stuart, E.A., Hill, J., and Green, D.P. (2016). Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations. Journal of Research on Educational Effectiveness, 9(1), 103–127.

Middleton, J.A., Scott, M.A., Diakow, R., and Hill, J.L. (2016). Bias Amplification and Bias Unmasking. Political Analysis, 24(3), 307–323.

Scott, M.A., Diakow, R., Hill, J.L., and Middleton, J.A. (2018). Potential for Bias Inflation with Grouped Data: A Comparison of Estimators and a Sensitivity Analysis Strategy. Observational Studies, 4(1), 111–149.

