
IES Grant

Title: Better Warranted Quasi-Experimental Practice for Evidence Based Practical Research
Center: NCER
Year: 2010
Principal Investigator: Cook, Thomas
Awardee: Northwestern University
Program: Statistical and Research Methodology in Education
Award Period: 3 years
Award Amount: $1,162,032
Goal: Methodological Innovation
Award Number: R305D100033
Description:

This project extends work on improving four quasi-experimental methods that have the potential to provide unbiased or minimally biased causal inference when random assignment is not possible: regression discontinuity designs, propensity score matching methods, short interrupted time-series designs, and pattern matching.

The project will address two issues that complicate the use of the regression discontinuity design: (1) when multiple assignment mechanisms are used; and (2) when units deliberately manipulate the assignment score. Using both simulation and real data, the project will examine the validity and relative efficiency of three approaches to estimating treatment effects under multiple assignment mechanisms: (1) using a single assignment mechanism; (2) centering and then collapsing the scores into a single assignment variable; and (3) modeling the response surface with a regression model and applying treatment weights along each discontinuity frontier. A similar combination of simulation and real data will be used to test three approaches to estimating treatment effects when the assignment score has been manipulated, and to identify the conditions under which each may be used: (1) excluding a narrow band of cases around the cutoff point where manipulation is suspected; (2) using covariate adjustment to create statistical equivalence between the treatment and control groups; and (3) modeling the scores that would have been expected had no manipulation occurred.
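To make the first of these three manipulation remedies concrete, here is a minimal simulation sketch (illustrative only, not the project's code; the data-generating model, the manipulation band, and the donut width are all assumptions) in which high-ability units just below the cutoff push their assignment scores above it, and excluding a narrow "donut" of cases around the cutoff reduces the resulting bias:

```python
# Hypothetical sketch: "donut" RD estimation when assignment scores
# near the cutoff have been manipulated. All parameters are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(0, 1, n)        # unobserved confounder
score = rng.uniform(-1, 1, n)        # assignment score, cutoff at 0

# Manipulation: high-ability units just below the cutoff nudge their
# scores above it, piling extra mass on the treated side.
manip = (score > -0.05) & (score < 0) & (ability > 0) & (rng.random(n) < 0.8)
score = np.where(manip, rng.uniform(0, 0.05, n), score)
treated = (score >= 0).astype(float)

y = 1.0 + 0.8 * score + 0.5 * treated + 0.7 * ability + rng.normal(0, 1, n)

def rd_estimate(score, treated, y, donut=0.0):
    """Linear RD estimate; optionally drop cases with |score| < donut."""
    keep = np.abs(score) >= donut
    X = sm.add_constant(np.column_stack([
        treated[keep],
        score[keep],
        score[keep] * treated[keep],  # separate slopes on each side
    ]))
    return sm.OLS(y[keep], X).fit().params[1]

print("naive estimate:", rd_estimate(score, treated, y))              # biased upward
print("donut estimate:", rd_estimate(score, treated, y, donut=0.05))  # nearer true 0.5
```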

The project will use real data to investigate three questions regarding propensity score matching: (1) the relative importance of the requirements for such matching (i.e., reliable measurement of all constructs correlated with both treatment selection and the outcome of interest, correct specification of the propensity score model, and choice of a correct matching method) in estimating a causal treatment effect; (2) the appropriate level of matching given multilevel data; and (3) the role of pretest measures of the outcome, especially multiple pretest measures at different time points, in removing selection bias.
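As a reference point for these questions, here is a minimal sketch (hypothetical data and model, not the project's code) of the basic propensity score matching workflow: estimate propensity scores by logistic regression, match each treated unit to the untreated unit with the nearest score, and average the matched differences:

```python
# Hypothetical sketch of 1-nearest-neighbor propensity score matching.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
pretest = rng.normal(0, 1, n)        # covariates that drive selection
ses = rng.normal(0, 1, n)
p_select = 1 / (1 + np.exp(-(0.8 * pretest + 0.4 * ses)))
treated = (rng.random(n) < p_select).astype(int)
y = 0.5 * treated + 1.0 * pretest + 0.3 * ses + rng.normal(0, 1, n)

# Step 1: estimate propensity scores with logistic regression.
X = sm.add_constant(np.column_stack([pretest, ses]))
pscore = sm.Logit(treated, X).fit(disp=0).predict(X)

# Step 2: match each treated unit to the nearest control (with replacement).
t_idx = np.flatnonzero(treated == 1)
c_idx = np.flatnonzero(treated == 0)
matches = c_idx[np.abs(pscore[c_idx][None, :] - pscore[t_idx][:, None]).argmin(axis=1)]

# Step 3: the effect on the treated is the mean matched difference.
print("ATT estimate:", (y[t_idx] - y[matches]).mean())  # true effect is 0.5
```

The sketch deliberately satisfies the first requirement above, in that both selection covariates are measured without error; that is exactly the condition the project proposes to stress-test.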

The project will explore improving short interrupted time-series (SITS) analyses by: (1) examining how alternative SITS designs, such as replication with multiple comparison groups, the switching replications design, and transfer function modeling, can improve causal inference; (2) examining the stability of power and of the estimated functional form (along with the efficiency and consistency of results) when the time series is shortened and when data are aggregated at higher levels; and (3) comparing the advantages and disadvantages of the analytic methods available for modeling SITS data, such as modified generalized least squares, hierarchical linear modeling, latent growth modeling, the repeated-measures contrast approach, and propensity score analysis.
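The simplest analysis in this family is a segmented regression on a treated and a comparison series. The following minimal sketch (hypothetical data, ordinary least squares, no serial-correlation adjustment, so a deliberately simplified stand-in for the methods listed above) estimates the post-interruption level shift in the treated series over and above the comparison series:

```python
# Hypothetical sketch: comparison-group segmented regression for a
# short interrupted time series. All parameters are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, cut = 20, 10                       # 20 time points, interruption at t = 10
t = np.tile(np.arange(T), 2)
group = np.repeat([1, 0], T)          # 1 = treated series, 0 = comparison
post = (t >= cut).astype(float)
y = (2.0 + 0.1 * t                    # shared level and trend
     + 0.3 * group                    # stable group difference
     + 0.8 * group * post             # treatment: level shift after the cut
     + rng.normal(0, 0.3, 2 * T))

X = sm.add_constant(np.column_stack([t, group, post, group * post]))
fit = sm.OLS(y, X).fit()
print("estimated level shift:", fit.params[4])  # true value is 0.8
```

A real SITS analysis would also model autocorrelation in the residuals, which is what distinguishes the modified generalized least squares and multilevel approaches named above from plain OLS.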

The project does not intend to develop additional design elements for pattern matching. Instead, it seeks to uncover case studies of the simultaneous use of multiple pattern-matching elements within and outside of education research, and to use simulation studies to analyze the strengths and limitations of pattern matching. The purpose is to identify elements of pattern matching that can strengthen the basic nonequivalent control group design, ruling out alternative interpretations when experiments, regression discontinuity, propensity score analysis, and short interrupted time series are not applicable.
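For orientation, the core logic of pattern matching can be sketched as follows (hypothetical numbers throughout, not the project's code): a theory predicts a pattern of effects across several outcomes, and the causal claim is corroborated to the extent that effect estimates from a nonequivalent control group design track that predicted pattern:

```python
# Hypothetical sketch of a pattern-matching check across five outcomes.
import numpy as np

rng = np.random.default_rng(3)
predicted = np.array([1.0, 0.6, 0.3, 0.0, 0.0])  # theory: strong, weaker, null...
true = np.array([0.9, 0.5, 0.2, 0.0, 0.1])       # assumed data-generating effects

# Simulate effect estimates on each outcome from a nonequivalent-groups study.
n = 400
est = []
for effect in true:
    treat = rng.normal(effect, 1, n)             # treated group on this outcome
    ctrl = rng.normal(0.0, 1, n)                 # nonequivalent control group
    est.append(treat.mean() - ctrl.mean())
est = np.array(est)

# The pattern-match statistic: correlation of predicted with observed effects.
print("observed effects:", np.round(est, 2))
print("pattern-match correlation:", round(np.corrcoef(predicted, est)[0, 1], 2))
```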

Publications from this project:

Cook, T.D., Wong, M., and Steiner, P.M. (2012). Evaluating National Programs: A Case Study of the No Child Left Behind Program in the United States. In Bliesener, T., Beelmann, A., and Stemmler, M. (Eds.), Antisocial Behavior and Crime: Contributions of Developmental and Evaluation Research to Prevention and Intervention. Cambridge, MA: Hogrefe Publishing.

Diamond, S.S., Bowman, L.E., Wong, M., and Patton, M.M. (2010). Efficiency and Cost: The Impact of Videoconferenced Hearings on Bail Decisions. The Journal of Criminal Law and Criminology, 100 (3).

Hallberg, K., Wing, C., Wong, V.C., and Cook, T.D. (2013). Experimental Design for Causal Inference: Clinical Trial and Regression-Discontinuity Designs. In T. Little (Ed.), The Oxford Handbook of Quantitative Methods. Oxford, UK: Oxford University Press.

Hong, G., and Nomi, T. (2012). Weighting Methods for Assessing Policy Effects Mediated by Peer Change. Journal of Research on Educational Effectiveness, 5: 261–289.

Marcus, S.M., Stuart, E.A., Wang, P., Shadish, W.R., and Steiner, P.M. (2012). Estimating the Causal Effect of Randomization Versus Treatment Preference in a Doubly Randomized Preference Trial. Psychological Methods, 17 (2): 244–254.

Shadish, W., and Sullivan, K. (2012). Theories of Causation in Psychological Science. In H. Cooper (Ed.-in-Chief), P. Camic, D. Long, A. Panter, D. Rindskopf, and K.J. Sher (Assoc. Eds.), APA Handbooks in Psychology: Vol. 1. APA Handbook of Research Methods in Psychology: Psychological Research: Foundations, Planning, Methods, and Psychometrics. Washington, DC: American Psychological Association.

Shadish, W.R. (2011). Randomized Controlled Studies and Alternative Designs in Outcome Studies: Challenges and Opportunities. Research on Social Work Practice, 21 (6): 636–643.

Shadish, W.R., and Sullivan, K.J. (2011). Characteristics of Single-Case Designs Used to Assess Intervention Effects in 2008. Behavior Research Methods, 43 (4): 971–980.

Steiner, P.M. (2012). Comments: Using Design Elements for Increasing the Severity of Causal Mediation Tests. Journal of Research on Educational Effectiveness, 5: 296–298.

Steiner, P.M., and Cook, D. (2013). Matching and Propensity Scores. In T.D. Little (Ed.), The Oxford Handbook of Quantitative Methods (Vol. 1): Foundations (pp. 237–259). New York, NY: Oxford University Press.

Wong, V.C., Steiner, P.M., and Cook, T.D. (2012). Analyzing Regression-Discontinuity Designs With Multiple Assignment Variables: A Comparative Study of Four Estimation Methods. Journal of Educational and Behavioral Statistics, 38 (2): 107–141.

Wong, V.C., Wing, C., Steiner, P.M., Wong, M., and Cook, T.D. (2013). Research Designs for Program Evaluation. In J.A. Schinka, W.F. Velicer, and I.B. Weiner (Eds.), Handbook of Psychology, Vol. 2: Research Methods in Psychology (2nd Ed.) (pp. 316–341). Hoboken, NJ: John Wiley and Sons.

