
IES Grant

Title: Better Warranted Quasi-Experimental Practice for Evidence Based Practical Research
Center: NCER Year: 2010
Principal Investigator: Cook, Thomas Awardee: Northwestern University
Program: Statistical and Research Methodology in Education
Award Period: 3 years Award Amount: $1,162,032
Type: Methodological Innovation Award Number: R305D100033
Description:

Purpose: This project extended work on improving four quasi-experimental methods that have the potential to provide unbiased or minimally biased causal inferences when random assignment is not possible: regression discontinuity designs, propensity score matching, short interrupted time-series designs, and pattern matching.

Project Activities: The project addressed two issues that complicate the use of the regression discontinuity design: (1) when multiple assignment mechanisms are used; and (2) when deliberate manipulation of the assignment score by units is evident. Using both simulation and real data, the project examined the validity and relative efficiency of three approaches to estimating treatment effects when multiple assignment mechanisms are used: (1) using only a single assignment mechanism; (2) centering and then collapsing the scores into a single assignment variable; and (3) modeling the response surface with a regression model and applying treatment weights along each discontinuity frontier. A similar combination of simulation and real data was used to test three approaches to estimating treatment effects in a regression discontinuity design when the assignment score has been manipulated, and to identify the conditions under which each may be used: (1) excluding a narrow band of cases around the cutoff point where manipulation is suspected; (2) using covariate adjustment to create statistical equivalence between the treatment and control groups; and (3) modeling the scores that would be expected had no manipulation occurred.
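For illustration only, the minimal sketch below simulates the multiple-assignment setting and two of these strategies: centering and collapsing two assignment scores into one variable, and a "donut" estimate that excludes a narrow band around the cutoff where manipulation is suspected. The simulated data, cutoffs, bandwidth, and function names are assumptions made for the example, not the project's actual analyses.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Two assignment scores with cutoffs c1 and c2; units are treated
# if they cross either cutoff (an "or" assignment rule).
r1 = rng.normal(50, 10, n)
r2 = rng.normal(60, 10, n)
c1, c2 = 50.0, 60.0
treated = ((r1 >= c1) | (r2 >= c2)).astype(float)
y = 10 + 0.3 * (r1 - c1) + 0.2 * (r2 - c2) + 4.0 * treated + rng.normal(0, 3, n)

def rd_estimate(z, y, treated, bandwidth, donut=0.0):
    """Local-linear RD estimate of the discontinuity at z = 0.

    Dropping observations with |z| <= donut mimics the strategy of
    excluding a narrow band around the cutoff where manipulation of
    the assignment score is suspected.
    """
    keep = (np.abs(z) <= bandwidth) & (np.abs(z) > donut)
    X = sm.add_constant(np.column_stack([z[keep], treated[keep],
                                         z[keep] * treated[keep]]))
    return sm.OLS(y[keep], X).fit().params[2]  # coefficient on treatment

# Centering approach: center each score at its own cutoff and collapse
# into one assignment variable; with an "or" rule, the maximum of the
# centered scores fully determines treatment status.
z = np.maximum(r1 - c1, r2 - c2)
print("collapsed-score estimate:", rd_estimate(z, y, treated, bandwidth=5.0))

# "Donut" variant: re-estimate after excluding cases near the cutoff.
print("donut estimate:          ", rd_estimate(z, y, treated, bandwidth=5.0, donut=0.5))
```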

The project used real data to investigate three questions regarding propensity score matching: (1) the relative importance, for estimating a causal treatment effect, of the requirements for such matching (i.e., reliable measurement of all constructs correlated with both treatment selection and the outcome of interest, correct specification of the propensity score model, and correct choice of matching method); (2) the appropriate level of matching given multilevel data; and (3) the role of pretest measures of the outcome, especially multiple pretest measures at different time points, in removing selection bias.
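As a minimal sketch of the first question (simulated data; all variable names and effect sizes below are assumptions), the example estimates a treatment effect on the treated via 1:1 nearest-neighbor matching on a logistic-regression propensity score, once with reliably measured confounders and once with an error-laden version of one confounder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Covariates that drive both treatment selection and the outcome;
# x_err is x1 measured with error, illustrating the reliability requirement.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x_err = x1 + rng.normal(0, 1.0, n)  # unreliable measurement of x1
treat = (rng.uniform(size=n) < 1 / (1 + np.exp(-(x1 + x2)))).astype(int)
y = 2.0 * treat + x1 + 0.5 * x2 + rng.normal(size=n)  # true effect = 2.0

def ps_match_effect(covariates):
    """Effect on the treated via 1:1 nearest-neighbor matching
    (with replacement) on the estimated propensity score."""
    ps = LogisticRegression().fit(covariates, treat).predict_proba(covariates)[:, 1]
    t_idx = np.where(treat == 1)[0]
    c_idx = np.where(treat == 0)[0]
    # For each treated unit, find the control with the closest score.
    matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
    return (y[t_idx] - y[matches]).mean()

print("reliable covariates:", ps_match_effect(np.column_stack([x1, x2])))
print("unreliable x1:      ", ps_match_effect(np.column_stack([x_err, x2])))
```

With the unreliable covariate, part of the confounding through x1 survives matching and the estimate drifts away from the true effect, which is exactly the measurement-reliability requirement the project examined.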

The project explored improving short interrupted time series (SITS) analyses by: (1) examining how alternative SITS designs, such as replication with multiple comparison groups, switching-replications designs, and transfer function modeling, can improve causal inference; (2) examining changes and stability in power and functional form (along with the efficiency and consistency of results) when the time series is shortened and when data are aggregated at higher levels; and (3) comparing the advantages and disadvantages of the analytic methods available for modeling SITS data, such as modified generalized least squares, hierarchical linear modeling, latent growth modeling, repeated measures contrast approaches, and propensity score analysis.
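A minimal sketch of the basic setup (simulated series; the series lengths, effect sizes, and specification are assumptions) fits a comparative interrupted time-series model by ordinary least squares, recovering the post-intervention level and slope changes of a treated series relative to a comparison series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, T0 = 24, 12                      # 24 time points, intervention after t = 12

t = np.arange(T)
post = (t >= T0).astype(float)
# Treated and comparison series share a common trend; the intervention
# shifts the treated series' level by 3 and its slope by 0.2.
y_c = 5 + 0.5 * t + rng.normal(0, 1, T)
y_t = 6 + 0.5 * t + post * (3 + 0.2 * (t - T0)) + rng.normal(0, 1, T)

# Stack both series and fit a comparative interrupted time-series model:
# trend, post-period level change, and post-period slope change,
# each also interacted with group membership.
y = np.concatenate([y_c, y_t])
g = np.repeat([0.0, 1.0], T)
tt = np.tile(t, 2)
pp = np.tile(post, 2)
X = sm.add_constant(np.column_stack([
    g, tt, pp, (tt - T0) * pp,            # comparison-series dynamics
    g * tt, g * pp, g * (tt - T0) * pp,   # treated-vs-comparison differences
]))
fit = sm.OLS(y, X).fit()
print("level change (treated vs. comparison):", fit.params[6])
print("slope change (treated vs. comparison):", fit.params[7])
```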

The project did not intend to develop additional design elements for pattern matching. Instead, it sought to uncover case studies of the simultaneous use of multiple pattern-matching elements, within and outside of education research, and to use simulation studies to analyze the strengths and limitations of pattern matching. The purpose was to identify pattern-matching elements that can strengthen the basic non-equivalent control group design, which is used when experiments, regression discontinuity, propensity score analysis, or short interrupted time series are not applicable, by helping to rule out alternative interpretations.
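As an illustration of the pattern-matching logic only (the outcomes, predicted pattern, and effect estimates below are hypothetical), one simple diagnostic correlates a theory-predicted profile of effects across several non-equivalent outcomes with the observed profile; close agreement makes alternative explanations that would move all outcomes equally, such as maturation, less plausible.

```python
import numpy as np

# A program theory predicts which outcomes should respond to treatment
# and which should not; agreement between the predicted and observed
# effect pattern across non-equivalent outcomes strengthens the causal claim.
outcomes  = ["math", "reading", "attendance", "height"]
predicted = np.array([1.0, 1.0, 0.5, 0.0])   # theory: no effect on height
observed  = np.array([0.9, 1.1, 0.4, 0.05])  # hypothetical effect estimates

# A simple pattern-match statistic: the correlation between the
# predicted and observed effect profiles.
r = np.corrcoef(predicted, observed)[0, 1]
print(f"pattern-match correlation: {r:.2f}")
for name, p, o in zip(outcomes, predicted, observed):
    print(f"{name:10s} predicted {p:4.1f}  observed {o:5.2f}")
```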

Products and Publications

Book chapter

Cook, T.D., Wong, M., and Steiner, P.M. (2012). Evaluating National Programs: A Case Study of the No Child Left Behind Program in the United States. In T. Bliesener, A. Beelmann, and M. Stemmler (Eds.), Antisocial Behavior and Crime: Contributions of Developmental and Evaluation Research to Prevention and Intervention (pp. 333–356). Cambridge, MA: Hogrefe Publishing.

Hallberg, K., Wing, C., Wong, V.C., and Cook, T.D. (2013). Experimental Design for Causal Inference: Clinical Trial and Regression-Discontinuity Designs. In T.D. Little (Ed.), The Oxford Handbook of Quantitative Methods, Volume 1: Foundations (pp. 223–236). New York: Oxford University Press.

Shadish, W., and Sullivan, K. (2012). Theories of Causation in Psychological Science. In H. Cooper, P. Camic, D. Long, A. Panter, D. Rindskopf, and K.J. Sher (Eds.), APA Handbook of Research Methods in Psychology, Volume 1: Foundations, Planning, Methods, and Psychometrics (pp. 23–52). Washington, DC: American Psychological Association.

Steiner, P.M., and Cook, T.D. (2013). Matching and Propensity Scores. In T.D. Little (Ed.), The Oxford Handbook of Quantitative Methods, Volume 1: Foundations (pp. 237–259). New York: Oxford University Press.

Book, edition specified

Wong, V.C., Wing, C., Steiner, P.M., Wong, M., and Cook, T.D. (2013). Research Designs for Program Evaluation (2nd ed.). Hoboken, NJ: John Wiley and Sons.

Journal article, monograph, or newsletter

Cook, T.D. (2014). Generalizing Causal Knowledge in the Policy Sciences: External Validity as a Task of Both Multiattribute Representation and Multiattribute Extrapolation. Journal of Policy Analysis and Management, 33(2): 527–536.

Diamond, S.S., Bowman, L.E., Wong, M., and Patton, M.M. (2010). Efficiency and Cost: The Impact of Videoconferenced Hearings on Bail Decisions. Journal of Criminal Law and Criminology, 100(3): 869.

Marcus, S.M., Stuart, E.A., Wang, P., Shadish, W.R., and Steiner, P.M. (2012). Estimating the Causal Effect of Randomization Versus Treatment Preference in a Doubly Randomized Preference Trial. Psychological Methods, 17(2): 244–245.

Shadish, W.R. (2011). Randomized Controlled Studies and Alternative Designs in Outcome Studies: Challenges and Opportunities. Research on Social Work Practice, 21(6): 636–643.

Shadish, W.R., and Sullivan, K.J. (2011). Characteristics of Single-Case Designs Used to Assess Intervention Effects in 2008. Behavior Research Methods, 43(4): 971–980.

St. Clair, T., and Cook, T.D. (2015). Difference-in-Differences Methods in Public Finance. National Tax Journal, 68(2): 319–338.

St. Clair, T., Cook, T.D., and Hallberg, K. (2014). Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison With a Randomized Experiment. American Journal of Evaluation, 35(3): 311–327.

Steiner, P.M. (2012). Comments: Using Design Elements for Increasing the Severity of Causal Mediation Tests. Journal of Research on Educational Effectiveness, 5(3): 296–298.

Wing, C., and Cook, T.D. (2013). Strengthening the Regression Discontinuity Design Using Additional Design Elements: A Within-Study Comparison. Journal of Policy Analysis and Management, 32(4): 853–877.

Wong, V.C., Steiner, P.M., and Cook, T.D. (2013). Analyzing Regression-Discontinuity Designs With Multiple Assignment Variables: A Comparative Study of Four Estimation Methods. Journal of Educational and Behavioral Statistics, 38(2): 107–141.

Working paper

Steiner, P.M. (2011). Propensity Score Methods for Causal Inference: On the Relative Importance of Covariate Selection, Reliable Measurement, and Choice of Propensity Score Technique (AlmaLaurea Working Paper No. 9). Bologna, Italy: AlmaLaurea Inter-University Consortium.
