Project Activities
The project compared the four MRRD approaches in terms of their precision, potential threats to their internal validity (e.g., functional form misspecification), and the severity of the trade-off between bias and precision. Part I of the project used simulated datasets to examine the properties of the four MRRD approaches under a range of design and data conditions. These datasets were created using Monte Carlo simulation and varied the design and data features that have been shown to affect the properties of single-rating RD designs (e.g., the correlation between the outcome and the ratings, and the functional form of the relationship between them). Program impacts were estimated by applying the four MRRD approaches to each simulated dataset, and the precision of the resulting estimates was compared. In addition, for each approach, a team member who was blind to the simulation parameters used different functional form specification strategies (global and local) to assess whether the true functional form was easier to recover under a given MRRD approach.
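The Monte Carlo setup described above can be sketched in miniature. The snippet below is a minimal, hypothetical illustration (not the project's actual simulation code): it generates two correlated rating variables, assigns treatment to units below the cut-off on both ratings (one common MRRD assignment rule), and recovers the treatment effect with a global OLS specification. All parameter values (correlation, cut-offs, effect size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two correlated rating variables; the correlation is one of the
# design features varied across simulated datasets (value is illustrative)
rho = 0.5
r1, r2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n).T

# Treatment assigned only to units below the cut-off on BOTH ratings
# (cut-offs of zero are illustrative)
c1 = c2 = 0.0
t = ((r1 < c1) & (r2 < c2)).astype(float)

# Outcome: linear in the ratings plus a constant treatment effect of 0.4
# (this is the "true functional form" a blinded analyst would try to recover)
y = 0.5 * r1 + 0.3 * r2 + 0.4 * t + rng.normal(0, 1, n)

# Global estimate: OLS with the correctly specified linear response surface
X = np.column_stack([np.ones(n), r1, r2, t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
impact_estimate = beta[3]
```

Repeating this over many draws, and over misspecified response surfaces, is what allows the bias and precision of each approach to be compared.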
Part II of the project examined the relative performance of the approaches in a real-world setting by creating a pseudo-MRRD design from a randomized experiment (the Enhanced Reading Opportunities study) and then comparing the impact estimates produced by the MRRD approaches to the (unbiased) impact estimate yielded by the experimental design.
Two or more baseline covariates from the experiment were selected for use as rating variables; the MRRD dataset was then created by retaining treatment group members who satisfied the cut-offs on these rating variables and control group members who did not. The impact estimates obtained from the MRRD approaches (global and local versions) were compared to the benchmark experimental estimate from the random assignment study from which the RD dataset was created, and the trade-off between bias and precision was assessed for each MRRD approach.
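The sample-construction step above can be sketched as follows. This is a stylized stand-in using fabricated data, not the ERO study's data or the project's code: two baseline covariates serve as ratings, the experimental benchmark is a difference in means, and the pseudo-MRRD sample keeps only treatment units that satisfy both cut-offs and control units that do not. All numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Stand-in for experimental data: two baseline covariates used as ratings,
# random assignment z, and an outcome with a true effect of 0.25 (illustrative)
r1 = rng.normal(size=n)
r2 = rng.normal(size=n)
z = rng.integers(0, 2, size=n)  # randomized treatment indicator
y = 0.4 * r1 + 0.2 * r2 + 0.25 * z + rng.normal(0, 1, n)

# Experimental benchmark: simple difference in means under random assignment
benchmark = y[z == 1].mean() - y[z == 0].mean()

# Pseudo-MRRD sample: retain treatment units satisfying the cut-offs and
# control units that do not, making treatment deterministic in the ratings
below_both = (r1 < 0) & (r2 < 0)
keep = ((z == 1) & below_both) | ((z == 0) & ~below_both)
yk, r1k, r2k, tk = y[keep], r1[keep], r2[keep], z[keep].astype(float)

# Global MRRD estimate with a linear response surface
X = np.column_stack([np.ones(len(yk)), r1k, r2k, tk])
rd_estimate = np.linalg.lstsq(X, yk, rcond=None)[0][3]
```

Comparing `rd_estimate` to `benchmark` (here, both should recover the true effect, since the response surface is correctly specified) mirrors the bias assessment in Part II; misspecifying the surface is what introduces the bias/precision trade-off the project studied.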
Part III of the project synthesized the lessons learned from Parts I and II into a "best practice" guide for researchers on the MRRD design. Its recommendations were illustrated by applying those lessons to estimate the effect of Adequate Yearly Progress (AYP) status on student achievement, using a school-level dataset with information on AYP status, proficiency rates, school characteristics, and student achievement.
Products and publications
Journal article, monograph, or newsletter
Porter, K.E., Reardon, S.F., Unlu, F., Bloom, H.S., and Cimpian, J.R. (2017). Estimating Causal Effects of Education Interventions Using a Two-Rating Regression Discontinuity Design: Lessons From a Simulation Study and an Application. Journal of Research on Educational Effectiveness, 10(1), 138-167.
Reardon, S.F., and Robinson, J.P. (2012). Regression Discontinuity Designs With Multiple Rating-Score Variables. Journal of Research on Educational Effectiveness, 5(1), 83-104.