Evaluation of the DC Opportunity Scholarship Program: Second Year Report on Participation
NCEE 2006-4003
April 2006

Appendix A: Congressionally Mandated Evaluation

Section 309 of the District of Columbia School Choice Incentive Act of 2003 describes the requirements for an independent evaluation of the DC Opportunity Scholarship Program. The Secretary of Education is to ensure the following:

  • "The evaluation is conducted using the strongest possible research design for determining the effectiveness" of the school choice program; and


  • "The results of the evaluation regarding the impact of the program on the participating students and nonparticipating students and schools in the District are disseminated widely."

Early on, the Institute of Education Sciences determined that the foundation of the DC Opportunity Scholarship Program evaluation would be a randomized controlled trial (RCT) that compared outcomes of eligible applicants (students and their parents) randomly assigned to receive or not receive a scholarship.22 This decision was based on the mandate to use rigorous evaluation methods, the expectation that there would be more applicants than funds and private school spaces available, and the requirement to use random selection to determine who receives a scholarship. In addition, the law clearly specified that such a comparison of outcomes be made.23 This component represents the impact analysis and will provide evidence on the effectiveness of the Program.24
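The comparison at the heart of this design can be illustrated with a simple simulation. The sketch below (in Python, using entirely hypothetical applicant counts, outcomes, and effect sizes; it is not part of the evaluation and does not reflect OSP data) shows how a lottery divides eligible applicants into a treatment group and a control group, and how a program impact would be estimated as the difference in mean outcomes between the two groups.

    # Illustrative sketch only: a minimal simulation of the RCT logic described
    # above. All numbers are hypothetical placeholders, not OSP data.
    import random

    random.seed(2006)

    NUM_APPLICANTS = 2000      # hypothetical applicant pool
    NUM_SCHOLARSHIPS = 1000    # hypothetical number of available scholarships

    # Simulate a lottery: randomly select which applicants are offered a scholarship.
    applicants = list(range(NUM_APPLICANTS))
    treatment_ids = set(random.sample(applicants, NUM_SCHOLARSHIPS))

    # Simulate a later outcome (e.g., a test score) for each applicant.
    # A hypothetical true effect of +3 points is built in for illustration.
    def simulated_outcome(applicant_id: int) -> float:
        baseline = random.gauss(600, 40)   # hypothetical score distribution
        effect = 3.0 if applicant_id in treatment_ids else 0.0
        return baseline + effect

    outcomes = {a: simulated_outcome(a) for a in applicants}

    treatment_scores = [outcomes[a] for a in applicants if a in treatment_ids]
    control_scores = [outcomes[a] for a in applicants if a not in treatment_ids]

    # Because assignment was random, the difference in group means estimates the
    # average impact of being offered a scholarship.
    impact_estimate = (sum(treatment_scores) / len(treatment_scores)
                       - sum(control_scores) / len(control_scores))
    print(f"Estimated impact: {impact_estimate:.2f} points")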

The law also called for the evaluation to track Program progress in other ways. For example, the evaluation was to compare the achievement of students participating in the scholarship Program to the achievement of students in the same grades in the DC Public Schools (DCPS). However, that comparison is no longer possible, because DCPS is in the process of replacing the SAT-9, the assessment in use when the OSP began and the one the evaluation continues to administer. The evaluation will address other issues the statute requires, such as the experiences of DC schools during the period of Program implementation.

22 RCTs are commonly referred to as the "gold standard" for evaluating educational interventions; when mere chance determines which eligible applicants receive access to school choice, the students who apply but are not admitted make up an ideal "control group" for comparison with the school choice "treatment group." Both groups of applicants are equally motivated to obtain new educational options, and nothing except a random draw distinguishes those who receive the opportunity from those who do not. Therefore, any differences between the two groups in subsequent years can be attributed to the impact of the Program. In contrast, the results of school choice studies that are not based on RCTs must be interpreted and used more cautiously because comparisons between the applicants and a group of students who chose not to apply will likely reflect not only the impact of the Program but also differences between the groups in motivation and other unmeasured characteristics.

23 See 309(a)(4)(A)(ii).

24 The RCT approach was also used by researchers conducting impact evaluations of the New York City; Dayton, Ohio; and Washington, DC, private scholarship programs.