The District of Columbia School Choice Incentive Act of 2003, passed by the Congress in
January 2004, established the first federally funded, private school voucher program in the United States.
As part of this legislation, the Congress mandated a rigorous evaluation of the impacts of the Program,
now called the DC Opportunity Scholarship Program (OSP). This report presents findings from the
evaluation on the impacts 2 years after families who applied were given the option to move from a public
school to a participating private school of their choice.
The evaluation is based on a randomized controlled trial design that compares the outcomes
of eligible applicants randomly assigned to receive (treatment group) or not receive (control group) a
scholarship through a series of lotteries. The main findings of the evaluation so far include:
- After 2 years, there was no statistically significant difference in overall test scores
between students who were offered an OSP scholarship and students who were not offered
a scholarship. Those in the treatment and control groups were performing at comparable
levels in mathematics and reading (table 3).
- The Program had a positive impact on overall parent satisfaction and parent
perceptions of school safety, but not on students’ reports of satisfaction and safety (tables 4 and
5). Parents were more satisfied with their child’s school and viewed the school as less dangerous if the child was offered a scholarship. Students had a different view of their schools than did their parents. Reports of dangerous incidents in school were comparable for students in the treatment and control groups. Overall, student satisfaction was unaffected by the Program.
- This same pattern of findings holds when the analysis is conducted to determine
the impact of using a scholarship rather than being offered a scholarship. Twenty-six
percent of students who were randomly assigned by lottery to receive a scholarship
chose not to use it in either the first or second year. We use a common statistical
technique to take those “never users” into account; it assumes that the Program had
zero impact on those students, and it does not change the statistical significance of
the original impact estimates. As a result, the positive impacts on parent views of
school safety and satisfaction all increase in size, while there remains no impact on
academic achievement and no overall impact on students’ perceptions of school safety
or satisfaction from using an OSP scholarship.
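The adjustment for “never users” described above is commonly implemented as a Bloom-style no-show correction: the impact of the offer is divided by the share of the treatment group that actually used the scholarship. The sketch below illustrates the arithmetic under that assumption; the function name and the numbers are hypothetical, not from the report.

```python
def bloom_adjusted_impact(offer_impact, usage_rate):
    """Convert an impact of being offered a scholarship into an impact of
    using one, assuming zero impact on those who never used it.
    Dividing by the usage rate increases the size of any nonzero impact
    but does not change its statistical significance."""
    if not 0 < usage_rate <= 1:
        raise ValueError("usage rate must be in (0, 1]")
    return offer_impact / usage_rate

# Illustrative only: if 26 percent of the treatment group never used the
# scholarship, 74 percent did.
offer_impact = 0.10  # hypothetical offer (intent-to-treat) impact
impact_of_use = bloom_adjusted_impact(offer_impact, usage_rate=0.74)
print(round(impact_of_use, 3))  # → 0.135, larger than the 0.10 offer impact
```

Because the adjustment is a simple rescaling, a zero offer impact remains zero after adjustment, which is why the achievement findings are unchanged while the positive parent-perception impacts grow in size.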
- There were some impacts on subgroups of students, but adjustments for multiple
comparisons indicate that these findings may be due to chance. There were no
statistically significant impacts on the test scores of the high-priority subgroup of
students who had previously attended schools designated as in need of improvement
(SINI). However, being offered or using a scholarship may have improved reading test scores among three subgroups of students: those who had not attended a SINI school
when they applied to the OSP, those who had relatively higher pre-Program academic
performance, and those who applied in the first year of Program implementation. The
Program may also have had a positive impact on school satisfaction for students who
had previously attended SINI schools. However, these findings were no longer
statistically significant when subjected to a reliability test to adjust for the multiple
comparisons of treatment and control group students across 10 subgroups; the results
may be “false discoveries” and should therefore be interpreted and used with caution.
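The “false discoveries” adjustment described above can be illustrated with a standard false discovery rate procedure such as Benjamini-Hochberg; whether the evaluation used exactly this procedure is an assumption here, and the p-values below are invented for illustration, not taken from the report.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return, for each test, whether it survives a Benjamini-Hochberg
    false discovery rate adjustment at level alpha: find the largest
    rank k with p_(k) <= (k / m) * alpha and reject the k smallest."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            cutoff = rank
    survives = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= cutoff:
            survives[i] = True
    return survives

# Hypothetical p-values for 10 subgroup comparisons: one (p = 0.03) looks
# significant on its own but does not survive the adjustment.
pvals = [0.03, 0.20, 0.41, 0.55, 0.12, 0.66, 0.09, 0.73, 0.35, 0.88]
print(benjamini_hochberg(pvals))  # → all False
```

This mirrors the pattern in the findings: a result that clears the conventional 0.05 threshold in isolation can fail once the number of comparisons across subgroups is taken into account.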
- The second year impacts are generally consistent with those from the first year.1
The main difference is that after 1 year, the non-SINI and higher performing groups of
students appeared to experience statistically significant positive impacts on math
achievement, while in the second year the impacts were on reading achievement.
Adjustments for multiple comparisons suggest that both sets of results may be false
discoveries.

1 See Wolf, Gutmann, Puma, Rizzo, Eissa, and Silverberg 2007.