Evaluation of the DC Opportunity Scholarship Program: Second Year Report on Participation
NCEE 2006-4003
April 2006

The Mandated Evaluation

The Act requires that this 5-year scholarship pilot Program be rigorously evaluated by an independent research team, using the "strongest possible research design for determining the effectiveness" of the Program and addressing a specific set of student comparisons and topics (Section 309):

  • Impact Analysis. Central to the evaluation is an impact analysis that compares outcomes of eligible applicants (students and their parents) from public schools who were randomly assigned, through a lottery, to receive or not receive a scholarship. Such random assignment experimental designs are widely viewed as the best methods for identifying the independent effect of programs on subsequent outcomes.3 (The logic of the resulting impact estimate is sketched briefly after this list.) Thus, the impact analysis will be the source of the reliable, causal evidence on Program effectiveness called for in the legislation (see appendix A for a more comprehensive description of the evaluation and its technical approach).


  • Performance Reporting. The Act also specifies a comparison of students participating in the scholarship Program with students in the same grades in the DCPS system, as a way of tracking general student progress and Program performance.4 Such a comparison would draw upon what we call the "OSP recipient sample," which comprises all students offered a scholarship, including students who were already attending private schools at the point of application and public school applicants who were automatically awarded scholarships.5,6 However, since the passage of the legislation and the first year of OSP implementation, DCPS has been in transition to a new academic assessment that differs from its earlier test, which the evaluation is required to use for its main outcome measures.7 The divergence between the new DCPS assessment and the evaluation assessment means that comparing the academic performance of all scholarship recipients with that of other DCPS students is no longer possible, although this analysis was performed for students who participated in the Program's first year, when the same assessment was used (see the first report to Congress).


  • Response of Schools. Through descriptive analyses, the evaluation will assess how DC public and private schools are changing during the implementation of the OSP, in part by examining the extent to which the schools are experiencing significant losses or gains in student enrollment during this period.
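
The logic of the impact estimate can be sketched as follows; the notation below is illustrative only and is not taken from the Act or from the evaluation's technical plan. Because the lottery creates treatment and control groups that are equivalent, on average, at the time of application, a simple difference in mean outcomes between the two groups estimates the effect of being offered a scholarship:

\[
\widehat{\Delta} \;=\; \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}
\]

where \(\bar{Y}_{\text{treatment}}\) is the average outcome (for example, a test score) among applicants randomly assigned to receive a scholarship and \(\bar{Y}_{\text{control}}\) is the average among applicants randomly assigned not to receive one. Under random assignment, the expected value of \(\widehat{\Delta}\) is the average effect of the scholarship offer, because the two groups differ systematically only in that offer.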


3 For examples, see the What Works Clearinghouse, WWC Study Review Standards, 7 (http://www.whatworks.ed.gov/reviewprocess/study_standards_final.pdf); Thomas D. Cook and Monique R. Payne, "Objecting to the Objections to Using Random Assignment in Educational Research," in Evidence Matters: Randomized Trials in Education Research, eds. Frederick Mosteller and Robert Boruch (Washington, DC: Brookings, 2002).

4 DCPS students who did not apply to the scholarship Program are likely to be quite different from those who applied and are participating in the OSP, in ways we can observe and in ways we cannot. Comparing outcomes between participants and nonapplicants is, therefore, not a reliable measure of Program effects.

5 Automatic scholarship awards were given only in the first year of Program implementation to all students applying from public schools designated "in need of improvement" under the 2002 reauthorization of the Elementary and Secondary Education Act and to all public school applicants entering those grades within K–5 that were not oversubscribed and therefore not subject to award by lottery.

6 The "recipient sample" is different from the "impact sample," which is limited to public school applicants who were subject to scholarship award by lottery and thus were randomly assigned to the "treatment" (scholarship) or "control" (nonscholarship) group.

7 See Section 309(a)(3)(B) for the provision stipulating that the evaluation use the same assessment that DCPS administered to its public school students in the first year the OSP was operating. This requirement was intended to ensure that the impact analysis could be based on a consistent measure of student achievement, rather than being subject to changes in the key outcome measure over the course of the evaluation.