Evaluation of the DC Opportunity Scholarship Program:

NCEE 2009-4050
March 2009

Mandated Evaluation of the OSP

In addition to establishing the OSP, Congress mandated that an independent evaluation of the Program be conducted, with annual reports on the study's progress. The legislation indicated that the evaluation should analyze the effects of the Program on various academic and nonacademic outcomes of concern to policymakers and should use "...the strongest possible research design for determining the effectiveness" of the Program. The current evaluation was developed to be responsive to these requirements. In particular, the foundation of the evaluation is a randomized controlled trial (RCT) that compares outcomes of eligible applicants (students and their parents) randomly assigned to receive or not receive a scholarship. This design was chosen for three reasons: the mandate to use rigorous evaluation methods, the expectation that there would be more applicants than funds and private school spaces available, and the statute's requirement that random selection be the vehicle for determining who receives a scholarship. An RCT design is widely viewed as the best method for identifying the independent effect of a program on subsequent outcomes (e.g., Boruch, de Moya, and Snyder 2002, p. 74). Random assignment has been used in impact evaluations of other scholarship programs in Charlotte, NC; New York City; Dayton, OH; and Washington, DC (Greene 2001; Howell et al. 2002; Mayer et al. 2002).
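To make the logic of a lottery-based RCT concrete, the sketch below shows a minimal randomized lottery in Python. It is purely illustrative and is not the WSF lottery procedure: the function name assign_scholarships, the seed, and the applicant list are hypothetical, and the actual OSP lotteries incorporated stratification and priorities that this sketch omits.

import random

def assign_scholarships(applicants, n_scholarships, seed=None):
    # Randomly order the eligible applicants (the "lottery"), offer
    # scholarships to the first n_scholarships students, and treat the
    # remainder as the control group. Illustrative only; the actual
    # OSP lotteries were stratified, which this sketch omits.
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)
    return pool[:n_scholarships], pool[n_scholarships:]

# Hypothetical run matching the cohort 1 and 2 totals reported below:
# 2,308 lottery entrants and 1,387 scholarship offers.
applicants = ["student_%d" % i for i in range(2308)]
treatment, control = assign_scholarships(applicants, 1387, seed=0)
print(len(treatment), len(control))  # 1387 921

Because chance alone determines who is offered a scholarship, the two groups are comparable in expectation at baseline, so later differences in outcomes can be attributed to the Program.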

The recruitment, application, and lottery process conducted by WSF, with guidance from the evaluation team, created the foundation for the evaluation's randomized trial and determined the group of students for whom impacts of the Program are analyzed in this report. Because the goal of the evaluation was to assess both the short-term and longer-term impacts of the Program, the study focused on early applicants (cohorts 1 and 2) whose outcomes could be tracked for at least 3 years during the evaluation period. During the first 2 years of recruitment, WSF received applications from 5,818 students. Of these, approximately 70 percent (4,047 of 5,818) were eligible to enter the Program (table 1). Of the eligible applicants, 2,308 students who were attending public schools or were rising kindergartners entered lotteries (492 in cohort 1; 1,816 in cohort 2), resulting in 1,387 students assigned to the treatment condition and 921 to the control condition. These students constitute the evaluation's impact analysis sample and represent three-quarters of all students in cohorts 1 and 2 who were not already attending a private school when they applied to the OSP.
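As a quick arithmetic check, the lines below reproduce the sample figures above; the numbers are taken directly from the text, and the variable names are ours.

eligible_share = 4047 / 5818   # about 0.696, the "approximately 70 percent" eligible
lottery_entrants = 492 + 1816  # 2,308 students across cohorts 1 and 2
sample_total = 1387 + 921      # 2,308: treatment plus control equals the lottery entrants
print(round(eligible_share, 3), lottery_entrants, sample_total)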

Data are collected from the impact sample each year, starting with the spring in which students applied to the OSP (baseline) and continuing each spring thereafter. These data include assessments of student achievement in reading and mathematics using the Stanford Achievement Test, Ninth Edition (SAT-9),4 surveys of parents, and surveys of students in grades 4 and above. The assessments and surveys are administered by the evaluation team at central District of Columbia (DC) locations on Saturdays or weekday evenings, because neither the public nor the private schools would allow data collection on their campuses during the school day. In addition, the evaluation surveys all DC public and private schools each spring to address the statute's interest in understanding how schools are responding to the OSP.


4 Stanford Abbreviated Achievement Test (Form S), Ninth Edition. San Antonio, TX: Harcourt Educational Measurement, Harcourt Assessment, Inc., 1997.