Impact Evaluation of the U.S. Department of Education's Student Mentoring Program

NCEE 2009-4047
March 2009

Analytic Approach

The analysis used a fixed-effects model to estimate the average treatment effect across all programs for students assigned to receive mentoring versus students assigned to an untreated control group.10 The fixed-effects model was also used to examine five subgroup differences: (1) gender, (2) age (students 12 or older versus students less than 12 years old), (3) family structure (students from two-parent families versus students from other types of families), (4) presence of self-reported delinquent behaviors at baseline (theft, possession of a weapon, drug use, alcohol use, or gang activity), and (5) academic non-proficiency (in math, reading/English Language Arts (ELA), or both) at baseline. We obtained impact estimates for each of the selected subgroups using the same approach as in the main analysis. We then performed a t-test to identify any statistically significant differences in impacts between each paired set of subgroups (for example, to test whether the estimated impact of school-based student mentoring on boys differed from the impact on girls in our sample). To control for chance findings, the Benjamini-Hochberg (BH) multiple comparisons correction was applied within each outcome domain, both in the analysis of the full sample and in each of the five subgroup analyses.
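The sketch below illustrates, with simulated data, the general shape of such an approach: an impact regression with site fixed effects, a BH correction applied within an outcome domain, and a t-test on a treatment-by-subgroup interaction. Variable names (y0, treat, site, female) and the data-generating values are hypothetical and are not drawn from the study's actual data or model specification.

```python
# Minimal illustrative sketch only (not the study's code): fixed-effects
# impact model, BH correction, and a subgroup-difference t-test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_sites, n_per_site = 32, 60
df = pd.DataFrame({
    "site": np.repeat(np.arange(n_sites), n_per_site),
    "treat": rng.integers(0, 2, n_sites * n_per_site),   # random assignment
    "female": rng.integers(0, 2, n_sites * n_per_site),  # hypothetical subgroup flag
})
site_effect = rng.normal(0, 0.5, n_sites)[df["site"]]
# Three simulated outcomes standing in for one outcome domain
for k in range(3):
    df[f"y{k}"] = site_effect + 0.1 * df["treat"] + rng.normal(0, 1, len(df))

# Average treatment effect, controlling for site fixed effects (C(site) adds site dummies)
pvals = []
for k in range(3):
    fit = smf.ols(f"y{k} ~ treat + C(site)", data=df).fit()
    print(f"y{k}: impact = {fit.params['treat']:.3f}, p = {fit.pvalues['treat']:.3f}")
    pvals.append(fit.pvalues["treat"])

# Benjamini-Hochberg correction applied within the outcome domain
reject, p_bh, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("BH-adjusted p-values:", np.round(p_bh, 3))

# Subgroup contrast (e.g., girls vs. boys): a t-test on the
# treatment-by-subgroup interaction coefficient
fit_sub = smf.ols("y0 ~ treat * female + C(site)", data=df).fit()
print("Difference in impacts (girls - boys):",
      round(fit_sub.params["treat:female"], 3),
      "p =", round(fit_sub.pvalues["treat:female"], 3))
```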

Finally, given that characteristics of programs and their mentors varied considerably across sites, we wished to determine whether some sites or groups of sites could be characterized as more or less successful and, if so, whether we could identify program characteristics associated with differences in impacts at the site level. Therefore, we also conducted a series of exploratory analyses of site-level differences in impacts.
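One way such an exploratory site-level analysis could be structured is sketched below, continuing with the simulated data frame from the sketch above: impacts are estimated separately by site and then related to a site-level program characteristic. The characteristic used here ("dosage") is hypothetical and is not a variable from the study.

```python
# Illustrative only: per-site impact estimates, then a regression of those
# estimates on a hypothetical site-level program characteristic.
site_rows = []
for s, g in df.groupby("site"):
    fit_s = smf.ols("y0 ~ treat", data=g).fit()
    site_rows.append({"site": s, "impact": fit_s.params["treat"]})
site_df = pd.DataFrame(site_rows)
site_df["dosage"] = rng.normal(10, 3, len(site_df))  # hypothetical program characteristic

meta = smf.ols("impact ~ dosage", data=site_df).fit()
print("Association of site impacts with dosage:",
      round(meta.params["dosage"], 3),
      "p =", round(meta.pvalues["dosage"], 3))
```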

10 We use the term “fixed-effects” within the dual perspectives of sampling and statistical inference. Because student mentoring programs were chosen purposively for the study rather than randomly sampled, results cannot be generalized to the full universe of programs. The fixed-effects model is therefore appropriate, given that our level of inference does not extend beyond our study sample of purposively chosen programs.