
Mentoring and retention at a commuter campus.
Servies, C. M. (1999). Purdue University.
Examining 60 students, grade PS (postsecondary)
Practice Guide
Review Details
Reviewed: May 2021
- Practice Guide (findings for Mentoring program)
- Randomized Controlled Trial
- Meets WWC standards without reservations because it is a randomized controlled trial with low attrition.
This review may not reflect the full body of research evidence for this intervention.
Evidence Tier rating based solely on this study. This intervention may achieve a higher tier when combined with the full body of evidence.
Findings
Outcome measure | Comparison | Period | Sample | Intervention mean | Comparison mean | Significant? | Improvement index | Evidence tier
---|---|---|---|---|---|---|---|---
First Year Retention | Mentoring program vs. Business as usual | 1 Semester | Full sample | 57.00 | 57.00 | No | -- | 
Sample Characteristics
Characteristics of study sample as reported by study author.
- Female: 53%
- Male: 47%
- Suburban
- Indiana
- Race: Other or unknown 58%; White 42%
Study Details
Setting
The study was conducted at Purdue University - Calumet, a commuter college campus.
Study sample
The sample was 58% minority (Black, Hispanic, and Other). All students were low-income and eligible for Perkins grant support. Just under half (47%) of the sample were male and 53% were female. The students in the study were described as two-year technology majors.
Intervention Group
Students (or protégés) were paired with a peer mentor. Peer mentors were academically successful upper-level students at the college and were paid. Peer mentors met with their protégés once a week starting before the first day of class and throughout the semester. The peer mentors started by asking each protégé if they would like a tour of the classrooms listed on their schedule. Part of this process included explaining how to differentiate day from evening classes, knowing how to tell if a classroom had been moved from the location stated on the schedule, learning how to plan study times between classes, and locating the book store, cafeteria, and other key points on campus. Each meeting included one of a large number of suggested activities: introducing the students to a club or organization, attending a social event on campus, viewing a free movie in the lounge, visiting the physical fitness center, looking into career testing and career research, having lunch together, playing a game in the arcade room, planning for study time together, talking casually, meeting and introducing the new students to the mentor’s friends, and meeting with and introducing the protégé to faculty and staff members. In addition to the one hour of required mentoring, the peer mentors were to conduct or arrange for tutoring sessions at the tutoring center when needed.
Comparison Group
The comparison group experienced business-as-usual and had access to the various supports offered by the college but without the upper-level peer directing them or guiding them to use these supports.
Support for implementation
Peer mentors were required to complete one session of training provided for them and faculty/staff volunteer mentors. There were two 3-hour mentor orientation training sessions. During these sessions, peer mentors were given mentor training manuals to assist them in their mentoring experiences with their student protégés.
An indicator of the effect of the intervention, the improvement index can be interpreted as the expected change in percentile rank for an average comparison group student if that student had received the intervention.
For more, please see the WWC Glossary entry for improvement index.
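As a rough sketch of that conversion (assuming the improvement index is derived from a standardized effect size such as Hedges' g via the normal distribution; the `improvement_index` helper below is illustrative only, not the WWC's own tooling):

```python
from scipy.stats import norm

def improvement_index(effect_size_g: float) -> float:
    """Convert a standardized effect size into an improvement index:
    the expected percentile-rank change for an average comparison-group
    student had that student received the intervention (sketch only)."""
    return 100 * (norm.cdf(effect_size_g) - 0.5)

# An effect size of 0.25 corresponds to roughly a +10 percentile-point change;
# an effect size of 0 (as with the equal means in this study) yields 0.
print(round(improvement_index(0.25), 1))  # ~9.9
print(round(improvement_index(0.0), 1))   # 0.0
```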
An outcome is the knowledge, skills, and attitudes that are attained as a result of an activity. An outcome measure is an instrument, device, or method that provides data on the outcome.
A finding that is included in the effectiveness rating. Excluded findings may include subgroups and subscales.
The sample on which the analysis was conducted.
The group to which the intervention group is compared, which may include a different intervention, business as usual, or no services.
The timing of the post-intervention outcome measure.
The number of students included in the analysis.
The mean score of students in the intervention group.
The mean score of students in the comparison group.
The WWC considers a finding to be statistically significant if the likelihood that the finding is due to chance alone, rather than a real difference, is less than five percent.
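A minimal sketch of how such a five-percent criterion can be checked, assuming a simple two-sample t-test on made-up retention data (the WWC's actual analyses may adjust for clustering, baseline differences, and multiple comparisons):

```python
from scipy.stats import ttest_ind

# Hypothetical retention outcomes (1 = retained, 0 = not retained), for illustration
# only; these are not the study's data.
intervention = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
comparison   = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]

result = ttest_ind(intervention, comparison)
significant = result.pvalue < 0.05  # "significant" if chance alone is an unlikely explanation
print(f"p = {result.pvalue:.3f}, statistically significant: {significant}")
```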
The WWC reviews studies for WWC products, Department of Education grant competitions, and IES performance measures.
The name and version of the document used to guide the review of the study.
The version of the WWC design standards used to guide the review of the study.
The result of the WWC assessment of the study. The rating is based on the strength of evidence of the effectiveness of the intervention. Studies are given a rating of Meets WWC Design Standards without Reservations, Meets WWC Design Standards with Reservations, or Does Not Meet WWC Design Standards.
A related publication that was reviewed alongside the main study of interest.
Study findings for this report.
Based on the direction, magnitude, and statistical significance of the findings within a domain, the WWC characterizes the findings from a study as one of the following: statistically significant positive effects, substantively important positive effects, indeterminate effects, substantively important negative effects, and statistically significant negative effects. For more, please see the WWC Handbook.
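A simplified sketch of that characterization logic for a single finding, assuming the common WWC convention that an effect size of at least 0.25 standard deviations counts as substantively important (the `characterize_finding` function below is hypothetical and ignores domain-level aggregation):

```python
def characterize_finding(effect_size: float, significant: bool,
                         substantive_threshold: float = 0.25) -> str:
    """Characterize one finding from its direction, magnitude, and
    statistical significance (illustrative sketch of the WWC logic)."""
    if significant:
        return ("statistically significant positive effects" if effect_size > 0
                else "statistically significant negative effects")
    if effect_size >= substantive_threshold:
        return "substantively important positive effects"
    if effect_size <= -substantive_threshold:
        return "substantively important negative effects"
    return "indeterminate effects"

# For this study, equal group means imply an effect size near zero and a
# nonsignificant result, so the finding would be characterized as indeterminate.
print(characterize_finding(effect_size=0.0, significant=False))
```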
The WWC may review studies for multiple purposes, including different reports and re-reviews using updated standards.
- Tier 1 Strong indicates strong evidence of effectiveness,
- Tier 2 Moderate indicates moderate evidence of effectiveness, and
- Tier 3 Promising indicates promising evidence of effectiveness,

as defined in the non-regulatory guidance for ESSA and the regulations for ED discretionary grants (EDGAR Part 77).