
2016-2017 Implementation Evaluation of the National Math and Science Initiative's College Readiness Program
Phelan, Julia; Egger, Jeffrey; Michiuye, Joanne K.; Keum, Eunhee; Choi, Kilchan; Chung, Gregory K. W. K.; Baker, Eva L. (2018). National Center for Research on Evaluation, Standards, and Student Testing (CRESST). Retrieved from: https://eric.ed.gov/?id=ED615910
Examining 21,287 students, grades 11-12
Department-funded evaluation
Review Details
Reviewed: April 2022
- Department-funded evaluation (findings for National Math + Science Initiative (NMSI) College Readiness Program)
- Randomized Controlled Trial
- Meets WWC standards with reservations because it is a cluster randomized controlled trial with low cluster-level attrition that provides evidence of effects on clusters by demonstrating that the analytic sample of individuals is representative of the clusters.
This review may not reflect the full body of research evidence for this intervention.
Evidence Tier rating based solely on this study. This intervention may achieve a higher tier when combined with the full body of evidence.
Findings
| Outcome measure | Comparison | Period | Sample | Intervention mean | Comparison mean | Significant? | Improvement index | Evidence tier |
|---|---|---|---|---|---|---|---|---|
| Percent Passing Math-Science AP Exam | National Math + Science Initiative (NMSI) College Readiness Program vs. business as usual | 0 months | 2017; all students in grades 11 and 12 | 2.00 | 3.00 | No | -- | -- |
Sample Characteristics
Characteristics of study sample as reported by study author.
California, Georgia, Illinois, Louisiana, Michigan, Missouri, North Dakota, Ohio, Pennsylvania, Texas
Study Details
Setting
The evaluation identified 10 states to represent all locations implementing the College Readiness Program (CRP): CA, GA, IL, LA, MI, MO, ND, OH, PA, TX. Within each state, the evaluation recruited one to three districts, focusing on districts with high schools that served a socioeconomically disadvantaged population of students and offered few or no advanced placement (AP) courses.
Study sample
Information on the characteristics of the sample (race, gender, free/reduced-price lunch status) is not included in the description of the sample.
Intervention Group
The objective of the College Readiness Program (CRP) is to increase the number of high-need students enrolling in Advanced Placement (AP) courses and taking and earning qualifying scores on math, science, and English language arts AP exams, with a focus on math and science. CRP provides (1) supports to teachers, such as course-specific training, mentorship, resources, and awards, (2) supports to students, such as focused study sessions, equipment and supplies, exam fee subsidies, and student awards, and (3) supports to schools, such as performance analysis, access to academic and program experts, shared goal setting, and school awards. Through these supports, it is hypothesized that teachers will become more effective AP instructors and schools will exhibit a culture of continuous improvement where STEM learning is valued, resulting in an increased number of students taking STEM AP courses and exams and receiving qualifying scores on AP exams.
Comparison Group
The schools in the experimental comparison condition received delayed treatment. During the 2016-17 school year, these comparison schools did not have access to the CRP program. Instead, students in the comparison schools only had access to the AP courses offered as part of the standard school/district approach. After outcomes data for the experimental study were collected, comparison schools received access to the CRP program, first implementing the program components in the 2017-18 school year.
Support for implementation
School administrators received a financial stipend and incentive payments for student performance on AP exams, as well as funding to purchase equipment and supplies, and support from program staff to set goals for increased student enrollment and performance in AP courses.
Additional Sources
In the case of multiple manuscripts that report on one study, the WWC selects one manuscript as the primary citation and lists other manuscripts that describe the study as additional sources.
- Phelan, Julia; Egger, Jeffrey; Kim, Junok; Choi, Kilchan; Keum, Eunhee; Chung, Gregory K. W. K.; Baker, Eva L. (2021). 2017-2019 Implementation Evaluation of the National Math and Science Initiative's College Readiness Program. National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
An indicator of the effect of the intervention, the improvement index can be interpreted as the expected change in percentile rank for an average comparison group student if that student had received the intervention.
For more, please see the WWC Glossary entry for improvement index.
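As a sketch of how this works (following the standard WWC Handbook formula, not a calculation reported in this study, where the finding was not significant): the improvement index is derived from the study's standardized effect size, with $g$ denoting the effect size (Hedges' $g$) and $\Phi$ the standard normal cumulative distribution function:

```latex
\text{Improvement index} = 100 \times \left[ \Phi(g) - 0.5 \right]
```

For example, an effect size of $g = 0.25$ corresponds to $\Phi(0.25) \approx 0.60$, giving an improvement index of about $+10$: an average comparison group student would be expected to rise roughly 10 percentile points if that student had received the intervention.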
An outcome is the knowledge, skills, and attitudes that are attained as a result of an activity. An outcome measure is an instrument, device, or method that provides data on the outcome.
A finding that is included in the effectiveness rating. Excluded findings may include subgroups and subscales.
The sample on which the analysis was conducted.
The group to which the intervention group is compared, which may include a different intervention, business as usual, or no services.
The timing of the post-intervention outcome measure.
The number of students included in the analysis.
The mean score of students in the intervention group.
The mean score of students in the comparison group.
The WWC considers a finding to be statistically significant if the likelihood that the finding is due to chance alone, rather than a real difference, is less than five percent.
The WWC reviews studies for WWC products, Department of Education grant competitions, and IES performance measures.
The name and version of the document used to guide the review of the study.
The version of the WWC design standards used to guide the review of the study.
The result of the WWC assessment of the study. The rating is based on the strength of evidence of the effectiveness of the intervention. Studies are given a rating of Meets WWC Design Standards without Reservations, Meets WWC Design Standards with Reservations, or Does Not Meet WWC Design Standards.
A related publication that was reviewed alongside the main study of interest.
Study findings for this report.
Based on the direction, magnitude, and statistical significance of the findings within a domain, the WWC characterizes the findings from a study as one of the following: statistically significant positive effects, substantively important positive effects, indeterminate effects, substantively important negative effects, and statistically significant negative effects. For more, please see the WWC Handbook.
The WWC may review studies for multiple purposes, including different reports and re-reviews using updated standards. Each WWC review of this study is listed in the dropdown. Details on any review may be accessed by making a selection from the drop down list.
Tier 1 (Strong) indicates strong evidence of effectiveness, Tier 2 (Moderate) indicates moderate evidence of effectiveness, and Tier 3 (Promising) indicates promising evidence of effectiveness, as defined in the non-regulatory guidance for ESSA and the regulations for ED discretionary grants (EDGAR Part 77).