
Final Report of the Impacts of the National Math + Science Initiative's (NMSI's) College Readiness Program on High School Students' Outcomes
Sherman, Dan; Li, Yibing; Darwin, Marlene; Taylor, Suzanne; Song, Mengli (2017). American Institutes for Research. Retrieved from: https://eric.ed.gov/?id=ED577450
Examining 116 schools, grades 10-12
Practice Guide
Review Details
Reviewed: April 2024
- Practice Guide (findings for National Math + Science Initiative (NMSI) College Readiness Program)
- Quasi-Experimental Design
- Meets WWC standards with reservations because it uses a cluster quasi-experimental design that provides evidence of effects on clusters by demonstrating that the analytic sample of individuals is representative of the clusters and by satisfying the baseline equivalence requirement for the clusters in the analytic intervention and comparison groups (a sketch of such a baseline-equivalence check follows these notes).
This review may not reflect the full body of research evidence for this intervention.
Evidence Tier rating based solely on this study. This intervention may achieve a higher tier when combined with the full body of evidence.
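The baseline equivalence requirement referenced in the rating above is assessed by standardizing the baseline difference between the intervention and comparison groups. The sketch below is a minimal illustration, assuming the pooled-standard-deviation form of the standardized difference and the 0.05 and 0.25 standard-deviation thresholds described in the WWC Standards Handbook; the function names and the example group statistics are hypothetical and are not drawn from this study.

```python
import math


def pooled_sd(sd1: float, n1: int, sd2: float, n2: int) -> float:
    """Pooled standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))


def baseline_equivalence(mean_i, sd_i, n_i, mean_c, sd_c, n_c):
    """Standardize the baseline difference between intervention and comparison
    groups and classify it against WWC-style thresholds (0.05 and 0.25 SDs)."""
    diff = (mean_i - mean_c) / pooled_sd(sd_i, n_i, sd_c, n_c)
    if abs(diff) <= 0.05:
        judgment = "satisfied (no statistical adjustment required)"
    elif abs(diff) <= 0.25:
        judgment = "satisfied only with statistical adjustment"
    else:
        judgment = "not satisfied"
    return diff, judgment


# Hypothetical school-level baseline statistics (not values from this study).
print(baseline_equivalence(0.25, 1.1, 58, 0.23, 1.0, 58))
```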
Findings
| Outcome measure | Comparison | Period | Sample | Intervention mean | Comparison mean | Significant? | Improvement index | Evidence tier |
|---|---|---|---|---|---|---|---|---|
| Passed an English, math, or science advanced placement exam | National Math + Science Initiative (NMSI) College Readiness Program vs. Business as usual | 1 Year | Full sample | 2.98 | 0.23 | Yes | | |

Supplemental Findings

| Outcome measure | Comparison | Period | Sample | Intervention mean | Comparison mean | Significant? | Improvement index |
|---|---|---|---|---|---|---|---|
| Passed an English advanced placement exam | National Math + Science Initiative (NMSI) College Readiness Program vs. Business as usual | 1 Year | Full sample | 1.65 | 0.12 | Yes | |
| Passed a math or science advanced placement exam | National Math + Science Initiative (NMSI) College Readiness Program vs. Business as usual | 1 Year | Full sample | 2.20 | 0.24 | Yes | |
| Passed a science advanced placement exam | National Math + Science Initiative (NMSI) College Readiness Program vs. Business as usual | 1 Year | Full sample | 1.68 | 0.22 | Yes | |
| Passed a math advanced placement exam | National Math + Science Initiative (NMSI) College Readiness Program vs. Business as usual | 1 Year | Full sample | 1.15 | 0.17 | Yes | |
| School-wide percent passing AP exam in English, mathematics, or science | National Math + Science Initiative (NMSI) College Readiness Program vs. Business as usual | 1 Year | Full sample | 2.98 | 0.23 | Yes | -- |
Sample Characteristics
Characteristics of study sample as reported by study author.
- Locale: Rural, Suburban, Urban
- State: Colorado, Indiana
- Race: White 53%; Other or unknown 47%
- Ethnicity: Other or unknown 100%
- Eligible for Free and Reduced Price Lunch: Free or reduced-price lunch (FRPL) 51%; No FRPL 49%
Study Details
Setting
The study took place in 116 high schools in Colorado and Indiana, including rural, urban, and suburban schools.
Study sample
The study includes students in grades 10–12 from 116 high schools but does not provide exact numbers of students, teachers, or classrooms included in the study. Approximately 51% of students were eligible for free or reduced-price lunch, 53% were White, and 47% did not report race. The study did not report ethnicity or disability information for any students.
Intervention Group
The National Math + Science Initiative (NMSI) College Readiness Program aims to support successful completion of rigorous Advanced Placement (AP) coursework that better prepares students for postsecondary education, particularly in STEM disciplines. Program components included providing (1) teacher training and ongoing support aimed at increasing content knowledge and improving instructional practices in science, math, and English; (2) STEM-related supplies and equipment like calculators and laboratory equipment; (3) subsidies for students’ AP exam fees; and (4) incentive payments to teachers and students based on AP test performance. The program was offered at the school level over a three-year period.
Comparison Group
Comparison group schools did not offer the National Math + Science Initiative College Readiness Program. Comparison teachers may have participated in other business-as-usual training and professional development offered by their schools or school districts.
Support for implementation
NMSI partnered with nonprofit organizations that served as the primary implementers of the program in each state. Prior to the first year of the intervention, these organizations hired and trained staff as content specialists to support implementation in program schools. These content specialists worked closely with science, math, or English teachers in each school to provide professional development, resources, monitoring, consultation, and problem solving. This included a weeklong training session during the summer prior to the first year of program implementation. Content specialists also supported lead teachers at each school in conducting meetings to align and improve instruction in each content area, especially in AP courses. AP teachers also participated in 4-day College Board Summer Institutes each year to prepare them to teach AP content. In addition, the implementing organizations provided three 6-hour Saturday study sessions that covered pedagogy and course content. Students also had access to online tools, tutoring, and Saturday study sessions focused on AP course content and test-taking. Students and teachers received cash payments of $100 per passing score and additional bonuses for achieving other targets. Finally, the program covered 50% of students' AP exam fees.
An indicator of the effect of the intervention, the improvement index can be interpreted as the expected change in percentile rank for an average comparison group student if that student had received the intervention.
For more, please see the WWC Glossary entry for improvement index.
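As a worked illustration of that definition, the index can be computed by converting a study's effect size to a change in percentile rank through the standard normal distribution. The sketch below is a minimal example, assuming the effect size is expressed as a standardized mean difference such as Hedges' g and using SciPy's normal CDF; the function name and example value are hypothetical and are not results from this study.

```python
from scipy.stats import norm


def improvement_index(effect_size: float) -> float:
    """Convert an effect size (standardized mean difference) into the
    expected change in percentile rank for an average comparison-group
    student: 100 * (Phi(g) - 0.50), where Phi is the standard normal CDF."""
    return 100 * (norm.cdf(effect_size) - 0.5)


# A hypothetical effect size of 0.25 corresponds to an improvement index
# of roughly +10 percentile points.
print(round(improvement_index(0.25), 1))  # ~9.9
```

An effect size of 0 maps to an improvement index of 0; larger positive effect sizes map to larger gains in percentile rank.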
An outcome is the knowledge, skills, and attitudes that are attained as a result of an activity. An outcome measure is an instrument, device, or method that provides data on the outcome.
A finding that is included in the effectiveness rating. Excluded findings may include subgroups and subscales.
The sample on which the analysis was conducted.
The group to which the intervention group is compared, which may include a different intervention, business as usual, or no services.
The timing of the post-intervention outcome measure.
The number of students included in the analysis.
The mean score of students in the intervention group.
The mean score of students in the comparison group.
The WWC considers a finding to be statistically significant if the likelihood that the finding is due to chance alone, rather than a real difference, is less than five percent.
The WWC reviews studies for WWC products, Department of Education grant competitions, and IES performance measures.
The name and version of the document used to guide the review of the study.
The version of the WWC design standards used to guide the review of the study.
The result of the WWC assessment of the study. The rating is based on the strength of evidence of the effectiveness of the intervention. Studies are given a rating of Meets WWC Design Standards without Reservations, Meets WWC Design Standards with Reservations, or Does Not Meet WWC Design Standards.
A related publication that was reviewed alongside the main study of interest.
Study findings for this report.
Based on the direction, magnitude, and statistical significance of the findings within a domain, the WWC characterizes the findings from a study as one of the following: statistically significant positive effects, substantively important positive effects, indeterminate effects, substantively important negative effects, and statistically significant negative effects. For more, please see the WWC Handbook.
The WWC may review studies for multiple purposes, including different reports and re-reviews using updated standards. Each WWC review of this study is listed in the dropdown. Details on any review may be accessed by making a selection from the dropdown list.
- Tier 1 Strong indicates strong evidence of effectiveness,
- Tier 2 Moderate indicates moderate evidence of effectiveness, and
- Tier 3 Promising indicates promising evidence of effectiveness,
as defined in the non-regulatory guidance for ESSA and the regulations for ED discretionary grants (EDGAR Part 77).