
IES Grant

Title: National Center on Performance Incentives (Policy-NCPI)
Center: NCER Year: 2006
Principal Investigator: Springer, Matthew G. Awardee: Vanderbilt University
Program: Education Research and Development Centers
Award Period: 5 years Award Amount: $10,835,509
Type: Multiple Goals Award Number: R305A060034
Description:

Co-Principal Investigators: Ballou, Dale; Podgursky, Michael

Purpose: The purpose of the National Center on Performance Incentives (NCPI) was to conduct a focused program of education research on the impact of teacher incentives on student and teacher outcomes.

Project Activities: NCPI's primary activities were two randomized field trials to examine the impacts of teacher incentives. The first, the Project on Incentives in Teaching (POINT), examined the impact of individual teacher incentives on student and teacher outcomes in Metropolitan Nashville Public Schools (MNPS) in Tennessee. The second experiment, the Round Rock Pilot Project on Team Incentives (PPTI), was conducted in Round Rock Independent School District (RRISD) in Texas and examined the impact of team-based incentives. In addition, NCPI examined the New York City School-wide Performance Bonus Program (SPBP), the Texas Governor's Educator Excellence Award Programs (GEEAP) and carried out additional research on teacher value-added measures and teacher pensions.

The Project on Incentives in Teaching (POINT): The research team followed 296 volunteer teachers in grades 5 to 8 who taught at least 1 math class and were expected to have at least 10 students take the Tennessee Comprehensive Assessment Program (TCAP) math test from SY 2006–07 through 2008–09. The teachers were randomized into a treatment group and a control group. Treatment teachers were eligible to receive bonuses ranging from $5,000 to $15,000 based on their students' annual gains on the TCAP assessments. The researchers surveyed teachers on their opinions of performance pay and their instructional practices. Teacher attrition, driven largely by teacher mobility, was high (about 50 percent by the third year) but did not vary systematically by assignment group. No evidence was found of manipulation of student assignments to teachers in either the treatment or control group.

The Round Rock Pilot Project on Team Incentives (PPTI): The study consisted of two 1-year randomized experiments to test the effects of teacher bonuses provided to multidisciplinary teams based on their students' test scores. For the 2 years combined, 158 teams comprising 665 teachers taught core subjects to about 17,000 students in grades 6 to 8. Teams were randomly assigned to a treatment or control group within grade. Treatment teams could receive a bonus of $4,500 to $7,500 based on their students' performance on the Texas Assessment of Knowledge and Skills (TAKS) and the Stanford Achievement Test-10 (the latter was used for subjects not covered by the TAKS in a specific grade). The researchers surveyed teachers on their opinions of performance pay and their instructional practices.

The New York City School-Wide Performance Bonus Program (SPBP): Midway into the 2007–08 school year, the New York City Department of Education (NYC DOE) randomly assigned eligible schools to the SPBP, under which schools could earn bonus awards of up to $3,000 per full-time union member if the school met predetermined performance targets. A unique feature was that within-school teams defined how award dollars would be distributed in their school. To examine the program's implementation and effects, the NYC DOE commissioned a partnership led by the RAND Corporation with NCPI to conduct an independent 2-year assessment. The study built on past research and was guided by a theory of action articulated by program leaders. For the 2007–08 to 2009–10 school years, the research team examined student test scores; teacher, school staff, and administrator surveys; and interviews with administrators, staff members, program sponsors, and union and district officials.

The Texas Governor's Educator Excellence Award Programs (GEEAP): NCPI was funded directly by the Texas Education Agency to separately examine the three incentive programs making up the GEEAP: (1) the Governor's Educator Excellence Grants (GEEG), (2) the Texas Educator Excellence Grants (TEEG), and (3) a district-level grant program called the District Awards for Teacher Excellence (DATE).

Key Outcomes: The main findings of this project are as follows:

  • The POINT study found no overall impact of incentives on student achievement. A positive effect of incentives was detected in fifth grade during the second and third years of the experiment; however, the effect did not persist after students left fifth grade. An investigation of instructional practices, participation in professional development, and teacher perceptions showed that treatment teachers differed little from control teachers (Springer et al., 2010; Springer et al., 2012; Ballou et al., 2012).
  • The PPTI study results showed no overall impact of incentives on student achievement or the attitudes and practices of teachers (Springer et al., 2012).
  • The POINT and PPTI studies demonstrated the feasibility of pilot programs rewarding teachers or teams of teachers for increases in student test scores. Both showed that performance pay programs could be implemented without negative impacts on teacher relationships (e.g., diminished collegiality or increased competition) (Springer et al., 2010; Springer et al., 2012; Ballou et al., 2012).
  • The SPBP study team found fairly smooth implementation but no discernible effect on student achievement or school progress report cards. The SPBP had no effects on teacher attitudes, perceptions, or behaviors. The majority of participating schools adopted egalitarian award distribution plans, reflecting a strong preference for equal bonus shares for all staff. It is unclear whether the lack of program effects on intermediate outcomes, such as collaboration and morale, contributed to the lack of achievement effects (Springer & Winters, 2009).

For the components of the GEEAP, the following were found:

  • The GEEG study found inconclusive results for the impacts on student achievement. The study also found a decrease in teacher turnover in the first year of implementation, and the receipt and size of bonus awards strongly influenced teacher turnover. Although implementation success varied across participating schools, principals had an overall positive perception of the program (Springer, 2009a, 2009b; Springer et al., 2007).
  • The TEEG study found no strong evidence of a systematic treatment effect on student achievement gains, and mixed evidence on associations between TEEG plan design features and those gains. The study found no overall effect on teacher turnover, although there was strong evidence that several design features, including the size and receipt of bonus awards, influenced teacher turnover in participating schools. Although implementation success varied across participating schools, most personnel in TEEG schools supported the principle of performance pay. Over time, personnel in schools that remained in the TEEG program tended to have more favorable opinions of performance pay, the impact of TEEG on their schools, workplace collegiality, and principal leadership (Springer et al., 2009; Springer et al., 2008).
  • The DATE study found that students in DATE schools had greater gains on the Texas Assessment of Knowledge and Skills (TAKS) than students in non-DATE schools during the first 2 years of implementation. The gap between TAKS passing rates in DATE and non-DATE schools narrowed, indicating that passing rates in DATE schools were catching up to those in non-DATE schools. The effects on student achievement varied by district approaches to incentive pay. The probability of teacher turnover increased among teachers who did not receive a DATE award and fell among teachers who did. Overall, teachers in DATE schools reported that the incentive pay plans were fair, that the goals targeted by the plans were worthy, and that the correct teachers were identified to receive awards. Teachers did not perceive negative effects from DATE, but neither did they report that the incentive plans were contributing to school improvements. Program effects varied across districts (Springer et al., 2010).

Products and Publications

ERIC Citations: Find available citations in ERIC for this award.

Project Website: National Center on Performance Incentives (vanderbilt.edu)

Select Publications:

Ballou, D. (2009). Test scaling and value-added measurement. Education Finance and Policy, 4(4), 351–383.

Ballou, D., Springer, M. G., McCaffrey, D. F., Lockwood, J. R., Stecher, B., & Hamilton, L. (2012). POINT/CounterPOINT: The view from the trenches of education policy research. Education Finance and Policy, 7(2), 170–202. ERIC Number: ED624372 and EJ971072

Costrell, R. M., & Podgursky, M. (2010). Distribution of benefits in teacher retirement systems and their implications for mobility. Education Finance and Policy, 5(4), 519–557. ERIC Number: EJ902751

DeArmond, M., & Goldhaber, D. (2010a). Scrambling the nest egg: How well do teachers understand their pensions, and what do they think about alternative pension structures? Education Finance and Policy, 5(4), 558–586.

Friedberg, L., & Turner, S. (2010). Labor market effects of pensions and implications for teachers. Education Finance and Policy, 5(4), 463–491.

Hansen, J. S. (2010). An introduction to teacher retirement benefits. Education Finance and Policy, 5(4), 402–437.

Hess, F. M., & Squire, J. P. (2010). "But the pension fund was just 'SITTING' there...": The politics of teacher retirement plans. Education Finance and Policy, 5(4), 587–616.

Koedel, C., & Betts, J. (2010). Value added to what? How a ceiling in the testing instrument influences value-added estimation. Education Finance and Policy, 5(1), 54–81.

Koedel, C., & Betts, J. R. (2011). Does student sorting invalidate value-added models of teacher effectiveness? An extended analysis of the Rothstein critique. Education Finance and Policy, 6(1), 18–42. ERIC Number: EJ933124

Podgursky, M., & Springer, M. G. (2007). Credentials versus performance: Review of the teacher performance pay research. Peabody Journal of Education, 82(4), 551–573.

Springer, M. G., & Taylor, L. L. (2016). Designing incentives for public school teachers: Evidence from a Texas incentive pay program. Journal of Education Finance, 41(3), 344–381. ERIC Number: EJ1168744

Springer, M. G., Pane, J., Le, V., McCaffrey, D., Burns, S. F., Hamilton, L., & Stecher, B. (2012). Team pay for performance: Experimental evidence from the Round Rock Pilot Project on Team Incentives. Educational Evaluation and Policy Analysis, 34(4), 367–390.
