IES Grant
Title: National Center for Performance Incentives (Policy-NCPI)
Center: NCER
Year: 2006
Principal Investigator: Springer, Matthew G.
Awardee: Vanderbilt University
Program: Education Research and Development Centers
Award Period: 5 years
Award Amount: $10,835,509
Type: Multiple Goals
Award Number: R305A060034
Description:

Co-Principal Investigators: Ballou, Dale; Podgursky, Michael

Purpose: The purpose of the National Center on Performance Incentives (NCPI) was to conduct a focused program of education research on the impact of teacher incentives on student and teacher outcomes.

Project Activities: NCPI's primary activities were two randomized field trials examining the impacts of teacher incentives. The first, the Project on Incentives in Teaching (POINT), examined the impact of individual teacher incentives on student and teacher outcomes in Metropolitan Nashville Public Schools (MNPS) in Tennessee. The second, the Round Rock Pilot Project on Team Incentives (PPTI), was conducted in Round Rock Independent School District (RRISD) in Texas and examined the impact of team-based incentives. In addition, NCPI examined the New York City School-Wide Performance Bonus Program (SPBP) and the Texas Governor's Educator Excellence Award Programs (GEEAP), and carried out additional research on teacher value-added measures and teacher pensions.

The Project on Incentives in Teaching (POINT): From the 2006-07 through 2008-09 school years, the research team followed 296 volunteer teachers in grades 5 to 8 who taught at least one math class and were expected to have at least 10 students take the Tennessee Comprehensive Assessment Program (TCAP) math test. The teachers were randomized into a treatment group and a control group. Treatment teachers were eligible to receive bonuses ranging from $5,000 to $15,000 based on their students' annual gains on the TCAP assessments. The researchers also surveyed teachers on their opinions of performance pay and their instructional practices. Teacher attrition was high (about 50 percent by the third year, largely due to teacher mobility) but did not vary systematically by assignment group. No evidence was found of manipulation of student assignments to teachers in either the treatment or control group.
The Round Rock Pilot Project on Team Incentives (PPTI): The study consisted of two 1-year randomized experiments testing the effects of teacher bonuses awarded to multidisciplinary teams based on their students' test scores. Across the two years combined, 158 teams comprising 665 teachers taught core subjects to about 17,000 students in grades 6 to 8. Teams were randomly assigned to a treatment or control group within grade. Treatment teams could receive a bonus of $4,500 to $7,500 based on their students' performance on the Texas Assessment of Knowledge and Skills (TAKS) and the Stanford Achievement Test-10 (the latter was used for subjects not covered by the TAKS in a given grade). The researchers surveyed teachers on their opinions of performance pay and their instructional practices.

The New York City School-Wide Performance Bonus Program (SPBP): Midway through the 2007-08 school year, the New York City Department of Education (NYC DOE) randomly assigned eligible schools to the SPBP, under which schools could earn bonus awards of up to $3,000 per full-time union member if the school met predetermined performance targets. A unique feature was that within-school teams defined how award dollars would be distributed in their school. To examine the program's implementation and effects, the NYC DOE tasked a RAND Corporation-led partnership with NCPI to conduct an independent 2-year study. The study built on past research and was guided by a theory of action articulated by program leaders. For the 2007-08 through 2009-10 school years, the research team examined student test scores; surveys of teachers, school staff, and administrators; and interviews with administrators, staff members, program sponsors, and union and district officials.
The Texas Governor's Educator Excellence Award Programs (GEEAP): NCPI was funded directly by the Texas Education Agency to separately examine the three incentive programs making up the GEEAP: (1) the Governor's Educator Excellence Grants (GEEG), (2) the Texas Educator Excellence Grants (TEEG), and (3) a district-level grant program, the District Awards for Teacher Excellence (DATE).

Key Outcomes: The main findings of this project are as follows:
For the components of the GEEAP, the following were found:
Products and Publications

ERIC Citations: Find available citations in ERIC for this award.

Project Website: National Center on Performance Incentives (vanderbilt.edu)

Select Publications:

Ballou, D. (2009). Test scaling and value-added measurement. Education Finance and Policy, 4(4), 351-383.

Ballou, D., Springer, M. G., McCaffrey, D. F., Lockwood, J. R., Stecher, B., & Hamilton, L. (2012). POINT/CounterPOINT: The view from the trenches of education policy research. Education Finance and Policy, 7(2), 170-202. ERIC Number: ED624372 and EJ971072

Costrell, R. M., & Podgursky, M. (2010). Distribution of benefits in teacher retirement systems and their implications for mobility. Education Finance and Policy, 5(4), 519-557. ERIC Number: EJ902751

DeArmond, M., & Goldhaber, D. (2010). Scrambling the nest egg: How well do teachers understand their pensions, and what do they think about alternative pension structures? Education Finance and Policy, 5(4), 558-586.

Friedberg, L., & Turner, S. (2010). Labor market effects of pensions and implications for teachers. Education Finance and Policy, 5(4), 463-491.

Hansen, J. S. (2010). An introduction to teacher retirement benefits. Education Finance and Policy, 5(4), 402-437.

Hess, F. M., & Squire, J. P. (2010). "But the pension fund was just 'SITTING' there...": The politics of teacher retirement plans. Education Finance and Policy, 5(4), 587-616.

Koedel, C., & Betts, J. (2010). Value added to what? How a ceiling in the testing instrument influences value-added estimation. Education Finance and Policy, 5(1), 54-81.

Koedel, C., & Betts, J. R. (2011). Does student sorting invalidate value-added models of teacher effectiveness? An extended analysis of the Rothstein critique. Education Finance and Policy, 6(1), 18-42. ERIC Number: EJ933124

Podgursky, M., & Springer, M. G. (2007). Credentials versus performance: Review of the teacher performance pay research. Peabody Journal of Education, 82(4), 551-573.

Springer, M. G., & Taylor, L. L. (2016). Designing incentives for public school teachers: Evidence from a Texas incentive pay program. Journal of Education Finance, 41(3), 344-381. ERIC Number: EJ1168744

Springer, M. G., Pane, J., Le, V., McCaffrey, D., Burns, S. F., Hamilton, L., & Stecher, B. (2012). Team pay for performance: Experimental evidence from the Round Rock pilot project on team incentives. Educational Evaluation and Policy Analysis, 34(4), 367-390.