2006 Research Conference | June 15–16

This conference highlighted the work of invited speakers, independent researchers who have received grant funds from the Institute of Education Sciences, and trainees supported through predoctoral training grants and postdoctoral fellowships. The presentations are those of the authors and do not necessarily represent the views of the U.S. Department of Education or the Institute of Education Sciences.
Hyatt Regency Washington on Capitol Hill
400 New Jersey Avenue, N.W.

First Year Results of a Randomized Trial of a New School Principal Performance Evaluation System

Steven Kimball, University of Wisconsin-Madison
Anthony Milanowski, University of Wisconsin-Madison

Abstract: This presentation highlights the first-year results of a study of a new principal performance evaluation system being conducted in a large Southwestern school district. We hypothesized that the new system would provide better performance feedback, make district expectations clearer, and influence principals' priorities, which in turn would be expected to improve principal performance as defined by the district. We tested these hypotheses by randomly assigning one-half of the principals evaluated by each principal supervisor to be evaluated in SY 2005-06 using the new system and the other half using the old system. Forty principals were assigned to each group. Pre-treatment measures of performance feedback, perceived district expectations, and principal priorities were taken at the beginning of the school year. Two principals under each supervisor were interviewed (n=14) at the beginning, middle, and end of the period to check for treatment fidelity and bleed-over between the groups. Principal supervisors (n=7) were also interviewed to understand how they conducted evaluations for principals in each group. Post-evaluation measures of clarity of performance expectations, feedback quality, and usefulness of evaluation in improving performance were higher for principals under the new system, but sample attrition and survey non-response prevented some differences from reaching significance. No effect on priorities was found. Interviews suggested that the new system was a better formative tool, but that implementation was uneven and that supervisor implementation may be more important than evaluation instrumentation.