National Center for Education Evaluation and Regional Assistance

Operational Authority, Support, and Monitoring of School Turnaround

The federal School Improvement Grants (SIG) program, to which $3 billion was allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround is also a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4 billion comprehensive education reform grant competition for states. Given the size of these federal investments, in 2010 the Institute of Education Sciences (IES) began a large-scale evaluation of RTT and SIG to better understand the implementation and impacts of these programs. The SIG component, in particular, focuses on a purposive sample of SIG-eligible schools, including (1) a group of schools that received SIG to implement one of four intervention models specified by the U.S. Department of Education and (2) a comparison group of schools from the same districts that were not implementing one of these four intervention models with SIG support. Though the results from this evaluation of SIG are not necessarily generalizable to SIG schools nationwide, they are nonetheless important because they add to the limited knowledge base about the implementation and impacts of SIG-funded school turnaround efforts.

This brief focuses on the implementation of SIG by examining three interrelated levers for school improvement: (1) school operational authority, (2) state and district support for turnaround, and (3) state monitoring of turnaround efforts. SIG principles emphasize that school leaders should be given autonomy over matters such as staffing, calendars, and budgeting, while also being appropriately supported and monitored by states and districts to ensure progress. It is thus of interest to document the actual policies and practices related to these three levers, and to examine whether they differ between study schools implementing a SIG-funded intervention model and comparison schools not implementing one. Findings are based on spring 2012 survey responses from 450 school administrators and interviews with administrators in the 60 districts and 21 of the 22 states where these schools are located. Key findings include the following:

  • Operational authority. The most common area in which schools implementing and not implementing a SIG-funded intervention model reported having primary responsibility was their budgets (55 percent and 54 percent, respectively). Fewer than half of the schools in both groups reported primary responsibility in the other seven operational areas examined, such as student discipline policies (38 percent and 35 percent), staffing (37 percent and 46 percent), assessment policies (25 percent and 21 percent), and curriculum (18 percent and 16 percent). Schools implementing a SIG-funded intervention model were no more likely than non-implementing schools to report having primary responsibility, except in two areas: (1) setting professional development requirements (53 percent versus 39 percent) and (2) determining the length of the school day (19 percent versus 12 percent).
  • Support for improvement. The most common technical assistance and other supports for turnaround that states reported providing related to developing school improvement plans (20 of the 21 states interviewed) and identifying effective improvement strategies (19 of the 21 states interviewed). These two supports were also the ones districts and schools most frequently reported receiving. Schools implementing a SIG-funded intervention model were no more likely than non-implementing schools to report receiving supports in nine of twelve areas examined, including working with parents, school improvement planning, and recruiting or retaining teachers. The three exceptions were: (1) identifying turnaround strategies (82 percent versus 65 percent), (2) identifying effective instructional leaders (61 percent versus 51 percent), and (3) supporting data use (71 percent versus 40 percent).
  • State monitoring. All 21 of the states interviewed reported being responsible for monitoring low-performing schools, although just 13 of them reported that districts were also responsible. State monitoring almost universally took the form of analyzing student data (21 states) and conducting site visits (20 states), and to a lesser extent involved discussions with parents and community members (16 states) and surveys of school staff (12 states). Most states also reported that monitoring served not only accountability purposes but also formative purposes, such as assessing implementation fidelity (14 states) and identifying additional supports for schools (14 states). These monitoring activities may help states determine whether stronger action is needed, such as taking over failing schools, which 11 states reported having the authority to do in the 2011–2012 school year, or placing low-performing schools in a special district focused on school improvement, which 5 states reported having the authority to do.

View, download, and print the report as a PDF file (1.7 MB).