

Measurement for School Improvement: The Peril and the Promise

Mid-Atlantic | February 21, 2023


In the wake of a pandemic that caused a nationwide nosedive in student achievement and disproportionate harms to students who were already disadvantaged, good measures of school performance may be more important than ever. States are now identifying low-performing schools under the Every Student Succeeds Act (ESSA) for the first time in three years; bad performance measures can lead to identifying the wrong schools, undermining the efficacy of the accountability system. And good school performance measures are needed not only for formal accountability systems: States and districts need good measures to know which schools and student groups need support the most, and to diagnose the kind of support they need.

School performance measures can be misinterpreted, unreliable, or just wrong.

But performance measures can also lead decisionmakers astray. They can be misinterpreted: Did student proficiency decline due to bad instructional leadership or to something outside the principal's control? They can be unreliable: Did the change in a district's National Assessment of Educational Progress score represent a real change or a statistical fluke? And they can be just wrong: Were school-level and statewide proficiency rates biased due to low test participation in 2021? To help policymakers and educators avoid mistakes in interpreting and using measures, we recently produced an infographic describing some "Dos and Don'ts" of school performance measurement. It's a great little cheat sheet for policymakers—check it out.

In general, school performance indicators need to be valid, reliable, and robust, whether they are used for formal accountability or for lower-stakes diagnostic purposes. Valid indicators actually measure what they say they measure, rather than something unrelated. Reliable indicators are reasonably consistent over time, rather than showing large random changes from one year to the next or one day to the next. Robust indicators resist manipulation, rather than being inflated by clever tricks (such as teaching to the test).

A three-pronged measurement strategy can help

In the January issue of School Administrator magazine, with a framework informed by years of working with states and districts through the REL, I describe how a system of performance indicators that is comprehensive and actionable should include a wide range of measures in each of three categories:

Figure: A system of performance measures comprises outcomes, impacts, and processes. Outcomes: a complete picture of how students are doing across multiple measures and dimensions related to long-term success. Impacts: the school's effect on outcomes over time; examples include value-added measures, median student growth percentiles, and promotion power. Processes: instructional quality, professional climate, student experiences, and other activities that produce the impacts leading to improved student outcomes.
  1. Outcomes that encompass not only basic skills in reading and math, but also the full range of knowledge, skills, and attitudes that schools seek to promote in students to prepare them for work, life, and citizenship
  2. Impacts of the school on those outcomes, using statistical approaches (such as value-added models, student growth percentiles, or promotion power models) that distinguish the school's contribution from the effect of factors outside the school's control (such as family resources)
  3. Processes that open a window on what is happening in the school, thereby helping leaders to identify factors that might be contributing to a school's success or failure and interventions that might help improve the school's impacts on its students. These might include school climate surveys and school inspection reports.
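To make the second category concrete: impact measures like value-added models ask how students at a school perform relative to what their prior achievement would predict. The sketch below is purely illustrative (the function name, the single prior-score predictor, and the use of mean residuals are our simplifying assumptions, not any state's actual model):

```python
import numpy as np

def value_added_estimates(prior, current, school_ids):
    """Illustrative value-added sketch: regress current scores on prior
    scores, then average the residuals within each school. A positive
    mean residual suggests students outperformed their predicted scores."""
    # Fit simple OLS: current = a + b * prior
    X = np.column_stack([np.ones_like(prior), prior])
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)
    residuals = current - X @ coef
    # Each school's estimate is its students' average residual
    return {s: residuals[school_ids == s].mean() for s in np.unique(school_ids)}
```

A production value-added model would control for many more factors outside the school's control and attach uncertainty intervals to each school's estimate; this sketch only shows the core logic of separating a school's contribution from students' starting points.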

REL Mid-Atlantic partnerships are refining school accountability systems in the wake of the COVID-19 pandemic.

REL Mid-Atlantic hosts a community of practice on accountability that has been working with state agencies, districts, and charter-school authorizers (in the Mid-Atlantic region and beyond) to refine and improve their measures of school performance so they are more valid, more reliable, more robust—and more actionable for improvement. We helped Maryland develop and analyze a new measure of school climate to include in its ESSA accountability system. We measured the promotion power of every public high school in the District of Columbia to identify each school's impact on its students' postsecondary readiness. We produced the first measures of student growth in grades K-3 in individual schools across an entire state (in Maryland). We used Bayesian statistical methods to improve the reliability and stability of measures of subgroup performance in schools across Pennsylvania, developing a method that could enhance states' ability to effectively promote equitable outcomes for students. And we are working with New Jersey to test the validity and reliability of its school-level measures of students' growth in English language proficiency.
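The Bayesian idea behind stabilizing subgroup measures can be sketched in a few lines: noisy averages from small subgroups are "shrunk" toward the overall mean, with smaller groups shrunk more. This is a minimal empirical-Bayes illustration under assumed known variances (the function name and the fixed sigma2/tau2 inputs are our simplifications, not the method used in the Pennsylvania work):

```python
import numpy as np

def shrink_toward_mean(group_means, group_sizes, sigma2, tau2):
    """Empirical-Bayes shrinkage sketch: pull each subgroup's observed mean
    toward the overall mean, shrinking smaller (noisier) groups more.
    sigma2 = assumed within-group score variance; tau2 = assumed variance
    of true subgroup means around the overall mean."""
    group_means = np.asarray(group_means, dtype=float)
    group_sizes = np.asarray(group_sizes, dtype=float)
    overall = np.average(group_means, weights=group_sizes)
    # Reliability weight: share of each observed mean's variance that is signal
    w = tau2 / (tau2 + sigma2 / group_sizes)
    return w * group_means + (1 - w) * overall
```

The payoff is stability: a subgroup of five students no longer swings wildly from year to year on the strength of a few test scores, while large subgroups are left essentially unchanged.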

In short: It is easy to say that your agency is data-driven, but harder to make sure you're driving in the right direction. Talk to your REL. We can help.


Brian Gill
Director for REL Mid-Atlantic
