In the wake of a pandemic that caused a nationwide nosedive in student achievement and disproportionately harmed students who were already disadvantaged, good measures of school performance may be more important than ever. States are now identifying low-performing schools under the Every Student Succeeds Act (ESSA) for the first time in three years; bad performance measures can lead to identifying the wrong schools, undermining the efficacy of the accountability system. And good school performance measures are needed not only for formal accountability systems: States and districts need good measures to know which schools and student groups most need support, and to diagnose the kind of support they need.
But performance measures can also lead decisionmakers astray. They can be misinterpreted: Did student proficiency decline due to bad instructional leadership or to something outside the principal's control? They can be unreliable: Did the change in a district's National Assessment of Educational Progress score represent a real change or a statistical fluke? And they can be just wrong: Were school-level and statewide proficiency rates biased due to low test participation in 2021? To help policymakers and educators avoid mistakes in interpreting and using measures, we recently produced an infographic describing some "Dos and Don'ts" of school performance measurement. It's a great little cheat sheet for policymakers—check it out.
In general, school performance indicators need to be valid, reliable, and robust, whether they are used for formal accountability or for lower-stakes diagnostic purposes. Valid indicators actually measure what they say they measure, rather than something unrelated. Reliable indicators are reasonably consistent over time, rather than showing large random changes from one year to the next or one day to the next. Robust indicators resist manipulation, rather than being inflated by clever tricks (such as teaching to the test).
In the January issue of School Administrator magazine, drawing on a framework informed by years of working with states and districts through the REL, I describe how a comprehensive and actionable system of performance indicators should include a wide range of measures spanning three categories.
REL Mid-Atlantic hosts a community of practice on accountability that has been working with state agencies, districts, and charter-school authorizers (in the Mid-Atlantic region and beyond) to refine and improve their measures of school performance so they are more valid, more reliable, more robust—and more actionable for improvement. We helped Maryland develop and analyze a new measure of school climate to include in its ESSA accountability system. We measured the promotion power of every public high school in the District of Columbia to identify each school's impact on its students' postsecondary readiness. We produced the first measures of student growth in grades K-3 in individual schools across an entire state (in Maryland). We used Bayesian statistical methods to improve the reliability and stability of measures of subgroup performance in schools across Pennsylvania, developing a method that could enhance states' ability to effectively promote equitable outcomes for students. And we are working with New Jersey to test the validity and reliability of its school-level measures of students' growth in English language proficiency.
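To give a flavor of how Bayesian methods can stabilize subgroup measures, here is a minimal sketch of the general idea of "shrinkage": a noisy estimate for a small student group is pulled toward the overall mean in proportion to its unreliability. This is an illustrative simplification, not REL Mid-Atlantic's actual model, and the numbers (variances, group sizes) are hypothetical.

```python
# Illustrative sketch of empirical Bayes shrinkage (not the actual REL method).
# A group's observed proficiency rate is blended with the overall mean,
# weighted by how reliable the group's own estimate is.

def shrink(group_mean, group_n, overall_mean, within_var, between_var):
    """Shrink a group's observed mean toward the overall mean.

    within_var:  variance of individual outcomes within a group (noise)
    between_var: variance of true group means across groups (signal)
    """
    sampling_var = within_var / group_n                   # noise in the group mean
    weight = between_var / (between_var + sampling_var)   # reliability, in [0, 1]
    return weight * group_mean + (1 - weight) * overall_mean

# A subgroup of 5 students with an extreme observed rate is pulled strongly
# toward the overall mean of 0.60; a subgroup of 500 barely moves.
small = shrink(group_mean=0.20, group_n=5,   overall_mean=0.60,
               within_var=0.24, between_var=0.01)
large = shrink(group_mean=0.20, group_n=500, overall_mean=0.60,
               within_var=0.24, between_var=0.01)
print(round(small, 3), round(large, 3))  # → 0.531 0.218
```

The design intuition: a five-student subgroup's year-to-year swings are mostly statistical noise, so its shrunken estimate stays near the overall mean; a 500-student subgroup's estimate is trustworthy on its own and is left nearly unchanged. That is what makes subgroup comparisons across schools more stable.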
In short: It is easy to say that your agency is data-driven, but harder to make sure you're driving in the right direction. Talk to your REL. We can help.
Director for REL Mid-Atlantic