Report | Descriptive Study

Stabilizing School Performance Indicators in New Jersey to Reduce the Effect of Random Error

REL Mid-Atlantic
Author(s): Morgan Rosendahl, Brian Gill, Jennifer Starling
Publication date: October 2024

Summary

The Every Student Succeeds Act of 2015 requires states to use a variety of indicators, including standardized tests and attendance records, to designate schools for support and improvement based on schoolwide performance and the performance of groups of students within schools. Schoolwide and group-level performance indicators are also diagnostically relevant for district-level and school-level decisionmaking outside the formal accountability context. Like all measurements, performance indicators are subject to measurement error, with some having more random error than others. Measurement error can have an outsized effect for smaller groups of students, rendering their measured performance unreliable, which can lead to misidentification of groups with the greatest needs. Many states address the reliability problem by excluding from accountability student groups smaller than an established threshold, but this approach sacrifices equity, which requires counting students in all relevant groups.
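
To see why random error looms larger for small groups, consider a quick illustration (not drawn from the study; the group sizes and the 50 percent true proficiency rate below are assumed purely for illustration). The binomial standard error of a measured proficiency rate shrinks with the square root of group size, so a group of 10 students carries several times the random error of a group of 400.

```python
import math

# Binomial standard error of a measured proficiency rate, sqrt(p*(1-p)/n),
# for an assumed true rate of 50 percent across illustrative group sizes.
p = 0.5
for n in (10, 30, 100, 400):
    se = math.sqrt(p * (1 - p) / n)
    print(f"group size {n:3d}: standard error = {se:.3f} ({se * 100:.1f} percentage points)")
```

At these assumed values, the measured rate for a 10-student group can easily swing by 15 percentage points or more from year to year by chance alone, which is the unreliability problem the minimum-group-size thresholds are meant to avoid.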

With the aim of improving reliability, particularly for small groups of students, this study applied a stabilization model called Bayesian hierarchical modeling to group-level data (with groups assigned according to demographic designations) within schools in New Jersey. Stabilization substantially improved the reliability of test-based indicators, including proficiency rates and median student growth percentiles. The stabilization model used in this study was less effective for non-test-based indicators, such as chronic absenteeism and graduation rate, for several reasons related to their statistical properties. When stabilization is applied to the indicators best suited for it (such as proficiency and growth), it leads to substantial changes in the lists of schools designated for support and improvement. These results indicate that, applied correctly, stabilization can increase the reliability of performance indicators for processes using these indicators, simultaneously improving accuracy and equity.
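
The report does not publish its model code, so the sketch below is only a minimal empirical-Bayes stand-in for the Bayesian hierarchical approach described above, using made-up counts of proficient students for one student group across several schools. It fits a common Beta prior across groups by the method of moments and shrinks each group's raw proficiency rate toward the overall mean, with the smallest groups pulled furthest toward it, which is the stabilization behavior the summary describes.

```python
import numpy as np

# Hypothetical group sizes and counts of proficient students for the same
# student group across eight schools (illustrative values, not study data).
n_students = np.array([12, 8, 45, 120, 30, 6, 200, 15])
n_proficient = np.array([5, 2, 20, 66, 12, 1, 118, 9])

raw_rates = n_proficient / n_students

# Method-of-moments fit of a Beta(alpha, beta) prior across groups
# (a simple empirical-Bayes stand-in for a full Bayesian hierarchical model;
# assumes the across-group variance is smaller than mean*(1-mean)).
mean_rate = raw_rates.mean()
var_rate = raw_rates.var(ddof=1)
common = mean_rate * (1 - mean_rate) / var_rate - 1
alpha, beta = mean_rate * common, (1 - mean_rate) * common

# Posterior mean shrinks each raw rate toward the overall mean; groups with
# little data (small n) are pulled more than groups with a lot of data.
stabilized = (n_proficient + alpha) / (n_students + alpha + beta)

for n, raw, stab in zip(n_students, raw_rates, stabilized):
    print(f"n={n:4d}  raw={raw:.2f}  stabilized={stab:.2f}")
```

In the printout, the smallest groups move the most from their raw rates while the largest groups are nearly unchanged, which is how partial pooling trades a little bias for a large reduction in random error.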


Tags

Academic Achievement, Data and Assessments
