
Publications & Products

Search Results: (1-15 of 36 records)

 Pub Number  Title  Date
REL 2023001 Stabilizing subgroup proficiency results to improve identification of low-performing schools
The Every Student Succeeds Act (ESSA) requires states to identify schools with low-performing student subgroups for Targeted Support and Improvement (TSI) or Additional Targeted Support and Improvement (ATSI). Random differences between students’ true abilities and their test scores, also called measurement error, reduce the statistical reliability of the performance measures used to identify schools for these categorizations. Measurement error introduces a risk that the identified schools are unlucky rather than truly low performing. Using data provided by the Pennsylvania Department of Education (PDE), the study team used Bayesian hierarchical modeling to improve the reliability of subgroup proficiency measures, allowing PDE to target the schools and students that most need additional support. PDE plans to incorporate stabilization as a “safe harbor” alternative in its 2022 accountability calculations. The study also shows that Bayesian stabilization produces reliable results for subgroups as small as 10 students—suggesting that states could choose to reduce minimum counts used in subgroup calculations (typically now around 20 students), promoting accountability for all subgroups without increasing random error. Findings could be relevant to states across the country, all of which face the same need to identify schools for TSI and ATSI, and the same tension between accountability and reliability, which Bayesian stabilization could help to resolve.
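A minimal sketch of the kind of stabilization described above, assuming a simple empirical-Bayes (beta-binomial) shrinkage of subgroup proficiency rates toward a statewide prior; the study's actual Bayesian hierarchical model, priors, and data are not reproduced here, and the function name and prior values below are hypothetical placeholders.

```python
# Illustrative only: shrink an observed subgroup proficiency rate toward a
# statewide prior. Small subgroups are pulled strongly toward the prior,
# damping measurement error; large subgroups stay close to their observed rates.
# The prior mean and prior strength below are hypothetical placeholders.

def stabilize_proficiency(n_proficient, n_tested, prior_rate=0.45, prior_strength=20):
    alpha = prior_rate * prior_strength        # beta-prior "successes"
    beta = (1 - prior_rate) * prior_strength   # beta-prior "failures"
    # Posterior mean of the proficiency rate under a beta-binomial model
    return (alpha + n_proficient) / (alpha + beta + n_tested)

# A 10-student subgroup with 2 proficient students is pulled toward the prior;
# a 1,000-student subgroup is left nearly unchanged.
print(stabilize_proficiency(2, 10))      # ~0.37 rather than the noisy 0.20
print(stabilize_proficiency(200, 1000))  # ~0.20
```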
2/27/2023
REL 2023146 Indicators of School Performance in Texas
The School Improvement Division of the Texas Education Agency (TEA) identifies, monitors, and supports low-performing schools. To identify low-performing schools, TEA assigns annual academic accountability ratings to its districts and schools, but these ratings are provided only once per year and are vulnerable to disruptions in the assessment system. Schools whose ratings fall below accountability expectations are considered low-performing.
12/5/2022
NCES 2022028 2019 NAEP High School Transcript Study

The 2019 NAEP High School Transcript Study (HSTS) describes the coursetaking patterns and academic performance of graduates from a national sample of U.S. public and private schools who also took the 2019 NAEP twelfth-grade mathematics and science assessments. This report uses data from the 1990, 2000, 2009, and 2019 NAEP HSTS for coursetaking results and from 2005, 2009, and 2019 for comparisons to NAEP.

The study of high school graduates’ academic performance and coursetaking patterns is based on an analysis of their transcripts and NAEP twelfth-grade mathematics and science assessment results. The HSTS shows trends from 1990, 2000, 2009, and 2019 in grade point averages, course credits earned, curriculum levels, and various coursetaking patterns. The 2019 HSTS uses a new course classification system, the School Courses for the Exchange of Data (SCED), to provide a more detailed breakdown of cross-disciplinary programs such as Career and Technical Education (CTE) and science, technology, engineering, and mathematics (STEM) coursetaking.

The study also compares graduates’ average NAEP scale scores from the twelfth-grade mathematics and science assessments to the academic achievement reported in their transcripts. The linkage of the NAEP twelfth-grade mathematics and science assessments to HSTS provides the opportunity for school leaders, policy makers, and researchers to analyze student performance by a rich set of HSTS and NAEP contextual factors.

3/16/2022
NCES 2021077 2020 Long-Term Trend Reading and Mathematics Assessment Results at Age 9 and Age 13
This report presents the results of the National Assessment of Educational Progress (NAEP) long-term trend assessments in reading and mathematics administered during the 2019–20 school year to 9- and 13-year-old students. Long-term trend assessments were first administered in the early 1970s; results are available for 13 reading assessments dating back to 1971 and 12 mathematics assessments dating back to 1973. This report provides trend results in terms of average scale scores, selected percentiles, and five performance levels. Item maps for each age group illustrate skills demonstrated by students when responding to assessment questions. Scale score results are included for students by selected background characteristics (e.g., race/ethnicity, gender, and grade attended). Overall, the 2020 average scores in reading and mathematics for 13-year-olds were higher than in the earliest assessments but lower than in 2012. Scores for the lowest-performing students (at the 10th percentile) decreased from 2012 at both ages and in both subjects.
10/14/2021
REL 2021226 Identifying Students At Risk Using Prior Performance Versus a Machine Learning Algorithm

This report provides information for administrators in local education agencies who are considering early warning systems to identify at-risk students. Districts use early warning systems to target resources to the most at-risk students and intervene before students drop out. To make the best use of available resources, schools want to ensure that the early warning system accurately identifies the students who need support. The report compares the accuracy of using simple flags based on prior academic problems in school (a prior performance early warning system) with an algorithm that uses a range of in- and out-of-school data to estimate the specific risk of each academic problem for each student in each quarter. Schools can use one or more risk-score cutoffs from the algorithm to create low- and high-risk groups. This study compares a prior performance early warning system to two risk-score cutoff options: a cutoff that identifies the same percentage of students as the prior performance early warning system, and a cutoff that identifies the 10 percent of students most at risk.
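A minimal sketch of the two cutoff options, assuming hypothetical risk scores and prior-performance flags; the report's actual algorithm, features, and data are not shown here.

```python
# Illustrative only: apply the two risk-score cutoff options described above
# to hypothetical data. Scores and flags are randomly generated placeholders.
import numpy as np

rng = np.random.default_rng(0)
risk_scores = rng.random(500)          # hypothetical per-student risk scores (0 to 1)
prior_flags = rng.random(500) < 0.18   # hypothetical prior-performance flags

# Option 1: flag the same share of students as the prior performance system
share_flagged = prior_flags.mean()
matched_cutoff = np.quantile(risk_scores, 1 - share_flagged)
matched_group = risk_scores >= matched_cutoff

# Option 2: flag the 10 percent of students with the highest risk scores
top10_cutoff = np.quantile(risk_scores, 0.90)
top10_group = risk_scores >= top10_cutoff

print(matched_group.sum(), top10_group.sum())
```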

The study finds that the prior performance early warning system and the algorithm using the same-percentage risk score cutoffs are similarly accurate. Both approaches successfully identify most of the students who ultimately are chronically absent, have a low grade point average, or fail a course. In contrast, the algorithm with 10-percent cutoffs is good at targeting the students who are most likely to experience an academic problem; this approach has the advantage in predicting suspensions, which are rarer and harder to predict than the other outcomes. Both the prior performance flags and the algorithm are less accurate when predicting outcomes for students who are Black.

The findings suggest clear tradeoffs between the options. The prior performance early warning system is just as accurate as the algorithm for some purposes and is cheaper and easier to set up, but it does not provide fine-grained information that could be used to identify the students who are at greatest risk. The algorithm can distinguish degrees of risk among students, enabling a district to set cutoffs that vary depending on the prevalence of different outcomes, the harms of over-identifying versus under-identifying students at risk, and the resources available to support interventions.

9/28/2021
REL 2021107 Characteristics and Performance of High School Equivalency Exam Takers in New Jersey
Since 2014 the New Jersey Department of Education has offered three high school equivalency (HSE) exams for nongraduates seeking credentials: the GED, the High School Equivalency Test (HiSET), and the Test Assessing Secondary Completion (TASC). This study used data on exam takers who had been grade 8 students in a New Jersey public school between 2008/09 and 2013/14 and who had attempted at least one HSE exam in New Jersey between March 2014 and December 2018. It analyzed how the characteristics of exam takers differ across exams and from the characteristics of non–exam takers, how the performance of exam takers with similar backgrounds varies, and how a recent reduction in the passing threshold for two of the exams affected passing rates. Among all students who had been grade 8 students in a New Jersey public school during the study years, HSE exam takers completed fewer years of school, were more likely to have been eligible for the national school lunch program in grade 8, and were more likely to identify as Black or Hispanic than non–exam takers. GED takers had higher grade 8 standardized test scores, were more likely to identify as White, and were less likely to have been eligible for the national school lunch program in grade 8 than HiSET and TASC takers. Under the New Jersey Department of Education's original passing thresholds, exam takers in the study sample were more likely to pass the HiSET and TASC than the GED on the first attempt (after grade 8 standardized test scores were controlled for). However, after the reduction in passing thresholds, the first-attempt passing rate was similar across the three exams. Under the new passing thresholds, two-thirds of GED takers and more than half of HiSET and TASC takers passed on the first attempt, and, when all exam attempts are included, about three-quarters of the takers of each exam eventually passed it.
8/23/2021
NCES 2021019 Program for International Student Assessment (PISA) 2018 Public Use File (PUF)

The PISA 2018 Public Use File (PUF) consists of data from the PISA 2018 sample. Statistical treatments were applied to the file to protect respondent confidentiality. The PUF can be accessed from the National Center for Education Statistics website at http://nces.ed.gov/surveys/pisa/datafiles.asp.

For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).

7/8/2021
NCES 2021020 Technical Report and User Guide for the 2016 Program for International Student Assessment (PISA) Young Adult Follow-up Study
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA YAFS 2016, as well as with information on how to access the PISA YAFS 2016 data.
7/8/2021
NCES 2021022 Program for International Student Assessment Young Adult Follow-up Study (PISA YAFS) 2016 Public Use File (PUF)

The PISA YAFS 2016 Public Use File (PUF) consists of data from the PISA YAFS 2016 sample. PISA YAFS was conducted in the United States in 2016 with a sample of young adults (at age 19) who participated in PISA 2012 when they were in high school (at age 15). In PISA YAFS, participants took the Education and Skills Online (ESO) literacy and numeracy assessments, which are based on the Program for the International Assessment of Adult Competencies (PIAAC). The file contains individual-level data, including responses to the background questionnaire and the cognitive assessment, with statistical treatments applied to protect respondent confidentiality.

For more details on the data, please refer to chapter 8 of the PISA YAFS 2016 Technical Report and User Guide (NCES 2021-020).

7/8/2021
NCES 2021047 Program for International Student Assessment (PISA) 2018 Restricted-Use Files (RUF)

The PISA 2018 Restricted-Use File (RUF) consists of restricted-use data from PISA 2018 for the United States. The package includes the data file, a codebook, instructions on how to merge with the U.S. PISA 2018 public-use dataset (NCES 2021-019), and a crosswalk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and the Private School Universe Survey (PSS). Because these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted. For more details on obtaining a restricted-use license, see https://nces.ed.gov/surveys/pisa/datafiles.asp.

For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).

7/8/2021
REL 2021085 Relationship between State Annual School Monitoring Indicators and Outcomes in Massachusetts Low‑Performing Schools
The Massachusetts Department of Elementary and Secondary Education supports low-performing schools through a process that draws on qualitative and quantitative data from monitoring visits. The data are used to produce ratings for 26 turnaround indicators in four turnaround practice areas relating to school leadership, instructional practices, student supports, and school climate. This study analyzed data on school indicator ratings collected during school years 2014/15–2018/19 from 91 low-performing schools, with a focus on the distribution of the ratings among schools during their first year in the monitoring system and on the relationship of ratings to school outcomes. During the first year in which ratings data were available for a school, a majority of schools were in the two highest rating levels for 21 of the 26 indicators. Schools generally had lower rating levels for indicators in the student supports practice area than in the other three practice areas. Ratings for half the indicators were statistically significantly related to better schoolwide student outcomes and had a practically meaningful effect size of .25 or greater, and none was statistically significantly related to worse outcomes. Two indicators in the leadership practice area (school leaders' high expectations for students and staff and trusting relationships among staff) were related to lower chronic absenteeism rates. Ratings for five indicators in the instructional practices area were related to higher student academic growth in English language arts or math; two of these indicators (use of student assessment data to inform classroom instruction and school structures for instructional improvements) were related to higher growth in both English language arts and math. Ratings for four indicators in the student supports practice area (teacher training to identify student needs, research-based interventions for all students, interventions for English learner students, and interventions for students with disabilities) were related to higher student academic growth in English language arts or math. Two indicators in the school climate practice area (schoolwide behavior plans and adult–student relationships) were related to higher student academic growth in English language arts or math or lower chronic absenteeism rate. Eight indicators were not statistically related to any of the outcomes of interest.
5/17/2021
REL 2021048 Creating and Using Performance Assessments: An Online Course for Practitioners
This self-paced, online course provides educators with detailed information on performance assessment. Through five modules, practitioners, instructional leaders, and administrators will learn foundational concepts of assessment literacy and how to develop, score, and use performance assessments. They will also learn about the role of performance assessment within a comprehensive assessment system. Each module will take approximately 30 minutes to complete, with additional time needed to complete the related tasks, such as creating a performance assessment and rubric. Participants will be provided with a certificate of completion upon finishing the course.
11/2/2020
REL 2017226 Growth mindset, performance avoidance, and academic behaviors in Clark County School District
Previous research strongly suggests that beliefs regarding the nature of ability and the payoff to effort (academic mindsets) and the related actions (academic behaviors) play an important role in supporting student success. Less is known about the distribution of these beliefs among teachers and students in different academic contexts. This study examined the distribution of reported academic mindsets and behaviors in Nevada’s Clark County School District. The analysis revealed that most students reported beliefs that are largely consistent with a growth mindset. However, reported beliefs and behaviors differed significantly depending on students' English learner status, race/ethnicity, grade level, and prior achievement. For example, Black and Hispanic students reported lower levels of growth mindset than White students. English learner students reported significantly lower levels of growth mindset and higher levels of performance avoidance than their non-English learner counterparts. Lower achieving students reported significantly lower levels of growth mindset and significantly higher levels of performance avoidance than their higher achieving peers. Teachers reported greater beliefs in growth mindset than students, and their beliefs regarding growth mindset did not, for the most part, vary significantly depending on the characteristics of the students attending their schools.
4/5/2017
REL 2017253 Implementing the extended school day policy in Florida's 300 lowest performing elementary schools
Since 2014, Florida law has required the 300 elementary schools with the lowest reading performance to provide supplemental reading instruction through an extended school day. This study found that in 2014/15, on average, the lowest performing schools were smaller than other elementary schools and served higher proportions of racial/ethnic minority students and students eligible for the federal school lunch program. Schools reported using a variety of strategies to comply with the extended school day policy such as increasing reading instruction time each day, increasing staff, providing professional development for teachers, and providing instruction in the extra hour that differed from instruction during the rest of the day. Increased professional development and curricular and pedagogic changes were identified as indirect benefits of implementation.
3/28/2017
REL 2017267 Exploring district-level expenditure-to-performance ratios
Districts across the nation are seeking ways to increase efficiency by maintaining, if not improving, educational outcomes while using fewer resources. One proxy for school district efficiency is an expenditure-to-performance ratio, for example, the ratio of per-pupil expenditures to student academic performance. Using state education department data from an example state in the Regional Educational Laboratory Northeast & Islands Region, researchers created six different expenditure-to-performance ratios and investigated how districts' inclusion in the highest quartile of district rankings varied according to the expenditure and performance measures used to calculate each ratio. By demonstrating how district rankings vary depending on the ratio examined, this guide provides evidence that state policymakers should carefully select the expenditure and performance measures most relevant to their questions of interest when investigating district efficiency.
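A minimal sketch of one such ratio and the resulting quartile ranking, assuming hypothetical district data; the six ratios, measures, and data used in the guide are not reproduced here, and all column names and values below are placeholders.

```python
# Illustrative only: compute a per-pupil-expenditure-to-performance ratio for
# hypothetical districts and rank them into quartiles. A lower ratio means
# fewer dollars spent per point of average performance (a rough efficiency proxy).
import pandas as pd

districts = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "per_pupil_expenditure": [14200, 11800, 16500, 12900],
    "avg_performance_score": [252.0, 244.5, 249.0, 258.5],
})

districts["expenditure_to_performance"] = (
    districts["per_pupil_expenditure"] / districts["avg_performance_score"]
)
districts["quartile"] = pd.qcut(
    districts["expenditure_to_performance"], 4, labels=[1, 2, 3, 4]
)
print(districts.sort_values("expenditure_to_performance"))
```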
3/22/2017