Search Results: (1-15 of 33 records)
|NCES 2021077||2020 Long-Term Trend Reading and Mathematics Assessment Results at Age 9 and Age 13
This report presents the results of the National Assessment of Educational Progress (NAEP) long-term trend assessments in reading and mathematics administered during the 2019–20 school year to 9- and 13-year-old students. Long-term trend assessments were first administered in the early 1970s; results are available for 13 reading assessments dating back to 1971 and 12 mathematics assessments dating back to 1973. This report provides trend results in terms of average scale scores, selected percentiles, and five performance levels. Item maps for each age group illustrate skills demonstrated by students when responding to assessment questions. Scale score results are included for students by selected background characteristics (e.g., race/ethnicity, gender, and grade attended). Overall, the 2020 average scores in reading and mathematics for 13-year-olds were higher than those on the earliest assessments but lower than in 2012. Scores for the lowest-performing students (at the 10th percentile) decreased from 2012 at both ages and in both subjects.
|REL 2021226||Identifying Students At Risk Using Prior Performance Versus a Machine Learning Algorithm
This report provides information for administrators in local education agencies who are considering early warning systems to identify at-risk students. Districts use early warning systems to target resources to the most at-risk students and intervene before students drop out. Schools want to ensure that the early warning system accurately identifies the students who need support, to make the best use of available resources. The report compares the accuracy of simple flags based on prior academic problems in school (a prior performance early warning system) with that of an algorithm that uses a range of in- and out-of-school data to estimate the specific risk of each academic problem for each student in each quarter. Schools can use one or more risk-score cutoffs from the algorithm to create low- and high-risk groups. This study compares a prior performance early warning system to two risk-score cutoff options: a cutoff that identifies the same percentage of students as the prior performance early warning system, and a cutoff that identifies the 10 percent of students most at risk.
The study finds that the prior performance early warning system and the algorithm using the same-percentage risk-score cutoffs are similarly accurate. Both approaches successfully identify most of the students who ultimately are chronically absent, have a low grade point average, or fail a course. In contrast, the algorithm with 10-percent cutoffs is good at targeting the students who are most likely to experience an academic problem; this approach has an advantage in predicting suspensions, which are rarer and harder to predict than the other outcomes. Both the prior performance flags and the algorithm are less accurate when predicting outcomes for students who are Black.
The findings suggest clear tradeoffs between the options. The prior performance early warning system is just as accurate as the algorithm for some purposes and is cheaper and easier to set up, but it does not provide fine-grained information that could be used to identify the students who are at greatest risk. The algorithm can distinguish degrees of risk among students, enabling a district to set cutoffs that vary depending on the prevalence of different outcomes, the harms of over-identifying versus under-identifying students at risk, and the resources available to support interventions.
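The two cutoff options described above can be sketched in code. This is a hypothetical illustration only: the student IDs, risk scores, and the 40 percent share matched to the prior performance system are invented and do not come from the report.

```python
# Hypothetical sketch of applying the two risk-score cutoff options to
# algorithm output. All scores and thresholds below are invented.

def flag_students(risk_scores, cutoff):
    """Return the set of student IDs whose risk score meets the cutoff."""
    return {sid for sid, score in risk_scores.items() if score >= cutoff}

def top_percent_cutoff(risk_scores, percent):
    """Find the score threshold that flags the given percent of students."""
    scores = sorted(risk_scores.values(), reverse=True)
    k = max(1, round(len(scores) * percent / 100))
    return scores[k - 1]

# Toy risk scores (estimated probability of an academic problem this quarter).
risk = {"s01": 0.82, "s02": 0.35, "s03": 0.67, "s04": 0.12,
        "s05": 0.91, "s06": 0.48, "s07": 0.26, "s08": 0.73,
        "s09": 0.55, "s10": 0.09}

# Option 1: match the share of students a prior performance system would
# flag (assumed here to be 40 percent).
same_share = flag_students(risk, top_percent_cutoff(risk, 40))

# Option 2: flag only the 10 percent of students most at risk.
highest_risk = flag_students(risk, top_percent_cutoff(risk, 10))
```

Because the second option uses a stricter threshold, the highest-risk group is always a subset of the same-share group, which is what lets a district reserve its most intensive interventions for it.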
|REL 2021107||Characteristics and Performance of High School Equivalency Exam Takers in New Jersey
Since 2014 the New Jersey Department of Education has offered three high school equivalency (HSE) exams for nongraduates seeking credentials: the GED, the High School Equivalency Test (HiSET), and the Test Assessing Secondary Completion (TASC). This study used data on exam takers who had been grade 8 students in a New Jersey public school between 2008/09 and 2013/14 and who had attempted at least one HSE exam in New Jersey between March 2014 and December 2018. It analyzed how the characteristics of exam takers differ across exams and from the characteristics of non–exam takers, how the performance of exam takers with similar backgrounds varies, and how a recent reduction in the passing threshold for two of the exams affected passing rates. Among all students who had been grade 8 students in a New Jersey public school during the study years, HSE exam takers completed fewer years of school, were more likely to have been eligible for the national school lunch program in grade 8, and were more likely to identify as Black or Hispanic than non–exam takers. GED takers had higher grade 8 standardized test scores, were more likely to identify as White, and were less likely to have been eligible for the national school lunch program in grade 8 than HiSET and TASC takers. Under the New Jersey Department of Education's original passing thresholds, exam takers in the study sample were more likely to pass the HiSET and TASC than the GED on the first attempt (after grade 8 standardized test scores were controlled for). However, after the reduction in passing thresholds, the first-attempt passing rate was similar across the three exams. Under the new passing thresholds, two-thirds of GED takers and more than half of HiSET and TASC takers passed on the first attempt, and, when all exam attempts are included, three-quarters of the takers of each exam eventually passed.
|NCES 2021019||Program for the International Student Assessment (PISA) 2018 Public Use File (PUF)
The PISA 2018 Public Use File (PUF) consists of data from the PISA 2018 sample. Statistical treatments were applied to the data to protect respondent confidentiality. The PUF can be accessed from the National Center for Education Statistics website at http://nces.ed.gov/surveys/pisa/datafiles.asp.
For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
|NCES 2021020||Technical Report and User Guide for the 2016 Program for International Student Assessment (PISA) Young Adult Follow-up Study
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA YAFS 2016, as well as with information on how to access the PISA YAFS 2016 data.
|NCES 2021022||Program for the International Student Assessment Young Adult Follow-up Study (PISA YAFS) 2016 Public Use File (PUF)
The PISA YAFS 2016 Public Use File (PUF) consists of data from the PISA YAFS 2016 sample. It contains data for individuals, including responses to the background questionnaire and the cognitive assessment. Statistical treatments were applied to the data to protect respondent confidentiality.
For more details on the data, please refer to chapter 8 of the PISA YAFS 2016 Technical Report and User Guide (NCES 2021-020).
|NCES 2021047||Program for the International Student Assessment (PISA) 2018 Restricted-Use Files (RUF)
The PISA 2018 Restricted Use File (RUF) consists of restricted-use data from PISA 2018 for the United States. The package includes the data file, a codebook, instructions on how to merge with the U.S. PISA 2018 public-use dataset (NCES 2021-019), and a crosswalk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and Private School Survey (PSS). Because these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted. See https://nces.ed.gov/surveys/pisa/datafiles.asp for details on obtaining a restricted-use license.
For more details on the data, please refer to chapter 9 of the PISA 2018 Technical Report and User Guide (NCES 2021-011).
|REL 2021085||Relationship between State Annual School Monitoring Indicators and Outcomes in Massachusetts Low‑Performing Schools
The Massachusetts Department of Elementary and Secondary Education supports low-performing schools through a process that draws on qualitative and quantitative data from monitoring visits. The data are used to produce ratings for 26 turnaround indicators in four turnaround practice areas relating to school leadership, instructional practices, student supports, and school climate. This study analyzed data on school indicator ratings collected during school years 2014/15–2018/19 from 91 low-performing schools, with a focus on the distribution of the ratings among schools during their first year in the monitoring system and on the relationship of ratings to school outcomes. During the first year in which ratings data were available for a school, a majority of schools were in the two highest rating levels for 21 of the 26 indicators. Schools generally had lower rating levels for indicators in the student supports practice area than in the other three practice areas. Ratings for half the indicators were statistically significantly related to better schoolwide student outcomes and had a practically meaningful effect size of .25 or greater, and none was statistically significantly related to worse outcomes. Two indicators in the leadership practice area (school leaders' high expectations for students and staff and trusting relationships among staff) were related to lower chronic absenteeism rates. Ratings for five indicators in the instructional practices area were related to higher student academic growth in English language arts or math; two of these indicators (use of student assessment data to inform classroom instruction and school structures for instructional improvements) were related to higher growth in both English language arts and math. 
Ratings for four indicators in the student supports practice area (teacher training to identify student needs, research-based interventions for all students, interventions for English learner students, and interventions for students with disabilities) were related to higher student academic growth in English language arts or math. Two indicators in the school climate practice area (schoolwide behavior plans and adult–student relationships) were related to higher student academic growth in English language arts or math or lower chronic absenteeism rate. Eight indicators were not statistically related to any of the outcomes of interest.
|REL 2021048||Creating and Using Performance Assessments: An Online Course for Practitioners
This self-paced, online course provides educators with detailed information on performance assessment. Through five modules, practitioners, instructional leaders, and administrators will learn foundational concepts of assessment literacy and how to develop, score, and use performance assessments. They will also learn about the role of performance assessment within a comprehensive assessment system. Each module will take approximately 30 minutes to complete, with additional time needed to complete the related tasks, such as creating a performance assessment and rubric. Participants will be provided with a certificate of completion upon finishing the course.
|REL 2017226||Growth mindset, performance avoidance, and academic behaviors in Clark County School District
Previous research strongly suggests that beliefs regarding the nature of ability and the payoff to effort (academic mindsets) and the related actions (academic behaviors) play an important role in supporting student success. Not much is known about the distribution of these beliefs among teachers and students in different academic contexts. This study examined the distribution of reported academic mindsets and behaviors in Nevada's Clark County School District. The analysis revealed that most students reported beliefs that are largely consistent with a growth mindset. However, reported beliefs and behaviors differed significantly depending on students' English learner status, race/ethnicity, grade level, and prior achievement. For example, Black and Hispanic students reported lower levels of growth mindset than White students. English learner students reported significantly lower levels of growth mindset and higher levels of performance avoidance than their non-English learner counterparts. Lower achieving students reported significantly lower levels of growth mindset and significantly higher levels of performance avoidance than their higher achieving peers. Teachers reported greater beliefs in growth mindset than students, and their beliefs regarding growth mindset did not, for the most part, vary significantly depending on the characteristics of the students attending their schools.
|REL 2017253||Implementing the extended school day policy in Florida's 300 lowest performing elementary schools
Since 2014, Florida law has required the 300 elementary schools with the lowest reading performance to provide supplemental reading instruction through an extended school day. This study found that in 2014/15, on average, the lowest performing schools were smaller than other elementary schools and served higher proportions of racial/ethnic minority students and students eligible for the federal school lunch program. Schools reported using a variety of strategies to comply with the extended school day policy, such as increasing reading instruction time each day, increasing staff, providing professional development for teachers, and providing instruction in the extra hour that differed from instruction during the rest of the day. Increased professional development and curricular and pedagogic changes were identified as indirect benefits of implementation.
|REL 2017267||Exploring district-level expenditure-to-performance ratios
Districts across the nation are seeking ways to increase efficiency by maintaining, if not improving, educational outcomes using fewer resources. One proxy for school district efficiency is an expenditure-to-performance ratio, for example a ratio of per pupil expenditures to student academic performance. Using state education department data from an example state in the Regional Educational Laboratory Northeast & Islands Region, researchers created six different expenditure-to-performance ratios and investigated how districts' inclusion in the highest quartile of district rankings varied according to the expenditure and performance measures used to calculate each ratio. By demonstrating the variability in district rankings depending on the ratio being examined, this study provides states and districts with evidence to suggest that state policymakers should carefully select the expenditure and performance measures most relevant to their questions of interest when investigating district efficiency.
|REL 2017250||How well does high school grade point average predict college performance by student urbanicity and timing of college entry?
This report examines how well high school GPA and college entrance exams predict college grades for particular subgroups of students who enrolled directly in college math and English in the University of Alaska system over a four-year period. The report builds on a previous Regional Educational Laboratory Northwest study and examines whether high school GPA is less predictive for certain groups of students, such as students who come from different parts of the state or recent high school graduates versus older students. This study used regression analysis to assess the extent to which high school GPA and test scores predict college grades. Regressions were estimated separately for English and math course grades and within each subject area for students who took the SAT, students who took the ACT, and students who took ACCUPLACER. Overall, high school GPA surpassed test scores in explaining variance in college course grades regardless of where students were from in Alaska. High school GPA explained 9-18 percent of the variance in course grades for urban students, while test scores explained 1-5 percent. Similarly, high school GPA explained 7-21 percent of the variance in course grades for rural students, while test scores explained 0-3 percent. High school GPA was also more predictive of college course performance for students who directly entered college from high school compared to those who delayed entry. These findings provide evidence of the predictive power of high school GPA in explaining the readiness of college students for college English and math across different groups of students. Secondary and postsecondary stakeholders can use these findings to engage in conversations regarding whether and how to use high school grade point average as part of the placement process.
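The study's core comparison, the share of variance in college grades explained by each predictor, is the R-squared of a simple regression. A minimal sketch follows; the six-student dataset is invented for illustration and does not resemble the University of Alaska records the report actually analyzed.

```python
# Illustrative sketch: R-squared in college course grades from a single
# predictor, computed from scratch. The tiny dataset is invented.

def r_squared(x, y):
    """R-squared from a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Toy data: high school GPA, standardized entrance-exam score, college grade.
hs_gpa     = [2.1, 2.8, 3.0, 3.4, 3.7, 3.9]
exam_score = [-0.5, 0.8, -0.2, 0.3, 1.1, 0.4]
col_grade  = [1.8, 2.6, 2.9, 3.1, 3.6, 3.8]

# The study's pattern: GPA explains far more variance than test scores.
r2_gpa  = r_squared(hs_gpa, col_grade)
r2_test = r_squared(exam_score, col_grade)
```

With these invented numbers GPA explains most of the variance while the exam score explains much less, mirroring (but exaggerating) the 9-18 percent versus 1-5 percent gap the report finds.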
|REL 2017179||A Guide to Calculating District Expenditure-to-Performance Ratios Using Publicly Available Data
Districts across the nation are seeking ways to increase efficiency by maintaining, if not improving, educational outcomes using fewer resources. One measure that is sometimes used as a proxy for school district efficiency is an expenditure-to-performance ratio, for example a ratio of per pupil expenditures to student academic performance. This guide shows states and districts how to use publicly available data about district-level expenditures and student academic performance to create six expenditure-to-performance ratios. By illustrating the steps needed to calculate different expenditure-to-performance ratios, the guide also provides states and districts with a straightforward strategy for exploring how conclusions about district efficiency may vary, sometimes substantially, depending on which types of expenditures and which measures of performance are considered. The guide is based on a recent Regional Educational Laboratory (REL) Northeast and Islands project conducted for the Northeast Rural Districts Research Alliance and uses state education department data from one state in the REL Northeast and Islands region. Through the illustration of the steps necessary for calculating expenditure-to-performance ratios, the guide provides states and districts with a set of steps they can use to explore districts' resource use. Particularly given the descriptive nature of the expenditure-to-performance ratios, the guide also summarizes both the implications for and the limitations of their use.
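The guide's central calculation can be sketched briefly. The district names, expenditures, enrollments, and scores below are hypothetical stand-ins for the publicly available data the guide describes, and the single ratio shown is one of the six the guide covers.

```python
# Rough sketch of one expenditure-to-performance ratio and a district
# ranking built from it. All figures below are invented.

def per_pupil_ratio(total_expenditure, enrollment, performance_score):
    """Dollars spent per pupil for each point of performance."""
    return (total_expenditure / enrollment) / performance_score

districts = {
    # name: (total expenditure in $, enrollment, average assessment score)
    "District A": (12_000_000, 900, 78.0),
    "District B": (20_000_000, 1_400, 82.0),
    "District C": (7_500_000, 600, 71.0),
    "District D": (15_000_000, 1_100, 85.0),
}

ratios = {name: per_pupil_ratio(*vals) for name, vals in districts.items()}

# A lower ratio means fewer dollars per point of performance, a rough
# (and, as the guide cautions, purely descriptive) efficiency proxy.
ranked = sorted(ratios, key=ratios.get)
```

Swapping in a different expenditure measure (e.g., instructional spending only) or performance measure changes `ratios` and can reorder `ranked`, which is exactly the variability in district rankings the guide asks states and districts to explore.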
|REL 2017212||How are middle school climate and academic performance related across schools and over time?
The purpose of this study was to examine the relationship between school climate and academic performance in two different ways: (1) by comparing the academic performance of different schools with different levels of school climate and (2) by examining how changes in a school's climate were associated with changes in its students' academic achievement. To examine how school climate and academic performance are related, this study analyzed grade 7 student data from 2004/05 to 2010/11 from the California Healthy Kids Survey, the California Standardized Testing and Reporting program, and the California Basic Educational Data System for 978 middle schools in California. School climate was measured by a set of student survey questions that assessed students' perceptions about six domains of school climate. Schools with positive school climates were those in which students reported high levels of safety/connectedness, caring relationships with adults, and meaningful student participation, as well as low levels of substance use at school, bullying/discrimination, and student delinquency. Regression models were used to estimate the relationship between student-reported school climate and students' average academic performance across schools. Regression models were also used to estimate how, for a given school, academic performance changes as school climate changes. All models included controls for racial/ethnic composition, percentage of English learners, and percentage of students eligible for free/reduced-price meals. 
The study found that (1) middle schools with higher levels of positive student-reported school climate exhibited higher levels of academic performance; (2) increases in a school's level of positive student-reported school climate were associated with simultaneous increases in that school's academic achievement; and (3) within-school increases in academic achievement associated with school climate increases were substantially smaller than the academic performance differences across schools with different school climate levels. As positive school climate continues to gain attention as a lever to improve student learning, there is increasing interest in how improvements in school climate are related to improvements in academic performance. Most studies examining the school climate-academic performance relationship compare academic achievement across schools with different levels of school climate. Although this study found that schools with high levels of positive school climate exhibited substantially higher levels of academic performance than their counterparts with low levels of positive school climate, such differences across schools were not an accurate guide for predicting the magnitude of school-specific gains in academic performance associated with increases in school climate.
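The study's two estimates, across schools versus within a school over time, can be sketched with a school-demeaning (fixed-effects) calculation. The nine-observation panel below is invented and far smaller than the 978-school dataset; it is meant only to show the mechanics of separating the two associations.

```python
# Hedged sketch of the between-school versus within-school comparison,
# on an invented toy panel of three schools observed for three years.

def slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Toy panel: (school, climate score, achievement score), one row per year.
panel = [
    ("S1", 3.0, 640), ("S1", 3.2, 646), ("S1", 3.4, 650),
    ("S2", 4.0, 700), ("S2", 4.1, 702), ("S2", 4.3, 707),
    ("S3", 2.0, 590), ("S3", 2.2, 595), ("S3", 2.1, 593),
]

schools = sorted({s for s, _, _ in panel})
# Each school has exactly three observations in this toy panel.
means = {s: (sum(c for t, c, _ in panel if t == s) / 3,
             sum(a for t, _, a in panel if t == s) / 3)
         for s in schools}

# Between-school estimate: one point per school, using school means.
between = slope([means[s][0] for s in schools],
                [means[s][1] for s in schools])

# Within-school estimate: demean each observation by its school's own
# averages, so only year-to-year changes within a school remain.
wx = [c - means[s][0] for s, c, _ in panel]
wy = [a - means[s][1] for s, _, a in panel]
within = slope(wx, wy)
```

On this toy panel both associations are positive but the within-school slope is much smaller than the between-school slope, illustrating the study's third finding: cross-school differences overstate the gains a given school should expect from improving its own climate.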