
Publications & Products

Search Results: (31-45 of 52 records)

 Pub Number  Title  Date
REL 2017182 How kindergarten entry assessments are used in public schools and how they correlate with spring assessments
This study examined how many public schools nationwide used kindergarten entry assessments (KEAs), and for what purposes; the characteristics of public schools that used KEAs; and whether the use of KEAs was correlated with student assessment scores in reading and mathematics in spring of the kindergarten year. Drawing on a nationally representative sample from the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011), the study examined responses to an ECLS-K:2011 administrator questionnaire that included a set of questions about schools' uses of KEAs. The sample consisted of 9,370 kindergarten students attending 640 public schools. Schools that used KEAs were compared to schools that did not in terms of enrollment, student body demographics, and other characteristics. In addition, multilevel regression models were used to compare students' kindergarten spring assessment scores in early reading and mathematics at schools that did and did not report KEA use, after controlling for fall assessment scores, student demographics, and school characteristics. Overall, 73 percent of public schools offering kindergarten classes reported that they used KEAs. Among schools using KEAs, 93 percent stated that individualizing instruction was one purpose, and 80 percent cited multiple purposes. Schools' reported use of KEAs did not have a statistically significant relationship with students' early reading or mathematics achievement in spring of the kindergarten year after controlling for student and school characteristics. Results from this study offer contextual information to state-level administrators as they select, develop, and implement KEAs. Future research could examine relationships between the nature and quality of KEA implementation and student outcomes.
10/12/2016
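The core adjustment in the analysis above (comparing spring scores at KEA and non-KEA schools while controlling for fall scores) can be sketched with a plain least-squares regression on synthetic data. This is a simplified stand-in for the study's multilevel models; every variable and value below is invented.

```python
import numpy as np

# Hypothetical sketch, not the study's actual model: regression-adjusted
# comparison of spring scores between KEA and non-KEA schools, controlling
# for fall scores. All data are synthetic.
rng = np.random.default_rng(0)
n = 1000
fall = rng.normal(50, 10, n)     # fall kindergarten assessment score
kea = rng.integers(0, 2, n)      # 1 = school reported KEA use
# Simulate spring scores with no true KEA effect, mirroring the null finding.
spring = 5 + 0.9 * fall + rng.normal(0, 5, n)

# Fit spring ~ intercept + fall + KEA; the KEA coefficient is the adjusted
# difference between KEA and non-KEA schools.
X = np.column_stack([np.ones(n), fall, kea])
beta, *_ = np.linalg.lstsq(X, spring, rcond=None)
print(f"adjusted KEA coefficient: {beta[2]:.2f}")
```

A coefficient near zero corresponds to the study's finding of no statistically significant relationship after adjustment.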
REL 2016156 Measuring principals' effectiveness: Results from New Jersey’s first year of statewide principal evaluation
This study describes measures used to evaluate New Jersey principals in the first year of statewide implementation of the new evaluation system. It examines four statistical properties of the measures: the variation in ratings across principals, their year-to-year stability, the associations between component ratings and the characteristics of students in the schools, and the associations among component ratings. Based on statewide principal performance ratings from the 2013/14 school year and ratings from 14 districts that piloted the principal evaluation system in the 2012/13 school year, the study found a mix of strengths and weaknesses in the statistical properties of the measures used to evaluate principals in New Jersey. First, nearly all principals received effective or highly effective summative ratings. Second, fewer principals evaluated on school median student growth percentiles received highly effective summative ratings than principals not evaluated on this measure. Third, principal practice instrument ratings and school median student growth percentiles had moderate to high levels of year-to-year stability. Fourth, several component ratings—school median student growth percentiles, teachers' student growth objectives, and principal practice instrument ratings—and the summative rating had low, negative correlations with student socioeconomic disadvantage. Finally, principals' ratings on component measures had low to moderate positive correlations with each other, consistent with the idea that they measure distinct dimensions of overall principal performance. Nevertheless, the validity of the principal evaluation measures cannot be verified without a measure of principals' effectiveness at raising student achievement to use as a standard. More evidence is needed on the accuracy of measures used to evaluate principals.
8/30/2016
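The year-to-year stability property examined above is typically measured as a Pearson correlation between the same principals' ratings in consecutive years. A minimal sketch on synthetic ratings (the study's actual data are not reproduced here):

```python
import numpy as np

# Hypothetical sketch: year-to-year stability of principal ratings as a
# Pearson correlation between the same principals' scores in two years.
def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

rng = np.random.default_rng(1)
year1 = rng.normal(3.0, 0.5, 200)                 # synthetic pilot-year ratings
year2 = 0.7 * year1 + rng.normal(0.9, 0.35, 200)  # moderately stable follow-up
print(f"stability r = {pearson_r(year1, year2):.2f}")
```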
REL 2016147 An analysis of student engagement patterns and online course outcomes in Wisconsin
Student enrollment in online courses has increased over the past 15 years and continues to grow. However, much is still unknown about students' educational experiences and outcomes in online courses. The purpose of the study conducted by REL Midwest in partnership with the Virtual Education Research Alliance was to identify distinct patterns—or trajectories—of students' engagement within their online courses over time and examine whether these patterns were associated with their academic outcomes in the online course. The study used data collected by Wisconsin Virtual School's learning management system and student information system, including 1,512 student enrollments in 109 online elective, core, and Advanced Placement high school courses. Group-based trajectory modeling was employed to estimate the number and shapes of engagement patterns evident in the sample, and hierarchical linear modeling assessed the associations between engagement group membership and course outcomes, controlling for demographic characteristics. Analyses revealed six distinct patterns of student engagement in online courses: Initial 1.5 Hours with Decrease, Steady 1.5 Hours, Initial 2 Hours with Spike, Steady 2.5 Hours, 4+ Hours, and 6+ Hours. Students with relatively low but steady engagement had better outcomes than students whose similar initial engagement diminished throughout the course. Overall, students engaging two or more hours per week had better online course outcomes than students who engaged less than two hours per week. Wisconsin Virtual School directors and directors of other online learning programs can use information from this study to consider the supports they implement to help students successfully complete their courses, especially students who display engagement patterns that are associated with poorer course outcomes.
Other online learning programs across the country can use the results of this project as a framework for investigating the data they have available in their learning management systems and student information systems.
7/6/2016
REL 2016118 Identifying early warning indicators in three Ohio school districts
The purpose of this study was to identify a set of data elements for students in grades 8 and 9 in three Ohio school districts that could serve as accurate early warning indicators of their failure to graduate high school on time and to comparatively examine the accuracy of those indicators. In order to identify the set of indicators with the optimal accuracy for each district, the research team collected student-level data on two cohorts of grade 8 and 9 students in each school district. Datasets used in the analyses included students' four-year graduation status (the outcome) and 8th and 9th grade data on attendance, coursework, suspensions, and test score records (the candidate early warning indicators). Logistic regression and Receiver Operating Characteristic (ROC) curve analyses were used to identify the candidate indicators that were consistent predictors of students' failure to graduate on time in each district and to identify the cut points on the original scales that most accurately distinguished students who were at risk of not graduating on time from those who did graduate on time. The analyses were restricted to students who were first-time freshmen within the districts in 2006/07 or 2007/08, and excluded students who entered the district after grade 9. Students in the 2006/07 cohort graduated in 2010, and students in the 2007/08 cohort graduated in 2011. The three districts included in the study varied in size, demographic composition, and locale. Results show that the optimal cut point for classifying students as at risk varied significantly across districts for five of the eight candidate indicators included in the study. Across the three districts and two grades, different indicators were identified as the most accurate predictors of students' failure to graduate on time. End-of-year attendance rate was the only indicator that was a consistent predictor for both grades in all three districts.
The most accurate indicators in both grade 8 and grade 9 were based on coursework (GPAs and course credits). Consistent with prior literature, failing more than one class and earning one or more suspensions also were strong predictors of failure to graduate on time. On average, indicators were more accurate in grade 9 than in grade 8. Findings illustrate why it is important for districts to conduct local validation using their own data to verify that indicators selected for their early warning systems accurately predict students’ failure to graduate on time. The methods laid out in this study can be used to help districts identify the best off-track indicators, and indicator cut points, for their particular early warning systems.
7/6/2016
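The cut-point search described above can be sketched as a threshold sweep that maximizes Youden's J (sensitivity + specificity - 1), a standard ROC-based criterion; the study's exact procedure may differ, and the attendance data below are synthetic.

```python
import numpy as np

# Hypothetical sketch of an ROC-based cut-point search on one candidate
# indicator (a synthetic attendance rate). Picks the threshold maximizing
# Youden's J = sensitivity + specificity - 1.
def best_cut_point(indicator, off_track):
    indicator = np.asarray(indicator, float)
    off_track = np.asarray(off_track, bool)   # True = did not graduate on time
    best_cut, best_j = None, -1.0
    for cut in np.unique(indicator):
        flagged = indicator <= cut            # low attendance -> flag as at risk
        sens = (flagged & off_track).sum() / off_track.sum()
        spec = (~flagged & ~off_track).sum() / (~off_track).sum()
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = float(cut), j
    return best_cut, best_j

rng = np.random.default_rng(2)
grads = rng.uniform(0.85, 1.00, 300)     # attendance of on-time graduates
nongrads = rng.uniform(0.60, 0.92, 100)  # attendance of non-graduates
attend = np.concatenate([grads, nongrads])
off = np.concatenate([np.zeros(300, bool), np.ones(100, bool)])
cut, j = best_cut_point(attend, off)
print(f"optimal attendance cut point: {cut:.2f} (Youden's J = {j:.2f})")
```

In practice the sweep would be run separately for each indicator, grade, and district, which is one reason the optimal cut points differed across the three districts.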
REL 2016141 School reading performance and the extended school day policy in Florida
Beginning with the 2012/13 school year, Florida law required that the 100 lowest-performing elementary schools in reading extend the school day. This study examined how the lowest-performing schools implemented the extended school day policy and the trends in school reading performance among the lowest-performing schools and other elementary schools. The lowest-performing schools were located throughout Florida and, on average, were smaller and served higher proportions of minority students and of students receiving free or reduced-price lunch than other elementary schools. The lowest-performing schools reported increasing the number of minutes of reading instruction provided to students, increasing staff, and providing different instruction in the extra hour than during other reading instructional blocks. An increase in reading performance was observed for the lowest-performing schools the year the extended school day was implemented. However, this increase did not exceed what would have been expected in the absence of the required increase in reading instruction.
6/16/2016
REL 2016135 Examining the validity of ratings from a classroom observation instrument for use in a district's teacher evaluation system
The purpose of this study was to examine the validity of teacher evaluation scores that are derived from an observation tool, adapted from Danielson's Framework for Teaching, designed to assess 22 teaching components from four teaching domains. The study analyzed principals' observations of 713 elementary, middle, and high school teachers in Washoe County School District (Reno, NV). The findings support the use of a single, summative score to evaluate teachers, one that is derived by totaling or averaging all 22 ratings. The findings do not support using domain- or component-level scores to evaluate teachers' skills, because there was little evidence that these scores measure distinct aspects of teaching. The information that the total score provides predicts the learning of teachers' students. While the relationship is moderate, it is evidence to support interpreting the observation score as an indicator of teachers' effectiveness in promoting learning.
5/31/2016
REL 2016133 Relationship between school professional climate and teachers' satisfaction with the evaluation process
This study, conducted by the Regional Educational Laboratory Northeast & Islands in collaboration with the Northeast Educator Effectiveness Research Alliance, reports on the relationship between teachers' perceptions of school professional climate and their satisfaction with their formal evaluation process using the responses of a nationally representative sample of teachers from the Schools and Staffing Surveys. Specifically, the study used logistic regression analysis to examine whether teachers' satisfaction with their evaluation was associated with two measures of school professional climate (principal leadership and teacher influence), teacher and school characteristics, and the inclusion of student test scores in the evaluation system. The results indicate that teachers' perceptions of their principals' leadership were associated with their satisfaction with the evaluation system—the more positively teachers rated their principal's leadership, the more likely they were to report satisfaction with their evaluation process. The rating teachers received on their evaluation was also associated with their satisfaction, with those rated satisfactory or higher more likely to be satisfied. Teachers whose evaluation process included student test score outcomes were less likely to be satisfied with that process than teachers whose evaluations did not include student test scores. The findings reinforce current literature about the importance of the school principal in establishing positive school professional climate. The report recommends additional research related to the implementation of new educator evaluation systems.
5/3/2016
REL 2016124 Can scores on an interim high school reading assessment accurately predict low performance on college readiness exams?
The purpose of this study was to examine the relationship of measures of reading comprehension, decoding, and language with college-ready performance. This research was motivated by leaders in two Florida school districts interested in the extent to which performance on Florida's interim reading assessment could be used to identify students who may not perform well on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT) and ACT Plan. One of the districts primarily administers the PSAT/NMSQT and the other primarily administers the ACT Plan. Data included the 2013/14 PSAT/NMSQT or ACT Plan results for students in grade 10 from these districts, as well as their grade 9 results on the Florida Assessments for Instruction in Reading – Florida Standards (FAIR-FS). Classification and regression tree (CART) analyses formed the framework for an early warning system of risk for each PSAT/NMSQT and ACT Plan subject-area assessment. PSAT/NMSQT Critical Reading performance was best predicted in the study sample by a student's reading comprehension skills, while PSAT/NMSQT Mathematics and Writing performance was best predicted by a student's syntactic knowledge. Syntactic knowledge was the most important predictor of ACT Plan English, Reading, and Science in the study sample, whereas reading comprehension skills were found to best predict ACT Plan Mathematics results. Sensitivity rates (the percentage of students correctly identified as at risk) ranged from 81 percent to 89 percent correct across all of the CART models. These results provide preliminary evidence that FAIR-FS scores could be used to create an early warning system for performance on both the PSAT/NMSQT and ACT Plan. The potential success of using FAIR-FS scores as an early warning system could enable districts to identify at-risk students without adding testing burden, time away from instruction, or cost.
The analyses should be replicated statewide to verify the stability of the models and the generalizability of the results to the larger Florida student population.
4/20/2016
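The sensitivity rate defined above (the percentage of students correctly identified as at risk) can be computed as follows; the flags and outcomes are hypothetical, not FAIR-FS data.

```python
# Hypothetical sketch of the accuracy metric reported above: sensitivity is
# the share of truly at-risk students that the model flags as at risk.
def sensitivity(flagged_at_risk, truly_at_risk):
    pairs = list(zip(flagged_at_risk, truly_at_risk))
    true_positives = sum(1 for f, t in pairs if f and t)
    actual_positives = sum(1 for _, t in pairs if t)
    return true_positives / actual_positives

# Invented example: 8 of 10 truly at-risk students are flagged, so the
# sensitivity is 80 percent (the study's CART models reached 81-89 percent).
flags = [True] * 8 + [False] * 2 + [True] * 3
truth = [True] * 10 + [False] * 3
print(f"sensitivity: {sensitivity(flags, truth):.0%}")
```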
REL 2016106 Measuring school leaders' effectiveness: Final report from a multiyear pilot of Pennsylvania's Framework for Leadership
This study examines the accuracy of performance ratings from the Framework for Leadership (FFL), Pennsylvania's tool for evaluating the leadership practices of principals and assistant principals. The study analyzed four key properties of the FFL: score variation, internal consistency, year-to-year stability, and concurrent validity. Score variation was characterized by the percentages of school leaders earning scores in different portions of the rating scale. To measure the internal consistency of the FFL, Cronbach's alpha was calculated for the full FFL and for each of its four categories of leadership practices. Analyses of score stability used data on school leaders' FFL scores across two years to calculate Pearson's correlation coefficient. Concurrent validity was assessed through a regression model for the relationship between school leaders' estimated contributions to student achievement growth and their FFL scores. This report is based primarily on the 2013/14 pilot in which 517 principals and 123 assistant principals were rated by their supervisors; an interim report examined data from the 2012/13 pilot year. The study finds that the FFL is a reliable measure, with good internal consistency and a moderate level of year-to-year stability in scores. The study also finds evidence of the FFL's concurrent validity: principals with higher scores on the FFL, on average, make larger estimated contributions to student achievement growth. Higher total FFL scores and scores in two of the four FFL domains are significantly or marginally significantly associated with both value-added in all subjects combined and value-added in math specifically. This evidence of the validity of the FFL sets it apart from other principal evaluation tools: No other measures of principals' professional practice have been shown to be related to principals' effects on student achievement.
However, in both pilot years, variation in scores was limited, with most school leaders scoring in the upper third of the rating scale. As the FFL is implemented statewide, continued examination of evidence on its statistical properties, especially the variation in scores, is important.
1/21/2016
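Cronbach's alpha, the internal-consistency statistic computed above for the full FFL and each of its categories, can be sketched on synthetic item ratings (the actual FFL data are not reproduced here).

```python
import numpy as np

# Hypothetical sketch of Cronbach's alpha on synthetic rating data.
def cronbach_alpha(items):
    """items: 2-D array, rows = school leaders, columns = rated items."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return float((k / (k - 1)) * (1 - item_vars / total_var))

rng = np.random.default_rng(3)
trait = rng.normal(3.0, 0.6, (400, 1))            # latent leadership quality
ratings = trait + rng.normal(0, 0.4, (400, 6))    # six correlated items
alpha = cronbach_alpha(ratings)
print(f"alpha = {alpha:.2f}")
```

Values around 0.9, as in this synthetic example, are the kind of result usually described as good internal consistency.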
REL 2015095 Comparing Success Rates for General and Credit Recovery Courses Online and Face to Face: Results for Florida High School Courses
This report describes the results of a REL Southeast study comparing student success in general and credit recovery courses taken online with success in the same types of courses taken face-to-face. Credit recovery occurs when a student fails a course and then retakes the same course to earn high school credit. This research question was motivated by the high use of online learning in the Southeast, particularly as a method to help students engage in credit recovery. The data for this study covered all high school courses taken between 2007/08 and 2010/11 in Florida (excluding Driver's and Physical Education). The study compares the likelihood of a student earning a C or better in an online course as compared to a face-to-face course. Comparisons include both courses taken for the first time (general courses) and credit recovery courses. The results show that the likelihood of a student earning a grade of C or better was higher when a course was taken online than when taken face-to-face, both for general courses and credit recovery courses. Most subgroups of students also had a higher likelihood of success in online courses compared to face-to-face courses, except that English language learners showed no difference in outcomes when taking credit recovery courses online. However, it is not possible to determine whether these consistent differences in course outcomes are attributable to greater student learning, other factors such as differences in student characteristics, or differences in grading standards.
9/15/2015
REL 2015083 College Enrollment Patterns for Rural Indiana High School Graduates
This study examined (1) average distances traveled to attend college, (2) presumptive college eligibility, (3) differences between two-year and four-year college enrollment, (4) differences in enrollment related to differences in colleges' selectivity, and (5) degree of "undermatching" (i.e., enrolling in a college less selective than one's presumptive eligibility suggested) for rural and nonrural graduates among Indiana's 2010 high school graduates. "Presumptive eligibility" refers to the highest level of college selectivity for which a student is presumed eligible for admission, as determined by academic qualifications. The researchers obtained student-level, school-level, and university-related data from Indiana's state longitudinal data system on the 64,534 students who graduated from high school in 2010. Of the original sample, 30,624 graduates entered a public two-year or four-year college in the fall immediately after high school graduation. Data were analyzed using chi-square tests, GIS analysis, and hierarchical generalized linear models. Rural and nonrural graduates enrolled in college at similar rates, but rural graduates enrolled more frequently in two-year colleges than nonrural graduates. About one-third of rural graduates enrolled in colleges that were less selective than colleges for which they were presumptively eligible. Rural graduates traveled farther to attend both two-year and less selective four-year colleges than nonrural graduates. More information is needed about how students learn about their college options, what support structures are in place to assist students in enrolling in college, and how these processes and supports differ between rural and nonrural schools.
6/9/2015
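A chi-square test of independence, one of the analyses listed above, can be sketched on a hypothetical rural/nonrural enrollment table (the counts below are invented, not the study's).

```python
import numpy as np

# Hypothetical sketch: chi-square statistic for a 2x2 contingency table of
# rural/nonrural graduates by two-year/four-year enrollment. Counts invented.
def chi_square_stat(table):
    table = np.asarray(table, float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()      # counts expected under independence
    return float(((table - expected) ** 2 / expected).sum())

counts = [[400, 600],     # rural: two-year, four-year
          [800, 2200]]    # nonrural: two-year, four-year
stat = chi_square_stat(counts)
print(f"chi-square = {stat:.1f}")
```

A large statistic relative to the chi-square distribution's critical value indicates that enrollment type and rural status are associated, which is the pattern the study reports.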
REL 2015089 Measuring principals' effectiveness: Results from New Jersey's principal evaluation pilot
The purpose of this study was to describe the measures used to evaluate principals in New Jersey in the first (pilot) year of the new principal evaluation system and examine three of the statistical properties of the measures: their variation among principals, their year-to-year stability, and the associations between these measures and the characteristics of students in the schools. The study reviewed information that developers of principal practice instruments provided about their instruments and examined principals' performance ratings using data from 14 districts in New Jersey that piloted the principal evaluation system in the 2012/13 school year. The study had four key findings: First, the developers of principal practice instruments provided partial information about their instruments' reliability (consistency across raters and observations) and validity (accurate measurement of true principal performance). Second, principal practice ratings and schoolwide student growth percentiles have the potential to differentiate among principals. Third, school median student growth percentiles, which measure student achievement growth during the school year, exhibit year-to-year stability even when the school changes principals. This may reflect persistent school characteristics, suggesting a need to investigate whether other evaluation measures could more closely gauge principals' contributions to student achievement growth. Finally, school median student growth percentiles correlate with student disadvantage, a relationship that warrants further investigation using statewide evaluation data. Results show a mix of strengths and weaknesses in the statistical properties of the measures used to evaluate principals in New Jersey. Future research could provide more evidence on the accuracy of measures used to evaluate principals.
5/12/2015
REL 2015082 Changes in financial aid and student enrollment at historically Black colleges and universities after the tightening of PLUS credit standards
The purpose of this study was to examine the changes in financial aid and student enrollment at historically Black colleges and universities (HBCUs) after the U.S. Department of Education increased the credit history requirements necessary to obtain Parent Loans for Undergraduate Students (PLUS). The study used institution-level data to examine financial aid and enrollment changes at four-year non-profit institutions in 2012/13 (the first full academic year after the new credit standards were in place). Descriptive statistics summarize financial aid and enrollment changes at HBCUs and at non-HBCUs that enroll a comparable proportion of low-income students. Results indicate that PLUS loans declined substantially at HBCUs in 2012/13, and the decreases were not fully replaced by other types of federal financial aid. HBCUs also experienced larger declines in enrollment than other institutions in 2012/13, corresponding to the larger decline in PLUS recipients at HBCUs. Enrollment declines at HBCUs were especially large for first-year students. Nationwide enrollment decreased more for Black students than for students in other groups. The results in this report may help inform policymakers who are considering future rule changes to the PLUS program.
4/14/2015
REL 2015078 Who will succeed and who will struggle? Predicting early college success with Indiana’s Student Information System
This study examined whether data on Indiana high school students, their high schools, and the Indiana public colleges and universities in which they enroll predict their academic success during the first two years in college. The researchers obtained student-level, school-level, and university-related data from Indiana's state longitudinal data system on the 68,802 students who graduated from high school in 2010. For the 32,564 graduates who first entered a public 2-year or 4-year college, the researchers examined their success during the first two years of college using four indicators of success: (1) enrolling in only nonremedial courses, (2) completion of all attempted credits, (3) persistence to the second year of college, and (4) an aggregation of the other three indicators. Hierarchical linear modeling (HLM) was used to predict students' performance on indicators using students' high school data, information about their high schools, and information about the colleges they first attended. Half of Indiana 2010 high school graduates who enrolled in a public Indiana college were successful by all indicators of success. College success differed by student demographic and academic characteristics, by the type of college a student first entered, and by the indicator of college success used. Academic preparation in high school predicted all indicators of college success, and student absences in high school predicted two individual indicators of college success and a composite of college success indicators. While statistical relationships were found, the predictors collectively explained less than 35 percent of the variance. The predictors from this study can be used to identify students who will likely struggle in college, but there will likely be false positive (and false negative) identifications. Additional research is needed to identify other predictors (possibly noncognitive predictors) that can improve the accuracy of the identification models.
3/31/2015
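The "variance explained" figure reported above is an R-squared. A minimal sketch, with synthetic stand-ins for the study's predictors and outcome:

```python
import numpy as np

# Hypothetical sketch: R squared ("variance explained") for a prediction
# model. Predictors and outcome are synthetic, not the study's data.
def r_squared(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = ((actual - predicted) ** 2).sum()
    ss_tot = ((actual - actual.mean()) ** 2).sum()
    return float(1 - ss_res / ss_tot)

rng = np.random.default_rng(4)
n = 5000
gpa = rng.normal(2.8, 0.6, n)    # stand-in for academic preparation
absences = rng.poisson(8, n)     # stand-in for high school absences
# Outcome only partly driven by the observed predictors (weak overall fit).
success = 0.4 * gpa - 0.02 * absences + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), gpa, absences])
beta, *_ = np.linalg.lstsq(X, success, rcond=None)
r2 = r_squared(success, X @ beta)
print(f"R^2 = {r2:.2f}")
```

An R-squared well below 1, as here, is consistent with the study's point that such models will produce false-positive and false-negative identifications.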
REL 2015058 Measuring school leaders' effectiveness: An interim report from a multiyear pilot of Pennsylvania's Framework for Leadership
This study examines the accuracy of performance ratings from the Framework for Leadership (FFL), Pennsylvania's tool for evaluating the leadership practices of principals and assistant principals. The study analyzed three key properties of the FFL: internal consistency, score variation, and concurrent validity. To measure the internal consistency of the FFL, Cronbach's alpha was calculated for the full FFL and for each of its four categories of leadership practices. Score variation was characterized by the percentages of school leaders earning scores in different portions of the rating scale. Concurrent validity was assessed through a regression model for the relationship between school leaders' estimated contributions to student achievement growth and their FFL scores. Based on a pilot in which 336 principals and 69 assistant principals were rated by their supervisors in 2012/13, this interim report finds that the full FFL had good internal consistency for both principals and assistant principals. However, most scores for specific leadership practices were in the top two of four possible performance levels, and FFL scores were not associated with school leaders' contributions to student achievement growth. These findings suggest that more evidence is needed on the validity of using FFL scores to identify effective and ineffective school leaders.
12/17/2014