Search Results: (16-30 of 42 records)
|REL 2017186||Stated Briefly: Relationship between school professional climate and teachers' satisfaction with the evaluation process
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study, conducted by the Regional Educational Laboratory Northeast & Islands in collaboration with the Northeast Educator Effectiveness Research Alliance, reports on the relationship between teachers' perceptions of school professional climate and their satisfaction with their formal evaluation process, using the responses of a nationally representative sample of teachers from the Schools and Staffing Surveys. Specifically, the study used logistic regression analysis to examine whether teachers' satisfaction with their evaluation was associated with two measures of school professional climate (principal leadership and teacher influence), teacher and school characteristics, and the inclusion of student test scores in the evaluation system. The results indicate that teachers' perceptions of their principals' leadership were associated with their satisfaction with the evaluation system: the more positively teachers rated their principal's leadership, the more likely they were to report satisfaction with their evaluation process. The rating teachers received on their evaluation was also associated with their satisfaction, with those rated satisfactory or higher being more likely to be satisfied. Teachers whose evaluation process included student test score outcomes were less likely to be satisfied with that process than teachers whose evaluations did not include student test scores. The findings reinforce current literature about the importance of the school principal in establishing a positive school professional climate. The report recommends additional research related to the implementation of new educator evaluation systems.
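As a rough illustration of the study's core method (not its actual model or data), a logistic regression relating a single climate measure to reported satisfaction can be fit by simple gradient ascent; all variable names and numbers below are invented.

```python
# Hypothetical sketch of a logistic regression like the one the study used:
# modeling the probability that a teacher reports satisfaction with the
# evaluation process as a function of a principal-leadership rating.
# Data and fitting routine are synthetic illustrations, not study artifacts.
import math

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient w.r.t. intercept
            g1 += (yi - p) * xi   # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Synthetic data: higher leadership ratings (1-5) co-occur more often with
# reported satisfaction (1) than with dissatisfaction (0).
leadership = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
satisfied  = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
b0, b1 = fit_logistic(leadership, satisfied)
# A positive b1 mirrors the reported finding: higher-rated leadership is
# associated with higher odds of satisfaction (odds ratio exp(b1) > 1).
```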
|REL 2016184||Stated Briefly: Ramping up to college readiness in Minnesota high schools: Implementation of a schoolwide program
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examined whether the Ramp-Up to Readiness program (Ramp-Up) differs from college readiness supports that are typically offered by high schools, whether high schools were able to implement Ramp-Up to Readiness to the developer's satisfaction, and how staff in schools implementing Ramp-Up to Readiness perceive the program. The researchers conducted interviews and focus groups with staff in two groups of schools: (1) a group of 10 schools that were in the first year of implementing Ramp-Up to Readiness, and (2) a group of 10 other schools that were not implementing the program. The researchers also administered surveys to staff employed by these 20 schools as well as to students in grades 10-12 in these schools. Through these data collection efforts, the researchers obtained information on the types of college readiness programming and supports in the two types of schools, students' perceptions of college-focused staff-student interactions, schools' success at implementing Ramp-Up to Readiness' core components and sub-components, and the opinions of staff in implementing schools about the program. Compared with non-Ramp-Up schools, those implementing Ramp-Up offered more college-oriented structural supports, professional development, and student-staff interactions. Ramp-Up schools also made greater use of postsecondary planning tools. Students in Ramp-Up schools perceived more emphasis on four of five dimensions of college readiness than students in comparison schools. Ramp-Up schools met the program developer's threshold for adequate implementation on four of five program components (structural supports, professional development, curriculum delivery, and curriculum content). However, only 2 of the 10 schools met the developer's adequacy threshold for the fifth component (use of postsecondary planning tools).
Staff at Ramp-Up schools generally had favorable perceptions of the program. Schools implementing Ramp-Up were able to offer deeper college readiness support to more students than comparison schools. Schools that adopt Ramp-Up can implement the program as intended by the developer, but some program components are more challenging to implement than others. Additional studies should examine whether implementation improves after a second year and whether Ramp-Up improves the likelihood that students will enroll in and succeed in college.
|REL 2016207||Stated Briefly: Identifying early warning indicators in three Ohio school districts
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. The purpose of this study was to identify a set of data elements for students in grades 8 and 9 in three Ohio school districts that could serve as accurate early warning indicators of failure to graduate from high school on time, and to compare the accuracy of those indicators. To identify the set of indicators with the optimal accuracy for each district, the research team collected student-level data on two cohorts of grade 8 and 9 students in each school district. Datasets used in the analyses included students' four-year graduation status (the outcome) and grade 8 and grade 9 data on attendance, coursework, suspensions, and test score records (the candidate early warning indicators). Logistic regression and receiver operating characteristic (ROC) curve analyses were used to identify the candidate indicators that were consistent predictors of students' failure to graduate on time in each district and to identify the cut points on the original scales that most accurately distinguished students at risk of not graduating on time from those who graduated on time. The analyses were restricted to students who were first-time freshmen within the districts in 2006/07 or 2007/08 and excluded students who entered the district after grade 9. Students in the 2006/07 cohort graduated in 2010, and students in the 2007/08 cohort graduated in 2011. The three districts included in the study varied in size, demographic composition, and locale. Results show that the optimal cut point for classifying students as at risk varied significantly across districts for five of the eight candidate indicators included in the study. Across the three districts and two grades, different indicators were identified as the most accurate predictors of students' failure to graduate on time.
End-of-year attendance rate was the only indicator that was a consistent predictor for both grades in all three districts. The most accurate indicators in both grade 8 and grade 9 were based on coursework (GPAs and course credits). Consistent with prior literature, failing more than one class and earning one or more suspensions were also strong predictors of failure to graduate on time. On average, indicators were more accurate in grade 9 than in grade 8. The findings illustrate why it is important for districts to conduct local validation using their own data to verify that the indicators selected for their early warning systems accurately predict students' failure to graduate on time. The methods laid out in this study can help districts identify the most accurate off-track indicators, and the corresponding cut points, for their particular early warning systems.
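The cut-point selection the study describes can be illustrated with a small sketch: for one candidate indicator, search the observed thresholds for the one that best separates on-time graduates from non-graduates. Maximizing Youden's J (sensitivity + specificity - 1) is one common ROC-based criterion and is an assumption here, not necessarily the study's exact rule; the data below are synthetic.

```python
# Hypothetical ROC-style cut-point search for one early warning indicator
# (e.g., an attendance rate). All data are synthetic illustrations.

def best_cut_point(indicator, graduated):
    """Return the threshold maximizing Youden's J (sensitivity + specificity - 1)
    when students with indicator values below the threshold are flagged at risk."""
    best_t, best_j = None, -1.0
    for t in sorted(set(indicator)):
        # Classify: flagged at risk if indicator < t; "positive" = did not graduate.
        tp = sum(1 for x, g in zip(indicator, graduated) if x < t and not g)
        fn = sum(1 for x, g in zip(indicator, graduated) if x >= t and not g)
        fp = sum(1 for x, g in zip(indicator, graduated) if x < t and g)
        tn = sum(1 for x, g in zip(indicator, graduated) if x >= t and g)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Synthetic end-of-year attendance rates and on-time graduation outcomes.
attendance = [0.70, 0.75, 0.80, 0.85, 0.90, 0.93, 0.95, 0.97, 0.98, 0.99]
graduated  = [False, False, False, True, False, True, True, True, True, True]
cut, j = best_cut_point(attendance, graduated)
```

Because the optimal threshold depends on each district's own distribution of indicator values and outcomes, running a search like this on local data is exactly the kind of validation the study recommends.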
|REL 2016171||Stated Briefly: Reshaping rural schools in the Northwest Region: Lessons from federal School Improvement Grant implementation
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examines implementation of the School Improvement Grant (SIG) transformation model in rural regions, exploring challenges in implementation and technical assistance to support these efforts. This study is not part of the federal evaluation of the SIG, which provides more comprehensive information about SIG schools. Leaders participating in research alliances with REL Northwest and other regional stakeholders requested this study to learn more about how implementation of the SIG transformation model has played out in rural schools across the nation.
Researchers used data from the first cohort of the U.S. Department of Education's SIG baseline database to administer a survey addressing four research questions: (1) How did principals of rural SIG transformation schools rate their school's implementation of the requirements of the transformation model? (2) To what extent did principals report challenges to implementation of the transformation model? (3) To what extent did principals report that their school received technical assistance for the implementation of the transformation model? (4) To what extent were principals' reports of challenges and technical assistance related to implementation? The survey was sent to all cohort 1 SIG principals of rural schools using the transformation model, a group that represented 42 states and Bureau of Indian Education schools. The final sample size was 135 principals (67 percent of the 201 schools where staff members who worked under SIG were still present). Responding principals worked in schools that were similar in size and student characteristics to the schools in the total sample.
Principal responses highlight challenges in both implementation and technical assistance. The results confirm previous research by finding that certain elements of the transformation model are challenging for rural schools to implement, particularly those related to ensuring high-quality staff and to family and community engagement. The study also finds that principals are more likely to implement strategies for which they receive technical assistance; at the same time, they implement fewer of the strategies that present challenges. This suggests that rural schools working on improvement strategies need help beyond grant funding alone.
|REL 2016170||Stated Briefly: Exploring the foundations of the future STEM workforce: K-12 indicators of postsecondary STEM success
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. The purpose of this study was to review recent peer-reviewed studies in order to identify malleable factors measured in K-12 settings that are related to students' postsecondary STEM success, particularly for Hispanic students. Postsecondary STEM success was defined as enrollment in, persistence in, and completion of postsecondary STEM majors or degrees. Twenty-three relevant studies were identified, yet only four examined K-12 factors predictive of postsecondary STEM success specifically for Hispanic students. The review found that the number of high school mathematics and science courses taken and the level of those courses are consistent predictors of postsecondary STEM outcomes for all student subgroups. However, the literature indicates that minority students, including Hispanic students, were less likely to take the highest-level mathematics and science courses. Students' interest and confidence in STEM at the K-12 level were also predictive of postsecondary STEM success. Yet, despite lower levels of postsecondary STEM success, some studies indicate that racial/ethnic minority students and White students had similar levels of interest and confidence in STEM. The reviewed research suggests that reducing disparities in mathematics and science preparation between Hispanic and White students and increasing the rates at which Hispanic students take high-level mathematics and science classes hold promise for informing interventions designed to improve STEM outcomes.
|REL 2016157||Stated Briefly: An analysis of student engagement patterns and online course outcomes in Wisconsin
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. The purpose of the study was to identify distinct patterns, or trajectories, of students' engagement within their online courses over time and to examine whether these patterns were associated with students' academic outcomes in the online course. The study used data collected by Wisconsin Virtual School's learning management system and student information system, covering 1,512 student enrollments in 109 online elective, core, and Advanced Placement high school courses. Group-based trajectory modeling was employed to estimate the number and shapes of engagement patterns evident in the sample, and hierarchical linear modeling assessed the associations between engagement group membership and course outcomes, controlling for demographic characteristics. Analyses revealed six distinct patterns of student engagement in online courses. Students with relatively low but steady engagement had better outcomes than students whose initially similar engagement diminished over the term. Overall, students engaging two or more hours per week had better online course outcomes than students who engaged less than two hours per week.
|REL 2016159||Stated Briefly: Examining changes to Michigan's early childhood quality rating and improvement system (QRIS)
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. Documenting and improving early childhood program quality is a national priority, leading to a rapid expansion of Quality Rating and Improvement Systems (QRISs). QRISs document and improve the quality of early childhood education programs and provide clear information to families about their child care choices. This study described how early childhood programs were rated in Michigan's QRIS and examined how alternative approaches to calculating ratings affected the number of programs rated at each quality level. Using extant data from 2,390 early childhood education programs that voluntarily participated in Michigan's QRIS, the study found that programs in Michigan self-rated at low quality (level 1) and high quality (level 5) more often than at moderate quality (levels 2 through 4). The study also found that programs with both a self-rating and an independent observation of quality generally had higher self-ratings than observational ratings. The study used simulated data to compare the distributions of ratings under the original QRIS, under the newly revised QRIS with relaxed domain requirements, and under an approach that used only programs' overall scores. Findings revealed that under both the relaxed system and the total-score approach, programs were rated at higher levels of quality than under the original QRIS. Changes to the rating calculations are discussed in terms of their consequences for program ratings and their financial implications for states.
|REL 2016150||Stated Briefly: College enrollment patterns for rural Indiana high school graduates
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examined (1) average distances traveled to attend college, (2) presumptive college eligibility, (3) differences between two-year and four-year college enrollment, (4) differences in enrollment related to differences in colleges' selectivity, and (5) the degree of "undermatching" (i.e., enrolling in a college less selective than one's presumptive eligibility suggested) for rural and nonrural graduates among Indiana's 2010 high school graduates. "Presumptive eligibility" refers to the highest level of college selectivity for which a student is presumed eligible for admission, as determined by academic qualifications. The researchers obtained student-level, school-level, and university-related data from Indiana's state longitudinal data system on the 64,534 students who graduated from high school in 2010. Of the original sample, 30,624 graduates entered a public two-year or four-year college in the fall immediately after high school graduation. Data were analyzed using chi-square tests, GIS analysis, and hierarchical generalized linear models. Rural and nonrural graduates enrolled in college at similar rates, but rural graduates enrolled more frequently in two-year colleges than nonrural graduates. About one third of rural graduates enrolled in colleges that were less selective than colleges for which they were presumptively eligible. Rural graduates traveled farther than nonrural graduates to attend both two-year colleges and less selective four-year colleges.
|REL 2016134||Stated Briefly: Can scores on an interim high school reading assessment accurately predict low performance on college readiness exams?
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. The purpose of this study was to examine the extent to which performance on Florida's interim reading assessment could be used to identify students who may not perform well on the Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT) and ACT Plan. Data included the 2013/14 PSAT/NMSQT or ACT Plan results for students in grade 10 from two districts, as well as their grade 9 results on the Florida Assessments for Instruction in Reading—Florida Standards (FAIR-FS). PSAT/NMSQT Critical Reading performance is best predicted in the study sample by a student's reading comprehension skills, while PSAT/NMSQT Mathematics and Writing performance is best predicted by a student's syntactic knowledge. Syntactic knowledge is the most important predictor of ACT Plan English, Reading, and Science in the study sample, whereas reading comprehension skills were found to best predict ACT Plan Mathematics results. Sensitivity rates ranged from 81 percent to 89 percent correct across all of the models. These results provide preliminary evidence that FAIR-FS scores could be used to create an early warning system for performance on both the PSAT/NMSQT and ACT Plan.
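The sensitivity rates reported above (81 to 89 percent) measure the share of truly low-performing students that a screen correctly flags. A minimal sketch of that calculation, with invented data rather than the study's FAIR-FS results:

```python
# Illustrative sketch (synthetic data): "sensitivity" in an early warning
# context is the fraction of truly low-performing students the screen flags.
# The flags and outcomes below are made up for illustration.

def sensitivity(flagged, low_performing):
    """Fraction of low-performing students who were flagged by the screen."""
    true_pos = sum(1 for f, low in zip(flagged, low_performing) if f and low)
    total_low = sum(low_performing)
    return true_pos / total_low

# Each position: (screen flagged the student, student scored low on the exam)
flagged        = [True, True, False, True, True, False, True, False]
low_performing = [True, True, True,  True, False, False, True, False]
print(sensitivity(flagged, low_performing))  # 4 of 5 low performers flagged -> 0.8
```

A high sensitivity rate is what makes a screen useful for early warning: few students who go on to perform poorly slip through unflagged.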
|REL 2016132||Stated Briefly: The utility of teacher and student surveys in principal evaluations: An empirical investigation
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examined whether measures from student and teacher surveys that reflect principals' practice are related to schoolwide academic performance. The study was conducted using data from 2011–12 on 39 elementary and secondary schools within a midsize urban school district in the REL Midwest Region. The research team used the results of the district's Tripod student and teacher surveys to construct six school-level measures of school conditions that prior research has shown to associate with effective school leadership. The study finds that adding the full set of six survey measures as a group results in statistically significant increases in variation explained in mathematics and composite value-added outcomes, but not in reading. A stepwise regression procedure identified two measures—instructional leadership and classroom instructional environment—as an optimal subset of the six measures. This evidence indicates that student and teacher survey measures can have utility for principal performance evaluation.
|REL 2016120||Stated Briefly: Teacher evaluation and professional learning: Lessons from early implementation in a large urban district
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. REL Northeast and Islands, in collaboration with the Northeast Educator Effectiveness Research Alliance, examined the alignment of teacher evaluation and professional learning in a large urban district in the Northeast. REL researchers examined the types of professional learning activities teachers reported they participated in, the alignment of the reported activities with what evaluators prescribed, and whether evaluation ratings improved from one academic year to the next. The study found that teachers received written feedback across all standards of the evaluation rubric. Each prescription tended to include one or two recommended professional activities, and more of these activities were professional practice activities, such as independent work to improve instruction, than professional development activities, such as courses or workshops. Teachers reported participating in more professional activities for the instruction-based standards than for the non-instruction-based standards. For all standards, less than 40 percent of teachers reported participating in the activities their evaluator recommended. While further work may be needed to strengthen the connection between teacher evaluation and a comprehensive system of teacher support and development, this study takes the first step in illustrating the need for coherence among these related systems.
|REL 2016119||Stated Briefly: How methodology decisions affect the variability of schools identified as beating the odds
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. Schools that show better academic performance than would be expected given the characteristics of the school and its student population are often described as "beating the odds" (BTO). State and local education agencies often attempt to identify such schools as a means of identifying strategies or practices that might be contributing to the schools' relative success. Key decisions about how to identify BTO schools may affect whether a school makes the BTO list and thereby the identification of practices used to beat the odds. The purpose of this study was to examine how a list of BTO schools might change depending on the methodological choices and the indicators used in the BTO identification process. The three decision points examined were (1) the type of performance measure used to compare schools, (2) the types of school characteristics used as controls in selecting BTO schools, and (3) the school sample configuration used to pool schools across grade levels. The study applied statistical models involving the different methodologies and indicators and documented how the lists of schools identified as BTO changed across models. Public school and student data from one Midwest state for the 2007/08 through 2010/11 academic years were used to generate BTO school lists. By performing pairwise comparisons among BTO school lists and computing agreement rates among models, the project team was able to gauge the variation in BTO identification results. Results indicate that even when similar specifications were applied across statistical methods, different sets of BTO schools were identified. In addition, for each statistical method used, the lists of BTO schools identified varied with the choice of indicators. Fewer than half of the schools were identified as BTO in more than one year. The results demonstrate that different technical decisions can lead to different identification results.
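The pairwise agreement rates used to compare BTO lists can be sketched as below. The school IDs are synthetic, and treating agreement as a Jaccard-style overlap is an assumption for illustration, not necessarily the study's exact formula.

```python
# Illustrative sketch of a pairwise agreement rate between two "beating the
# odds" (BTO) school lists produced by different models. IDs are synthetic.

def agreement_rate(list_a, list_b):
    """Share of schools on either list that appear on both (Jaccard index;
    one plausible way to operationalize 'agreement', assumed here)."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

model_1 = ["s01", "s02", "s03", "s04", "s05"]
model_2 = ["s03", "s04", "s05", "s06", "s07"]
print(agreement_rate(model_1, model_2))  # 3 shared of 7 distinct -> ~0.43
```

Low agreement rates across model pairs would mirror the study's finding that methodological choices substantially change which schools are labeled BTO.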
|REL 2016126||Stated Briefly: Who will succeed and who will struggle? Predicting early college success with Indiana’s Student Information System
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examined whether data on Indiana high school students, their high schools, and the Indiana public colleges and universities in which they enroll predict their academic success during the first two years of college. The researchers obtained student-level, school-level, and university-related data from Indiana's state longitudinal data system on the 68,802 students who graduated from high school in 2010. For the 32,564 graduates who first entered a public two-year or four-year college, the researchers examined success during the first two years of college using four indicators: (1) enrolling in only nonremedial courses, (2) completing all attempted credits, (3) persisting to the second year of college, and (4) an aggregation of the other three indicators. Hierarchical linear modeling was used to predict students' performance on these indicators using students' high school data, information about their high schools, and information about the colleges they first attended. Half of Indiana's 2010 high school graduates who enrolled in a public Indiana college were successful on all indicators of success. College success differed by student demographic and academic characteristics, by the type of college a student first entered, and by the indicator of college success used. Academic preparation in high school predicted all indicators of college success, and student absences in high school predicted two individual indicators of college success as well as the composite indicator. While statistical relationships were found, the predictors collectively explained less than 35 percent of the variance. The predictors from this study can be used to identify students who are likely to struggle in college, but there will likely be false positive (and false negative) identifications. Additional research is needed to identify other predictors, possibly noncognitive ones, that can improve the accuracy of the identification models.
|REL 2016127||Stated Briefly: Professional experiences of online teachers in Wisconsin: Results from a survey about training and challenges
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. REL Midwest, in partnership with the Midwest Virtual Education Research Alliance, analyzed the results of a survey administered to Wisconsin Virtual School teachers about the training in which they participated related to online instruction, the challenges they encounter while teaching online, and the type of training they thought would help them address those challenges. REL Midwest researchers and Virtual Education Research Alliance members collaborated to develop the survey based on items from the Going Virtual! survey (Dawley et al., 2010; Rice & Dawley, 2007; Rice et al., 2008). Wisconsin Virtual School administered the survey to its 54 teachers, and 49 (91 percent) responded to the survey. The responses of the 48 teachers who indicated that they taught an online course during the 2013/14 or 2014/15 school year were analyzed for the report. Results indicate that all Wisconsin Virtual School teachers reported participating in training or professional development related to online instruction and that more teachers reported participating in training that occurred while teaching online than prior to teaching online or during preservice education. The teachers most frequently reported challenges related to students' perseverance and engagement and indicated that they preferred unstructured professional development to structured professional development to help them address those challenges. Further research is needed to determine what types of professional development and training are most effective in improving teaching practice, especially related to student engagement and perseverance.
|REL 2016111||Stated Briefly: Measuring school leaders' effectiveness: Findings from a multiyear pilot of Pennsylvania's Framework for Leadership
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. This study examines the accuracy of performance ratings from the Framework for Leadership (FFL), Pennsylvania's tool for evaluating the leadership practices of principals and assistant principals. The study analyzed four key properties of the FFL: score variation, internal consistency, year-to-year stability, and concurrent validity. Score variation was characterized by the percentages of school leaders earning scores in different portions of the rating scale. To measure the internal consistency of the FFL, Cronbach's alpha was calculated for the full FFL and for each of its four categories of leadership practices. Analyses of score stability used school leaders' FFL scores across two years to calculate Pearson's correlation coefficient. Concurrent validity was assessed through a regression model for the relationship between school leaders' estimated contributions to student achievement growth and their FFL scores. This report is based primarily on the 2013/14 pilot, in which 517 principals and 123 assistant principals were rated by their supervisors; an interim report examined data from the 2012/13 pilot year. The study finds that the FFL is a reliable measure, with good internal consistency and a moderate level of year-to-year stability in scores. The study also finds evidence of the FFL's concurrent validity: principals with higher scores on the FFL, on average, make larger estimated contributions to student achievement growth. Higher total FFL scores and scores in two of the four FFL domains are significantly or marginally significantly associated with both value-added in all subjects combined and value-added in math specifically.
This evidence of the validity of the FFL sets it apart from other principal evaluation tools: No other measures of principals' professional practice have been shown to be related to principals' effects on student achievement. However, in both pilot years, variation in scores was limited, with most school leaders scoring in the upper third of the rating scale. As the FFL is implemented statewide, continued examination of evidence on its statistical properties, especially the variation in scores, is important.
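Cronbach's alpha, the internal-consistency statistic the study computed for the FFL and its categories, can be sketched as follows. The four-item structure and the rating data are invented for illustration and are not the FFL's actual items or scores.

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency statistic
# computed for the FFL. Items and ratings below are synthetic.

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across respondents.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals),
    using sample variance (n - 1 denominator)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Four hypothetical leadership-practice items rated for five school leaders.
ratings = [
    [3, 4, 2, 4, 3],
    [3, 3, 2, 4, 3],
    [2, 4, 2, 3, 3],
    [3, 4, 1, 4, 2],
]
alpha = cronbach_alpha(ratings)
```

Values near 1 indicate that the items move together across respondents, which is the sense in which the study describes the FFL as having good internal consistency.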
Page 2 of 3