Search Results: (1-12 of 12 records)
|REL 2018290||Impact of providing information to parents in Texas about the role of Algebra II in college admission
This study examines the impact of providing parents with an informational brochure about the role of algebra II in college access on students’ grade 11 algebra II completion rates in Texas. One hundred nine schools, covering all 20 Educational Service Center regions in Texas, participated in the study. Parents in the 54 treatment schools were mailed brochures containing information about the role of algebra II in college access and success, as well as information about the new high school graduation options, while parents in the 55 control schools received brochures only about changes in the high school graduation requirements. The study used data from the Texas Education Agency’s Public Education Information Management System, statewide assessment files, and Texas Academic Performance Report files. A multilevel regression model was used to compare algebra II completion rates during grade 11 for students in participating schools that received information about the role of algebra II in college access and students in participating schools that received the alternate brochure. Interaction terms were included in the model to test for differential impacts for high-minority or low-income schools. The study found no statistically significant differences in algebra II completion rates during grade 11 between students in treatment and control schools. However, the study did find a statistically significant interaction between school-level treatment and low-income status. While the estimated impacts of the treatment were not statistically significant for students in low-income schools or for students in non-low-income schools, the interaction suggested a less positive impact in the low-income schools. Additional research could help explain this difference.
If parents and guardians of students in schools with and without high percentages of low-income students do respond differently to the two types of brochures, this could help TEA to better design and target informational materials for parents and guardians.
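The treatment-by-income interaction described above can be illustrated with a simple cell-means calculation; a minimal sketch in Python, using invented completion rates rather than the study's data:

```python
# Hypothetical grade 11 algebra II completion rates by study arm and
# school income status (invented numbers, not the study's data).
rates = {
    ("treatment", "low_income"): 0.62,
    ("control", "low_income"): 0.65,
    ("treatment", "non_low_income"): 0.71,
    ("control", "non_low_income"): 0.68,
}

def treatment_effect(rates, income):
    """Difference in completion rates, treatment minus control."""
    return rates[("treatment", income)] - rates[("control", income)]

def interaction(rates):
    """Treatment-by-income interaction: how much the treatment effect
    differs between low-income and non-low-income schools."""
    return treatment_effect(rates, "low_income") - treatment_effect(rates, "non_low_income")

# A negative value mirrors the reported pattern: a less positive
# impact in low-income schools.
print(round(interaction(rates), 2))
```

In the study's multilevel model the interaction is a fitted coefficient rather than a simple difference of differences, but the quantity it estimates is the same contrast computed here.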
|REL 2018285||Impact of a checklist on principal–teacher feedback conferences following classroom observations
In partnership with the New Mexico Public Education Department, Regional Educational Laboratory (REL) Southwest researchers conducted a statewide experiment in school year 2015/16 to test impacts of a checklist on the feedback conferences principals had with teachers after formal classroom observations. Of the 336 participating schools in New Mexico, the REL Southwest researchers selected half at random in fall 2015 as the treatment group. All school leaders in the treatment group received the checklist as an email attachment, plus a hyperlink to a three-minute principal testimonial video. School leaders in the control group received an email attachment with a guide that reprised the five tips about feedback included in the mandatory New Mexico Public Education Department-sponsored professional development. One year later, the checklist had few clear impacts on the quality of feedback, professional development outcomes, instructional practice, or student achievement. The exceptions were that teachers in treatment schools reported that their principals were less likely to dominate the feedback conferences and that they themselves were more likely to follow their principals’ professional development recommendations. Overall usage of the feedback checklist was moderate: about three-quarters of principals who were encouraged to use the checklist reported that they saw it, and 58 percent reported using it in post-observation feedback sessions with at least a few teachers. This study suggests that if school districts or state departments of education wish to change school leaders’ feedback conferences with teachers, they need to invest in more substantial training for their school leaders.
|REL 2017272||Getting students on track for graduation: Impacts of the Early Warning Intervention and Monitoring System after one year
Early warning systems that use research-based warning signs to identify students at risk of dropping out of high school have emerged as one strategy for improving graduation rates. This study tested the impact of one early warning system, the Early Warning Intervention and Monitoring System (EWIMS), on 37,671 students in grades 9 and 10 and their schools after one year of implementation. Seventy-three high schools were randomly assigned to implement EWIMS during the 2014/15 school year or to continue their usual practices for identifying and supporting students at risk of not graduating on time. Impact findings show that EWIMS reduced the percentage of students with risk indicators related to chronic absence and course failure but not related to low grade point averages, suspensions, or insufficient credits to graduate. At the school level, EWIMS did not have a detectable impact on school data culture, that is, the ways in which schools use data to make decisions and identify students in need of additional support. Findings suggest that overall implementation of the EWIMS seven-step process was low in nearly all EWIMS schools, and that implementation of EWIMS was challenging for participating schools. The authors hypothesize that other school-level processes, unmeasured in this study, also may have contributed to impacts on students. For example, effects might have emerged for chronic absence and course failure if schools prioritized encouraging students to show up and participate in their courses, even if they did not have a sophisticated set of interventions. Further research is needed to better understand the mechanisms through which EWIMS had an impact on chronic absence and course failure. 
This report provides rigorous, initial evidence that even with limited implementation during the first year of adoption, use of a comprehensive early warning system such as EWIMS can reduce the percentage of students who are chronically absent or who fail one or more courses. These short-term results are promising because chronic absence and course failures in grades 9 and 10 are two key indicators that students are off track for graduation.
|REL 2017256||Impact of the Developing Mathematical Ideas professional development program on grade 4 students' and teachers' understanding of fractions
The purpose of this study was to assess the impact of the Developing Mathematical Ideas (DMI) professional development program on grade 4 teachers' in-depth knowledge of fractions as well as their students' understanding and proficiency with fractions. The study was conducted during the 2014/15 school year. A total of 84 schools from eight school districts in three states (Florida, Georgia, and South Carolina) agreed to participate. Participants included 264 grade 4 teachers and their 4,204 students. The study used the "gold standard" methodology involving random assignment of schools to either DMI or the control condition. Teachers in the DMI condition participated in 24 hours of professional development on fractions during fall 2014. They attended eight 3-hour sessions conducted over four days (two 3-hour sessions per day; one day per month). DMI did not demonstrate any impact on student knowledge of fractions. Students of DMI teachers performed at almost the same level as those taught by control teachers; the difference was not statistically significant. The impact of DMI on teachers’ knowledge of fractions was inconclusive. DMI teachers performed slightly better than teachers who did not participate in DMI, but the result was not statistically significant. It was, however, close to the threshold of statistical significance (p = .051).
|REL 2017225||Impacts of the Retired Mentors for New Teachers program
This study evaluates the impact of the Retired Mentors for New Teachers Program, a two-year intervention at the elementary-school level. The program pairs recently retired master educators with probationary teachers in high-need schools. These retired educators provide the teachers with weekly support over two years that includes tailored in-class observations, coaching, and mentoring. The study used a randomized controlled trial approach to assess the program’s impact on student learning in reading and math, on teacher turnover, and on teacher evaluation ratings. To assist education leaders interested in replicating the program, the study also gathered detailed data on the program’s cost to the school district and return on investment over time. Key findings include that students of teachers collaborating with retired mentors demonstrated a significant improvement in math achievement equivalent to one month’s worth of added instructional time. At an annual local cost of $171 per student, the positive impacts on math achievement produce a return on investment that can pay back the program’s cost more than 15 times over, through increased student earnings over time.
|REL 2017244||Quality improvement efforts among early childhood education programs participating in Iowa’s Quality Rating System
The purpose of this study was to examine the use and outcomes of quality improvement activities among early childhood education programs participating in the Iowa Quality Rating System (Iowa QRS). The study summarized survey responses from 388 program administrators, describing how staff of programs in Iowa QRS participate in quality improvement activities such as training, coaching, and continuing education. The study also used logistic regression analysis to examine the relationship between quality improvement activities and increases in Iowa QRS ratings, in a subset of 146 programs that received Iowa QRS ratings at two different points in time. Survey responses indicated that almost all programs had staff participate in trainings and a majority of programs offered coaching, but participation in continuing education was less common. The most common topic of professional development was health and safety practices, followed by child development and classroom practices. Analysis results found that Iowa QRS ratings tend to increase across time, and programs that provide key staff with 15 or more training hours per year are more likely to increase ratings over time than programs that do not. The results also suggest that topics covered in professional development are important, with both positive and negative relationships observed between different professional development topics and rating outcomes. The study findings can help Iowa QRS administrators plan and allocate resources to support programs' quality improvement efforts. The findings also can help administrators in other states better understand the types of quality improvement activities to which programs are drawn naturally, as well as factors that may facilitate or impede programs' pursuit of quality.
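The logistic regression relationship described above (15 or more annual training hours predicting a rating increase) can be sketched with a 2x2 odds ratio, since a logistic regression with a single binary predictor fits the log of that odds ratio; the counts below are invented for illustration and are not the study's data:

```python
import math

# Hypothetical 2x2 table (invented counts, not the study's data):
#                          rating increased   did not increase
# >= 15 training hrs/yr           40                 30
# <  15 training hrs/yr           25                 51

def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a / b) / (c / d)

oratio = odds_ratio(40, 30, 25, 51)
# In a logistic regression with one binary predictor, the fitted
# coefficient equals the log of this odds ratio.
log_odds = math.log(oratio)
print(round(oratio, 2), round(log_odds, 2))
```

An odds ratio above 1 here corresponds to the reported direction: programs providing at least 15 training hours were more likely to increase their rating.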
|REL 2017241||Impacts of Ramp-Up to Readiness™ after one year of implementation
This study examined whether the Ramp-Up to Readiness program (Ramp-Up) produced impacts on high school students' college enrollment actions and personal college readiness following one year of program implementation. The study also looked at Ramp-Up's impact on more immediate outcomes, such as the emphasis placed on college readiness and the number of college-related teacher-student interactions taking place in high schools. The impacts were studied in context by assessing the degree to which schools were implementing Ramp-Up to the developer's satisfaction. Forty-nine Minnesota and Wisconsin high schools were randomly assigned to one of two groups: (1) the Ramp-Up group that would implement the program during the 2014–15 school year (25 schools), or (2) the comparison group that would implement Ramp-Up the following school year, 2015–16 (24 schools). The researchers collected data from students and school staff during the fall of 2014, before program implementation, and during the spring of 2015, after one year of implementation. The study team administered surveys to staff, surveys to students in grades 10–12, and the commitment to college and goal striving scales from ACT's ENGAGE instrument. Researchers also obtained extant student-level data from the high schools and school-level data from their respective state education agencies. The outcomes of most interest were students' submission of the Free Application for Federal Student Aid (FAFSA) and their scores on the two ENGAGE scales. Data indicated that following a single year of implementation, Ramp-Up had no impact on grade 12 students' submission rates for the FAFSA or on the commitment to college and goal striving of students in grades 10–12. However, the program did produce a greater emphasis on college readiness and more student-teacher interactions related to college.
Implementation data showed mixed results: on average, Ramp-Up schools implemented the program with adequate fidelity, but some schools struggled with implementation and 88 percent of schools did not adequately implement the planning tools component of the program. Schools implementing Ramp-Up demonstrated a greater emphasis on college readiness than comparison schools, but a single year of program exposure is insufficient to produce greater college readiness among students or FAFSA submissions among grade 12 students. Schools that adopt Ramp-Up can implement the program as intended by the program developer, but some program components are more challenging to implement than others. Additional studies need to examine Ramp-Up's impact on students' college enrollment actions, their college admission rates, and their success in college following multiple years of program exposure. Studies also should investigate whether implementation gets stronger in subsequent years as schools gain more experience with Ramp-Up's curriculum and processes.
|REL 2017251||The relative effectiveness of two approaches to early literacy intervention in grades K-2
This study examined whether using a stand-alone intervention outside the core curriculum leads to better outcomes than using the embedded curriculum for small group intervention in grades K-2. Fifty-five schools located across Florida were randomly assigned to stand-alone or embedded interventions delivered daily throughout the school year for 45 minutes in small groups of four or five students. Students below the 30th percentile in reading-related skills and/or vocabulary were eligible for intervention. One-third of participating students were English language learners. Both interventions were implemented with high fidelity. The stand-alone intervention significantly improved grade 2 spelling. However, impacts on other student outcomes were comparable. On average, students showed improvement in reading and language skills in both interventions. The two interventions had relatively similar impacts on reading and language outcomes among English learners and non-English learners, with the exception of some reading outcomes in kindergarten.
|REL 2017252||A Randomized Experiment Using Absenteeism Information to "Nudge" Attendance
Can a single postcard sent to guardians help reduce student absenteeism? This randomized controlled trial, conducted in collaboration with the School District of Philadelphia, shows that: (a) a single mail piece that encouraged guardians to improve their student's attendance reduced absences by roughly 2.4 percentage points; (b) there were no statistically significant differences between two types of messages in reducing student absences; and (c) the effect of the single mailing did not differ for students in grades K–8 versus students in grades 9–12.
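A reduction like the reported 2.4 percentage points is typically assessed with a two-proportion test; a minimal sketch using the normal approximation, with invented absence rates and sample sizes (only the 2.4-point gap echoes the summary above):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference in proportions (normal approximation)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal CDF
    return z, p_value

# Invented control vs. treatment absence rates and sample sizes;
# only the 2.4 percentage point gap (0.174 - 0.150) is from the summary.
z, p = two_proportion_z(0.174, 14000, 0.150, 14000)
print(round(z, 2), p < 0.01)
```

With large district-scale samples, even a modest gap in proportions like this one yields a large z statistic, which is why small per-student nudge effects can still be statistically detectable.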
|REL 2017210||Short-term impacts of student listening circles on student perceptions of school climate and of their own competencies
The primary purpose of this study was to examine the short-term impacts of participation in a student voice facilitation strategy—a Student Listening Circle (SLC)—on student perceptions of their input into decisionmaking at school, their relationships with school staff and peers, school bonding, their competencies for improving the school, and academic self-efficacy. The study also examined adult participants' perceptions before and after the SLC and describes how SLCs are conducted in study schools. To investigate impacts of SLC participation on students, 90 of the 256 students who volunteered to participate in the study were randomly assigned to participate in the SLC (treatment group). The remaining 166 students did not participate directly in the SLC (control group). The study took place in 9 schools in California, with random assignment conducted within each school. Both groups of students completed surveys to ascertain perceptions of school climate and personal competencies 1 week before SLC implementation, 1 week after SLC implementation, and 12 weeks after SLC implementation. SLC impacts were estimated by comparing survey responses between the treatment group and the control group at 1 week and at 12 weeks after SLC implementation. The secondary component of the study used staff surveys to assess changes in adult SLC participants' perceptions of school supports and of student competencies after the SLC, and interviews to assess their perceptions of practices implemented as a result of the SLC. The main experimental results of the study found no discernible effects of the SLC on student participants' perceptions of school climate and personal competencies. Descriptive results indicated that participating school staff reported greater average perceptions of students' abilities to contribute to school improvement, trust in students, and perceptions of student opportunities for meaningful participation at school after the SLC than they did before the SLC. 
Moreover, schools that implemented SLCs followed through with most action steps generated during the SLCs and implemented multiple school improvement practices to address themes suggested during the SLC. The fact that this study found no short-term impacts of SLC participation on students' perceptions of school climate or of their competencies does not necessarily mean that there is no value in implementing SLCs. SLCs are intended to produce improvements in the overall school environment, including on such factors as school-wide governance and perceptions of adults and students who do not participate in the SLC. It is possible for the SLC to have no discernible impacts on student participants but still have impacts on school climate.
|REL 2017204||Scaling academic planning in community college: A randomized controlled trial
Community college students often lack an academic plan to guide their choices of coursework to achieve their educational goals, in part because counseling departments typically lack the capacity to advise students at scale. This randomized controlled trial tests the impact of guaranteed access to one of two alternative counseling sessions (group workshops or one-on-one counseling), each of which was combined with targeted “nudging.” Outcome measures included scheduling and attending the counseling session, completing an academic plan, and re-enrolling in the following semester. Evidence suggests that both variations on the intervention increase academic plan completion rates by over 20 percentage points compared to a control group that did not receive guaranteed access to a counseling session or the automated nudges. Exploratory evidence suggests that when combined with nudging, the guarantee of workshop counseling is as effective as the guarantee of one-on-one counseling in causing students to schedule and attend academic planning appointments.
|REL 2015096||The Effects of the Elevate Math Summer Program on Math Achievement and Algebra Readiness
This randomized trial examined the effects of the Elevate Math summer program on math achievement and algebra readiness, as well as math interest and self-efficacy, among rising 8th grade students in California's Silicon Valley. The Elevate Math summer program targets students who score in the range between "high basic" and "low proficient" on state math tests. It consists of 19 days of mathematics instruction: three hours per day of traditional classroom instruction and one hour per day using Khan Academy (a free online learning system).
During summer 2014, students were randomly assigned to a treatment group that received access to the program at the beginning of the summer or to a control group that received access to the program later in the summer. End-of-program test scores and survey responses of students in the treatment group were compared with those of students in the control group prior to their exposure to the program. Treatment group students scored significantly higher than the control group (4 points or 0.7 standard deviation) on a test of algebra readiness. They were also significantly more likely (29 percent versus 12 percent) to reach achievement thresholds associated with success in algebra I. However, treatment and control groups did not show significant differences in terms of math interest or self-efficacy.
The results show that the Elevate Math summer program can significantly improve student math achievement and algebra readiness; however, 70 percent of program participants were still not ready for algebra I content. This suggests that summer math programs such as Elevate Math may be important tools for improving math achievement among rising eighth grade students, but most targeted students will need additional support to succeed in algebra.
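The "4 points or 0.7 standard deviation" effect reported above is a standardized mean difference; a minimal sketch of how such a figure is computed, using hypothetical group means whose 4-point gap and the pooled standard deviation implied by the report's numbers are the only inputs taken from the summary:

```python
def standardized_effect(mean_t, mean_c, pooled_sd):
    """Standardized mean difference (treatment minus control, in SD units)."""
    return (mean_t - mean_c) / pooled_sd

# The summary pairs a 4-point difference with roughly 0.7 standard
# deviations, implying a pooled SD of about 4 / 0.7. The means below
# are hypothetical; only their 4-point gap comes from the summary.
implied_sd = 4 / 0.7
effect = standardized_effect(24.0, 20.0, implied_sd)
print(round(effect, 2))
```

Expressing impacts in standard deviation units lets effects measured on different tests, such as the algebra readiness measure here, be compared across studies.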