April is National Bilingual/Multilingual Learner Advocacy Month! In this guest blog, Dr. Ryan Williams, principal researcher at the American Institutes for Research, describes his IES-funded project focused on identifying factors that help explain variation in the effects programs have on English learner student outcomes using a broad systematic review and meta-analysis.
Over the past two decades, empirical research on programs that support English language and multilingual learners has surged. Many of the programs that researchers have studied are designed to support English literacy development and are tailored to the unique needs of English learners. Other programs are more general, but researchers often study program impacts on English learners in addition to impacts on a broader population of students. Relatively few attempts have been made to identify common findings across this literature. Even fewer attempts have been made to identify meaningful sources of variation that drive program impacts for English learner students—that is, understanding what works, for whom, and under what conditions. To help provide educators and policymakers with answers to those important questions, we conducted a systematic review and meta-analysis of the effectiveness of programs and strategies that may support English learner students.
Our Systematic Review Process
We conducted a broad search that combed through electronic databases, unpublished ‘grey’ literature (for example, working papers, conference presentations, or research briefs), and sources that required hand-searching, such as organizational websites. After documenting our primary decision-making factors in a review protocol, we applied a set of rigorous criteria to select studies for inclusion in the meta-analysis. We ultimately identified 83 studies that met our inclusion criteria. All 83 were randomized field studies that included English learner students in grades PK-12 and measured student academic learning outcomes such as English literacy, mathematics, science, and social studies. Each included study was systematically coded to capture characteristics of the research methods, students and schools, settings, programs, outcome measures, and, importantly, the program impacts that the studies reported. We then conducted a meta-analysis to understand the relationships between the characteristics we coded and the program impacts.
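The pooling step at the heart of a meta-analysis like this one can be illustrated with a short sketch. The snippet below is not the project’s actual analysis code; it implements a standard DerSimonian-Laird random-effects model on made-up effect sizes (standardized mean differences) to show how study-level impacts and their variances are combined into a single pooled estimate with between-study heterogeneity.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect (inverse-variance) weights
    fixed_mean = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_mean) ** 2)      # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # estimated between-study variance
    w_star = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical standardized mean differences and their sampling variances
g = [0.25, 0.10, 0.40, 0.05, 0.30]
v = [0.02, 0.01, 0.03, 0.015, 0.025]
mean, se, tau2 = random_effects_pool(g, v)
print(f"pooled g = {mean:.3f} (SE = {se:.3f}), tau^2 = {tau2:.4f}")
```

In practice, work like this is typically done with dedicated tooling (for example, the R `metafor` package), which also supports the moderator analyses needed to explain variation in impacts.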
We are still finalizing our analyses; however, our initial analyses revealed several interesting findings.
- Programs that included support for students to develop their first language skills tended to produce larger improvements in student learning. This is consistent with prior research suggesting that supporting first language development can lead to improved learning in core content areas. However, the initial findings from this meta-analysis build on that research by providing empirical evidence across a large number of rigorous studies.
- There are some particularly promising practices for educators serving English learner students. These promising practices include the use of content differentiation, the use of translation in a student’s first language, and a focus on writing. Content differentiation aligns with best practices for teaching English learners, which emphasize the importance of providing instruction that is tailored to language proficiency levels and academic needs. The use of first language translation can be helpful for English learner students, as it can support their ability to access and comprehend academic content while they are still building their English proficiency. Focusing on writing can also be particularly important for English learners, as writing is often the last domain of language proficiency for students to develop. Our preliminary findings that English learner writing skills are responsive when targeted by instructional programs may hold implications for how to focus support for students who are nearing but not yet reaching English proficiency.
- The type of test used to measure program impact was related to the size of the impact on student learning that studies found. Specifically, we found that it is reasonable to expect smaller program impacts on state standardized tests and larger impacts on other types of tests. This is consistent with findings from prior meta-analyses based on more general student populations, and it demonstrates that the same pattern holds when studying program impacts for English learner students. Statewide standardized tests are typically designed to cover a broad range of state content standards and thus may not reflect improvements in the more specific areas of student learning targeted by a given program. On the other hand, researcher-developed tests may align too closely with a program and may not reflect broader, policy-relevant changes in learning. Our initial evidence suggests that to understand program impacts for English learner students—or any group of students—we may want to use established, validated assessments rather than relying only on statewide standardized tests.
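A moderator comparison of the kind described above can also be sketched briefly. The numbers below are entirely hypothetical and are not the study’s data; the snippet pools effect sizes separately for state standardized tests and for other measures using inverse-variance weights, then applies a simple z-test for the difference between the two subgroup means.

```python
import numpy as np

def pool_fixed(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sum(w)

# Hypothetical effect sizes grouped by type of outcome measure
state_g, state_v = [0.08, 0.12, 0.05], [0.01, 0.02, 0.015]    # state standardized tests
other_g, other_v = [0.35, 0.28, 0.42], [0.02, 0.025, 0.03]    # other assessments

m_state, v_state = pool_fixed(state_g, state_v)
m_other, v_other = pool_fixed(other_g, other_v)

# z-test for the difference between subgroup means: a minimal moderator test
diff = m_other - m_state
z = diff / np.sqrt(v_state + v_other)
print(f"state tests: g = {m_state:.3f}; other tests: g = {m_other:.3f}; z = {z:.2f}")
```

A full analysis would instead fit a meta-regression with the measure type as a moderator and account for between-study heterogeneity, but the subgroup contrast above conveys the basic logic.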
In terms of next steps, we will complete the meta-analysis work this summer and focus on disseminating the findings through multiple avenues, including a journal publication, review summaries on the AIR website, and future conference proceedings. In addition, we are working to deepen our understanding of the relationships identified in this study and explore promising avenues for practice and future research.
If you’d like to continue learning and see the results of this study, please check back at AIR’s Methods of Synthesis and Integration Center project page.
This blog was produced by Helyn Kim (Helyn.Kim@ed.gov), program officer for the English Learners portfolio, NCER.