IES Blog

Institute of Education Sciences

Observations Matter: Listening to and Learning from English Learners in Secondary Mathematics Classrooms

April is National Bilingual/Multilingual Learner Advocacy Month and Mathematics and Statistics Awareness Month. We asked Drs. Haiwen Chu and Leslie Hamburger, secondary mathematics researchers at the IES-funded National Research & Development Center to Improve Education for Secondary English Learners (EL R&D Center), to share how classroom observations are critical to analyzing and improving learning opportunities for English learners.

Could you tell us about your IES-funded project?

Haiwen: As part of the EL R&D Center portfolio of work, we developed RAMPUP, or Reimagining and Amplifying Mathematics Participation, Understanding, and Practices. RAMPUP is a summer bridge course for rising ninth graders. The three-week course is designed to challenge and support English learners to learn ambitious mathematics and generative language simultaneously. We will conduct a pilot study during summer 2024, with preliminary findings expected in fall 2024.

 

What motivated you to do this work?

Haiwen: English learners are frequently denied opportunities to engage in conceptually rich mathematics learning. We want to transform these patterns of low challenge and low support by offering a summer enrichment course that focuses on cross-cutting concepts uniting algebra, geometry, and statistics. We also designed active and engaged participation to be central to the development of ideas and practices in mathematics. English learners learn by talking and interacting with one another in ways that are both sustained and reciprocal.

Leslie: In addition, we wanted to offer broader approaches to developing language with English learners. As we have refined the summer program, we have explicitly built in meaningful opportunities for English learners to grow in their ability to describe, argue, and explain critical mathematics concepts in English. This language development happens simultaneously with the development of conceptual understanding.

What have you observed among English learners so far in RAMPUP study classrooms?

Leslie: Over the past two summers, I have observed RAMPUP in two districts for two weeks total. The classrooms reflect America’s wide diversity, including refugee newcomers and students who were entirely educated in the United States. I was able to see both teachers facilitating and students learning. I observed how students developed diverse approaches to solving problems.

Through talk, students built upon each other’s ideas, offered details, and expanded descriptions of data distributions. Over time, their descriptions of data became more precise, as they attended to similarities and differences and developed labels. I also observed how teachers assisted students by giving hints without telling them what to do.

Haiwen: As we observed, we wanted to understand how English learners engaged in the activities we had designed, as well as how their conceptual understandings and language developed simultaneously. I have spent two summers immersed in three districts over seven weeks with diverse students as they developed relationships, deep understandings, and language practices.

I was honestly surprised by the complex relationships between how students wrote and the development of their ideas and language. Sometimes, students wrote to collect their thoughts, which they then shared orally with others, to collectively compose a common way to describe a pattern. Other times, writing was a way to reflect and give each other feedback on what was working well and how peers could improve their work. Writing was also multi-representational as students incorporated diagrams, tables, and other representations as they wrote.

From closely observing students as they wrote, I also gained valuable insight into how they think. For example, they often looked back at their past work and then went on to write, stretching their understanding.

Why are your observations important to your project?

Haiwen: RAMPUP is an iterative design and development project: our observations were driven by descriptive questions (how students learned) and improvement questions (how to refine activities and materials). By observing each summer what worked well for students, and what fell flat, we have been able to iteratively improve the flow and sequencing of activities.

We have learned that observations matter most when they directly inform broader, ongoing efforts at quality learning.

Now, in our final phase, we are working to incorporate educative examples of what quality interactions looked and sounded like to enhance the teacher materials. Beyond the shorter episodes confined within a class period, we are also describing patterns of growth over time, including vignettes and portfolios of sample student work.

Leslie: Indeed, I think that wisdom comes both from practice and from looking back on practice. Our observations will enable teachers to better anticipate what approaches their students might take. Our educative materials will offer teachers a variety of real-life approaches that actual students similar to their own may take. This deep pedagogical knowledge includes knowing when, if, and how to intervene to give just-right hints.

We will also soon finalize choices for how teachers can introduce activities, give instructions, and model processes. Having observed marvelous teaching moves, such as when a teacher created a literal “fishbowl” to model an activity (gathering students around a focal group to observe their talk and annotations), I am convinced we will be able to provide teachers with purposeful, flexible, and powerful choices for implementing RAMPUP with quality and excellence.


To access research-based tools developed by the National Research & Development Center to Improve Education for Secondary English Learners to help teachers design deeper and more meaningful mathematics learning for all students, particularly those still learning English, see How to Engage English Learners in Mathematics: Q&A with Dr. Haiwen Chu.

To receive regular updates and findings from the Center, as well as webinar and conference opportunities, subscribe to the Where the Evidence Leads newsletter.

This blog was produced by Helyn Kim (Helyn.Kim@ed.gov), program officer for the Policies, Practices, and Programs to Support English Learners portfolio at NCER.

How IES-Funded Research Infrastructure is Supporting Math Education Research

Every April, we observe Mathematics and Statistics Awareness Month to increase public understanding of math and stats and to celebrate the unique role they play in solving critical real-world problems. In that spirit, we want to share some exciting progress that SEERNet has made in supporting math education research over the past three years.

In 2021, IES established SEERNet, a network of platform developers, researchers, and education stakeholders, to create and expand the capacity of digital learning platforms (DLPs) to enable equity-focused and rigorous education research at scale. Since then, SEERNet has made significant progress, and we are starting to see examples of how researchers can use this new research infrastructure.

Recently, IES held two rounds of a competition to identify research teams to join SEERNet and conduct a study or series of studies using one of the five DLPs in the network. Two research teams joined the network from the first round, and the second round of applications is now under review. We want to highlight the two research teams that joined SEERNet and the important questions about math education that they are addressing.

  • Now I See It: Supporting Flexible Problem Solving in Mathematics through Perceptual Scaffolding in ASSISTments – Dr. Avery Closser and her team are working with the E-Trials/ASSISTments team. ASSISTments is a free tool to support math learning that has been used by over 1 million students and 30,000 teachers across the nation; IES has supported its development and efficacy research since 2003. E-Trials is the tool that researchers can use to develop studies to be implemented within ASSISTments. The research team’s studies are designed to test whether perceptual scaffolding in mathematics notation (for example, using color to highlight key terms such as the inverse operators in an expression) leads learners to pause, notice structural patterns, and ultimately practice more flexible and efficient problem solving. This project will yield evidence on how, when, and for whom perceptual scaffolding works, informing classroom practice as well as the development of materials for digital learning platforms.
  • Investigating the Impact of Metacognitive Supports on Students' Mathematics Knowledge and Motivation in MATHia – Dr. Cristina Zepeda and her team are working with the UpGrade/MATHia team. MATHia is an adaptive software program used in middle and high schools across the country. UpGrade is an open-source A/B testing platform that facilitates randomized experiments within educational software, including MATHia. The research team will conduct a series of studies focused on supporting students’ metacognitive skills, which are essential for learning in mathematics but not typically integrated into instruction. The studies will seek to identify supports that can be implemented during mathematics learning in MATHia to improve metacognition, mathematics knowledge, and motivation in middle school.

Both research teams are conducting studies that will have clear implications for curriculum design within DLPs focused on math instruction for K-12 students. The value of conducting these studies through existing DLPs, rather than through individual researcher-designed tools and methods, includes the following:

  1. Time and cost savings – Without the need to create materials from scratch, the research teams can immediately get to work on the specific instructional features they intend to test. Additionally, since the intervention and pre/post assessments can be administered through the online tool, the need to travel to study sites is reduced.
  2. Access to large sample sizes – Studies like the ones described above are frequently administered in laboratory settings or in a handful of schools. Since over 100,000 students use these DLPs, there is the potential to recruit a larger and more diverse sample of students for studies. This provides more opportunities to study what works for whom under what conditions.
  3. Tighter feedback loops between developers and researchers – Because the research teams need to work directly with the platform developers to administer their studies, the studies need to be designed in ways that will work within the platform and with the platform content. This ensures their relevance to the platform and means that the platform developers will be knowledgeable about what is being tested. They will be interested to hear the study’s findings and likely to use that information to inform future design decisions.

We look forward to seeing how other education researchers take advantage of this new research infrastructure. For math education researchers in particular, we hope these two example projects inspire you to consider how you might use a DLP in the future to address critical questions for math education.


This blog was written by Erin Higgins (Erin.Higgins@ed.gov), Program Officer, Accelerate, Transform, Scale Initiative.

 

Going beyond existing menus of statistical procedures: Bayesian multilevel modeling with Stan

For nearly 15 years, NCER has supported the development and improvement of innovative methodological and statistical tools and approaches that will better enable applied education researchers to conduct high-quality, rigorous education research. This blog spotlights the work of Andrew Gelman, a professor of statistics and political science at Columbia University, and Sophia Rabe-Hesketh, a professor of statistics at the School of Education at the University of California, Berkeley. IES has supported their research on hierarchical modeling and Bayesian computation for many years. In this interview blog, Drs. Gelman and Rabe-Hesketh reflect on how Bayesian modeling applies to educational data and describe the general principles and advantages of Bayesian analysis.

What motivates your research on hierarchical modeling and Bayesian computation?

Education data can be messy. We need to adjust for covariates in experiments and observational studies, and we need to be able to generalize from non-random, non-representative samples to populations of interest.

The general motivation for multilevel modeling is that we are interested in local parameters, such as public opinion by states, small-area disease incidence rates, individual performance in sports, school-district-level learning loss, and other quantities that vary among people, across locations, and over time. In non-Bayesian settings, the local parameters are called random effects, varying intercepts/slopes, or latent variables.
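To make the idea concrete, one standard example (illustrative notation, not tied to any particular study) is a varying-intercept regression for student i in school j:

    y_{ij} = \alpha_j + \beta x_{ij} + \epsilon_{ij}, \qquad \alpha_j \sim \mathrm{N}(\mu_\alpha, \sigma_\alpha^2), \qquad \epsilon_{ij} \sim \mathrm{N}(0, \sigma_y^2)

Here the school-level intercepts \alpha_j are the local parameters, and the hyperparameters \mu_\alpha and \sigma_\alpha govern how much schools vary and how strongly each school's estimate is pooled toward the overall mean.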

Bayesian and non-Bayesian models differ in how completely the researcher using them must specify the probability distributions of the parameters. In non-Bayesian models, typically only the data model (also called the likelihood function) must be specified. The underlying parameters, such as the variances of random intercepts, are treated as unknown constants. On the other hand, the Bayesian approach requires specifying a full probability model for all parameters.  

A researcher using Bayesian inference encodes additional assumptions about all parameters into prior distributions, then combines information about the parameters from the data model with information from the prior distributions. This results in a posterior distribution for each parameter, which, compared to non-Bayesian model results, provides more information about the appropriateness of the model and supports more complex inferences.
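In symbols, this combination is Bayes' rule:

    p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta)

where p(y \mid \theta) is the data model (the likelihood), p(\theta) is the prior distribution, and the left-hand side p(\theta \mid y) is the posterior distribution of the parameters \theta given the observed data y.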

What advantages are there to the Bayesian approach?

Compared to other estimates, Bayesian estimates are based on many more assumptions. One advantage of this is greater stability at small sample sizes. Another advantage is that Bayesian modeling can be used to produce flexible, practice-relevant summaries from a fitted model that other approaches cannot produce. For instance, when modeling school effectiveness, researchers using the Bayesian approach can rely on the full probability model to justifiably obtain the rankings of schools or the probabilities that COVID-related declines in NAEP mean test scores for a district or state have exceeded three points, along with estimates for the variability of these summaries. 
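Once a model is fit, summaries like these are simply functions of the posterior simulation draws. As a minimal Python sketch (the draws below are simulated stand-ins; in a real analysis they would be extracted from the fitted model), the probability that a district's decline exceeds three points is just the share of posterior draws above three:

    import numpy as np

    # Hypothetical posterior draws of a district's score decline (in NAEP scale points).
    # These are simulated stand-ins for illustration only; real draws would come from
    # the fitted Bayesian multilevel model.
    rng = np.random.default_rng(0)
    decline_draws = rng.normal(loc=2.5, scale=1.0, size=4000)

    # Posterior probability that the decline exceeds three points
    prob_exceeds_3 = (decline_draws > 3.0).mean()
    print(f"Pr(decline > 3 points) = {prob_exceeds_3:.2f}")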

Further, Bayesian inference supports generalizability and replicability by freely allowing uncertainty from multiple sources to be integrated into models. Without allowing for uncertainty, it’s difficult to understand what works for whom and why. A familiar example is predicting student grades in college courses. A regression model can be fit to obtain a forecast with uncertainty based on past data on the students, and then this can be combined with student-specific information. Uncertainties in the forecasts for individual students or groups of students will be dependent and can be captured by a joint probability model, as implemented by posterior simulations. This contrasts with likelihood-based (non-Bayesian) inference where predictions and their uncertainty are typically considered only conditionally on the model parameters, with maximum likelihood estimates plugged in. Ignoring uncertainty leads to standard error estimates that are too small on average (see this introduction to Bayesian multilevel regression for a detailed demonstration and discussion of this phenomenon).
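A minimal sketch of that contrast, again in Python with made-up numbers (the parameter draws below stand in for output from a hypothetical fitted grade-prediction model), shows how posterior predictive simulation propagates parameter uncertainty that a plug-in forecast ignores:

    import numpy as np

    rng = np.random.default_rng(1)
    n_draws = 4000

    # Stand-in posterior draws of an intercept, a slope (e.g., on prior GPA),
    # and the residual standard deviation; real draws would come from the fitted model.
    alpha = rng.normal(2.8, 0.05, n_draws)
    beta = rng.normal(0.50, 0.04, n_draws)
    sigma = np.abs(rng.normal(0.40, 0.02, n_draws))

    x_new = 3.2  # predictor value for a new student

    # Plug-in forecast: point estimates only, so parameter uncertainty is ignored
    y_plugin = alpha.mean() + beta.mean() * x_new

    # Posterior predictive draws: parameter uncertainty and residual noise both propagate
    y_pred = rng.normal(alpha + beta * x_new, sigma)

    print(f"plug-in forecast: {y_plugin:.2f}")
    print(f"posterior predictive mean {y_pred.mean():.2f}, sd {y_pred.std():.2f}")

The predictive standard deviation from the posterior draws comes out slightly larger than the residual standard deviation alone because it also reflects uncertainty about the intercept and slope.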

What’s an important disadvantage to the Bayesian approach?

Specifying a Bayesian model requires the user to make more decisions than specifying a non-Bayesian model. Until recently, many of these decisions had to be implemented using custom programming, so the Bayesian approach had a steep learning curve. Users who were not up to the programming and debugging task had to work within some restricted class of models that had already been set up with existing software. 

This disadvantage is especially challenging in education research, where we often need to adapt and expand our models beyond a restricted class to deal with statistical challenges such as imperfect treatment assignments, nonlinear relations, spatial correlations, and mixtures, along with data issues such as missingness, students changing schools, guessing on tests, and predictors measured with error.

How did your IES-funded work address this disadvantage?

In 2011, we developed Stan, our open-source Bayesian software, with funding from a Department of Energy grant on large-scale computing. With additional support from the National Science Foundation and IES, we have developed model types, workflows, and case studies for education researchers and also improved Stan’s computational efficiency.

By combining a state-of-the-art inference engine with an expressive modeling language, Stan allows education researchers to build their own models, starting with basic linear and logistic regressions and then adding components of variation and uncertainty and expanding as needed to capture challenges that arise in applied problems at hand.  We recommend the use of Stan as part of a Bayesian workflow of model building, checking, and expansion, making use of graphs of data and fitted models.

Stan can be accessed using R, Python, Stata, Julia, and other software. We recommend getting started by looking at the Stan case studies. We also have a page on Stan for education research and a YouTube channel.
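As one illustration of that workflow, here is a minimal sketch of fitting a varying-intercept model from Python through the CmdStanPy interface. The variable names, weakly informative priors, and simulated data are purely illustrative, not taken from any particular study:

    import numpy as np
    from pathlib import Path
    from cmdstanpy import CmdStanModel

    # A basic varying-intercept model: students nested in schools.
    stan_code = """
    data {
      int<lower=1> N;                          // students
      int<lower=1> J;                          // schools
      array[N] int<lower=1, upper=J> school;   // school index for each student
      vector[N] x;                             // student-level predictor
      vector[N] y;                             // outcome
    }
    parameters {
      real mu_alpha;                           // average school intercept
      real<lower=0> sigma_alpha;               // between-school variation
      vector[J] alpha;                         // school-level ("local") intercepts
      real beta;
      real<lower=0> sigma_y;
    }
    model {
      mu_alpha ~ normal(0, 5);                 // weakly informative priors
      sigma_alpha ~ normal(0, 5);
      beta ~ normal(0, 5);
      sigma_y ~ normal(0, 5);
      alpha ~ normal(mu_alpha, sigma_alpha);
      y ~ normal(alpha[school] + beta * x, sigma_y);
    }
    """

    # Simulated data standing in for a real student-level data set
    rng = np.random.default_rng(0)
    J, n_per_school = 20, 30
    school = np.repeat(np.arange(1, J + 1), n_per_school)   # Stan indexes from 1
    x = rng.normal(size=school.size)
    alpha_true = rng.normal(0.0, 0.5, size=J)
    y = alpha_true[school - 1] + 0.3 * x + rng.normal(0.0, 1.0, size=school.size)

    Path("varying_intercept.stan").write_text(stan_code)
    model = CmdStanModel(stan_file="varying_intercept.stan")
    fit = model.sample(
        data={"N": school.size, "J": J, "school": school, "x": x, "y": y},
        chains=4,
    )
    print(fit.summary())   # posterior summaries for every parameter

From here the same workflow supports expansion, for example letting slopes vary by school or adding measurement-error components, with model checking and graphical display at each step.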

In terms of dealing with the issues that arise in complex educational data, where do we stand today?

Put all this together, and we are in the business of fitting complex models in an open-ended space that goes beyond any existing menu of statistical procedures. Bayesian inference is a flexible way to fit such models, and Stan is a flexible tool that we have developed, allowing general models to be fit in reasonable time using advanced algorithms for statistical computing. As always with research, there are many loose ends and there is more work to be done, but we can now routinely fit, check, and display models of much greater generality than was previously possible, facilitating the goals of understanding processes in education.


This blog was produced by Charles Laurin (Charles.Laurin@ed.gov), NCER program officer for the Statistical and Research Methodology in Education grant program.

Supporting the Pipeline of Scholars of Color with Research, Training, and Mentorship

In recognition of Black History Month, we interviewed Dr. Tamara Bertrand Jones, an associate professor of higher education in the Department of Educational Leadership and Policy Studies at Florida State University and co-principal investigator of the Partners United for Research Pathways Oriented to Social Justice in Education (PURPOSE) program, funded by the IES Pathways to the Education Sciences Research Training Program. In this blog, Dr. Bertrand Jones discusses her experiences conducting research on the professional experiences of underrepresented populations as well as her work supporting emerging scholars of color.

How have your background and experiences shaped your research on the graduate education and professional experiences of underrepresented populations, particularly Black women, in academia?

My dissertation research centered Black perspectives on cultural competence in evaluation. For the first 10 years of my academic career, I worked as an administrator, primarily in student affairs. When I transitioned from administration to faculty, I extended my research to Black experiences in academia. I was immediately greeted by microaggressions (such as unsolicited advice) that derailed my productivity and diminished my self-confidence. For example, I was told that my research would be labeled navel-gazing (excessive self-contemplation) because I was a Black woman studying Black women and that this negative label might present challenges for my career. It took time, lots of positive self-affirmation, and validation from my mentors and close Black women colleagues to silence those voices and walk confidently in my contribution as a scholar and my personal purpose for pursuing an academic career. After these experiences, I doubled down on my commitment to demystify the hidden curriculum in the academy and support emerging scholars by being responsive to their identities, experiences, needs, and aspirations.

What is the PURPOSE training program, and what have you learned from administering PURPOSE?

We created PURPOSE to help develop more underrepresented and minoritized education researchers. To date, we have had seven cohorts of PURPOSE Fellows, totaling more than 80 fellows. The program includes critical discussions about social justice and educational inequities, mentoring, professional development, and service-learning research apprenticeships. During their training, we also encourage fellows to reflect on their own identities in terms of race, gender, and social class among other identities while they develop their individual researcher identities. These experiences culminate in capstone research projects related to social justice in education that fellows develop from inception to dissemination during the fellowship year. Taken together, these experiences foster capacities to conduct meaningful research and provide socialization into the rigors of research and graduate school.

We found that fellows experience socialization into education research in ways that help them 1) develop a researcher-identity, and 2) prepare products that demonstrate strong research potential for graduate school. Our fellows have experienced positive gains in their self-efficacy for carrying out a variety of research skills such as conducting literature reviews and working independently and in teams. We believe our approach to culturally relevant education and research methods and valuing the voices of our diverse fellows and mentors will lead to changes in future teaching and research practices.

Based on your research and experiences, what do you see as the greatest needs to improve the education and professional pathways for Black scholars?

In the more than 14 years I have been a faculty member, I have come to recognize that there are myriad ways to be successful in academia while remaining true to who you are. Through my work on early career professionals in partnership with Sisters of the Academy (SOTA), a community of Black women in higher education, I strive to create an environment where emerging scholars are exposed to scholars who represent diverse ways of being in academia. These models can shape emerging scholars’ vision of their future possible selves and help them develop their own pathways that are congruent with who they are. If institutions lack those models among their faculty, I urge leaders to intentionally connect their emerging scholars with groups or organizations like SOTA that have the expertise and access to individuals who can serve in those roles.

What advice would you give to emerging scholars from underrepresented, minoritized groups who are pursuing a career in education research?

Often, emerging scholars from underrepresented, minoritized groups are not encouraged to engage in work that speaks to their soul or can meaningfully impact the communities they serve. As in my experience, underrepresented emerging scholars are often told that doing research on our identity groups or researching issues that these groups experience is limiting, pigeonholing, and too self-reflective. Emerging Black scholars, in particular, are told they must approach their work in ways that contradict their values or diminish their self-concepts. These messages can stunt growth and hinder the ability to identify innovative solutions to education’s most pressing problems.

Because of this, I encourage all emerging scholars to consider the following reflective questions, guided by my emerging professional development framework, the 5 I’s, to help align their education research careers with how they see themselves, individually and in community.

  • Identity: How does my identity influence my research?
  • Intention: How can I create synergy between my research and scholarship, courses I teach, service I perform, and who I am as a scholar?
  • Implementation: How does my positionality influence my research design choices?
  • Influence: Who needs to know about my work? How can partnership extend the impact of my work?
  • Impact: How can my work be used to create better educational environments for marginalized or minoritized communities, or change education policy, research, or practice in meaningful ways?

This interview blog is part of a larger IES blog series on diversity, equity, inclusion, and accessibility (DEIA) in the education sciences. It was produced by Akilah Nelson (akilah.nelson@ed.gov), a program officer within the National Center for Special Education Research.

Unlocking Opportunities: Understanding Connections Between Noncredit CTE Programs and Workforce Development in Virginia

With rapid technological advances, the U.S. labor market exhibits a growing need for more frequent and ongoing skill development. Community college noncredit career and technical education (CTE) programs that allow students to complete workforce training and earn credentials play an essential role in providing workers with the skills they need to compete for jobs in high-demand fields. Yet, there is a dearth of research on these programs because noncredit students are typically not included in state and national postsecondary datasets. In this guest blog for CTE Month, researchers Di Xu, Benjamin Castleman, and Betsy Tessler discuss their IES-funded exploration study in which they build on a long-standing research partnership with the Virginia Community College System and leverage a variety of data sources to investigate the Commonwealth’s FastForward programs. These programs are noncredit CTE programs designed to lead to an industry-recognized credential in one of several high-demand fields identified by the Virginia Workforce Board.

In response to the increasing demand for skilled workers in the Commonwealth, the Virginia General Assembly passed House Bill 66 in 2016 to establish the New Economy Workforce Credential Grant Program (WCG) with the goal of providing a pay-for-performance model for funding noncredit training. The WCG specifically funds FastForward programs that lead to an industry-recognized credential in a high-demand field in the Commonwealth. Under this model, funding is shared between the state, students, and training institutions based on student performance, with the goal of ensuring workforce training is affordable for Virginia residents. An important implication of WCG is that it led to systematic, statewide collection of student-level data on FastForward program enrollment, program completion, industry credential attainment, and labor market performance. Drawing on these unique data, coupled with interviews with key stakeholders, we generated findings on the characteristics of FastForward programs, as well as the academic and labor market outcomes of students enrolled in these programs. We describe the preliminary descriptive findings below.

FastForward programs enroll a substantially different segment of the population from credit-bearing programs and offer a vital alternative route to skill development and workforce opportunities, especially for demographic groups often underrepresented in traditional higher education. FastForward programs in Virginia enroll a substantially higher share of Black students, male students, and older students than short-duration, credit-bearing programs at community colleges that typically require one year or less to complete. Focus groups conducted with FastForward students at six colleges indicate that the students were a mix of workers sent by their employers to learn specific new skills and students who signed up for a FastForward program on their own. Among the latter group were older career changers and recent high school graduates, many of whom had no prior college experience and were primarily interested in landing their first job in their chosen field. Moreover, 61% of FastForward participants have neither prior nor subsequent enrollment in credit-bearing programs, highlighting the program’s unique role in broadening access to postsecondary education and career pathways.

FastForward programs offer an alternative path for students who are unsuccessful in credit-bearing programs. The vast majority of students (78%) enrolled in only one FastForward program, with an average enrollment duration of 1.5 quarters, which is notably shorter than most traditional credit-bearing programs. While 36% of FastForward enrollees had prior credit-bearing enrollment, fewer than 20% of these students earned a degree or certificate from that enrollment, and less than 12% of FastForward enrollees transitioned to credit-bearing training afterward. Interviews with administrators and staff indicated that while some colleges facilitate noncredit-to-credit pathways by granting credit for prior learning, others prioritize employment-focused training and support over stackable academic pathways because students are primarily interested in seeking employment after training.

FastForward programs have a remarkable completion rate and are related to high industry credential attainment rates. Over 90% of students complete their program, with two-thirds of students obtaining industry credentials. Student focus groups echoed this success. They praised the FastForward program and colleges for addressing both their tuition and non-tuition needs. Many students noted that they had not envisioned themselves as college students and credited program staff, financial aid, and institutional support with helping them to be successful.

Earning an industry credential through FastForward increases quarterly earnings by approximately $1,000 on average. In addition, industry credentials increase the probability of being employed by 2.4 percentage points on average. We find substantial heterogeneity in economic returns across fields of study, with transportation (for example, commercial driver’s license) and precision production (for example, gas metal arc welding) associated with particularly pronounced earnings premiums. Within programs, we do not observe significant heterogeneity in economic returns across student subgroups.

What’s Next?

In view of the strong economic returns associated with earning an industry credential and the noticeable variation in credential attainment across training institutions and programs, our future exploration will unpack the sources of variation in credential attainment rates across programs and institutions and identify specific program-level factors that are within an institution’s control and are associated with higher credential attainment rates and smaller equity gaps. Specifically, we will collect additional survey data from the 10 most highly enrolled programs at the Virginia Community College System (VCCS) to provide more nuanced program-level information and identify which malleable program factors are predictive of higher credential attainment rates, better labor market outcomes, and smaller equity gaps in these outcomes.


Di Xu is an associate professor in the School of Education at the University of California, Irvine, and the faculty director of UCI’s Postsecondary Education Research & Implementation Institute.

Ben Castleman is the Newton and Rita Meyers Associate Professor in the Economics of Education at the University of Virginia.

Betsy Tessler is a senior associate at MDRC in the Economic Mobility, Housing, and Communities policy area.

Note: A team of researchers, including Kelli Bird, Sabrina Solanki, and Michael Cooper contributed jointly to the quantitative analyses of this project. The MDRC team, including Hannah Power, Kelsey Brown, and Mark van Dok, contributed to qualitative data collection and analysis. The research team is grateful to the Virginia Community College System (VCCS) for providing access to their high-quality data. Special thanks are extended to Catherine Finnegan and her team for their valuable guidance and support throughout our partnership.

This project was funded under the Postsecondary and Adult Education research topic; questions about it should be directed to program officer James Benson (James.Benson@ed.gov).

This blog was produced by Corinne Alfeld (Corinne.Alfeld@ed.gov), NCER program officer for the CTE research topic.