IES Blog

Institute of Education Sciences

Pennsylvania Student Proficiency Rates Rebound Partially from COVID-19-Related Declines

Given the magnitude of the disruption the COVID-19 pandemic caused to education practices, there has been considerable interest in understanding how the pandemic may have affected student proficiency. In this guest blog, Stephen Lipscomb, Duncan Chaplin, Alma Vigil, and Hena Matthias of Mathematica discuss their IES-funded grant project, in partnership with the Pennsylvania Department of Education (PDE), that is looking at the pandemic’s impacts in Pennsylvania.  

The onset of the COVID-19 pandemic in spring 2020 brought a host of changes to K–12 education and instruction in Pennsylvania. Many local education agencies (LEAs) instituted remote learning and hybrid schedules as their primary mode of educating students, while others maintained in-person learning. Statewide assessments, which were suspended in spring 2020, resumed in 2021 with low participation rates, particularly among students with lower performance before the pandemic. Furthermore, test administration dates varied from spring 2021 to fall 2021. Pennsylvania statewide assessment data reveal that student proficiency rates may have rebounded in 2022, though they remained below pre-pandemic levels. In grades 5–8, proficiency in English language arts (ELA) rose markedly, and proficiency in math rose slightly less, compared to the 2021 proficiency rates predicted in recent research. Despite these gains, returning student proficiency rates to pre-pandemic levels will require additional effort.

PDE has been committed to providing LEAs with the resources and support necessary to help students achieve pre-pandemic academic proficiency rates. To learn more about how changes in those rates may have been associated with the pandemic, PDE and Mathematica partnered to explore trends in proficiency data for students in grades 5–8. Given the low and nonrepresentative participation in the 2021 statewide assessments, as well as the differences in when LEAs administered them, we developed a predictive model of statewide proficiency rates for spring 2021 to produce predicted rates that would be more comparable to previous and future years. The results revealed that steep declines in proficiency likely occurred between 2019 and 2021 (see Figure 1 below). By spring 2022, proficiency rates in grades 5–8 had regained 6 percentage points of their 10 percentage point drop in ELA and nearly 5 percentage points of their 13 percentage point drop in math. Taken together, these results suggest that although the pandemic may originally have been associated with declines in students’ academic proficiency, student proficiency might move back toward pre-pandemic levels over time.
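Using the rounded figures reported above, a quick back-of-the-envelope sketch (ours, purely illustrative; it uses 5 points for math’s “nearly 5”) shows how far proficiency rates would still sit below their 2019 levels:

```python
# Rounded figures from the study: 2019->2021 drops and 2021->2022 rebounds,
# in percentage points, averaged across grades 5-8.
drops = {"ELA": 10.0, "Math": 13.0}
regained = {"ELA": 6.0, "Math": 5.0}  # math rebound reported as "nearly 5"

for subject in drops:
    remaining_gap = drops[subject] - regained[subject]
    share_recovered = regained[subject] / drops[subject]
    print(f"{subject}: about {remaining_gap:.0f} points below 2019 "
          f"({share_recovered:.0%} of the drop recovered)")
# ELA: about 4 points below 2019 (60% of the drop recovered)
# Math: about 8 points below 2019 (38% of the drop recovered)
```

The remaining gaps (roughly 4 points in ELA and 8 points in math) are what the additional recovery efforts described above would need to close.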

 

Figure 1. Actual and predicted proficiency rates in grades 5–8 in Pennsylvania, 2015–2022

Notes: The figure shows actual proficiency rates from the Pennsylvania System of School Assessment, averaged across grades 5–8, unless marked by an open or closed circle. An open circle indicates that the statewide assessment was cancelled; a closed circle indicates a predicted proficiency rate.

Source: Data from 2015–2019 and 2022 are from the Pennsylvania Department of Education. The 2021 data are predicted proficiency rates from Lipscomb et al. (2022a). The figure originally appeared in Lipscomb et al. (2022b).  

 

The next steps for this project will include a strong focus on disseminating our findings. For example, we will develop a research brief that describes the role of remote learning during the pandemic in shaping both community health outcomes and academic outcomes beyond proficiency rates. The findings will help PDE and LEAs refine strategies for supporting vulnerable students and help state policymakers and educators learn from the COVID-19 pandemic—specifically, how it might have affected student outcomes and educational inequities.


This blog was produced by Allen Ruby (Allen.Ruby@ed.gov), Associate Commissioner for Policy and Systems Division, NCER.

What do Geometry Projects and Pie Have in Common?

On March 14, many students across the country will be paying homage to one of the most important irrational numbers – Pi (or π), the mathematical constant that represents the ratio between a circle’s circumference and its diameter. Here at IES, we are celebrating Pi Day in two ways – one, by highlighting two projects that are helping students better understand and apply pi and other geometry concepts, and two, by daydreaming about the other kind of pie (more on that below).
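For readers who want to see that definition in action, here is a small illustrative sketch (ours, not part of either IES project) that recovers π as a circumference-to-diameter ratio using Archimedes’ classic polygon-doubling method, with no trigonometry and no built-in π:

```python
import math

def archimedes_pi(doublings: int = 14) -> float:
    """Estimate pi as (perimeter / diameter) of a regular polygon
    inscribed in a circle of radius 1, starting from a hexagon
    (side length 1) and repeatedly doubling the number of sides."""
    n, side = 6, 1.0
    for _ in range(doublings):
        # Half-angle chord formula: side of the 2n-gon from the n-gon's side
        side = math.sqrt(2 - math.sqrt(4 - side * side))
        n *= 2
    return n * side / 2  # perimeter / diameter (diameter = 2)

print(round(archimedes_pi(), 5))  # 3.14159
```

As the number of sides grows, the polygon’s perimeter approaches the circle’s circumference, so the ratio converges to π, which is exactly the definition students celebrate on March 14.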

Highlighting Two IES-Funded Geometry Projects

Julie Booth (Temple University) and colleagues are developing and testing an intervention, GeometryByExample, to improve student learning of high school geometry. GeometryByExample provides strategically designed, worked-example-based assignments of varied geometric content for students to complete in class in place of more typical practice assignments. Instead of solving all of the practice problems themselves, students study correct or incorrect examples of solutions to half of the problems and respond to prompts asking them to write explanations of why those procedures are correct or incorrect.

Candace Walkington and colleagues are exploring how the interaction between collaboration and multisensory experiences affects geometric reasoning. They are using augmented reality (AR) technology to explore different modalities for math learning, such as a hologram, a set of physical manipulatives, a dynamic geometry system (DGS) on a tablet, or a piece of paper. Each of these modalities has different affordances, including the degree to which they can represent dynamic transformations, represent objects and operations in three dimensions, support joint attention, and provide situational feedback. Researchers have developed an experimental platform that uses AR and situates experimental tasks in an engaging narrative story. The overarching research questions they are exploring are: (1) How does shared AR affect student understanding of geometric principles? (2) How are these effects mediated by gesture, language, and actions? (3) How are these effects moderated by student and task characteristics?

The Other Kind of Pi(e)

Baking pies with the pi symbol is another fun way to celebrate the day, so in the spirit of that, we asked NCER and NCSER staff about their favorite pie flavors. Below is a pie graph with the results – not much consensus on flavor, but it’s clear we all love pie. Happy Pi Day!


This chart shows NCER and NCSER staff pie preferences in percentages: Apple 19%, Cherry 19%, Chocolate 12%, Key Lime 12%, Peach 6%, Pecan 19%, and Rhubarb 13%.

 


Written by Christina Chhin (Christina.Chhin@ed.gov) and Erin Higgins (Erin.Higgins@ed.gov), NCER Program Officers.

Innovative Approaches to High Quality Assessment of SEL Skills

In celebration of IES’s 20th anniversary and SEL Day, we are highlighting NCER’s investments in field-initiated research. In this blog, program officer Dr. Emily Doolittle discusses a persistent barrier to supporting social and emotional learning (SEL) in schools—the lack of high quality, reliable, and valid SEL assessments—and the innovative research supported by IES to tackle this challenge.

High quality measurement is critical for education research and practice. Researchers need valid and reliable assessments to answer questions about what works for whom and why. Schools use assessments to guide instruction, determine student response to intervention, and for high-stakes decision-making such as provision of special education services.

For social and emotional learning (SEL), assessment can be particularly challenging due to lack of precision in defining core SEL competencies. One consequence of this imprecision is that measures and intervention targets are often misaligned. SEL assessment also tends to rely on student, teacher, and parent reports despite the lack of agreement among reporters and the potential for biased responding. Through NCER, IES is supporting the development and validation of SEL measures using new technologies and approaches to address some of these challenges. Here are some examples of this innovative measurement work.

  • SELweb is a web-based direct assessment of four specific SEL skills: emotion recognition, social perspective taking, social problem solving, and self-control. It is available for use with elementary school students in grades K–3 and 4–6, with a middle school version currently under development. The SEL Quest Digital Platform will support school-based implementation of SELweb and other SEL assessments with an instrument library and a reporting dashboard for educators.
  • vSchool uses a virtual reality (VR) environment to assess prosocial skills. Students in 4th to 6th grades build their own avatar to interact with other characters in school settings using menu-driven choices for prosocial (helping, encouraging, sharing) and non-prosocial (aggressive, bullying, teasing) behaviors.
  • VESIP (Virtual Environment for Social Information Processing) also uses a VR school environment with customizable avatars to assess 3rd through 7th grade students’ social information processing in both English and Spanish.

Other assessments take a different approach to the challenges of SEL measurement by looking for ways to improve self, teacher, and parent reports.

  • In Project MIDAS, the research team is creating a system to integrate the different information provided by students, teachers, and parents to see if combining these reports will lead to more accurate identification of middle school students with SEL needs.
  • In Project EASS-E, the researchers are creating a teacher-report measure that will incorporate information about a child’s environment (e.g., neighborhood and community context) to better support elementary school students’ needs.

Please check out IES’s search tool to learn more about the wide variety of measurement research we fund to develop and validate high quality assessments for use in education research and practice.


Written by Emily Doolittle (Emily.Doolittle@ed.gov), NCER Team Lead for Social Behavioral Research

 

Adult Education and Foundational Skills Grantee Spotlight: Dr. Daphne Greenberg’s Advice for New Researchers

As part of the IES 20th Anniversary, NCER is reflecting on the past, present, and future of adult education research. In this blog, Dr. Daphne Greenberg, Distinguished University Professor at Georgia State University, reflects on her connection to adult education and gives advice to researchers interested in this field. Dr. Greenberg has served as the principal investigator (PI) on three NCER grants, including the Center for the Study of Adult Literacy, and is also co-PI on three current NCER grants (1, 2, 3). She helped found the Georgia Adult Literacy Advocacy group and the Literacy Alliance of Metro Atlanta, has tutored adult learners, and has engaged in public awareness about adult literacy, including giving a TEDx talk, “Do we care about us?,” at TEDxPeachtree.

What got you interested in adult education research?

During graduate school, I was a volunteer tutor for a woman who grew up in a southern sharecropper family, did not attend school, and was reading at the first-grade level. Her stories helped me understand why learning was important to her. For example, her sister routinely stole money from her bank account because she couldn’t balance her checkbook.

I began wondering whether adults and children reading at the same level had similar strengths and weaknesses and whether the same word-reading components were equally important for them. I later published an article on this question that became a classic in adult literacy research.

Over the years, I have grown to admire adult learners for their unique stories and challenges and am deeply impressed with their “grit” and determination even when faced with difficulties. When I watch a class of native-born adults reading below the 8th grade level, I am inspired by them and yet deeply conflicted about our K–12 system and how many students aren’t getting what they need from it.

How does your personal experience influence your perspective?

I think my childhood and family planted the seeds. My grandfather ran a grocery store but had only a third-grade education. My parents were immigrants who worked hard to navigate a new culture and language, and I struggled with reading in English and English vocabulary growing up. As a result, I understand how people hide and compensate for academic weaknesses.

Also, my brother has profound autism. As a child, I insisted that I could teach him many skills, and I did. This taught me patience and the joy one feels when even the smallest gain is made.

As an adult, I mess up idioms, use Hebraic sentence structure, and need help with editing. I also have a visual condition that causes me to miss letters when I read and write. These difficulties help me relate to the struggles of adult learners. I often feel deep embarrassment when I make mistakes. But I am very fortunate that I have colleagues who celebrate my strengths and honor my weaknesses. Not all of our adult learners are as fortunate.

What should researchers new to adult education know about the system?

Adult education serves students with significant needs and numerous goals—from preparing for employment or postsecondary education to acquiring skills needed to pass the citizenship exam or helping their children with homework. But the adult education system has less public funding than K-12 or postsecondary systems.

Many of the educators are part-time or volunteers and may not have a background in teaching—or at least in teaching adults. There just isn’t the same level of professional development opportunities, technological and print instructional resources, infrastructure, or supporting evidence-based research that other education systems have.

What should researchers know about adult learners?

As a starting point, here are three things that I think researchers should know about adult learners:

  • What it means to live in poverty. For example, I once worked with a researcher who, when told that adult learners wouldn’t have access to the internet, replied “That’s not an issue. They can take their laptops to a Starbucks to access the Internet.”
  • That adult learners are motivated. The fact that they have inconsistent attendance does not mean that they are not motivated. It means that they have difficult lives, and if we were in their shoes, we would also have difficulty attending on a regular basis.
  • That adult learners’ oral vocabulary often matches their reading vocabulary. If you want adult learners to understand something, such as informed consent forms, realize that their oral vocabulary often closely matches their reading grade equivalencies, and consider simplifying complex vocabulary and syntax.

What specific advice do you have about conducting research with adult learners?

Testing always takes longer to complete than anticipated. I never pull students out from classes for testing because their class time is so precious. So they have to be available after or before class to participate in research, and this can be problematic. We often need to reschedule an assessment because public transportation is late, a job shift suddenly changes, or a family member is sick.

Finding enough of particular types of students is difficult because sites often attract different demographics. For example, one site may have primarily 16- and 17-year-olds, another site may have mostly non-native speakers, and another site may have either lower- or higher-skilled adult learners.

Having a “clean” comparison group at the same site is challenging because of intervention “leakage” to nonintervention teachers. Adult education teachers are often so hungry for resources that they may try to peek into classrooms while an intervention is in process, get access to materials, or otherwise learn about the intervention. Their desire for anything that might help students makes contamination a concern.

What areas of adult education could use more research?

I think that policymakers and practitioners would benefit from many areas of research, but two come to mind.

  • How to measure outcomes and demonstrate “return”: Many funding agencies require “grade level” growth, but it can take years for skills to consolidate and manifest as grade level change. In the meantime, adults may have found a job, gotten promoted, felt more comfortable interacting with their children’s schools, voted for the first time, etc. Are we measuring the right things in the right way? Are we measuring the things that matter to students, programs, and society? Should life improvements carry equal or even greater weight than growth on standardized tests? After how much time should we expect to see the life improvements (months, years, decades)?
  • How to create useful self-paced materials for adults who need to “stop-out”: Due to the complexities of our learners’ lives, many have to “stop-out” for a period before resuming class attendance. These adults would benefit from resources that they could use on their own, at their own pace during this time. What is the best practice for delivery of these types of resources? Does this “best practice” depend on the adult’s ability level? Does it depend on the content area? 

Any final words for researchers new to adult education?

I extend a warm welcome to anyone interested in research with adult learners. You will discover that many adult learners are eager to participate in research studies. They feel good helping researchers with their work and are hopeful that their time will help them or be of help to future learners. I highly recommend that you collaborate with researchers and/or practitioners who are familiar with the adult education context to help smooth the bumps you will inevitably experience.


This blog was produced by Dr. Meredith Larson (Meredith.Larson@ed.gov), research analyst and program officer at NCER.

The Role of IES in Advancing Science and Pushing Public Conversation: The Merit Pay Experience

In celebration of IES’s 20th anniversary, we’re telling the stories of our work and the significant impact that—together—we’ve had on education policy and practice. As IES continues to look towards a future focused on progress, purpose, and performance, Dr. Matthew G. Springer of the University of North Carolina at Chapel Hill, discusses the merit pay debate and why the staying power of IES will continue to matter in the years to come.

Merit Pay for Teachers

There are very few issues that impact Americans more directly or more personally than education. The experience of school leaves people with deep memories, strong opinions, and a highly localized sense of how education works in a vast and diverse country. Bringing scientific rigor to a subject so near to people’s lives is incredibly important — and maddeningly hard. The idea of merit pay inspires strong reactions from politicians and the general public, but for a long time there was vanishingly little academic literature to support either the dismissive attitude of skeptics or the enthusiasm of supporters.

Given the stakes of the merit pay debate—the size of the nation’s teacher corps, the scale of local, state, and federal educational investments, and longstanding inequities in access to highly effective teachers—policymakers desperately need better insight into how teacher compensation might incentivize better outcomes. Critics worry merit pay creates negative competition, causes teachers to focus narrowly on incentivized outcomes, and disrupts the collegial ethos of teaching and learning. On the other hand, supporters argue that merit pay helps motivate employees, attract and retain employees in the profession, and improve overall productivity.

Generating Evidence: The Role of IES-Funded Research

That’s precisely the kind of need that IES was designed to meet. In 2006, with the support of IES, I joined a group of research colleagues to launch the Project on Incentives in Teaching (POINT) to test some of the major theories around merit pay. We wanted to apply the most rigorous research design—a randomized controlled trial—to the real-world question of whether rewarding teachers for better student test scores would move the needle on student achievement.

But orchestrating a real-world experiment is much harder than doing an observational analysis. Creating a rigorous trial to assess merit pay required enormous diplomacy. There are a lot of stakeholders in American education, and conducting a meaningful test of different teacher pay regimes required signoff from the local branch of the teachers’ union, the national union, the local school board and elected leadership, and the state education bureaucracy all the way up to the governor. Running an experiment in a lab is one thing; running an experiment with real-world teachers leading real, live classrooms is quite another.

With IES support to carry out an independent and scientifically rigorous study, POINT was able to move forward with the support of the Metropolitan Nashville Public Schools. The results were not what most anticipated. 

Despite offering up to $15,000 annually in incentive pay based on a value-added measure of teacher effectiveness, in general, we found no significant difference in student performance between the merit-pay recipients and teachers who weren’t eligible for bonuses. 

As with all good scientific findings, we strongly believed that our results should be the start of a conversation, not presented as the final word. Throughout my career, I’ve seen one-off research findings treated by media and advocacy organizations as irrefutable proof of a particular viewpoint, but that’s not how scientific research works. My colleagues and I took care to publish a detailed account of our study’s implementation, including an honest discussion of its methodological limitations and political constraints. We called for more studies in more places, all in an effort to contest or refine the limited insights we were able to draw in Nashville. “One experiment is not definitive,” we wrote. “More experiments should be conducted to see whether POINT’s findings generalize to other settings.”

How IES-Funded Research Informs Policy and Practice 

Around the time of the release of our findings, the Obama administration was announcing another investment in the Teacher Incentive Fund, a George W. Bush-era competitive grant program that rewarded teachers who proved effective in raising test scores. We were relieved that our study didn’t prompt the federal government to abandon ship on merit pay; rather, they reviewed our study findings and engaged in meaningful dialogue about how to refine their investments. Ultimately, the Teacher Incentive Fund guidelines linked pay incentives with capacity-building opportunities for teachers—things like professional development and professional learning communities—so that the push to get better was matched by resources to make improvement more likely.  

While we did not have education sector-specific research to support that pairing, it proved a critical piece of the merit pay design puzzle. In 2020, for example, I worked with a pair of colleagues on a meta-analysis of the merit pay literature. We synthesized effect sizes across 37 primary studies, 26 of which were conducted in the United States, finding that the merit pay effect is nearly twice as large when paired with professional development. This is by no means the definitive word, but it’s a significant contribution in the incremental advancement of scientific knowledge about American education. It is putting another piece of the puzzle together and moving the education system forward with ways to improve student opportunity and learning.

That’s the spirit IES encourages—open experimentation, modesty in drawing conclusions, and an eagerness to gather more data. Most importantly, we wanted our findings to spark conversation not just among academic researchers but among educators and policymakers, the people ultimately responsible for designing policy and implementing it in America’s classrooms.

Pushing that public conversation in a productive direction is always challenging. Ideological assumptions are powerful, interest groups are vocal, and even the most nuanced research findings can get flattened in the media retelling. We struggled with all of that in the aftermath of the POINT study, which arrived at a moment of increased federal interest in merit pay and other incentive programs.  Fortunately, the Obama administration listened to rigorous research findings and learned from the research funding arm of the U.S. Department of Education.

This is why the staying power of IES matters. The debates around education policy aren’t going away, and the need for stronger empirical grounding grows alongside the value of education and public investment. The POINT experiment didn’t settle the debate over merit pay, but we did complicate it in productive ways that yielded another critically important finding nearly eight years later. That’s a victory for education science, and a mark of progress for education policy.


Matthew G. Springer is the Robena and Walter E. Hussman, Jr. Distinguished Professor of Education Reform at the University of North Carolina at Chapel Hill School of Education.

This blog was produced by Allen Ruby (Allen.Ruby@ed.gov), Associate Commissioner for Policy and Systems Division, NCER.