IES Blog

Institute of Education Sciences

On March 14, many students across the country will be paying homage to one of the most important irrational numbers – Pi (or π), the mathematical constant that represents the ratio of a circle’s circumference to its diameter. Here at IES, we are celebrating Pi Day in two ways – one, by highlighting two projects that are helping students better understand and apply pi and other geometry concepts, and two, by daydreaming about the other kind of pie (more on that below).
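For readers who want to see the constant emerge from simple arithmetic, here is a minimal, illustrative Python sketch (my own example, not part of any IES project) that approximates π with the classic Leibniz series:

```python
import math

def approximate_pi(terms: int) -> float:
    """Approximate pi via the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

# The series converges slowly: the error shrinks roughly like 1/terms,
# so 100,000 terms gets within about a hundred-thousandth of pi.
print(approximate_pi(100_000))  # close to 3.14159
print(abs(approximate_pi(100_000) - math.pi) < 1e-4)  # True
```

Slow convergence is exactly why this series is a teaching tool rather than a practical method, but it makes the constant feel a little less mysterious on Pi Day.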

Highlighting Two IES-Funded Geometry Projects

Julie Booth (Temple University) and colleagues are developing and testing an intervention, GeometryByExample, to improve student learning of high school geometry. GeometryByExample provides strategically designed, worked-example-based assignments of varied geometric content for students to complete in class in place of more typical practice assignments. Instead of solving all of the practice problems themselves, students study correct or incorrect examples of solutions to half of the problems and respond to prompts asking them to write explanations of why those procedures are correct or incorrect.

Candace Walkington and colleagues are exploring how the interaction between collaboration and multisensory experiences affects geometric reasoning. They are using augmented reality (AR) technology to explore different modalities for math learning, such as a hologram, a set of physical manipulatives, a dynamic geometry system (DGS) on a tablet, or a piece of paper. Each of these modalities has different affordances, including the degree to which they can represent dynamic transformations, represent objects and operations in 3 dimensions, support joint attention, and provide situational feedback. Researchers have developed an experimental platform that uses AR and situates experimental tasks in an engaging narrative story. The overarching research questions they are exploring are (1) how does shared AR impact student understanding of geometric principles, (2) how are these effects mediated by gesture, language, and actions, and (3) how are these effects moderated by student and task characteristics?

The Other Kind of Pi(e)

Baking pies with the pi symbol is another fun way to celebrate the day, so in the spirit of that, we asked NCER and NCSER staff about their favorite pie flavors. Below is a pie graph with the results – not much consensus on flavor, but it’s clear we all love pie. Happy Pi Day!

This chart shows NCER and NCSER staff pie preferences in percentages: Apple 19%, Cherry 19%, Chocolate 12%, Key Lime 12%, Peach 6%, Pecan 19%, and Rhubarb 13%.

Written by Christina Chhin (Christina.Chhin@ed.gov) and Erin Higgins (Erin.Higgins@ed.gov), NCER Program Officers.

In celebration of IES’s 20th anniversary and SEL Day, we are highlighting NCER’s investments in field-initiated research. In this blog, program officer Dr. Emily Doolittle discusses a persistent barrier to supporting social and emotional learning (SEL) in schools—the lack of high-quality, reliable, and valid SEL assessments—and the innovative research supported by IES to tackle this challenge.

High-quality measurement is critical for education research and practice. Researchers need valid and reliable assessments to answer questions about what works for whom and why. Schools use assessments to guide instruction, determine student response to intervention, and make high-stakes decisions such as providing special education services.

For social and emotional learning (SEL), assessment can be particularly challenging due to a lack of precision in defining core SEL competencies. One consequence of this imprecision is that measures and intervention targets are often misaligned. SEL assessment also tends to rely on student, teacher, and parent reports despite the lack of agreement among reporters and the potential for biased responding. Through NCER, IES is supporting the development and validation of SEL measures using new technologies and approaches to address some of these challenges. Here are some examples of this innovative measurement work.

• SELweb is a web-based direct assessment of four specific SEL skills: emotion recognition, social perspective taking, social problem solving, and self-control. It is available for use with elementary school students in grades K-3 and 4-6, with a middle school version currently under development. The SEL Quest Digital Platform will support school-based implementation of SELweb and other SEL assessments with an instrument library and a reporting dashboard for educators.
• vSchool uses a virtual reality (VR) environment to assess prosocial skills. Students in 4th to 6th grades build their own avatar to interact with other characters in school settings using menu-driven choices for prosocial (helping, encouraging, sharing) and non-prosocial (aggressive, bullying, teasing) behaviors.
• VESIP (Virtual Environment for Social Information Processing) also uses a VR school environment with customizable avatars to assess 3rd through 7th grade students’ social information processing in both English and Spanish.

Other assessments take a different approach to the challenges of SEL measurement by looking for ways to improve self, teacher, and parent reports.

• In Project MIDAS, the research team is creating a system to integrate the different information provided by students, teachers, and parents to see if combining these reports will lead to more accurate identification of middle school students with SEL needs.
• In Project EASS-E, the researchers are creating a teacher-report measure that will incorporate information about a child’s environment (e.g., neighborhood and community context) to better support elementary school students’ needs.

Please check out IES’s search tool to learn more about the wide variety of measurement research we fund to develop and validate high quality assessments for use in education research and practice.

Written by Emily Doolittle (Emily.Doolittle@ed.gov), NCER Team Lead for Social Behavioral Research

This week, Texas celebrates Educational Diagnosticians’ Week. In recognition, NCSER highlights the important work that one Texas-based educational diagnostician, Mahnaz (Nazzie) Pater-Rov, has been doing to disseminate information from IES researchers to practitioners on improving reading outcomes.

Nazzie assesses students who have been referred for testing within multi-tiered systems of support (MTSS) to determine whether they have a learning disability (LD), and she recommends interventions and instruction to improve their literacy and help them achieve their Individualized Education Plan goals. Working in this field requires an understanding of district/school policies and of research-based evidence on identifying students with disabilities. To that end, Nazzie has immersed herself in current research, reading many of the resources IES provides through the What Works Clearinghouse and IES-funded grants, so that she can use valid measures and recommend evidence-based interventions. After 16 years in the profession, Nazzie has realized that she is not alone and wants to help other diagnosticians understand the latest developments in LD identification and intervention. She uses Clubhouse, a social audio application, to share what she is learning, including hosting researchers for chats about their current work on related topics. Nazzie’s chat room, called ED. DIAGNOSTICIANS, has over 900 members, mostly educational diagnosticians. Some of her speakers have been IES-funded researchers.

Date – Title – Researcher (Link to IES Grants)

• 1/13/2023 – Are Subtypes of Dyslexia Real? – Jack Fletcher, University of Houston
• 6/17/2022 – Efforts to Reduce Academic Risk at the Meadows Center – Sarah Powell, University of Texas at Austin
• 6/3/2022 – Bringing the Dyslexia Definition into Focus – Jeremy Miciak, University of Houston
• 5/27/2022 – Pinpointing Needs with Academic Screeners – Nathan Clemens, University of Texas at Austin
• 3/4/2022 – Using EasyCBMs in our Evaluation Reports – Julie Alonzo, University of Oregon

We asked Nazzie to share some of her top concerns and recommendations for research.

Responses have been edited for brevity and clarity.

What stimulated your desire to bring about changes not only in your school but across the state?

When Texas removed its cap on the number of students who could be identified as in need of special education, and districts changed their procedures for identifying need, we started to experience a “tsunami” of referrals. Now we are creating a whole population of children identified with LDs without simultaneously looking at ways to improve our system of policies, procedures, and instruction to ensure we meet the needs of all students through preventative practices.

How has the role of education diagnostician changed since the reauthorization of IDEA (2004)?

Prior to the reauthorization of IDEA, we would compare a student’s IQ with their academic performance. If there was a discrepancy, they were identified as LD. Many states now use a pattern of strengths and weaknesses (PSW) for identification, which is based on multiple measures of cognitive processes.

In Texas, there is also increased demand for specialized, evidence-based instruction now that we better understand how to identify students as LD and parents are seeing the need for identification and services for their children. However, this has led to a doubling of the LD identification rate in many districts. This, in turn, is increasing our caseloads and burning us out!

Some experts in the field advocate for using a tiered systems approach, such as MTSS, to identify when a student is not responding to instruction or intervention rather than relying only on the PSW approach. However, the challenge is that there are not enough evidence-based interventions in place across the tiers within MTSS for this identification process to work. In other words, can students appropriately be identified as not responding to instruction when evidence-based interventions are not being used? By not making these types of evidence-based interventions accessible at younger ages to general education students within MTSS, I worry that we are just helping kids tread water when we could have helped them learn to swim earlier.

What are your recommendations for systemic reform?

We need to find a better way to weave intervention implementation into teachers’ everyday practice so it is not viewed as “extra work.” Tiered models are general education approaches to intervention, but it is important for special education teachers and educational diagnosticians to also be involved. My worry is that diagnosticians, including myself, are helping to enable deficit thinking by reinforcing the idea that the child’s performance is more a result of their inherited traits rather than a result of instruction when, instead, we could focus our energy on finding better ways to provide instruction. Without well-developed tiered models, I worry that we end up working so hard because what we are doing is not working.

Are there specific training needs you feel exist for education diagnosticians?

Many new diagnosticians are trained on tools or methods that are outdated and no longer relevant to current evidence-based testing recommendations. This is a problem because instructional decisions can only be as good as the data on which they are based. We need training programs that enable us to guide school staff in selecting the appropriate assessments for specific needs. If diagnosticians were trained in data-based individualization or curriculum-based measures for instructional design rather than just how to dissect performance on subtests of cognitive processing (the PSW approach), they could be helping to drive instruction to improve student outcomes. The focus of an assessment for an LD should not be on a static test but be on learning, which is a moving target that cannot be measured in one day.

What feedback do you have for education funding agencies?

Implementing a system of academic interventions is challenging, especially after COVID-19, when social-emotional concerns and teacher shortages remain a top priority in many schools. Funding agencies should consider encouraging more research on policies and processes for the adoption of evidence-based interventions. Diagnosticians can be important partners in the effort.

This blog was authored by Sarah Brasiel (Sarah.Brasiel@ed.gov), program officer at NCSER. IES encourages special education researchers to partner with practitioners to submit to our research grant funding opportunities.

As part of the IES 20th Anniversary, NCER is reflecting on the past, present, and future of adult education research. In this blog, Dr. Daphne Greenberg, Distinguished University Professor at Georgia State University, reflects on her connection to adult education and gives advice to researchers interested in this field. Dr. Greenberg has served as the principal investigator (PI) on three NCER grants, including the Center for the Study of Adult Literacy, and is also co-PI on three current NCER grants (1, 2, 3). She helped found the Georgia Adult Literacy Advocacy group and the Literacy Alliance of Metro Atlanta and has tutored adult learners and engaged in public awareness about adult literacy, including giving a TEDx talk: Do we care about us? Daphne Greenberg at TEDxPeachtree.

What got you interested in adult education research?

During graduate school, I was a volunteer tutor for a woman who grew up in a southern sharecropper family, did not attend school, and was reading at the first-grade level. Her stories helped me understand why learning was important to her. For example, her sister routinely stole money from her bank account because she couldn’t balance her checkbook.

Over the years, I have grown to admire adult learners for their unique stories and challenges and am deeply impressed with their “grit” and determination even when faced with difficulties. When I watch a class of native-born adults reading below the 8th grade level, I am inspired by them and yet deeply conflicted about our K-12 system and how many students aren’t getting what they need from it.

I think my childhood and family planted the seeds. My grandfather ran a grocery store but had only a third-grade education. My parents were immigrants who worked hard to navigate a new culture and language, and I struggled with reading in English and English vocabulary growing up. As a result, I understand how people hide and compensate for academic weaknesses.

Also, my brother has profound autism. As a child, I insisted that I could teach him many skills, and I did. This taught me patience and the joy one feels when even the smallest gain is made.

As an adult, I mess up idioms, use Hebraic sentence structure, and need help with editing. I also have a visual condition that causes me to miss letters when I read and write. These difficulties help me relate to the struggles of adult learners. I often feel deep embarrassment when I make mistakes. But I am very fortunate that I have colleagues who celebrate my strengths and honor my weaknesses. Not all of our adult learners are as fortunate.

What should researchers know about the adult education system?

Adult education serves students with significant needs and numerous goals—from preparing for employment or postsecondary education to acquiring skills needed to pass the citizenship exam or helping their children with homework. But the adult education system has less public funding than K-12 or postsecondary systems.

Many of the educators are part-time or volunteers and may not have a background in teaching—or at least in teaching adults. There just isn’t the same level of professional development opportunities, technological and print instructional resources, infrastructure, or supporting evidence-based research that other education systems have.

As a starting point, here are three things that I think researchers should know about adult learners:

• What it means to live in poverty. For example, I once worked with a researcher who, when told that adult learners wouldn’t have access to the internet, replied “That’s not an issue. They can take their laptops to a Starbucks to access the Internet.”
• That adult learners are motivated. The fact that they have inconsistent attendance does not mean that they are not motivated. It means that they have difficult lives, and if we were in their shoes, we would also have difficulty attending on a regular basis.
• That adult learners’ oral vocabulary often matches their reading vocabulary. If you want adult learners to understand something, such as informed consents, realize that their oral vocabulary often is very similar to their reading grade equivalencies and consider simplifying complex vocabulary and syntax structure.

What challenges come up when conducting research with adult learners?

Testing always takes longer to complete than anticipated. I never pull students out from classes for testing because their class time is so precious. So they have to be available after or before class to participate in research, and this can be problematic. We often need to reschedule an assessment because public transportation is late, a job shift suddenly changes, or a family member is sick.

Finding enough of particular types of students is difficult because sites often attract different demographics. For example, one site may have primarily 16- and 17-year-olds, another site may have mostly non-native speakers, and another site may have either lower- or higher-skilled adult learners.

Having a “clean” comparison group at the same site is challenging because of intervention “leakage” to nonintervention teachers. Adult education teachers are often so hungry for resources that they may try to peek into classrooms while an intervention is in process, get access to materials, or otherwise learn about the intervention. Their desire for anything that might help students makes contamination a concern.

What areas of adult education could use more research?

I think that policymakers and practitioners would benefit from many areas of research, but two come to mind.

• How to measure outcomes and demonstrate “return”: Many funding agencies require “grade level” growth, but it can take years for skills to consolidate and manifest as grade level change. In the meantime, adults may have found a job, gotten promoted, feel more comfortable interacting with their children’s schools, voted for the first time, etc. Are we measuring the right things in the right way? Are we measuring the things that matter to students, programs, and society? Should life improvements have equal or even more weight than growth on standardized tests? After how much time should we expect to see the life improvements (months, years, decades)?
• How to create useful self-paced materials for adults who need to “stop-out”: Due to the complexities of our learners’ lives, many have to “stop-out” for a period before resuming class attendance. These adults would benefit from resources that they could use on their own, at their own pace during this time. What is the best practice for delivery of these types of resources? Does this “best practice” depend on the adult’s ability level? Does it depend on the content area?

Any final words for researchers new to adult education?

I extend a warm welcome to anyone interested in research with adult learners. You will discover that many adult learners are eager to participate in research studies. They feel good helping researchers with their work and are hopeful that their time will help them or be of help to future learners. I highly recommend that you collaborate with researchers and/or practitioners who are familiar with the adult education context to help smooth the bumps you will inevitably experience.

This blog was produced by Dr. Meredith Larson (Meredith.Larson@ed.gov), research analyst and program officer at NCER.

In celebration of IES’s 20th anniversary, we’re telling the stories of our work and the significant impact that—together—we’ve had on education policy and practice. As IES continues to look towards a future focused on progress, purpose, and performance, Dr. Matthew G. Springer of the University of North Carolina at Chapel Hill discusses the merit pay debate and why the staying power of IES will continue to matter in the years to come.

Merit Pay for Teachers

There are very few issues that impact Americans more directly or more personally than education. The experience of school leaves people with deep memories, strong opinions, and a highly localized sense of how education works in a vast and diverse country. Bringing scientific rigor to a subject so near to people’s lives is incredibly important — and maddeningly hard. The idea of merit pay inspires strong reactions from politicians and the general public, but for a long time there was vanishingly little academic literature to support either the dismissive attitude of skeptics or the enthusiasm of supporters.

Given the stakes of the merit pay debate—the size of the nation’s teacher corps, the scale of local, state, and federal educational investments, and longstanding inequities in access to highly effective teachers—policymakers desperately need better insight into how teacher compensation might incentivize better outcomes. Critics worry that merit pay creates negative competition, causes teachers to focus narrowly on incentivized outcomes, and disrupts the collegial ethos of teaching and learning. Supporters, on the other hand, argue that merit pay motivates employees, attracts and retains talent in the profession, and improves overall productivity.

Generating Evidence: The Role of IES-Funded Research

That’s precisely the kind of need that IES was designed to meet. In 2006, with the support of IES, I joined a group of research colleagues to launch the Project on Incentives in Teaching (POINT) to test some of the major theories around merit pay. We wanted to apply the most rigorous research design—a randomized controlled trial—to the real-world question of whether rewarding teachers for better student test scores would move the needle on student achievement.

But orchestrating a real-world experiment is much harder than doing an observational analysis. Creating a rigorous trial to assess merit pay required enormous diplomacy. There are a lot of stakeholders in American education, and conducting a meaningful test of different teacher pay regimes required signoff from the local branch of the teachers’ union, the national union, the local school board and elected leadership, and the state education bureaucracy all the way up to the governor. Running an experiment in a lab is one thing; running an experiment with real-world teachers leading real, live classrooms is quite another.

With IES support to carry out an independent and scientifically rigorous study, POINT was able to move forward with the support of the Metropolitan Nashville Public Schools. The results were not what most anticipated.

Despite offering up to $15,000 annually in incentive pay based on a value-added measure of teacher effectiveness, in general, we found no significant difference in student performance between the merit-pay recipients and teachers who weren’t eligible for bonuses.

As with all good scientific findings, we strongly believed that our results should be the start of a conversation, not presented as the final word. Throughout my career, I’ve seen one-off research findings treated by media and advocacy organizations as irrefutable proof of a particular viewpoint, but that’s not how scientific research works. My colleagues and I took care to publish a detailed account of our study’s implementation, including an honest discussion of its methodological limitations and political constraints. We called for more studies in more places, all in an effort to contest or refine the limited insights we were able to draw in Nashville. “One experiment is not definitive,” we wrote. “More experiments should be conducted to see whether POINT’s findings generalize to other settings.”

How IES-Funded Research Informs Policy and Practice

Around the time of the release of our findings, the Obama administration was announcing another investment in the Teacher Incentive Fund, a George W. Bush-era competitive grant program that rewarded teachers who proved effective in raising test scores. We were relieved that our study didn’t prompt the federal government to abandon ship on merit pay; rather, they reviewed our study findings and engaged in meaningful dialogue about how to refine their investments. Ultimately, the Teacher Incentive Fund guidelines linked pay incentives with capacity-building opportunities for teachers—things like professional development and professional learning communities—so that the push to get better was matched by resources to make improvement more likely.

While we did not have education sector-specific research to support that pairing, it proved a critical piece of the merit pay design puzzle. In 2020, for example, I worked with a pair of colleagues on a meta-analysis of the merit pay literature. We synthesized effect sizes across 37 primary studies, 26 of which were conducted in the United States, finding that the merit pay effect is nearly twice as large when paired with professional development. This is by no means the definitive word, but it’s a significant contribution in the incremental advancement of scientific knowledge about American education. It is putting another piece of the puzzle together and moving the education system forward with ways to improve student opportunity and learning.

That’s the spirit IES encourages—open experimentation, modesty in drawing conclusions, and an eagerness to gather more data. Most importantly, we wanted our findings to spark conversation not just among academic researchers but among educators and policymakers, the people ultimately responsible for designing policy and implementing it in America’s classrooms.

Pushing that public conversation in a productive direction is always challenging. Ideological assumptions are powerful, interest groups are vocal, and even the most nuanced research findings can get flattened in the media retelling. We struggled with all of that in the aftermath of the POINT study, which arrived at a moment of increased federal interest in merit pay and other incentive programs. Fortunately, the Obama administration listened to rigorous research findings and learned from the research funding arm of the U.S. Department of Education.

This is why the staying power of IES matters. The debates around education policy aren’t going away, and the need for stronger empirical grounding grows alongside the value of education and public investment. The POINT experiment didn’t settle the debate over merit pay, but we did complicate it in productive ways that yielded another critically important finding nearly eight years later. That’s a victory for education science, and a mark of progress for education policy.

Matthew G. Springer is the Robena and Walter E. Hussman, Jr. Distinguished Professor of Education Reform at the University of North Carolina at Chapel Hill School of Education.

This blog was produced by Allen Ruby (Allen.Ruby@ed.gov), Associate Commissioner for Policy and Systems Division, NCER.