Inside IES Research

Notes from NCER & NCSER

Towards a Better Understanding of Middle-Schoolers’ Argumentation Skills

What is the difference between fact and opinion? How do you find relevant evidence and use it to support a position? Every day, teachers help students practice these skills by fostering critical discussions, a form of argumentation that encourages students to use reasoning to resolve differences of opinion.

In their IES-funded study, Exploring and Assessing the Development of Students' Argumentation Skills, Yi Song and her colleagues are identifying activities (both teacher-led and technology-supported) that can improve middle-school students’ ability to generate better oral and written arguments.

This project began in 2019 and is working in classrooms with teachers and students. The researchers have created a series of videos that describe their work. In this series, Dr. Song and her co-PIs, Dr. Ralph Ferretti and Dr. John Sabatini, discuss why the project is important to education, how they plan to conduct the research, and how educators can apply what they are learning in their classrooms.

For questions and more information, contact Meredith Larson (Meredith.Larson@ed.gov), Program Officer, NCER

Better Reading Comprehension When You Know That You Don’t Know

The more you already know about a topic, the easier it may be to comprehend and learn from texts about that topic. But knowledge has to start somewhere. So how can we help students learn from texts when they may have low background knowledge?

In their exploratory study, researchers from ETS found that lack of knowledge is not necessarily a barrier to comprehension. Rather, they suggest that students who can identify their lack of background knowledge are more likely to comprehend and learn new information than students who do not acknowledge they lack background knowledge. In other words, knowing that you might not know may lead to better outcomes.

To determine the role of background knowledge, the researchers pretested middle and high school students’ background knowledge with questions on topics the students were likely to have partial but incomplete knowledge of, such as ecology, immigration, and wind power. The pretest included an “I don’t know” option, along with correct and incorrect responses.

Students then took a scenario-based assessment in which they read multiple sources about each of the topics. This type of assessment mirrors real-world learning by encouraging readers to build their own interpretations of a topic, which helps researchers determine whether students comprehend what they read.

The researchers found that students who selected “I don’t know” on the background knowledge questions understood the content better than those who answered those questions incorrectly. In fact, students who selected “I don’t know” rather than answering incorrectly were nearly three times as likely to learn from sources that provided the correct information. Students who selected “I don’t know” may also learn more than students with a comparably weak level of background knowledge. The researchers suggest that the “I don’t know” readers may have set different reading goals before engaging with the sources than those who guessed incorrectly.

 

Possible Implications for Teaching and Learning

The results from this work support the idea that having and building background knowledge is key. Thus, teachers may want to assess existing knowledge and address knowledge gaps prior to instruction.

Teachers may also want to provide an “I don’t know” option or options that allow students to rate their level of certainty. Doing so may help teachers distinguish students who recognize their own knowledge gaps from those who are unaware that their knowledge is wrong or missing. This latter group of students may need more help judging the accuracy of their own knowledge or may hold misconceptions that could interfere with learning.

The researchers further suggest that teachers go beyond building background knowledge by teaching students how to set appropriate reading goals and use strategic reading approaches to learn new facts or correct existing misunderstandings.

 


The research reported here was conducted under NCER grant R305A150176: What Types of Knowledge Matters for What Types of Comprehension? Exploring the Role of Background Knowledge on Students' Ability to Learn from Multiple Texts.

This blog was written by Dr. Meredith Larson. Contact her for more information about this project.

Building a Reading Comprehension Measure for Postsecondary Students

Assessments of both U.S. adults and 12th-grade students indicate that millions of learners may have significant reading skill gaps. Because these students may lack the fundamental reading and comprehension skills needed to thrive in college, postsecondary institutions need valid reading measures that accurately determine the source of student difficulties.

An IES-funded research team is developing and validating such a measure: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College). MOCCA-College aims to assess the reading comprehension abilities of postsecondary students and distinguish between common comprehension difficulties. This information could help students, faculty, and programs better determine who might need what type of additional reading instruction.

The current version of MOCCA-College is still being validated, but it already contains components that may interest postsecondary institutions, faculty, and students. For example, it suggests classroom interventions based on a student’s results and allows for different user roles, such as student, faculty member, or administrator. 

Results from pilot work indicate that MOCCA-College can reliably distinguish between postsecondary readers with strong comprehension skills and those who may need to build these skills. MOCCA-College uses both narrative and expository texts to determine student performance. The results indicate that both types of passages measure a single dimension of ability, though narrative passages may more easily and accurately discriminate between those who have good comprehension skills and those who do not.

This finding is in keeping with meta-analysis work that finds a similar pattern for narrative and expository items. Narrative passages appear to consistently measure inferential comprehension more accurately than expository passages for both younger and older readers. This holds even after matching texts for readability and demands on background knowledge.

As the researchers continue to validate MOCCA-College, we will continue to learn more about the needs of postsecondary readers, as well as how to identify and address these needs.

 


This research and the articles referenced above are supported through NCER grant R305A180417: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College).

Dr. Meredith Larson, program officer for postsecondary and adult education, wrote this blog. Contact her at Meredith.Larson@ed.gov for additional information about MOCCA-College and postsecondary teaching and learning research.

 

Recent Report Identifies Possible Categories of Adult Struggling Readers (and How to Help Them)

Nearly one in five U.S. adults aged 16 and over may struggle with basic literacy. These adults may struggle with any of the core components of reading, such as decoding, vocabulary, and comprehension. They may struggle for many different reasons: English may not be their first language, they may be experiencing cognitive decline with age, or they may lack formal education. To identify the right instructional tools and curricula, we need to understand the varying needs of this heterogeneous group of adult struggling readers and design appropriate solutions.

In a recent report, IES-funded researchers conducted a latent class analysis of 542 adults (ages 16 to 71) enrolled in adult education programs whose reading scores placed them between the 3rd- and 8th-grade levels. The analysis identified four possible subgroups of adult struggling readers based on their performance on lower-level competencies (phonological awareness, decoding, vocabulary) and higher-level competencies (comprehension, inferencing, background knowledge):

 

  • Globally Impaired Readers: adults who show difficulties in all competencies
  • Globally Better Readers: adults who are relatively strong in all competencies
  • Weak Decoders: readers who are relatively weaker in lower-level competencies but strong in higher-level competencies
  • Weak Language Comprehenders: readers who are strong in lower-level competencies but relatively weaker in higher-level competencies

 

On average, Weak Decoders were older than readers in the other categories, and Globally Impaired Readers were older, on average, than Globally Better Readers and Weak Language Comprehenders. Globally Better Readers and Weak Decoders included a larger proportion of native English speakers than the other two categories. Thus, both age and English proficiency may predict the pattern of strengths and weaknesses. However, having a high school diploma did not predict performance patterns.

Although Globally Better Readers tended to perform better on reading assessments than the other categories, even this group performed at the 6th-grade level on average. Thus, all groups of readers would benefit from additional instruction. The researchers suggest different approaches for addressing the needs of learners in the different categories. For example, Weak Language Comprehenders may benefit from technology-based solutions that help build their oral language competencies, whereas Globally Impaired Readers and Weak Decoders may benefit from direct instruction on decoding skills.

 


This research was conducted as part of the Center for the Study of Adult Literacy (CSAL): Developing Instructional Approaches Suited to the Cognitive and Motivational Needs of Struggling Adults, funded in 2012 through NCER.

The abstract for the publication discussed above is available on ERIC: Identifying Profiles of Struggling Adult Readers: Relative Strengths and Weaknesses in Lower-Level and Higher-Level Competencies (Talwar, Amani; Greenberg, Daphne; Li, Hongli).

Dr. Meredith Larson, program officer for postsecondary and adult education, wrote this blog. Contact her at Meredith.Larson@ed.gov for additional information about CSAL and adult education research.

Research on Adult Literacy: A History of Investment in American Adults

Reading is fundamental, but it is also difficult to master, taking thousands of hours of instruction and practice. Roughly 52 percent of U.S. adults over the age of 16 may struggle with everyday literacy tasks. Of these adults, approximately 20 percent may perform at very low levels of literacy. For adults who are still mastering this skill, the task can seem overwhelming.

Luckily, IES-funded researchers have been working towards solutions for adults with low basic reading skills and are creating and refining assessments, curricula, and software. These innovations aim to help adult learners, the instructors and tutors who work with them, and the programs that support them.

As part of our commemoration of National Adult Education and Family Literacy Week (September 20-26, 2020), we would like to recognize the history of adult literacy research at IES and its National Center for Education Research.  

Since 2004, IES-funded researchers have been developing assessments to help identify the needs of adults struggling with literacy and working on solutions to build adult literacy skills. This work fed into the measurement component of IES’s Reading for Understanding Initiative in 2010 and later returned to adult basic literacy measurement in 2016.

In 2012, IES funded the Center for the Study of Adult Literacy (CSAL), which developed a curriculum and technology for adults reading between the 3rd- and 8th-grade levels. CSAL demonstrates how adult literacy research benefits from integrating research conducted with students with disabilities and with students in K-12 and postsecondary settings. In fact, the researchers drew on findings from eight prior IES grants funded by NCER and NCSER.

Our researchers are also developing a clearer picture of the adults who fall into the broad category of those with low literacy. They are leveraging the PIAAC data set to conduct exploratory work that informs both our understanding of adults at the very lowest levels of literacy and our understanding of whether basic skills predict success in postsecondary career and technical education programs.

In 2020, IES funded additional development research to help refine an interactive, online reading comprehension program, AutoTutor for Adult Reading Comprehension (AT-ARC). Another project will recruit and train postdoctoral fellows to cultivate the next generation of researchers who can continue to build a research base for improving adult literacy outcomes.

Although IES researchers are making great strides to build knowledge, the field needs more information, and adult learners deserve tools and innovations developed for their specific needs and goals. IES hopes to continue to support such work.

 


To learn more about IES-wide efforts to understand and improve adult learners’ outcomes, visit the Adult Basic Skills topic page. Contact Dr. Meredith Larson for more information about the research supported by NCER.