IES Blog

Institute of Education Sciences

The Growing Reading Gap: IES Event to Link Knowledge to Action Through Literacy Data

On June 8 and 9, the Institute of Education Sciences (IES) and the Council of the Great City Schools (CGCS) will host a Reading Summit to address one of the most important issues confronting American education today: the declining reading performance of America’s lowest-performing students and the growing gap between low- and high-performing students.

At this 2-day virtual event, participants will explore the results of the National Assessment of Educational Progress (NAEP), as well as other IES data, and learn strategies educators can use to help low-performing readers make progress.

Learn more about the summit’s agenda and speakers—including IES Director Mark Schneider, NCES Commissioner James L. Woodworth, and NCES Associate Commissioner Peggy Carr—and register to participate (registration is free).

In the meantime, explore some of the data NCES collects on K–12 literacy and reading achievement, which show that the scores of students in the lowest-performing groups are decreasing over time.

  • The National Assessment of Educational Progress (NAEP) administers reading assessments to 4th-, 8th-, and 12th-grade students. The most recent results from 2019 show that average reading scores for students at the 10th percentile (i.e., the lowest-performing students) decreased between 2017 and 2019 at grade 4 (from 171 to 168) and grade 8 (from 219 to 213) and decreased between 2015 and 2019 at grade 12 (from 233 to 228).
  • The Progress in International Reading Literacy Study (PIRLS) is an international comparative assessment that measures 4th-grade students’ reading knowledge and skills. The most recent findings from 2016 show that the overall U.S. average score (549) was higher than the PIRLS scale centerpoint (500), but at the 25th percentile, U.S. 4th-graders scored lower in 2016 (501) than in 2011 (510).
  • The Program for International Student Assessment (PISA) is a study of 15-year-old students’ performance in several subjects, including reading literacy. The 2018 results show that, although the overall U.S. average reading score (505) was higher than the OECD average score (487), at the 10th percentile, the U.S. average score in 2018 (361) was not measurably different from the score in 2015 and was lower than the score in 2012 (378).

NCES also collects data on young children’s literacy knowledge and activities as well as the literacy competencies of adults. Here are a few data collections and tools for you to explore:

This year, the Condition of Education includes a newly updated indicator on literacy activities that parents reported doing with young children at home. Here are some key findings from this indicator, which features data from the 2019 National Household Education Surveys Program (NHES) Early Childhood Program Participation Survey:

In the week before the parents were surveyed,

  • 85 percent of 3- to 5-year-olds were read to by a family member three or more times.
  • 87 percent of 3- to 5-year-olds were told a story by a family member at least once.
  • 96 percent of 3- to 5-year-olds were taught letters, words, or numbers by a family member at least once.

In the month before the parents were surveyed,

  • 37 percent of 3- to 5-year-olds visited a library with a family member at least once.

Be sure to read the full indicator in the 2021 Condition of Education, which was released in May, for more data on young children’s literacy activities, including analyses by race/ethnicity, mother’s educational attainment, and family income.

Don’t forget to follow NCES on Twitter, Facebook, and LinkedIn to stay up to date on the latest findings and trends in literacy and reading data, and register for the IES Reading Summit to learn more about this topic from experts in the field.


By Megan Barnett, AIR

Towards a Better Understanding of Middle-Schoolers’ Argumentation Skills

What is the difference between fact and opinion? How do you find relevant evidence and use it to support a position? Every day, teachers help students practice these skills by fostering critical discussions, a form of argumentation that encourages students to use reasoning to resolve differences of opinion.

In their IES-funded study, Exploring and Assessing the Development of Students’ Argumentation Skills, Yi Song and her colleagues are identifying activities (both teacher-led and technology-supported) that can help middle school students generate better oral and written arguments.

The project began in 2019, and the team is working in classrooms with teachers and students. The researchers have created a series of videos describing their work. In this series, Dr. Song and her co-PIs, Dr. Ralph Ferretti and Dr. John Sabatini, discuss why the project is important to education, how they will carry out the research plan, and how educators can apply what they are learning in their classrooms.

For questions and more information, contact Meredith Larson (Meredith.Larson@ed.gov), Program Officer, NCER.

Better Reading Comprehension When You Know That You Don’t Know

The more you already know about a topic, the easier it may be to comprehend and learn from texts about that topic. But knowledge has to start somewhere. So how can we help students learn from texts when they may have low background knowledge?

In their exploratory study, researchers from ETS found that lack of knowledge is not necessarily a barrier to comprehension. Rather, they suggest that students who can identify their lack of background knowledge are more likely to comprehend and learn new information than students who do not acknowledge they lack background knowledge. In other words, knowing that you might not know may lead to better outcomes.

To determine the role of background knowledge, the researchers pretested middle and high school students on topics they were likely to know something, but not everything, about, such as ecology, immigration, and wind power. The pretest included an “I don’t know” option, along with correct and incorrect responses.

Students then took a scenario-based assessment in which they read multiple sources about each of the topics. This type of assessment mirrors real-world learning by encouraging readers to build their own interpretations of a topic, which helps researchers determine whether students comprehend what they read.

The researchers found that students who selected “I don’t know” when answering background knowledge questions had a better understanding of the content than those who provided wrong answers to these questions. In fact, students who selected “I don’t know” rather than answering incorrectly were nearly three times as likely to learn from sources that provided the correct information as students who had answered the pretest incorrectly. Students who selected “I don’t know” may also learn more than students with a comparably weak level of background knowledge. The researchers suggest that the “I don’t know” readers may have set different reading goals before engaging with the sources than those who guessed incorrectly.
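To make that comparison concrete, here is a minimal sketch of how a “nearly three times as likely” figure could be computed from pretest and posttest records. The records, labels, and numbers below are invented for illustration; they are not the study’s actual data or analysis code.

```python
# Hypothetical records: each pair holds a pretest response category and whether
# the student later answered the matching comprehension item correctly (1) or
# not (0). These values are invented for illustration only.
records = [
    ("dont_know", 1), ("dont_know", 1), ("dont_know", 0), ("dont_know", 1),
    ("incorrect", 0), ("incorrect", 1), ("incorrect", 0), ("incorrect", 0),
]

def learning_rate(records, group):
    """Proportion of students in a pretest group who learned the fact."""
    outcomes = [learned for g, learned in records if g == group]
    return sum(outcomes) / len(outcomes)

# Relative likelihood of learning: "I don't know" vs. incorrect responders.
ratio = learning_rate(records, "dont_know") / learning_rate(records, "incorrect")
print(f"'I don't know' students were {ratio:.1f}x as likely to learn")
```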


Possible Implications for Teaching and Learning

The results from this work support the idea that having and building background knowledge is key to comprehension. Thus, teachers may want to assess students’ existing knowledge and address knowledge gaps prior to instruction.

Teachers may also want to provide an “I don’t know” option or response options that allow students to rate their level of certainty. Doing so may help teachers distinguish students who recognize their own gaps in knowledge from those who do not realize that their answers are wrong or that they simply do not know. This latter group of students may need more help in judging the accuracy of their own knowledge or may hold incorrect knowledge that could interfere with learning.

The researchers further suggest that teachers go beyond building background knowledge and teach students how to set appropriate reading goals and use strategic reading approaches to learn new facts or correct existing misunderstandings.

The research reported here was conducted under NCER grant R305A150176: What Types of Knowledge Matters for What Types of Comprehension? Exploring the Role of Background Knowledge on Students' Ability to Learn from Multiple Texts.

This blog was written by Dr. Meredith Larson. Contact her for more information about this project.

Building a Reading Comprehension Measure for Postsecondary Students

Assessments of both U.S. adults and 12th-grade students indicate that millions of learners may have significant reading skill gaps. Because these students may lack the fundamental reading and comprehension skills needed to thrive in college, postsecondary institutions need valid reading measures that accurately determine the source of student difficulties.

An IES-funded research team is developing and validating such a measure: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College). MOCCA-College aims to assess the reading comprehension abilities of postsecondary students and distinguish between common comprehension difficulties. This information could help students, faculty, and programs better determine who might need what type of additional reading instruction.

The current version of MOCCA-College is still being validated, but it already contains components that may interest postsecondary institutions, faculty, and students. For example, it suggests classroom interventions based on a student’s results and allows for different user roles, such as student, faculty member, or administrator. 

Results from pilot work indicate that MOCCA-College can reliably distinguish between postsecondary readers with strong comprehension skills and those who may need to build these skills. MOCCA-College uses both narrative and expository texts to determine student performance. The results indicate that both types of passages measure a single dimension of ability, though narrative passages may more easily and accurately discriminate between those who have good comprehension skills and those who do not.

This finding is in keeping with meta-analysis work that finds a similar pattern for narrative and expository items. Narrative passages appear to consistently measure inferential comprehension more accurately than expository passages for both younger and older readers. This holds even after matching texts for readability and demands on background knowledge.
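For readers curious about what such a dimensionality and discrimination analysis involves, here is a minimal sketch on simulated item scores. Everything in it (the number of examinees and items, the slope values, the eigenvalue-ratio heuristic, and the item-total correlation index) is an assumption for illustration; it is not MOCCA-College’s actual psychometric analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 0/1 item scores for 200 examinees on 12 items (6 "narrative,"
# 6 "expository"). A single latent ability drives every item, with narrative
# items given steeper (more discriminating) slopes. All values are invented.
ability = rng.normal(size=(200, 1))
slopes = np.concatenate([rng.uniform(1.2, 2.0, 6),    # narrative items
                         rng.uniform(0.6, 1.2, 6)])   # expository items
prob_correct = 1 / (1 + np.exp(-ability * slopes))    # 2PL-style, difficulty = 0
scores = (rng.random((200, 12)) < prob_correct).astype(int)

# One common unidimensionality heuristic: if the first eigenvalue of the
# inter-item correlation matrix dwarfs the second, a single dimension fits.
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False)))[::-1]
print("first/second eigenvalue ratio:", round(eigvals[0] / eigvals[1], 2))

# Corrected item-total correlations as a rough discrimination index: the
# narrative items should come out higher, mirroring the pattern in the text.
total = scores.sum(axis=1)
r_it = [np.corrcoef(scores[:, j], total - scores[:, j])[0, 1] for j in range(12)]
print("narrative mean r_it: ", round(float(np.mean(r_it[:6])), 2))
print("expository mean r_it:", round(float(np.mean(r_it[6:])), 2))
```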

As the researchers continue to validate MOCCA-College, we will continue to learn more about the needs of postsecondary readers, as well as how to identify and address these needs.

The research and articles referenced above were supported through NCER grant R305A180417: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College).

Dr. Meredith Larson, program officer for postsecondary and adult education, wrote this blog. Contact her at Meredith.Larson@ed.gov for additional information about MOCCA-College and postsecondary teaching and learning research.


Recent Report Identifies Possible Categories of Adult Struggling Readers (and How to Help Them)

Nearly one in five U.S. adults aged 16 and over may struggle with basic literacy. These adults may struggle with any of the core components of reading, such as decoding, vocabulary, and comprehension, and they may struggle for many different reasons: English may not be their first language, they may be experiencing cognitive decline with age, or they may lack formal education. To identify the right instructional tools and curricula, we need to understand the varying needs of this heterogeneous group of adult struggling readers and design appropriate solutions.

In a recent report, IES-funded researchers conducted a latent class analysis of 542 adults (ages 16 to 71) enrolled in adult education programs whose reading scores placed them between the 3rd- and 8th-grade levels. The analysis identified four possible subgroup categories of adult struggling readers based on their performance on lower-level competencies (phonological awareness, decoding, vocabulary) and higher-level competencies (comprehension, inferencing, background knowledge); a sketch of this type of analysis appears after the list below:

  • Globally Impaired Readers: adults who show difficulties in all competencies
  • Globally Better Readers: adults who are relatively strong in all competencies
  • Weak Decoders: readers who are relatively weaker in lower-level competencies but strong in higher-level competencies
  • Weak Language Comprehenders: readers who are strong in lower-level competencies but relatively weaker in higher-level competencies
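As promised above, here is a minimal sketch of the kind of mixture modeling that can recover profiles like these. It uses scikit-learn’s GaussianMixture over continuous competency scores (technically a latent profile model, a close cousin of latent class analysis) on invented data; the scores, class count, and column labels are assumptions for illustration, not the study’s data or code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical standardized scores for 542 readers on six competencies:
# columns 0-2 are lower-level (phonological awareness, decoding, vocabulary),
# columns 3-5 are higher-level (comprehension, inferencing, background
# knowledge). Random data stand in for the study's real assessment scores.
scores = rng.normal(size=(542, 6))

# Four-class mixture, echoing the report's four reader categories. The study
# used latent class analysis; a Gaussian mixture over continuous scores is
# the closest off-the-shelf scikit-learn analogue.
model = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
labels = model.fit_predict(scores)

# Each class's mean profile: contrasting lower- vs. higher-level means is what
# would separate, say, Weak Decoders from Weak Language Comprehenders.
for k, mean in enumerate(model.means_):
    print(f"class {k} (n={np.sum(labels == k)}): "
          f"lower={mean[:3].round(2)}, higher={mean[3:].round(2)}")
```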

On average, Weak Decoders were older than readers in the other categories, and Globally Impaired Readers were, in turn, older on average than Globally Better Readers and Weak Language Comprehenders. Globally Better Readers and Weak Decoders included larger proportions of native English speakers than the other two categories. Thus, both age and English proficiency may predict the pattern of strengths and weaknesses. However, having a high school diploma did not predict performance patterns.

Although Globally Better Readers tended to perform better on the reading assessments than the other categories, even this group performed at the 6th-grade level on average. Thus, all four groups would benefit from additional instruction. The researchers suggest different approaches for addressing the needs of learners in the different categories. For example, Weak Language Comprehenders may benefit from technology-based solutions that help build their oral language competencies, whereas Globally Impaired Readers and Weak Decoders may benefit from direct instruction in decoding skills.

This research was conducted as part of the Center for the Study of Adult Literacy (CSAL): Developing Instructional Approaches Suited to the Cognitive and Motivational Needs of Struggling Adults, funded in 2012 through NCER.

The abstract for the publication discussed above is available on ERIC: “Identifying Profiles of Struggling Adult Readers: Relative Strengths and Weaknesses in Lower-Level and Higher-Level Competencies” (Amani Talwar, Daphne Greenberg, and Hongli Li).

Dr. Meredith Larson, program officer for postsecondary and adult education, wrote this blog. Contact her at Meredith.Larson@ed.gov for additional information about CSAL and adult education research.