
IES Grant

Title: Developing Reading Comprehension Assessments Targeting Struggling Readers
Center: NCER
Year: 2004
Principal Investigator: Sabatini, John
Awardee: Educational Testing Service (ETS)
Program: Literacy
Award Period: 4 years
Award Amount: $1,572,635
Type: Measurement
Award Number: R305G040065
Description:

Purpose: In the early 2000s, an estimated 26 percent of the nation's 4.5 million 8th graders read below the basic literacy level, and roughly 40–44 million American adults had minimal or no proficiency in prose literacy. The project team addressed the pressing need for comprehension assessments for struggling adolescent and adult readers by developing a series of diagnostic assessments designed first to identify struggling readers and then to pinpoint the specific skills, or combinations of skills, those readers lack. Such a tool would enable teachers to focus instruction on the right skills for the right learners. The specific aims of the project were to (a) identify and quantify sources of difficulty for readers in texts, (b) develop assessments that can be used in education delivery settings to identify sources of reading comprehension difficulty, (c) identify and empirically examine which subsets of comprehension skills are linked to difficulties in understanding printed materials for individual struggling readers, and (d) determine how best to use this information to guide instruction.

Project Activities: The researchers applied natural language processing techniques to identify sources of text and task difficulty and used evidence-centered design modeling of multiple dimensions of student proficiency. Together, these techniques formed the foundation for developing and testing automated or assisted tools for text and task generation and scoring, along with statistical profile and growth modeling. The researchers then applied and investigated technology-enhanced tools for administering assessments and for collecting and scoring evidence.

Structured Abstract

Sample: The project study participants were readers spanning the developmental levels of high school and adult literacy students and included a range of skilled readers for comparison purposes. The participating schools had diverse, urban student populations with high percentages of struggling readers. The researchers regularly engaged and convened teachers to review, examine, and provide guidance concerning all phases of the assessment delivery system.

Research Design and Methods: The research plan proceeded in three phases. In Phase I, the researchers applied natural language processing (NLP) techniques to identify sources of difficulty in a wide range of texts from existing corpora. They modeled the interactions of struggling-reader profiles and developed task models using evidence-centered design (ECD) analysis techniques. Finally, they developed assessment tools and piloted them to assess their viability. These activities continued iteratively throughout the project. In Phase II, the researchers combined the tested task models and administered them to a larger sample of learners in a cross-sectional design to examine relationships between the components and profiles of learners. They used these findings to test and adjust the ECD model. In Phase III, they conducted a longitudinal, repeated-measures study to examine how well the NLP tools predict text and task difficulty and how well the assessments accumulate individual profile information over time.
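The abstract describes the Phase I NLP analysis of text difficulty only at a high level. As a rough illustration of the kind of surface features such tools typically start from — not the project's actual system — the sketch below computes a Flesch-Kincaid grade level from word, sentence, and syllable counts, using a crude vowel-group heuristic for syllables:

```python
import re


def count_syllables(word: str) -> int:
    """Approximate English syllable count via vowel groups (a heuristic, not a dictionary)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # Drop a likely-silent final 'e' (e.g., "state"), but keep "-le" and "-ee" endings.
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)


def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59


simple = "The cat sat. The dog ran."
complex_text = "Comprehension assessments necessitate multidimensional proficiency modeling."
print(fk_grade(simple) < fk_grade(complex_text))  # longer sentences, more syllables -> higher grade
```

Readability formulas like this capture only sentence length and word complexity; the project's NLP modeling targeted a much wider range of difficulty sources, but the same pattern — extracting quantifiable text features and relating them to reader performance — underlies both.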

Related IES Projects: Assessing Reading for Understanding: A Theory-based, Developmental Approach (R305F100005), What Types of Knowledge Matters for What Types of Comprehension? Exploring the Role of Background Knowledge on Students' Ability to Learn from Multiple Texts (R305A150176), Exploring the onPAR Model in Developmental Literacy Education (R305A150193), Developing and Implementing a Technology-Based Reading Comprehension Instruction System for Adult Literacy Students (R305A200413)

Products and Publications

ERIC Citations: Find available citations for this award in ERIC.

Select Publications:

Book chapters

Mislevy, R.J., and Sabatini, J. (2012). How Research on Reading and Research on Assessment Are Transforming Reading Assessment (or if They Aren't, How They Ought To). In J.P. Sabatini, E.R. Albro, and T. O'Reilly (Eds.), Measuring Up: Advances in How We Assess Reading Ability (pp. 119–134). Lanham, MD: Rowman & Littlefield.

Sabatini, J.P. (2009). From Health/Medical Analogies to Helping Struggling Middle School Readers: Issues in Applying Research to Practice. In S. Rosenfield and V. Berninger (Eds.), Translating Science-Supported Instruction into Evidence-Based Practices: Understanding and Applying the Implementation Process (pp. 285–316). New York: Oxford University Press.

Journal articles

Deane, P., Sheehan, K.M., Sabatini, J., Futagi, Y., and Kostin, I. (2006). Differences in Text Structure and Its Implications for Assessment of Struggling Readers. Scientific Studies of Reading, 10(3): 257–275.

O'Reilly, T., Sabatini, J., Bruce, K., Pillarisetti, S., and McCormick, C. (2012). Middle School Reading Assessment: Measuring What Matters Under an RTI Framework. Reading Psychology Special Issue: Response to Intervention, 33(1): 162–189.

Nongovernment peer-reviewed report

Sabatini, J.P., Bruce, K., and Steinberg, J. (2013). SARA Reading Components Tests, RISE Form: Test Design and Technical Adequacy. Princeton, NJ: Educational Testing Service.

