
IES Grant

Title: Developing Reading Comprehension Assessments Targeting Struggling Readers
Center: NCER
Year: 2004
Principal Investigator: Sabatini, John
Awardee: Educational Testing Service (ETS)
Program: Literacy
Award Period: 4 years
Award Amount: $1,572,635
Type: Measurement
Award Number: R305G040065

Purpose: This project will contribute to addressing the pressing need for comprehension assessments for adolescent and adult struggling readers. The primary goal is to develop a series of diagnostic assessments aimed at identifying struggling readers, and then providing information about the specific skills or combinations of skills the readers lack. This will enable teachers to focus instruction on the right skills and learners. The specific aims are: a) to identify and quantify sources of difficulty for readers in texts; b) to develop assessments that can be used in education delivery settings to identify sources of reading comprehension difficulty; and c) to identify and empirically examine which subsets of comprehension skills are linked to difficulties in understanding printed materials for individual struggling readers, and to determine how best to use this information to guide instruction.

Struggling readers, for the purposes of this proposal, are defined as individuals who would score Below Basic on the 8th grade National Assessment of Educational Progress (NAEP) or below Level 2 on the National Adult Literacy Survey. An estimated 26% of the nation's 4.5 million eighth graders score Below Basic, and roughly 40–44 million American adults have minimal or no proficiency in prose literacy. To accomplish these aims, we have assembled a diverse team and a set of advanced technical capabilities, many unique to ETS. We will apply Natural Language Processing (NLP) techniques to identify sources of text and task difficulty, and we will use Evidence-Centered Design (ECD) to model multiple dimensions of student proficiency. Together, these techniques form the foundation for developing and testing automated or assisted text and task generation and scoring tools, as well as statistical profile and growth modeling. We will also apply and investigate technology-enhanced tools for administering, collecting, and scoring evidence.

Population: The project study participants will be readers spanning the developmental levels of high school and adult literacy students and will include a range of skilled readers for comparison purposes. The participating schools have diverse, inner city student populations with high percentages of struggling readers. We will also regularly engage and convene teachers to review, examine, and provide guidance concerning all phases of the assessment delivery system.

Research Design and Methods: The research plan will proceed in three phases. In Phase I, we will apply Natural Language Processing techniques to identify sources of difficulty in a wide range of texts from existing corpora, and we will model the interactions of struggling-reader profiles and develop task models using Evidence-Centered Design analysis techniques. Finally, we will develop assessment tools and pilot them with the target population to assess their viability. These activities will continue iteratively throughout the project.
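The abstract does not specify which NLP features the project will use to quantify text difficulty. As a minimal illustrative sketch only, the surface features most commonly used for this purpose (average sentence length, syllables per word, type-token ratio, and the standard Flesch-Kincaid grade formula) could be computed as follows; the function names and the crude syllable heuristic are assumptions for illustration, not part of the project's methodology:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels;
    # every word is assigned at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def difficulty_features(text):
    # Split into sentences on terminal punctuation and pull out word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent, n_words = len(sentences), len(words)
    syllables = sum(count_syllables(w) for w in words)
    avg_sent_len = n_words / n_sent
    avg_syll = syllables / n_words
    # Flesch-Kincaid grade level, a widely used readability formula.
    fk_grade = 0.39 * avg_sent_len + 11.8 * avg_syll - 15.59
    return {
        "avg_sentence_length": avg_sent_len,
        "avg_syllables_per_word": avg_syll,
        "type_token_ratio": len({w.lower() for w in words}) / n_words,
        "fk_grade": fk_grade,
    }
```

Features like these are only a starting point; corpus-based NLP systems typically add vocabulary-frequency, syntactic, and cohesion measures on top of such surface statistics.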

In Phase II, promising, tested task models will be combined and administered to a larger sample of learners in a cross-sectional design to examine relationships between the components and profiles of learners. This will also be used to test and adjust the ECD model.

In Phase III, we will conduct a longitudinal, repeated-measures study to examine how well the NLP tools predict text and task difficulty and how well the assessments accumulate individual profile information over time.

Related IES Projects: Assessing Reading for Understanding: A Theory-based, Developmental Approach (R305F100005)


Book chapter

Mislevy, R.J., and Sabatini, J. (2012). How Research on Reading and Research on Assessment are Transforming Reading Assessment (or if They Aren't, how They Ought to). In J.P. Sabatini, E.R. Albro, and T. O'Reilly (Eds.), Measuring Up: Advances in how we Assess Reading Ability (pp. 119–134). Lanham, MD: Rowman & Littlefield.

Sabatini, J.P. (2009). From Health/Medical Analogies to Helping Struggling Middle School Readers: Issues in Applying Research to Practice. In S. Rosenfield, and V. Berninger (Eds.), Translating Science-Supported Instruction into Evidence-Based Practices: Understanding and Applying the Implementation Process (pp. 285–316). New York: Oxford University Press.

Journal article, monograph, or newsletter

Deane, P., Sheehan, K.M., Sabatini, J., Futagi, Y., and Kostin, I. (2006). Differences in Text Structure and Its Implications for Assessment of Struggling Readers. Scientific Studies of Reading, 10(3): 257–275.

O'Reilly, T., Sabatini, J., Bruce, K., Pillarisetti, S., and McCormick, C. (2012). Middle School Reading Assessment: Measuring What Matters Under an RTI Framework. Reading Psychology Special Issue: Response to Intervention, 33(1): 162–189.

Nongovernment report, issue brief, or practice guide

Sabatini, J.P., Bruce, K., and Steinberg, J. (2013). SARA Reading Components Tests, RISE Form: Test Design and Technical Adequacy. Princeton, NJ: Educational Testing Service.