
IES Grant

Title: Developing and Validating Web-administered, Reading for Understanding Assessments for Adult Education
Center: NCER
Year: 2016
Principal Investigator: Sabatini, John
Awardee: University of Memphis
Program: Postsecondary and Adult Education
Award Period: 4 years (9/1/2016–8/31/2020)
Award Amount: $1,394,982
Type: Measurement
Award Number: R305A190522

Previous Award Number: R305A160129
Previous Institution: Educational Testing Service (ETS)

Co-Principal Investigator: Tenaha O'Reilly

Purpose: A large percentage of U.S. adults struggle to read even basic texts, but there are few valid assessments for this population, making it difficult to measure learning outcomes or improve instruction. The purpose of this project is to develop a digital assessment appropriate for such adults, in particular those reading between the 3rd- and 8th-grade levels. Such an assessment will not only help to determine an adult reader's strengths and weaknesses but also inform instruction and improve programs and institutional accountability.

Project Activities: Building on research from previous Institute-funded grants that focused on younger readers (i.e., R305G04065 and R305F100005), the researchers will adapt existing assessments for use with adult populations. They will refine the items, validate them for the target population, and produce documentation for the use of the assessment.

Products: Researchers will produce a fully developed and validated digital assessment for adults reading between the 3rd- and 8th-grade levels, as well as peer-reviewed publications.

Structured Abstract

Setting: The research will take place in multiple adult literacy programs across the U.S.

Sample: The sample will include a nationally representative group of approximately 8,000 adult learners.

Assessment: The assessment will provide information on adult learners' overall reading ability using an assessment framework that contains two major components: (1) assessments of reading component skills that measure foundational reading skills (e.g., decoding, word recognition, vocabulary, morphology, sentence processing) and (2) scenario-based assessments (SBA) that measure higher order comprehension skills (e.g., integration, evaluation, perspective-taking) in computerized reading environments. This assessment targets adults with reading skills ranging from the 3rd- to the 8th-grade level (e.g., those in adult basic education). The final assessment will include multiple sets of reading component batteries and SBA forms, vertical scales for the subtests, and recommendations for appropriate use of the assessment. It will also measure performance moderators such as background knowledge, motivation, reading strategies, and self-regulation. The assessment will be delivered via computer and will allow instructors and programs to track student progress while providing information that could inform instructional practice.

Research Design and Methods: The researchers will leverage existing items and forms that they developed and field tested in previous Institute-funded projects (R305G04065, R305F100005) for adolescent students. Over a series of studies, they will refine, test, and finalize their assessment. First, the research team will adapt items and forms as necessary to be appropriate for adult readers, using an expert review panel to evaluate the content validity of the constructs and items for use with the adult population (Study 1). They will then conduct a pilot study (Study 2) to determine the usability, feasibility, and basic psychometric properties of the tests and examine the comparability of the measures developed for adolescents with the adult population. Finally, they will conduct two studies to establish the validity of the assessment. In Study 3, they will conduct a large-scale study to establish vertical scales and national norms, and in Study 4, they will examine both convergent and divergent validity of the assessments.

Control Condition: Due to the nature of this project, there is no control group. However, adults' responses to un-adapted items (i.e., items that are the same as those previously validated with adolescent readers) will be compared to adolescents' responses.

Key Measures: The researchers will validate their assessment against the adolescent version of the component skills and SBA assessments, established tests for measuring component reading skills (e.g., Test of Word Reading Efficiency (TOWRE), Woodcock-Johnson III, Wide Range Achievement Test), and assessments commonly used by adult literacy programs (e.g., the Tests of Adult Basic Education (TABE)).

Data Analytic Strategy: The researchers will use various analytic methods including classical test theory, item response theory, differential item functioning, dimensionality, factorial/measurement invariance, and vertical scaling.
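The item response theory analyses named above typically start from a model such as the two-parameter logistic (2PL), which relates an examinee's ability to the probability of answering an item correctly. The sketch below is purely illustrative — the item parameters are invented, and the project's actual calibration model and software are not specified in this abstract:

```python
import math

def irt_2pl(theta, a, b):
    """Probability of a correct response under the two-parameter logistic
    (2PL) IRT model: P = 1 / (1 + exp(-a * (theta - b))), where theta is
    examinee ability, a is item discrimination, and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items (discrimination a, difficulty b) placed on a common
# scale -- vertical scaling would link such parameters across test levels.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 1.5)]

# Response probabilities for an examinee at the scale mean (theta = 0):
# an easy item is likely correct, a hard item unlikely.
probs = [irt_2pl(0.0, a, b) for a, b in items]
```

In this framework, differential item functioning would be flagged when two groups matched on theta show different response probabilities for the same item, and vertical scaling places item difficulties from different grade-level forms onto the single theta scale.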

Related IES Projects: Developing Reading Comprehension Assessments Targeting Struggling Readers (R305G04065); Assessing Reading for Understanding: A Theory-based, Developmental Approach (R305F100005)

Project Website:


Book Chapter

O'Reilly, T., Sabatini, J., & Wang, Z. (2018). Using scenario-based assessments to measure deep learning. In K. Millis, D. Long, J. Magliano, & K. Weimer (Eds.), Deep learning: Multi-disciplinary approaches (pp. 197–208). New York, NY: Routledge.

Sabatini, J., O'Reilly, T., Dreier, K., & Wang, Z. (2019). Cognitive processing deficits associated with low literacy: Differences between adult- and child-focused models. In D. Perin (Ed.), The Wiley Handbook of Adult Literacy (pp. 15–39). Hoboken, NJ: John Wiley & Sons.

Journal article, monograph, or newsletter

Feller, D. P., Magliano, J., Sabatini, J., O'Reilly, T., & Kopatich, R. D. (2020). Relations between component reading skills, inferences, and comprehension performance in community college readers. Discourse Processes, 57(5–6), 473–490. DOI: 10.1080/0163853X.2020.1759175

Magliano, J. P., Higgs, K., Santuzzi, A., Tonks, S. M., O'Reilly, T., Sabatini, J., ... & Parker, C. (2020). Testing the inference mediation hypothesis in a post-secondary context. Contemporary Educational Psychology, 61, 101867.

Smith, E. H., Hollander, J., Graesser, A. C., Sabatini, J., & Hu, X. (2021). Integrating SARA Assessment with Reading Comprehension Training in AutoTutor. English Teaching, 76(1), 17–29.