IES Grant

Title: Developing and Validating Web-administered, Reading for Understanding Assessments for Adult Education
Center: NCER
Year: 2016
Principal Investigator: Sabatini, John
Awardee: University of Memphis
Program: Postsecondary and Adult Education
Award Period: 4 years (9/1/2016 – 8/31/2020)
Award Amount: $1,394,982
Type: Measurement
Award Number: R305A190522
Description:

Previous Award Number: R305A160129
Previous Awardee: Educational Testing Service (ETS)

Co-Principal Investigator: O'Reilly, Tenaha

Purpose: A large percentage of U.S. adults struggle to read even basic texts, but there are few valid assessments for adult learners with underdeveloped reading literacy skills across the variety of educational settings that serve them. Consequently, there is limited support for diagnostic, progress-monitoring, and outcome measurement aligned with empirically supported instruction. The purpose of this project was to develop a digital assessment system appropriate for use with adult learners, with special emphasis on adults reading at the third- through eighth-grade equivalency levels. Such an assessment system should not only help determine an adult reader's strengths and weaknesses but also inform instruction, improve programs, and support institutional monitoring of student achievement.

Project Activities: Building on research from previous IES-funded grants that focused on readers in grades 5 through 10 in schools (R305G04065 and R305F100005), the researchers adapted assessments for use with adult learners. They evaluated the items and forms for use with adults, investigated the assessment system's usability and feasibility, and conducted pilot and field tests to evaluate the psychometric properties and valid uses of the system. The researchers produced an assessment framework explaining the background, theoretical support, and design of the multiple test types produced, as well as a technical report describing the samples, analyses, and results of field testing. Finally, they developed an online platform for administering and scoring the tests, reporting scores, and providing interpretive guidance and instructional recommendations to instructors.

Key Outcomes: The main findings of this project are as follows:

  • Education technology developers can integrate assessments into instructional technologies to improve engagement and learning. For example, the SARA assessment, which provides a diagnostic profile of an individual's reading strengths and weaknesses, can be used to guide the selection of reading comprehension lessons or to measure progress in classroom or adaptive learning systems such as AutoTutor-ARC, a digital technology that adapts to learners' performance and engages them in an immersive learning experience through a three-way conversation with computer agents acting as a tutor and a peer to discuss texts (Hollander et al., 2023; Smith et al., 2021).
  • Foundational reading skills are a substantial source of variability in postsecondary students' ability to perform academic reading tasks, as is their ability to construct bridging and elaborative inferences while reading. The ability to construct bridging inferences predicted performance on measures of close comprehension, while the ability to construct elaborative inferences predicted performance on the scenario-based assessment (Feller et al., 2020).

Structured Abstract

Setting: The research took place in multiple settings across the U.S. that serve adult learners with underdeveloped reading literacy skills.

Sample: The sample consisted of 1,480 administrations of various sets of tests from the system. Learners were drawn from a variety of adult education settings across the country, and the samples differed both in which tests, subtest forms, and items learners completed and in the educational settings from which they were drawn.

Assessment: The final SARA includes three major test types: (i) a brief router test with items covering vocabulary, morphology, and sentence processing; (ii) multiple forms of six component skill tests (word recognition and decoding, vocabulary, morphology, sentence processing, reading efficiency, and traditional passage-based reading comprehension); and (iii) scenario-based reading comprehension tests that measure higher-order comprehension skills (e.g., integration, evaluation, perspective-taking) in computerized reading environments. The system is accessed on a web-based platform hosted by Capti. The assessment is delivered via digital devices and allows instructors and programs to track student progress while providing information that can inform instructional practice. The platform features user-friendly test delivery; immediate, automated scoring; automated repeated test delivery for progress monitoring and outcome testing; and robust online score reporting. It also includes an instructor and administrator management system with functions for creating student rosters and groups, assigning students sets of tests by ability or test type, monitoring test administration and completion, controlling time limits, and generating individual and aggregate score reports with interpretive guides to support valid use of the assessments. The researchers constructed item response theory (IRT)-based vertical scales for the subtests as well as recommendations for appropriate use.
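
To make the router-based assignment concrete, here is a minimal sketch, in Python, of how a brief router score might map learners to component test forms of appropriate difficulty. The data model, cut points, and form names are hypothetical illustrations, not the published SARA design.

    # A minimal sketch of ability-based routing from a brief router test to
    # component skill test forms. The data model, cut points, and form names
    # below are illustrative assumptions, not the published SARA design.
    from dataclasses import dataclass

    COMPONENT_TESTS = [
        "word_recognition_decoding",
        "vocabulary",
        "morphology",
        "sentence_processing",
        "reading_efficiency",
        "passage_comprehension",
    ]

    @dataclass
    class RouterResult:
        learner_id: str
        theta: float  # ability estimate from the router items (hypothetical scale)

    def assign_forms(result: RouterResult) -> dict:
        """Map a router ability estimate to an easy/medium/hard form of each test.

        The cut points (-0.5 and 0.5) are hypothetical; an operational system
        would derive them from field-test data.
        """
        if result.theta < -0.5:
            level = "easy"
        elif result.theta < 0.5:
            level = "medium"
        else:
            level = "hard"
        return {test: f"{test}_{level}_form" for test in COMPONENT_TESTS}

    # Example: a learner scoring near the middle of the router scale
    print(assign_forms(RouterResult(learner_id="L001", theta=0.2)))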

Research Design and Methods: The researchers leveraged existing items and forms that they had developed and field tested with adolescent students in previous IES-funded projects (R305G04065, R305F100005). Over a series of studies, they refined, tested, and finalized assessments with adequate psychometric properties for valid use with adult learners. In study 1, they adapted items and forms as necessary to make them appropriate for adult readers and used an expert review panel to evaluate the content validity of the constructs and items for the adult population. In study 2, they conducted a pilot study to determine the usability, feasibility, and basic psychometric properties of the tests and to examine the comparability of the measures developed for adolescents with the adult population. In study 3, they conducted a field test to establish vertical scales, and in study 4, they examined the convergent and divergent validity of the assessments.

Control Condition: Due to the nature of this project, there was no control group. However, adults' responses to unadapted items (i.e., items identical to those previously validated with school-age readers) were compared with responses collected in school settings.

Key Measures: The researchers validated the assessment against the school-age student versions of the component skills and scenario-based assessment (SBA) tests and against established measures of component reading skills (e.g., the Woodcock-Johnson III).

Data Analytic Strategy: The researchers used various analytic methods, including item analyses, classical test theory, item response theory, correlational and regression analyses, differential item functioning analyses, and vertical scaling comparisons.
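
As one concrete illustration of the item response theory methods listed above, here is a minimal sketch of the two-parameter logistic (2PL) item response function, a standard model for calibrating dichotomous test items. The item parameters shown are invented for illustration, not the project's calibration values.

    # Illustrative two-parameter logistic (2PL) IRT item response function.
    # Item parameters here are invented for illustration; they are not the
    # project's calibration values.
    import math

    def p_correct(theta: float, a: float, b: float) -> float:
        """Probability of a correct response under the 2PL model.

        theta: examinee ability on the latent scale
        a:     item discrimination
        b:     item difficulty (the theta at which P(correct) = .5)
        """
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # A hypothetical vocabulary item with moderate discrimination (a = 1.2)
    # and difficulty slightly below the scale midpoint (b = -0.3):
    for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(f"theta = {theta:+.1f}  P(correct) = {p_correct(theta, 1.2, -0.3):.3f}")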

Related IES Projects: Developing Reading Comprehension Assessments Targeting Struggling Readers (R305G04065), Assessing Reading for Understanding: A Theory-based, Developmental Approach (R305F100005), Center for the Study of Adult Literacy (CSAL): Developing Instructional Approaches Suited to the Cognitive and Motivational Needs of Struggling Adults (R305C120001), Exploring the onPAR Model in Developmental Literacy Education (R305A150193), Developing and Implementing a Technology-Based Reading Comprehension Instruction System for Adult Literacy Students (R305A20413), Scenario-Based Assessment in the Age of Generative AI: Making Space in the Education Market for Alternative Assessment Paradigm (R305T240021)

Products and Publications

ERIC Citations: Find available citations in ERIC for this award.

Project Website: https://adulted.autotutor.org/

Select Publications

Feller, D. P., Magliano, J., Sabatini, J., O'Reilly, T., & Kopatich, R. D. (2020). Relations between component reading skills, inferences, and comprehension performance in community college readers. Discourse Processes, 57(5–6), 473–490. https://doi.org/10.1080/0163853X.2020.1759175

Hollander, J., Sabatini, J., & Graesser, A. (2022). How item and learner characteristics matter in intelligent tutoring systems data. In International Conference on Artificial Intelligence in Education (pp. 520–523). Springer, Cham.

Hollander, J., Sabatini, J., & Graesser, A. C. (2021). An intelligent tutoring system for improving adult literacy skills in digital environments. COABE Journal, 10(2), 59–64.

Hollander, J., Sabatini, J., Graesser, A., Greenberg, D., O'Reilly, T., & Frijters, J. (2023). Importance of learner characteristics in intelligent tutoring for adult literacy. Discourse Processes, 1–13.

Kaldes, G., Higgs, K., Lampi, J., Santuzzi, A., Tonks, S. M., O'Reilly, T., Sabatini, J. P., & Magliano, J. P. (2024). Testing the model of a proficient academic reader (PAR) in a postsecondary context. Reading and Writing, 1–40.

Magliano, J. P., Higgs, K., Santuzzi, A., Tonks, S. M., O'Reilly, T., Sabatini, J., ... & Parker, C. (2020). Testing the inference mediation hypothesis in a post-secondary context. Contemporary Educational Psychology, 61, 101867.

Magliano, J. P., Talwar, A., Feller, D. P., Wang, Z., O'Reilly, T., & Sabatini, J. (2022). Exploring thresholds in the foundational skills for reading and comprehension outcomes in the context of postsecondary readers. Journal of Learning Disabilities, 00222194221087387.

O'Reilly, T., Sabatini, J., & Wang, Z. (2018). Using scenario-based assessments to measure deep learning. In K. Millis, D. Long, J. Magliano, & K. Weimer (Eds.), Deep learning: Multi-disciplinary approaches (pp. 197–208). New York, NY: Routledge.

Sabatini, J., O'Reilly, T., Dreier, K., & Wang, Z. (2019). Cognitive processing deficits associated with low literacy: Differences between adult- and child-focused models. In D. Perin (Ed.), The Wiley Handbook of Adult Literacy (pp. 15–39). Hoboken, NJ: John Wiley & Sons.

Sabatini, J., Graesser, A., Hollander, J., & O'Reilly, T. (2023). A framework of literacy development and how AI can transform theory and practice. British Journal of Educational Technology, 54(5), 1174–1203.

Smith, E. H., Hollander, J., Graesser, A. C., Sabatini, J., & Hu, X. (2021). Integrating SARA assessment with reading comprehension training in AutoTutor. English Teaching, 76(1), 17–29.

