
IES Grant

Title: Assessing Reading Comprehension with Verbal Protocols and Latent Semantic Analysis
Center: NCER
Year: 2004
Principal Investigator: Magliano, Joseph
Awardee: Northern Illinois University
Program: Literacy
Award Period: 4 years
Award Amount: $1,560,506
Type: Measurement
Award Number: R305G040055
Description:

Co-Principal Investigator(s): Millis, Keith

Purpose: In this project, the researchers proposed to develop and test a new automated online reading strategy assessment tool, the Reading Strategy Assessment Tool (R-SAT). In the early 2000s, research suggested that many students' difficulty with reading arises not from a deficit in component language skills, such as word decoding, but from a failure to form coherent representations of the text. Readers who adopt a low standard of coherence read passively, creating a sparse and perhaps incoherent representation. In contrast, readers who hold a high standard of coherence engage in an effortful search for meaning, constructing a coherent understanding of what the text is about. The goal of R-SAT was to help support readers' ability to construct more coherent representations.

Structured Abstract

THE FOLLOWING CONTENT DESCRIBES THE PROJECT AT THE TIME OF FUNDING

Sample: The proposed research will study comprehension skills in a population of college freshmen and sophomores attending Northern Illinois University (NIU). NIU has a diverse student body with varying academic abilities and provides an ideal population for studying individual differences in reading comprehension.

Measure: R-SAT will identify basic reading strategies and the extent to which the reader is actively engaged. It offers two important advantages over existing reading tests that rely on a multiple-choice format: it assesses the level of coherence and the reading strategies the student employs, and it measures comprehension during reading rather than after it. Because multiple-choice tests measure comprehension after reading, they are subject to test-taking strategies and processes that can undermine the validity of the instrument. For example, a student may read the questions before reading the test passage, or one of the listed answer choices may cue the correct response.

Research Design and Methods: R-SAT builds upon recent research that has used computational approaches to analyze students' written or verbal comments solicited at critical points during the reading process, and it has the potential to change how reading is assessed. The computational approach uses latent semantic analysis (LSA) in tandem with recent advances in word-matching algorithms. R-SAT is based on published research showing that these computational approaches can classify reading strategies on par with trained human judges, and that scores based on them predict measures of comprehension (recall, question answering). The proposed research will proceed in three phases. Phase 1 will involve comparing methods of collecting and evaluating verbal protocols, identifying texts and sentences, collecting representative answers and verbal protocols (for norming purposes), and conducting initial validity testing. Phase 2 will involve implementing R-SAT as a web-based system, establishing its reliability and validity, comparing text presentation formats, identifying its limitations, and comparing R-SAT to standard tests of comprehension. In Phase 3, R-SAT will be incorporated into an existing web-based reading trainer (iSTART) and evaluated in a college course taken by freshmen and sophomores majoring in a wide variety of fields.
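The general scoring idea described above, comparing a reader's typed comments to benchmark content using latent semantic analysis together with simple word matching, can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration built with scikit-learn on made-up texts; it is not the project's R-SAT implementation, and the corpus, benchmark sentence, and function names are all assumptions for illustration.

```python
# Minimal illustrative sketch (not the R-SAT implementation): scoring a typed
# verbal protocol against a benchmark sentence with LSA plus word matching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus; a real system would build the LSA space from a large
# text collection and use normed benchmark responses.
corpus = [
    "The heart pumps blood through the arteries to the body.",
    "Oxygen is carried by red blood cells to the tissues.",
    "The lungs exchange oxygen and carbon dioxide during breathing.",
    "Veins return deoxygenated blood to the heart.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)            # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
lsa.fit(tfidf)                                      # reduced "semantic" space

def lsa_similarity(protocol: str, benchmark: str) -> float:
    """Cosine similarity between two texts in the reduced LSA space."""
    vecs = lsa.transform(vectorizer.transform([protocol, benchmark]))
    return float(cosine_similarity(vecs[:1], vecs[1:])[0, 0])

def word_overlap(protocol: str, benchmark: str) -> float:
    """Proportion of benchmark content words that also appear in the protocol."""
    analyze = vectorizer.build_analyzer()  # same tokenization and stop words
    p, b = set(analyze(protocol)), set(analyze(benchmark))
    return len(p & b) / max(len(b), 1)

# Example: score a think-aloud response typed after reading a target sentence.
benchmark = "Veins return deoxygenated blood to the heart."
protocol = "So the blood without oxygen goes back to the heart through the veins."
print(lsa_similarity(protocol, benchmark), word_overlap(protocol, benchmark))
```

A fuller system along these lines would compare the protocol not only to the current sentence but also to prior text and background material, so that strategies such as paraphrasing, bridging, and elaborating could be distinguished.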

Related IES Projects: iSTART: Interactive Strategy Trainer for Active Reading and Thinking (R305G040046), Reading for Understanding Across Grades 6 through 12: Evidence-Based Argumentation for Disciplinary Learning (R305F100007), Using Computational Linguistics to Detect Comprehension Processes in Constructed Responses across Multiple Large Data Sets (R305A190063), Exploring the onPAR Model in Developmental Literacy Education (R305A150193)

Products and Publications


Select Publications:

Book chapters

Britt, M.A., Wiemer, K., Millis, K.K., Magliano, J.P., Wallace, P., and Hastings, P. (2012). Understanding and Reasoning With Text. In P. McCarthy, and C. Boonthum (Eds.), Cross-Disciplinary Advances in Applied Natural Language Processing: Issues and Approaches (pp. 133–154). Hershey, PA: IGI Global Publisher.

Magliano, J.P., and Perry, P.J. (2008). Individual Differences in Reading Proficiencies and Comprehension. In N.J. Salkind (Ed.), Encyclopedia of Educational Psychology, Volume 2 (pp. 511–517). Thousand Oaks, CA: Sage Publications, Inc.

Magliano, J.P., Millis, K.K., Ozuru, Y., and McNamara, D.S. (2007). A Multidimensional Framework to Evaluate Reading Assessment Tools. In D.S. McNamara (Ed.), Reading Comprehension Strategies: Theories, Interventions, and Technologies (pp. 107–136). Mahwah, NJ: Erlbaum.

McNamara, D.S., and Magliano, J.P. (2009). Self-Explanation and Metacognition: The Dynamics of Reading. In D.J. Hacker, J. Dunlosky, and A.C. Graesser (Eds.), Handbook of Metacognition in Education (pp. 60–81). Mahwah, NJ: Lawrence Erlbaum and Associates.

McNamara, D.S., and Magliano, J.P. (2009). Towards a Comprehensive Model of Comprehension. In B. Ross (Ed.), The Psychology of Learning and Motivation, Volume 51 (pp. 297–384). San Diego: Elsevier Academic Press.

Millis, K., Magliano, J., Wiemer-Hastings, K., Todaro, S., and McNamara, D.S. (2007). Assessing and Improving Comprehension With Latent Semantic Analysis. In T. Landauer, D.S. McNamara, S. Dennis, and W. Kintsch (Eds.), Handbook of Latent Semantic Analysis (pp. 207–225). Mahwah, NJ: Erlbaum.

Journal articles

Gilliam, S., Magliano, J.P., Millis, K.K., Levinstein, I., and Boonthum, C. (2007). Assessing the Format of the Presentation of Text in Developing a Reading Strategy Assessment Tool (R-SAT). Behavior Research Methods, Instruments, and Computers, 39(2): 199–204.

Kurby, C.A., Magliano, J.P., Dandotkar, S., Woehrle, J., Gilliam, S., and McNamara, D.S. (2012). Changing How Students Process and Comprehend Texts With Computer-Based Self-Explanation Training. Journal of Educational Computing Research, 47(4): 429–459.

Magliano, J.P., Millis, K.K., Levinstein, I., and Boonthum, C. (2011). Assessing Comprehension During Reading With the Reading Strategy Assessment Tool (RSAT). Metacognition and Learning, 6(2): 131–154.

Millis, K.K., Magliano, J.P., and Todaro, S. (2006). Measuring Discourse-Level Processes With Verbal Protocols and Latent Semantic Analysis. Scientific Studies of Reading, 10(3): 225–240.

Munoz, B., Magliano, J.P., Sheridan, R., and McNamara, D.S. (2006). Typing Versus Thinking Aloud When Reading: Implications for Computer-Based Assessment and Training Tools. Behavior Research Methods, Instruments, and Computers, 38(2): 211–217.

Proceedings

Malladi, R., Levinstein, I.B., Boonthum, C., and Magliano, J.P. (2010). Summarization: Constructing an Ideal Summary and Evaluating a Student's Summary Using LSA. In Proceedings of the 23rd International Florida Artificial Intelligence Research Society (FLAIRS) Conference (pp. 295–296). Menlo Park, CA: AAAI Press.

Mylavarapu, S., Levinstein, I.B., Boonthum, C., Magliano, J.P., and Millis, K.K. (2010). Enhancing Protocol Evaluation Through Semantic Modification of Benchmarks. In Proceedings of the 23rd International Florida Artificial Intelligence Research Society (FLAIRS) Conference (pp. 297–298). Menlo Park, CA: The AAAI Press.

