Developing Vocabulary in an Automated Reading Tutor

Year: 2008
Name of Institution:
Carnegie Mellon University
Goal: Development and Innovation
Principal Investigator:
Mostow, Jack
Award Amount: $2,379,658
Award Period: 3 years
Award Number: R305A080157

Description:

Co-Principal Investigators: Margaret McKeown, University of Pittsburgh, and Charles Perfetti, University of Pittsburgh

Purpose: Research indicates that explicit vocabulary instruction benefits students' word learning and comprehension of text. However, major instructional challenges remain, such as determining how to teach enough words to matter and how to teach them so that they are actually learned and retained. The purpose of this project is to develop, iteratively refine, and evaluate the usability and feasibility of an automated tutorial intervention to help children in grades 2-3 learn the vocabulary necessary to improve reading comprehension. The immediate goal is to teach words in a way that combines the efficacy of individual tutoring with the economy of automated instruction. The longer-range purpose is to improve children's reading comprehension by expanding their vocabulary.

Project Activities: The research team will build on an existing computer-based program, the Reading Tutor, which "listens" to children read by recording and analyzing students' verbal responses and provides individualized instruction and feedback. The redesigned tutor will integrate vocabulary instruction of key, age-appropriate words. During the development process, different types and contexts of instruction will be explored. The research team will also develop an algorithm that determines the appropriate level and frequency of vocabulary instruction and will integrate this algorithm into the current tutor. The new tutor will be refined through an iterative design process and tested with 2nd and 3rd grade readers.

Products: Products include an updated reading tutor that teaches vocabulary and improves reading comprehension, as well as published reports on the development of an automated tutorial intervention to improve elementary grade students' vocabulary.

Structured Abstract

Setting: The research setting is two elementary schools, one in and the other near Pittsburgh, Pennsylvania.

Population: The study population includes 2nd and 3rd grade students from low-income urban and suburban areas of the Pittsburgh, Pennsylvania, region, with large percentages of African-American and white students.

Intervention: The vocabulary intervention will be designed to help students learn important word meanings over the duration of the program, with the target words drawn from assisted reading of authentic, connected text. The vocabulary intervention will be integrated into a pre-existing automated, computerized reading tutor that uses speech recognition to listen to children read aloud and responds with spoken and graphical assistance. The redesigned tutor will take advantage of this ability and integrate vocabulary instruction. The type, frequency, and timing of the instruction will be varied and individualized for each student depending on his or her needs.
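
As a concrete illustration of this kind of individualization, the sketch below picks a type of vocabulary help for one student-word pair from a few logged counts. It is a hypothetical Python sketch under stated assumptions; the record fields, help types, and thresholds are invented for illustration and are not the Reading Tutor's actual decision rules.

    # Hypothetical sketch of individualizing vocabulary help per student and word.
    # Fields, help types, and thresholds are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class WordHistory:
        encounters: int       # times the student has met the word in text
        misreadings: int      # times the speech recognizer flagged a misreading
        correct_answers: int  # embedded vocabulary questions answered correctly

    def choose_help(history: WordHistory) -> str:
        """Pick one type of vocabulary assistance for a student-word pair."""
        if history.encounters == 0:
            return "preview_definition"    # introduce the word before reading
        if history.misreadings > 1 or history.correct_answers == 0:
            return "extended_instruction"  # spoken definition plus an example sentence
        return "quick_review"              # brief reminder during assisted reading

    print(choose_help(WordHistory(encounters=2, misreadings=0, correct_answers=1)))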

Research Design and Methods: The automated reading tutor's ability to "listen" enables novel, continuous assessments of students' reading progress. Its ability to vary its instruction enables it to administer randomized controlled trials; and its ability to log its interactions enables it to capture detailed, longitudinal data on the development of reading skills. The research team will take advantage of these abilities to iteratively design and test the improved reading tutor. During the iterative design process, the research team will use data from usability tests and randomized controlled trials to determine how to visually present the words most effectively (e.g., the interface), when to present the words, how frequently to present them, and in which contexts to present them.

The controlled studies will use within-subject designs to compare words that receive different types and amounts of instruction. The primary research methods range from informal usability testing to randomized, controlled, within-subject experimental trials embedded in the Reading Tutor. The purpose of these trials is to test the efficacy of different forms of tutorial instruction.
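
One simple way to realize such an embedded within-subject design is to randomly balance each student's target words across instruction conditions, so that comparisons between conditions are made within the same student. The sketch below is a hypothetical illustration: the condition names, word list, and assign_words helper are assumptions and do not come from the project.

    # Hypothetical sketch of within-subject random assignment of target words
    # to instruction conditions; condition names and the helper are assumptions.
    import random

    CONDITIONS = ["no_instruction", "quick_definition", "extended_instruction"]

    def assign_words(student_id: str, target_words: list[str], seed: int = 0) -> dict[str, str]:
        """Randomly balance one student's target words across conditions."""
        rng = random.Random(f"{student_id}:{seed}")  # deterministic per student
        words = list(target_words)
        rng.shuffle(words)
        # Round-robin over the shuffled list keeps the conditions roughly balanced.
        return {word: CONDITIONS[i % len(CONDITIONS)] for i, word in enumerate(words)}

    print(assign_words("student_017", ["vivid", "reluctant", "gradual", "weary"]))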

Student participation will consist primarily of using the Reading Tutor for 20-30 minutes daily in classrooms or school labs as part of their regular instruction over the entire school year. Students will complete a checklist vocabulary assessment, and over the course of the sessions they will be exposed to specific vocabulary words in context. The tutor will present words in varying contexts and provide different levels and types of training. Student responses will help the research team determine the optimal form and frequency of vocabulary training, thus shaping the design of the updated reading tutor.

Control Condition: Participants in the control condition will use the existing version of the Reading Tutor without the vocabulary component.

Key Measures: Measures external to the automated tutor will include a checklist of taught and untaught target words to measure response to instruction, plus the Word Comprehension subtest of the Woodcock Reading Mastery Test as a general measure of vocabulary. Key outcomes computed from the data logged by the automated tutor will include performance on questions included in instruction, and feasibility as quantified by various usage measures.

Data Analytic Strategy: The major data analytic technique that researchers will use is ordinal logistic regression. Binary logistic regression and t-tests will also be used.
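
For concreteness, the sketch below fits an ordinal logistic (proportional odds) regression with statsmodels' OrderedModel on invented data standing in for the tutor's logs; the column names, values, and predictors are assumptions for illustration, not the project's analysis.

    # Hypothetical sketch: ordinal logistic regression on invented per-word data.
    # Column names and values are illustrative assumptions, not project data.
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    data = pd.DataFrame({
        "exposures":       [1, 2, 3, 1, 4, 2, 3, 5, 1, 4],  # times the word was presented
        "explicit_teach":  [0, 1, 1, 0, 1, 0, 1, 1, 0, 0],  # 1 = word received explicit instruction
        "posttest_rating": [0, 1, 2, 1, 2, 0, 1, 2, 0, 2],  # 0 = unknown, 1 = partial, 2 = known
    })
    data["posttest_rating"] = pd.Categorical(data["posttest_rating"], ordered=True)

    model = OrderedModel(
        data["posttest_rating"],
        data[["exposures", "explicit_teach"]],
        distr="logit",  # ordinal logistic (proportional odds) model
    )
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())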

Project Website: http://www.cs.cmu.edu/~listen/

Related IES Projects: Explicit Comprehension Instruction in an Automated Reading Tutor that Listens (R305B070458) and Accelerating Fluency Development in an Automated Reading Tutor (R305A080628)

Publications

Book chapter

Mostow, J., Aist, G., Bey, J., Chen, W., Corbett, A., Duan, W., Duke, N., Duong, M., Gates, D., González, J.P., Juarez, O., Kantorzyk, M., Li, Y., Liu, L., McKeown, M., Trotochaud, C., Valeri, J., Weinstein, A., and Yen, D. (2010). A Better Reading Tutor That Listens. In V. Aleven, J. Kay, and J. Mostow (Eds.), Intelligent Tutoring Systems (pp. 451). Heidelberg: Springer.

Mostow, J., Beck, J.E., Cuneo, A., Gouvea, E., Heiner, C., and Juarez, O. (2010). Lessons From Project LISTEN's Session Browser. In C. Romero, S. Ventura, S.R. Viola, M. Pechenizkiy, and R.S.J.D. Baker (Eds.), Handbook of Educational Data Mining (pp. 389–416). New York: CRC Press, Taylor and Francis Group.

Journal article, monograph, or newsletter

Chang, K. M., Nelson, J., Pant, U., & Mostow, J. (2013). Toward exploiting EEG input in a reading tutor. International Journal of Artificial Intelligence in Education, 22(1-2), 19-38.

Liu, L., Mostow, J., and Aist, G.S. (2013). Generating Example Contexts to Help Children Learn Word Meaning. Natural Language Engineering, 19(2): 187–212.

Mostow, J., Huang, Y. T., Jang, H., Weinstein, A., Valeri, J., & Gates, D. (2017). Developing, evaluating, and refining an automatic generator of diagnostic multiple choice cloze questions to assess children's comprehension while reading. Natural Language Engineering, 23(2), 245-294.

Proceeding

Aist, G., & Mostow, J. (2009). Predictable and Educational Spoken Dialogues: Pilot Results. In International Workshop on Speech and Language Technology in Education (pp. 141-144).

Duan, W., & Yates, A. (2010). Extracting glosses to disambiguate word senses. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 627-635).

Jang, H., and Mostow, J. (2012). Inferring Selectional Preferences From Part-Of-Speech N-Grams. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (pp. 377–386). Avignon, France: Association for Computational Linguistics.

Liu, L., Mostow, J., & Aist, G. (2009). Automated Generation of Example Contexts for Helping Children Learn Vocabulary. In International Workshop on Speech and Language Technology in Education (pp. 129-132).

Mostow, J., and Beck, J.E. (2009). Why, What, and How to Log? Lessons from LISTEN (pp. 269-278). Cordoba, Spain: University of Cordoba.

Mostow, J., Chang, K. M., & Nelson, J. (2011). Toward exploiting EEG input in a reading tutor. In International Conference on Artificial Intelligence in Education (pp. 230-237). Berlin, Heidelberg: Springer.

Mostow, J., and Duan, W. (2011). Generating Example Contexts to Illustrate a Target Word Sense. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications (pp. 105–110). Portland, OR: The Association for Computational Linguistics.

Mostow, J., & Jang, H. (2012). Generating diagnostic multiple choice comprehension cloze questions. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP (pp. 136-146).

Mostow, J., and Tan, B.H.L. (2010). AutoJoin: Generalizing an Example into an EDM Query. In Proceedings of the 3rd International Conference on Educational Data Mining (pp. 11–13). Pittsburgh, PA: Carnegie Learning Inc.

Xu, Y., & Mostow, J. (2013). Using item response theory to refine knowledge tracing. In Educational Data Mining 2013.