Products and publications
Products: An updated reading tutor that teaches vocabulary and improves reading comprehension, and published reports on the development of an automated tutorial intervention to improve elementary grade students' vocabulary.
Book chapter
Mostow, J., Aist, G., Bey, J., Chen, W., Corbett, A., Duan, W., Duke, N., Duong, M., Gates, D., González, J.P., Juarez, O., Kantorzyk, M., Li, Y., Liu, L., McKeown, M., Trotochaud, C., Valeri, J., Weinstein, A., and Yen, D. (2010). A Better Reading Tutor That Listens. In V. Aleven, J. Kay, and J. Mostow (Eds.), Intelligent Tutoring Systems (pp. 451). Heidelberg: Springer.
Mostow, J., Beck, J.E., Cuneo, A., Gouvea, E., Heiner, C., and Juarez, O. (2010). Lessons From Project LISTEN's Session Browser. In C. Romero, S. Ventura, S.R. Viola, M. Pechenizkiy, and R.S.J.D. Baker (Eds.), Handbook of Educational Data Mining (pp. 389-416). New York: CRC Press, Taylor and Francis Group.
Journal article, monograph, or newsletter
Chang, K. M., Nelson, J., Pant, U., & Mostow, J. (2013). Toward exploiting EEG input in a reading tutor. International Journal of Artificial Intelligence in Education, 22(1-2), 19-38.
Liu, L., Mostow, J., and Aist, G.S. (2013). Generating Example Contexts to Help Children Learn Word Meaning. Natural Language Engineering, 19(2): 187-212.
Mostow, J., Huang, Y. T., Jang, H., Weinstein, A., Valeri, J., & Gates, D. (2017). Developing, evaluating, and refining an automatic generator of diagnostic multiple choice cloze questions to assess children's comprehension while reading. Natural Language Engineering, 23(2), 245-294.
Proceeding
Aist, G., & Mostow, J. (2009). Predictable and Educational Spoken Dialogues: Pilot Results. In International Workshop on Speech and Language Technology in Education (pp. 141-144).
Duan, W., & Yates, A. (2010). Extracting glosses to disambiguate word senses. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (pp. 627-635).
Jang, H., and Mostow, J. (2012). Inferring Selectional Preferences From Part-Of-Speech N-Grams. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (pp. 377-386). Avignon, France: Association for Computational Linguistics.
Liu, L., Mostow, J., & Aist, G. (2009). Automated Generation of Example Contexts for Helping Children Learn Vocabulary. In International Workshop on Speech and Language Technology in Education (pp. 129-132).
Mostow, J., and Beck, J.E. (2009). Why, What, and How to Log? Lessons from LISTEN. In Proceedings of the 2nd International Conference on Educational Data Mining (pp. 269-278). Cordoba, Spain: University of Cordoba.
Mostow, J., Chang, K. M., & Nelson, J. (2011). Toward Exploiting EEG Input in a Reading Tutor. In International Conference on Artificial Intelligence in Education (pp. 230-237). Berlin, Heidelberg: Springer.
Mostow, J., and Duan, W. (2011). Generating Example Contexts to Illustrate a Target Word Sense. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications (pp. 105-110). Portland, OR: The Association for Computational Linguistics.
Mostow, J., & Jang, H. (2012). Generating diagnostic multiple choice comprehension cloze questions. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP (pp. 136-146).
Mostow, J., and Tan, B.H.L. (2010). AutoJoin: Generalizing an Example into an EDM Query. In Proceedings of the 3rd International Conference on Educational Data Mining (pp. 11-13). Pittsburgh, PA: Carnegie Learning Inc.
Xu, Y., & Mostow, J. (2013). Using item response theory to refine knowledge tracing. In Educational Data Mining 2013.
Supplemental information
Co-Principal Investigators: Margaret McKeown, University of Pittsburgh, and Charles Perfetti, University of Pittsburgh
The controlled studies will use within-subject designs to compare words that receive different types and amounts of instruction. The primary research methods range from informal usability testing to randomized, controlled, within-subject experimental trials embedded in the Reading Tutor; these trials test the efficacy of different forms of tutorial instruction.
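To illustrate the within-subject logic, the sketch below (not taken from the project; the condition names, function, and balancing scheme are assumptions for illustration) randomly assigns each student's target words to instruction conditions so that every student contributes observations to every condition:

```python
import random

# Hypothetical instruction conditions; the actual study's conditions may differ.
CONDITIONS = ["no_instruction", "definition_only", "rich_instruction"]

def assign_words_within_subject(student_id, target_words, seed=None):
    """Randomly assign one student's target words to conditions,
    keeping condition counts balanced so comparisons stay within student."""
    rng = random.Random(seed if seed is not None else student_id)
    words = list(target_words)
    rng.shuffle(words)
    # Round-robin over a shuffled word list balances conditions per student.
    return {word: CONDITIONS[i % len(CONDITIONS)] for i, word in enumerate(words)}

# Example: each student gets a different, balanced word-to-condition mapping.
words = ["arid", "brisk", "cunning", "dismal", "eerie", "frail"]
print(assign_words_within_subject(student_id=42, target_words=words))
```

Because each student sees words under every condition, condition effects can be compared against each student's own performance rather than across different students.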
Student participation will consist primarily of using the Reading Tutor for 20-30 minutes daily, in classrooms or school labs, as part of regular instruction over the entire school year. Students will complete a checklist vocabulary assessment, and over the course of the sessions they will be exposed to specific vocabulary words in context. The tutor will present words in varying contexts and provide different levels and types of training. Student responses will help the research team determine the optimal form and frequency of vocabulary training, thus shaping the design of the updated reading tutor.
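As a rough sketch of how such logged responses could feed back into design decisions (the field names and log format here are assumptions, not the Reading Tutor's actual schema), per-student, per-condition accuracy can be tallied so that conditions are compared within each student:

```python
from collections import defaultdict

def condition_accuracy_by_student(trials):
    """Summarize logged trials into per-(student, condition) accuracy.

    `trials` is a list of dicts with hypothetical fields:
    student_id, condition, correct (bool). Real tutor logs are richer;
    this only illustrates the within-subject comparison."""
    counts = defaultdict(lambda: [0, 0])  # (student, condition) -> [n_correct, n_total]
    for t in trials:
        key = (t["student_id"], t["condition"])
        counts[key][0] += int(t["correct"])
        counts[key][1] += 1
    return {key: n_correct / n_total for key, (n_correct, n_total) in counts.items()}

# Example: toy log entries; each student is compared only against themselves.
log = [
    {"student_id": 1, "condition": "definition_only", "correct": True},
    {"student_id": 1, "condition": "rich_instruction", "correct": True},
    {"student_id": 1, "condition": "no_instruction", "correct": False},
]
print(condition_accuracy_by_student(log))
```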
Questions about this project?
To answer additional questions about this project or provide feedback, please contact the program officer.