Project Activities
The researchers will use a dialog interaction technique called Questioning the Author (QTA) to help students integrate new concepts with what they already know, deepening and expanding the knowledge presented in class. QTA uses systematic dialog interaction to foster deep learning; it has been shown to improve reading comprehension and is in widespread use in U.S. classrooms. The virtual tutoring system will closely resemble the tutorial dialogs produced by human tutors trained in the QTA method. Initially, the researchers will develop and refine the system with the help of teachers, the FOSS developers, QTA experts, and third-, fourth-, and fifth-grade students in a large city school district in Colorado. Students will be drawn from eight classrooms at each of the three grade levels. During the evaluation phase, students will be randomly assigned to one of four conditions: standard classroom instruction and support; classroom instruction with support that incorporates large-group QTA dialogs; small-group QTA tutoring with a trained human tutor; or small-group interaction with the computer-based QTA tutoring system. Pre-, post-, and follow-up measures will be collected and analyzed.
Structured Abstract
Setting
Research will be conducted in a large city school district in Colorado.
Sample
Participants include 672 third-, fourth-, and fifth-grade students from eight classrooms at each of three grade levels in a large city school district in Colorado. The sample varies in ethnic, racial, and economic composition.
Intervention
In the classroom Questioning the Author (QTA) dialog condition, students will discuss science concepts in QTA dialogs managed either by trained regular classroom teachers or by project tutors from the research team. Classroom QTA dialogs may include multimedia presentations to illustrate concepts that students find difficult. Presentation of materials and time-on-task are held constant across conditions; the independent variables are individual tutoring by a computer or a human, and the comparison with the no-treatment control classes.
Research design and methods
Initially, the researchers will develop and refine the computer-based QTA tutoring system with the help of teachers, the FOSS developers, QTA experts, and third-, fourth-, and fifth-grade students. As part of the development process, the researchers will compare student experiences and learning gains across six science subjects for students who interact with human and virtual tutors in standard classroom or small-group settings. They will use observational and interview measures of usability and likeability, as well as pre-post assessments within correlational and quasi-experimental designs, to inform the development of the virtual tutor. In the final year of the project, the researchers will assess the feasibility and promise of the three dialog treatments by comparing learning of science modules in an experimental design with third-, fourth-, and fifth-grade students.
Control condition
Standard classroom instruction using the FOSS curriculum units will be delivered in the control condition.
Key measures
The development of the interventions will be studied with both formative and summative assessments, including the Colorado Student Assessment Program (CSAP) statewide science test administered to all students in grade 5.
Data analytic strategy
Researchers will analyze performance on content-related pre-, post-, and delayed post-assessments for generalization of learning. Additionally, the pre-, post-, and follow-up measures will be analyzed with hierarchical linear growth models.
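As an illustration only (not the project's actual analysis code), a growth model of this kind can be sketched in Python with statsmodels' mixed-effects API: repeated measurements (level 1) are nested within students (level 2), with fixed effects for time, condition, and their interaction, and a random intercept and time slope per student. All variable names, sample sizes, and effect sizes below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate pre/post/delayed scores for students in three hypothetical
# conditions (0 = control, 1 = human tutor, 2 = virtual tutor).
rng = np.random.default_rng(0)
rows = []
for sid in range(60):
    cond = sid % 3
    intercept = 50 + rng.normal(0, 5)            # student-level random intercept
    slope = 4 * (1 + 0.5 * cond) + rng.normal(0, 1)  # student-level growth rate
    for t in range(3):                            # 0 = pre, 1 = post, 2 = delayed
        rows.append({"student": sid, "condition": cond, "time": t,
                     "score": intercept + slope * t + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Two-level growth model: fixed effects for time, condition, and their
# interaction; random intercept and random time slope per student.
model = smf.mixedlm("score ~ time * C(condition)", df,
                    groups=df["student"], re_formula="~time")
result = model.fit()
print(result.summary())
```

The `time:C(condition)` interaction terms estimate how much faster each treatment group grows than the control group, which is the quantity of interest in a pre/post/follow-up design like the one described above.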
Products and publications
The outcomes of this study include a fully developed system integrating QTA instruction with FOSS, as well as published reports describing the development and preliminary evaluation of the effects of using tutorial dialogs during elementary school science instruction.
Publications:
Book chapter
Nielsen, R.D., Ward, W., and Martin, J.H. (2008). Soft Computing in Intelligent Tutoring Systems and Educational Assessment. In B. Prasad (Ed.), Soft Computing Applications in Business (pp. 201-230). Heidelberg, Germany: Springer-Verlag.
Journal article, monograph, or newsletter
Bolaños, D., Cole, R.A., Ward, W.H., Tindal, G.A., Hasbrouck, J., and Schwanenflugel, P.J. (2013). Human and Automated Assessment of Oral Reading Fluency. Journal of Educational Psychology, 105(4): 1142-1151.
Bolaños, D., Cole, R.A., Ward, W.H., Tindal, G.A., Schwanenflugel, P.J., and Kuhn, M.R. (2013). Automatic Assessment of Expressive Oral Reading. Speech Communication, 55(2): 221-236.
Nielsen, R.D., Ward, W., and Martin, J.H. (2009). Recognizing Entailment in Intelligent Tutoring Systems. Natural Language Engineering: Special Issue on Textual Entailment, 15(4): 479-501.
Ward, W., Cole, R., Bolaños, D., Buchenroth-Martin, C., Svirsky, E., and Weston, T. (2013). My Science Tutor: A Conversational Multimedia Virtual Tutor. Journal of Educational Psychology, 105(4): 1115-1125.
Nongovernment report, issue brief, or practice guide
Nielsen, R.D., Boyer, K., Heilman, M., Lin, C., Pino, J., and Stent, A. (2009). Methods and Metrics in Evaluation of Question Generation. Arlington, VA: National Science Foundation.
Proceeding
Nielsen, R.D. (2008). Question Generation: Proposed Challenge Tasks and Their Evaluation. In Proceedings of the Workshop on the Question Generation Shared Task and Evaluation Challenge.
Nielsen, R.D., Becker, L., and Ward, W. (2008). TAC 2008 CLEAR RTE System Report: Facet-Based Entailment. In Proceedings of the Text Analysis Conference (pp. 1-8). Gaithersburg, MD: National Institute of Standards and Technology.
Nielsen, R.D., Buckingham, J., Knoll, G., Marsh, B., and Palen, L. (2008). A Taxonomy of Questions for Question Generation. In Proceedings of the Workshop on the Question Generation Shared Task and Evaluation Challenge. Arlington, VA: National Science Foundation.
Nielsen, R.D., Ward, W., and Martin, J.H. (2008). Automatic Generation of Fine-Grained Representations of Learner Response Semantics. In Proceedings of the Ninth International Conference on Intelligent Tutoring Systems (pp. 173-183). Heidelberg, Germany: Springer.
Nielsen, R.D., Ward, W., and Martin, J.H. (2008). Classification Errors in a Domain-Independent Assessment System. In Proceedings of the Third Workshop on Innovative Use of Natural Language Processing for Building Educational Applications, at the Forty-Sixth Annual Meeting of the Association for Computational Linguistics (pp. 10-18). Stroudsburg, PA: Association for Computational Linguistics.
Nielsen, R.D., Ward, W., and Martin, J.H. (2008). Learning to Assess Low-Level Conceptual Understanding. In Proceedings of the Twenty-First International Artificial Intelligence Researchers Society Conference (FLAIRS-08) (pp. 427-432). Menlo Park, CA: Association for the Advancement of Artificial Intelligence.
Nielsen, R.D., Ward, W., Martin, J.H., and Palmer, M. (2008). Annotating Students' Understanding of Science Concepts. In Proceedings of the Sixth International Language Resources and Evaluation Conference (pp. 341-348). Paris: European Language Resources Association.
Nielsen, R.D., Ward, W., Martin, J.H., and Palmer, M. (2008). Extracting a Representation From Text for Semantic Analysis. In Proceedings of the Forty-Sixth Annual Meeting of the Association for Computational Linguistics and the Human Language Technologies Conference (pp. 241-244). Stroudsburg, PA: Association for Computational Linguistics.
Additional project information
Supplemental information
Co-Principal Investigator: Ron Cole