Title: Exploring Studies to Derive Policies for Adaptive Natural-Language Tutoring in Physics
Principal Investigator: Katz, Sandra
Awardee: University of Pittsburgh
Program: Cognition and Student Learning
Award Period: 3 years (9/1/13–8/31/16)
Award Amount: $1,430,755
Co-Principal Investigators: Michael Ford, Pamela Jordan
Purpose: Students' failure to grasp basic scientific concepts and apply them to problem solving has been a persistent challenge, especially in physics education. Many educators and education policymakers have looked to intelligent tutoring systems (ITSs) as a means of providing cost-effective, individualized instruction with the potential to improve students' conceptual understanding and problem-solving skills in math and science. The key to developing highly effective, adaptive ITSs is to derive decision rules that can guide an automated tutor in making pedagogical choices, such as when to initiate a hint and what kind of hint to provide, as well as whether to tell a student a particular piece of domain knowledge or to guide the student in co-constructing that knowledge. The goal of this project is to formulate hypotheses and an associated set of preliminary decision rules for several tutorial choices related to physics. The decision rules suggested by this research will guide adaptive scaffolding in natural-language dialogue systems. In addition, this research has the potential to inform instructional dialogue in one-on-one tutoring and classroom settings.
Project Activities: The researchers will conduct five experiments to evaluate a set of malleable factors of tutoring. The factors were chosen based on the following criteria: (1) the factors all challenge commonly held beliefs about what makes tutoring effective; (2) the researchers observed many instances of each option associated with each malleable factor; and (3) correlational analyses from the researchers' prior IES-funded work (Improving a Natural-Language Tutoring System that Engages Students in Deep Reasoning Dialogues about Physics) suggest that at least one option per factor predicts learning for high-ability students, low-ability students, or both. The researchers will manipulate automated dialogues in a computer-based tutoring system in order to determine whether some tutoring decisions are better than others for physics content, and whether students' self-efficacy and aptitude play a role in determining optimal tutoring decisions.
Products: The products of this project will be preliminary evidence of promising policies to guide adaptive scaffolding in natural-language dialogue systems. In addition, strategies for instructional dialogue in one-on-one tutoring and classroom settings will be presented. Peer-reviewed publications will also be produced.
Setting: Six high schools from urban, suburban, and parochial school districts in Pennsylvania will participate in the study. The research will be conducted in the high schools' computer labs or at the research team's laboratory at the University of Pittsburgh.
Sample: For the first study, the participants are 70 high school students currently taking physics. For each of the remaining four studies, participants will be 100 high school students currently taking physics.
Intervention: The research team will identify effective malleable factors of tutoring in order to inform the development of a future ITS intervention and more broadly to inform one-on-one tutoring and classroom instructional practices.
Research Design and Methods: The research team will conduct five studies, each addressing a different malleable factor of tutoring. For each study, a feature of Rimac, the natural-language tutoring system for high school physics developed under a previously funded IES grant (Improving a Natural-Language Tutoring System that Engages Students in Deep Reasoning Dialogues about Physics), will be manipulated based on the malleable factor of interest for that particular study. All studies will use a between-subjects design with two conditions, so each student will be randomly assigned to only one condition. Participants in all studies will first complete a pre-test and a self-efficacy survey, then work through a set of problems in Rimac while engaging in automated dialogues that vary by experimental condition, and finally complete a post-test. Study 1 will measure the gain score from pre-test to post-test for overall physics knowledge. Studies 2–5 will measure the overall gain score as well as gain scores for different types of knowledge, such as conditional reasoning, the ability to instantiate abstractions, the ability to break a whole into its parts, the ability to specify quantities in terms of appropriate units, directional specifiers, and planning ability.
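The outcome measure described above can be illustrated with a minimal sketch of per-component and overall pre-to-post gain scores. The function and the subscore labels here are hypothetical, not the project's actual scoring scheme:

```python
def gain_scores(pre, post):
    """Compute pre-to-post gain per knowledge component and overall.

    pre, post: dicts mapping a knowledge-component label
    (e.g., 'conditional reasoning', 'units') to a subscore.
    Returns (per_component_gains, overall_gain).
    """
    gains = {k: post[k] - pre[k] for k in pre}
    overall = sum(post.values()) - sum(pre.values())
    return gains, overall

# Hypothetical student: improves on two components, flat on planning.
pre = {"conditional reasoning": 2, "units": 1, "planning": 3}
post = {"conditional reasoning": 4, "units": 3, "planning": 3}
gains, overall = gain_scores(pre, post)
# gains == {"conditional reasoning": 2, "units": 2, "planning": 0}; overall == 4
```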
Control Condition: Across the experiments, the control or comparison condition will vary as a function of the research question.
Key Measures: Key measures include researcher-designed pre-tests and post-tests, which will measure the specific knowledge components that the dialogues focus on. When possible, the researchers plan to reuse questions from the Force Concept Inventory, which targets known misconceptions. The Physics Self-Efficacy Questionnaire and PSAT scores will be collected to address questions about different types of students.
Data Analytic Strategy: For each study, multiple regression will be used to analyze the effect of the manipulated independent variable on learning outcomes. In Studies 2–5, self-efficacy will be included as a second quantitative moderator variable and the analysis will be performed at multiple levels.
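A moderated multiple regression of this kind might be sketched as follows. This is an illustrative ordinary-least-squares fit on simulated data; the 0/1 coding of condition, the variable names, and the simulated effect sizes are assumptions, not the project's actual analysis:

```python
import numpy as np

def moderated_regression(gain, condition, self_efficacy):
    """Fit gain = b0 + b1*condition + b2*self_efficacy
                 + b3*(condition * self_efficacy) by least squares.

    condition is coded 0/1 (0 = control); the interaction coefficient
    b3 tests whether the treatment effect depends on self-efficacy.
    """
    X = np.column_stack([
        np.ones_like(gain),          # intercept
        condition,                   # treatment indicator
        self_efficacy,               # moderator (e.g., centered score)
        condition * self_efficacy,   # interaction: condition x moderator
    ])
    beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
    return beta  # [b0, b1, b2, b3]

# Simulated data: the treatment helps more for low self-efficacy students.
rng = np.random.default_rng(0)
n = 100
condition = rng.integers(0, 2, n).astype(float)
self_efficacy = rng.normal(0.0, 1.0, n)
gain = (2.0 + 1.5 * condition - 0.8 * condition * self_efficacy
        + rng.normal(0.0, 0.5, n))
b0, b1, b2, b3 = moderated_regression(gain, condition, self_efficacy)
```

A nonzero interaction estimate (b3) would indicate moderation, i.e., that the optimal tutoring decision differs across students with different self-efficacy levels.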
Related IES Projects: Improving a Natural-Language Tutoring System that Engages Students in Deep Reasoning Dialogues about Physics (R305A100163)
Publications:
Jordan, P., Albacete, P., and Katz, S. (2015). When is it Helpful to Restate Student Responses Within a Tutorial Dialogue System? Artificial Intelligence in Education, 9112: 658–661.
Lipschultz, M., Litman, D., Katz, S., Albacete, P., and Jordan, P. (2014). Predicting Semantic Changes in Abstraction in Tutor Responses to Students. International Journal of Learning Technology, 9(3): 281–303.
Jordan, P., Albacete, P., and Katz, S. (2015). Exploring the Effects of Redundancy within a Tutorial Dialogue System: Restating Students' Responses. In 16th Annual SIGdial Meeting on Discourse and Dialogue (pp. 51–59). Prague, CZ: Special Interest Group on Discourse and Dialogue (SIGdial).
Jordan, P., Albacete, P., and Katz, S. (2016). Exploring Contingent Step Decomposition in a Tutorial Dialogue System. In 24th Conference on User Modeling, Adaptation and Personalization (UMAP). Halifax, NS, Canada: ACM.
Katz, S., Albacete, P., and Jordan, P. (2016). Do Summaries Support Learning from Post-Problem Reflective Dialogues? In 13th International Conference on Intelligent Tutoring Systems (ITS 2016) (pp. 519–520). Zagreb, HR: Springer International Publishing.
Katz, S., Jordan, P., and Albacete, P. (2016). Exploring How to Adaptively Apply Tutorial Dialogue Tactics. In Proceedings of the 16th IEEE International Conference on Advanced Learning Technologies (ICALT 2016) (pp. 36–38). Austin, TX: IEEE.