
IES Grant

Title: Using Adaptive Practice to Improve Recall and Understanding in Postsecondary Anatomy and Physiology
Center: NCER Year: 2019
Principal Investigator: Pavlik Jr., Philip Awardee: University of Memphis
Program: Postsecondary and Adult Education
Award Period: 3 Years (07/01/2019 – 06/30/2022) Award Amount: $1,240,151
Type: Development and Innovation Award Number: R305A190448
Description:

Co-Principal Investigators: Banker, Amanda M.; Olney, Andrew M.

Purpose: The project team developed and refined an online platform called the Mobile Fact and Concept Training System (MoFaCTS) to support community college students taking introductory anatomy and physiology courses. Many students find these courses challenging, and difficulty with the material often stalls their progress toward certificates or degrees. MoFaCTS addressed this by strengthening students' understanding and retention of course material through adaptively sequenced formative practice exercises: a series of practice questions, adapted to each student's level of understanding, that helped students master essential vocabulary and concepts. The platform also offered detailed feedback after each exercise, helping students form clearer mental models of what they were studying. In this way, the project aimed to make challenging course material more accessible and help students succeed academically.

Project Activities: The project team improved the MoFaCTS learning system by incorporating material from existing textbooks. They automated the creation of interactive, fill-in-the-blank practice exercises designed to help students understand and remember course content. The exercises were sequenced individually for each student by an artificial intelligence (AI) algorithm engineered to present items at the level of difficulty most likely to enhance learning, based on a model that predicted how well the student would perform on each exercise. To support effective, easy classroom use, the researchers added two further features: a teacher interface, which gave educators detailed reports on their students' performance on the practice exercises and insight into their learning paths, and a student progress report, which let students monitor their own performance and fostered a sense of ownership and accountability in their learning.

The team conducted usability tests and interviews with both teachers and students to assess the system's effectiveness. The feedback indicated that the MoFaCTS system was user-friendly and met the needs of its intended audience. A pilot study was also carried out to determine if the system could positively impact exam scores, which was the primary metric used to evaluate its success. The results showed promise, indicating that the MoFaCTS system has the potential to significantly improve educational outcomes.

Key Outcomes: The main findings of this project are as follows:

The team developed –

Structured Abstract

Setting: The classroom research took place in a community college in Tennessee.

Sample: Approximately 900 community college students and 4 instructors participated throughout the project, and 25 classrooms comprising 433 students participated in the pilot study over 4 semesters.

Intervention: The research team worked on an innovative online learning platform called the Mobile Fact and Concept Training System (MoFaCTS). This system was tailored specifically for content-area reading practice, focusing in this instance on anatomy textbooks used in community college courses. MoFaCTS utilized a sophisticated artificial intelligence (AI) algorithm to create a personalized learning experience for students. The AI chose questions based on their importance to the course material and arranged them in an order calculated to optimize learning. The system was built on proven memory principles like the spacing effect, which suggests that learning is more effective when it's spaced out over time.
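The adaptive sequencing described above can be illustrated with a minimal sketch. MoFaCTS's actual scheduler uses the logistic knowledge-tracing models cited in the project's publications; the exponential-decay memory model, item strengths, and 0.8 recall target below are simplified assumptions for illustration only, not the published algorithm.

```python
import math
import time

TARGET_RECALL = 0.8  # assumed "desirable difficulty" target, not a project parameter

def predicted_recall(strength, seconds_since_review):
    """Toy exponential-decay memory model: recall fades with time,
    more slowly for well-learned (high-strength) items."""
    return math.exp(-seconds_since_review / (strength * 3600.0))

def next_item(items, now):
    """Pick the item whose predicted recall is closest to the target,
    i.e., neither too easy (near 1.0) nor likely forgotten (near 0.0)."""
    return min(
        items,
        key=lambda it: abs(
            predicted_recall(it["strength"], now - it["last_review"]) - TARGET_RECALL
        ),
    )

now = time.time()
items = [
    {"name": "alveoli",   "strength": 2.0, "last_review": now - 1800},
    {"name": "diaphragm", "strength": 1.0, "last_review": now - 7200},
    {"name": "trachea",   "strength": 4.0, "last_review": now - 600},
]
print(next_item(items, now)["name"])  # "alveoli": predicted recall ~0.78
```

Spacing items this way operationalizes the spacing effect: practice is scheduled just before an item would be forgotten rather than massed immediately after study.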

In MoFaCTS, an instructor would create an assignment, generating a website link for students, who would follow the link and complete the exercises. MoFaCTS automatically generated questions ("cloze items"—sentences with missing words or phrases), which students had to fill in. MoFaCTS would adapt based on student performance. For example, if a student answered a question correctly, that question was less likely to be repeated. Conversely, if the student got the question wrong, MoFaCTS would initiate an auto-generated tutorial dialogue to help clarify the concept. Additionally, MoFaCTS included a teacher interface that tracked each student's progress with detailed reports on their practice exercises, and a student-facing progress report showing how many questions students had mastered, among other indicators of their progress.
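A cloze item in the sense used above can be sketched as follows. MoFaCTS generated items automatically from textbook sentences using the NLP methods described in the project's publications; the simple keyword-blanking heuristic here (and the `make_cloze` helper name) is a hypothetical illustration, not the actual pipeline.

```python
import re

def make_cloze(sentence, term):
    """Blank out one occurrence of a key term to create a
    fill-in-the-blank (cloze) item with its expected answer."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    stem = pattern.sub("_____", sentence, count=1)
    return {"stem": stem, "answer": term}

item = make_cloze(
    "The diaphragm contracts and flattens during inhalation.",
    "diaphragm",
)
print(item["stem"])  # The _____ contracts and flattens during inhalation.
```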

Research Design and Methods: The development of the Mobile Fact and Concept Training System (MoFaCTS) unfolded over three overlapping phases: system building, refinement, and pilot testing. In the system-building phase, the researchers constructed the essential components of the platform, such as the authoring and dialog modules and the components that generate the fill-in-the-blank questions known as "cloze items." They also sought expert opinions and feedback from teachers to enhance both the content and the functionality of MoFaCTS. They conducted small design studies with students to evaluate the system's usability, and they interviewed teachers to ensure the system would be feasible to implement in real-world classrooms.

During the refinement phase, the project team focused on improving the system based on the feedback collected. They expanded the database of question items and enhanced the feedback mechanisms within the platform. One key improvement was the addition of "refutational feedback," which directly corrects misconceptions, and "dialog feedback," which engages students in a conversation-like interaction to deepen their understanding of the material.

Finally, in the pilot testing phase, the researchers conducted a comprehensive study using a "delayed treatment design." This means that at first, no classroom used MoFaCTS for their initial two course exams. Later, some classrooms—determined by random selection of instructors—started using MoFaCTS for their remaining four exams, while others began using it only for their last two exams. This design allowed the researchers to compare how the use of MoFaCTS impacted exam scores over time, providing valuable data on the system's efficacy.
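The rollout described above can be sketched as a schedule. The exam counts match the description (two baseline exams for everyone, then MoFaCTS for either the remaining four exams or only the last two); the group labels and the `condition` helper are illustrative, not the study's terminology.

```python
EXAMS = [1, 2, 3, 4, 5, 6]  # six course exams, per the design described

def condition(group, exam):
    """Whether a classroom used MoFaCTS for a given exam under the
    delayed-treatment design: 'early' classrooms start at exam 3,
    'delayed' classrooms at exam 5; exams 1-2 are baseline for all."""
    start = {"early": 3, "delayed": 5}[group]
    return "MoFaCTS" if exam >= start else "control"

for group in ("early", "delayed"):
    print(group, [condition(group, e) for e in EXAMS])
```

Because both groups eventually receive the treatment, each classroom contributes both control and treatment exam scores, and the staggered start lets the researchers separate the effect of MoFaCTS from ordinary improvement over the semester.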

Control Condition: The delayed treatment group served as the control condition for between-group comparisons.

Key Measures: For the usability and feasibility studies, the researchers developed their own survey measures, building on existing instruments such as the Motivated Strategies for Learning Questionnaire and the Technology Acceptance Model. For the pilot studies, the key outcome measure was student exam grades.

Data Analytic Strategy: The researchers used mixed models to analyze the effect of condition on intermediate exam grades, linear models to analyze the effect of condition on final grades, and mediation models to analyze the effect of one-sided noncompliance with treatment. Student modeling was conducted with specialized logistic regression-based approaches that drove the adaptive practice scheduling algorithms.
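The logistic regression-based student modeling mentioned above is in the spirit of Performance Factors Analysis and the Logistic Knowledge Tracing framework cited in the project's publications. The minimal sketch below predicts the probability of a correct response from an item intercept plus weighted counts of prior successes and failures; the weights are made up for illustration, not fitted values from the project.

```python
import math

def predict_correct(successes, failures, item_easiness,
                    w_success=0.4, w_failure=0.1):
    """P(correct) via a logistic model: item intercept plus
    weighted counts of the student's prior practice outcomes.
    Weight values here are illustrative assumptions."""
    logit = item_easiness + w_success * successes + w_failure * failures
    return 1.0 / (1.0 + math.exp(-logit))

# A student with 3 prior successes and 1 failure on an average-difficulty item:
p = predict_correct(successes=3, failures=1, item_easiness=0.0)
print(round(p, 3))  # 0.786
```

In fitted models of this family, both success and failure counts typically carry positive weights (any practice helps), with successes weighted more heavily, which is why the example uses w_success > w_failure.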

Cost Analysis: This cost analysis provides a detailed breakdown of the expenses associated with implementing the Mobile Fact and Concept Training System (MoFaCTS) educational software for one year in community college anatomy and physiology courses. The analysis focuses on server and technology support costs for running the software at three different scales: 3 classes, 10 classes, and 100 classes, with each class accommodating 20 students. The server costs are based on Amazon Web Services and amount to $282.07 per year for all class sizes. Tech support costs vary from $150 per year for 3 classes to $650 per year for 100 classes. Backup support is a fixed cost of $25 per year. The total yearly costs thus range from $457.07 for 3 classes to $957.07 for 100 classes. Notably, the cost per pupil decreases significantly as the scale increases, from $7.62 per pupil in a 3-class setup to just $0.48 per pupil in a 100-class setup. While the main cost driver is technology support, economies of scale make the software increasingly cost-effective as it is expanded to more classes. However, it's important to note that the cost estimates for 100 classes are speculative and require further validation.
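The per-pupil arithmetic above can be reproduced directly; all dollar figures come from the text (tech support at the 10-class scale is not broken out there, so only the 3- and 100-class scales are computed).

```python
SERVER = 282.07          # AWS server cost per year, same at every scale
BACKUP = 25.00           # fixed backup support per year
STUDENTS_PER_CLASS = 20

def yearly_cost(classes, tech_support):
    """Total yearly cost and cost per pupil at a given scale."""
    total = SERVER + BACKUP + tech_support
    return total, total / (classes * STUDENTS_PER_CLASS)

total3, pp3 = yearly_cost(3, 150.00)        # 3 classes
total100, pp100 = yearly_cost(100, 650.00)  # 100 classes
print(f"${total3:.2f} -> ${pp3:.2f}/pupil")      # $457.07 -> $7.62/pupil
print(f"${total100:.2f} -> ${pp100:.2f}/pupil")  # $957.07 -> $0.48/pupil
```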

Related IES Projects: Bridging the Bridge to Algebra: Measuring and Optimizing the Influence of Prerequisite Skills on a Pre-Algebra Curriculum (R305B070487), Center for the Study of Adult Literacy (CSAL): Developing Instructional Approaches Suited to the Cognitive and Motivational Needs for Struggling Adults (R305C120001)

Products and Publications

ERIC Citations: Find available citations in ERIC for this award here.

Publicly Available Data: https://pslcdatashop.web.cmu.edu/Project?id=858

Project Website: https://mofacts.optimallearning.org/

Additional Online Resources and Information: https://github.com/memphis-iis/mofacts-ies/wiki

Select Publications:

Banker, A. M., Pavlik Jr, P. I., Olney, A., & Eglington, L. G. (2022). Online Tutoring System (MoFaCTS) for Anatomy and Physiology: Implementation and Initial Impressions. HAPS Educator, 26(2), 44–54. Full text

Eglington, L. G., & Pavlik Jr, P. I. (2019). Predictiveness of Prior Failures is Improved by Incorporating Trial Duration. JEDM Journal of Educational Data Mining, 11(2), 1–19. Full text

Eglington, L. G., & Pavlik Jr, P. I. (2020). Optimizing practice scheduling requires quantitative tracking of individual item performance. npj Science of Learning, 5(1), 1–10. Full text

Eglington, L. G., & Pavlik, P. I. (2022). How to Optimize Student Learning Using Student Models That Adapt Rapidly to Individual Differences. International Journal of Artificial Intelligence in Education, published online (22 pages). Full text

Graesser, A. C., Greenberg, D., Olney, A., & Lovett, M. W. (2019). Educational Technologies that Support Reading Comprehension for Adults Who Have Low Literacy Skills. In The Wiley Handbook of Adult Literacy (pp. 471–493).

Hu, X., Cai, Z., & Olney, A. M. (2019). Semantic Representation and Analysis (SRA) and its Application in Conversation-Based Intelligent Tutoring Systems (CbITS). In R. Feldman (Ed.), Learning Science: Theory, Research, and Practice (pp. 103–126). McGraw-Hill Education.

Olney, A. M. (2021). Generating response-specific elaborated feedback using long-form neural question answering. In Proceedings of the Eighth ACM Conference on Learning @ Scale, 27–36. Full Text

Olney, A. M. (2021). Paraphrasing academic text: A study of back-translating anatomy and physiology with transformers. In I. Roll, D. McNamara, S. Sosnovsky, R. Luckin, & V. Dimitrova (Eds.), Proceedings of the 22nd International Conference on Artificial Intelligence in Education (pp. 279–284). Springer International Publishing. Full text

Olney, A. M. (2021). Sentence selection for cloze item creation: A standardized task and preliminary results. In T. W. Price & S. San Pedro, Joint Proceedings of the Workshops at the 14th International Conference on Educational Data Mining, Vol. 3051, LDI–6. CEUR-WS.org. Full text

Olney, A. M. (2022). Assessing Readability by Filling Cloze Items with Transformers. In M. M. Rodrigo, N. Matsuda, A. I. Cristea, & V. Dimitrova, Proceedings of the 22nd International Conference on Artificial Intelligence in Education (pp. 307–318). Springer International Publishing. Full text

Olney, A. M. (2022). Generating Multiple Choice Questions with a Multi-Angle Question Answering Model. In S. E. Fancsali & V. Rus (Eds.), Proceedings of the 3rd Workshop of the Learner Data Institute, The 15th International Conference on Educational Data Mining (EDM 2022) (pp. 18–23). Full text

Olney, A. M. (2023). Generating multiple choice questions from a textbook: LLMs match human performance on most metrics. In S. Moore, J. Stamper, R. Tong, C. Cao, Z. Liu, X. Hu, Y. Lu, J. Liang, H. Khosravi, P. Denny, A. Singh, & C. Brooks (Eds.), Proceedings of Empowering Education with LLMs—The Next-Gen Interface and Content Generation, Tokyo, Japan, July 7, 2023. CEUR-WS.org.

Olney, A. M., Gilbert, S. B., & Rivers, K. (2021). Preface to the special issue on creating and improving adaptive learning: smart authoring tools and processes. International Journal of Artificial Intelligence in Education, 32, 1–3. Full text

Pavlik Jr, P. I., & Eglington, L. G. (2023). Automated Search for Logistic Knowledge Tracing Models. In M. Feng, T. Käser, & P. Talukdar (Eds.), Proceedings of The 16th International Conference on Educational Data Mining (pp. 17–27).

Pavlik Jr, P. I., Olney, A. M., Banker, A., Eglington, L., & Yarbro, J. (2020). The Mobile Fact and Concept Textbook System (MoFaCTS). In CEUR workshop proceedings (Vol. 2674). Full text

Pavlik Jr., P. I., & Eglington, L. (2021). The Mobile Fact and Concept Textbook System (MoFaCTS) Computational Model and Scheduling System. In 22nd International Conference on Artificial Intelligence in Education (AIED 2021) Third Workshop on Intelligent Textbooks (pp. 1–15). In CEUR workshop proceedings (Vol. 2895). Full text

Pavlik Jr., P. I., & Zhang, L. (2022). Using autoKC and Interactions in Logistic Knowledge Tracing. In Proceedings of The Third Workshop of the Learner Data Institute, The 15th International Conference on Educational Data Mining (EDM 2022) (pp. 1–6). Full text

Pavlik Jr., P. I., Eglington, L., & Zhang, L. (2021). Automatic Domain Model Creation and Improvement. In C. Lynch, A. Merceron, M. Desmarais, & R. Nkambou (Eds.), Proceedings of The 14th International Conference on Educational Data Mining (pp. 672–676). Full text

Pavlik, P. I., Eglington, L. G., & Harrell-Williams, L. M. (2021). Logistic knowledge tracing: A constrained framework for learner modeling. IEEE Transactions on Learning Technologies, 14(5), 624–639.

Rus, V., Olney, A. M., & Graesser, A. C. (2023). Deeper learning through interactions with students in natural language. In B. du Boulay, A. Mitrovic, & K. Yacef (Eds.), Handbook of Artificial Intelligence in Education (pp. 250–272). Edward Elgar Publishing. doi.org/10.4337/9781800375413.00021

Yarbro, J. T., & Olney, A. M. (2021). Contextual definition generation. In S. A. Sosnovsky, P. Brusilovsky, R. G. Baraniuk, & A. S. Lan (Eds.), Proceedings of the Third International Workshop on Intelligent Textbooks (Vol. 2895, pp. 74–83). CEUR-WS.org. Full text

Yarbro, J. T., & Olney, A. M. (2021). WikiMorph: Learning to decompose words into morphological structures. In I. Roll, D. McNamara, S. Sosnovsky, R. Luckin, & V. Dimitrova (Eds.), Proceedings of the 22nd International Conference on Artificial Intelligence in Education (pp. 406–411). Springer International Publishing. Full text

