Grant Closed

Using Web-Based Cognitive Assessment Systems for Predicting Student Performance on State Exams

NCER
Program: Education Research Grants
Program topic(s): Science, Technology, Engineering, and Mathematics (STEM) Education
Award amount: $1,386,161
Principal investigator: Kenneth Koedinger
Awardee: Carnegie Mellon University
Year: 2003
Award period: 3 years (09/01/2003 - 08/31/2006)
Project type: Development and Innovation
Award number: R305K030140

Purpose

This project developed and studied the ASSISTments system, a web-based artificial intelligence program that provides students with math practice. Given the limited classroom time available in middle school math classes, teachers are compelled to choose between time spent assisting students’ development and time spent assessing students’ math knowledge and skills. To address this issue, ASSISTments integrates assistance and assessment by offering instruction to students while providing a more detailed assessment of their math skills to the teacher.

Project Activities

This project developed the web-based ASSISTments system, which monitors and assesses students' math skills while providing computer-based tutoring. The research team conducted a series of studies to assess (1) how well models of students' math skills and knowledge predict performance on the year-end Massachusetts Comprehensive Assessment System (MCAS) mathematics test, (2) whether the system effectively teaches as it assesses, and (3) the effects of using the ASSISTments program on MCAS performance.

Structured Abstract

Setting

This research took place in middle schools in Massachusetts and Pennsylvania.

Sample

Over the four years of the grant, data were collected from 4,825 students and 72 teachers in 14 schools. Each of the studies used data from a subset of this sample.

Intervention

ASSISTments is an artificial intelligence program designed to support math learning. Each week, as students worked on the program, it "learned" more about their abilities and provided increasingly accurate predictions of how they would perform on a standardized mathematics test. The system also identified the difficulties that individual students, and the class as a whole, were having. Teachers could then use this detailed feedback to focus their instruction on the particular difficulties the system identified. ASSISTments also provided students with intelligent tutoring assistance while the assessment information was being collected. Students worked on ASSISTments for about 20 minutes per week.
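The abstract does not specify the estimation algorithm behind these weekly updates. As a purely illustrative sketch, the snippet below applies a Bayesian knowledge tracing-style update, a common approach in intelligent tutoring systems, to one student's responses on a single skill; the parameter values (`p_slip`, `p_guess`, `p_learn`) and the response sequence are hypothetical.

```python
# Minimal sketch of incrementally updating a per-student skill estimate from
# weekly practice responses, in the spirit of Bayesian knowledge tracing.
# ASSISTments' actual estimation procedure is not described in this abstract;
# all parameter values here are assumed for illustration.

def bkt_update(p_known, correct,
               p_slip=0.10,    # P(wrong answer | skill known) -- assumed
               p_guess=0.20,   # P(right answer | skill unknown) -- assumed
               p_learn=0.15):  # P(acquiring the skill from one item) -- assumed
    """Return the updated probability that the student knows the skill."""
    if correct:
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # Allow for learning from the item itself (e.g., tutoring feedback).
    return posterior + (1 - posterior) * p_learn

# One student's weekly responses on a single skill: 1 = correct, 0 = incorrect.
p = 0.30  # prior probability that the skill is known
for week, correct in enumerate([0, 1, 1, 0, 1, 1, 1], start=1):
    p = bkt_update(p, correct)
    print(f"week {week}: P(skill known) = {p:.2f}")
```

Per-skill estimates of this kind could then be aggregated across the skills a test covers to produce the score predictions and difficulty reports described above.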

Research design and methods

Multiple studies were conducted on ASSISTments. First, the research team conducted an exploratory study that compared how well models of students' knowledge and skills at varying levels of granularity predicted MCAS performance. Next, the team conducted four experimental studies; in each, the research team randomly assigned subsets of students to treatment and control conditions designed to test a specific hypothesis about whether the system effectively teaches as it assesses. Finally, the researchers conducted a quasi-experimental study that examined the effect of using the ASSISTments program on year-end grade 7 MCAS scores, compared to not using the program.
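The abstract does not detail how the competing skill models were scored against one another. The sketch below conveys the general idea with simulated data and hypothetical skill counts (a coarse model of 5 strands versus a fine-grained model of 40 skills), using plain linear regression and cross-validated prediction error in place of the Bayesian networks the project actually used.

```python
# Illustrative comparison of skill-model granularity: predict an end-of-year
# exam score from per-skill practice accuracy under a coarse model (5 strands)
# and a fine-grained model (40 skills). Data are simulated; linear regression
# stands in for the project's Bayesian networks.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students = 500

# Fine-grained model: per-student accuracy on 40 skills. Coarse model:
# average those accuracies within 5 strands of 8 skills each.
fine = rng.uniform(0.0, 1.0, size=(n_students, 40))
coarse = fine.reshape(n_students, 5, 8).mean(axis=2)

# Simulated exam score driven by the fine-grained skills plus noise.
weights = rng.uniform(0.5, 1.5, size=40)
score = fine @ weights + rng.normal(0.0, 2.0, size=n_students)

for name, X in [("coarse (5 strands)", coarse), ("fine (40 skills)", fine)]:
    mae = -cross_val_score(LinearRegression(), X, score,
                           scoring="neg_mean_absolute_error", cv=5).mean()
    print(f"{name}: cross-validated MAE = {mae:.2f}")
```

Under this setup, a model whose granularity better matches the process generating the scores should predict with lower held-out error, which mirrors how such granularity comparisons are typically judged.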

Control condition

Each of the four experiments had a different control condition, designed to isolate the specific hypothesis being tested. In experiment 1, treatment students received scaffolding questions while control students were given hints. In experiment 2, the researchers examined whether adding multimedia to the ASSISTments system led to better student learning, using a "with-multimedia" experimental condition and a "without-multimedia" control condition. In experiment 3, the researchers examined whether students learned better from tutoring based on general concepts (treatment) versus non-general concepts (control). In experiment 4, the researchers examined whether students learned better through worked examples (treatment) versus conventional problems (control). The quasi-experimental study included a comparison group of students who completed business-as-usual math practice with no access to ASSISTments.

Key measures

The exploratory and quasi-experimental studies used MCAS mathematics scores as the main outcome measure.

Data analytic strategy

The exploratory study used Bayesian networks to compare the performance of the domain skill models. The four experimental studies compared outcomes for the treatment and control conditions. The quasi-experimental study used ANCOVA to compare the MCAS performance of middle schools using ASSISTments with that of middle schools not using the system.
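As a minimal sketch of the ANCOVA described here, the snippet below tests for a group difference in outcome scores while adjusting for a pretest covariate. The data frame, column names, and values are all invented, and the sketch is at the student level for brevity, whereas the study compared schools.

```python
# Hedged sketch of an ANCOVA: test for a group difference in outcome scores
# after adjusting for a pretest covariate. All data and labels are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "mcas":    [232, 241, 228, 250, 238, 220, 225, 236, 230, 219],
    "pretest": [225, 238, 222, 244, 235, 221, 224, 233, 231, 218],
    "group":   ["assistments"] * 5 + ["comparison"] * 5,
})

# Fit outcome ~ group + pretest; the group term tests whether the
# covariate-adjusted means differ between the two conditions.
model = smf.ols("mcas ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(model.params)
```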

Key outcomes

  • ASSISTments was developed to provide student-facing math practice, assessment, and tutoring support.
  • The team developed a real-time reporting system for teachers and experimental analysis tools to support researchers in testing different tutoring strategies (Feng and Heffernan 2007).
  • Multiple studies were conducted on ASSISTments to explore models for predicting student performance on end-of-year exams (Ayers and Junker 2008), to explore students' motivational and attitudinal patterns associated with gaming behavior and the implications for the design of interactive learning environments (Baker et al. 2008), and to test the potential for using technology to provide students instruction during assessment and to give teachers fast and continuous feedback on student progress (Koedinger, McLaughlin, and Heffernan 2010).

People and institutions involved

IES program contact(s)

Christina Chhin

Products and publications

Project website:

https://www.assistment.org/

Publications:

Book chapter

Ayers, E., and Junker, B.W. (2006). Do Skills Combine Additively to Predict Task Difficulty in Eighth Grade Mathematics? In J. Beck, E. Aimeur, and T. Barnes (Eds.), Educational Data Mining: Papers From the 2006 AAAI Workshop (pp. 14-20). Menlo Park, CA: AAAI Press.

Cen, H., Koedinger, K., and Junker, B.W. (2007). Is Over Practice Necessary?: Improving Learning Efficiency With the Cognitive Tutor Through Educational Data Mining. In R. Luckin, K. Koedinger, and J. Greer (Eds.), Artificial Intelligence in Education—Building Technology Rich Learning Contexts That Work (pp. 511-518). Amsterdam: IOS Press.

Feng, M., Heffernan, N.T., and Koedinger, K.R. (2005). Looking for Sources of Error in Predicting Students' Knowledge. In J.E. Beck (Ed.), Educational Data Mining: Papers From the 2005 AAAI Workshop (pp. 54-61). Menlo Park, CA: AAAI Press.

Junker, B.W. (2007). Using On-Line Tutoring Records to Predict End-of-Year Exam Scores: Experience With the Assistments Project and MCAS 8th Grade Mathematics. In R.W. Lissitz (Ed.), Assessing and Modeling Cognitive Development in School: Intellectual Growth and Standard Settings (pp. 1-34). Maple Grove, MN: JAM Press.

Nuzzo-Jones, G., Walonoski, J.A., Heffernan, N.T., and Livak, T. (2005). The Extensible Tutor Architecture: A New Foundation for ITS. In C.K. Looi, G. McCalla, B. Bredeweg, and J. Breuker (Eds.), Artificial Intelligence in Education—Supporting Learning Through Intelligent and Socially Informed Technology (pp. 902-904). Amsterdam: IOS Press.

Pardos, Z., Feng, M., Heffernan, N.T., and Heffernan-Linquist, C. (2007). Analyzing Fine-Grained Skill Models Using Bayesian and Mixed Effect Methods. In R. Luckin, K. Koedinger, and J. Greer (Eds.), Artificial Intelligence in Education—Building Technology Rich Learning Contexts that Work (pp. 626-628). Amsterdam: IOS Press.

Razzaq, L., Feng, M., Heffernan, N.T., Koedinger, K., Nuzzo-Jones, G., Junker, B.W., Macasek, M.A., Rasmussen, K.P., Turner, T.E., and Walonoski, J.A. (2007). A Web-Based Authoring Tool for Intelligent Tutors: Blending Assessment and Instructional Assistance. In N. Nedjah, L.D. Mourelle, M.N. Borges, and N.N. Almeida (Eds.), Intelligent Educational Machines: Methodologies and Experiences (pp. 23-49). New York: Springer.

Razzaq, L., Feng, M., Nuzzo-Jones, G., Heffernan, N.T., Koedinger, K.R., Junker, B., Ritter, S., Knight, A., Aniszczyk, C., Choksey, S., Livak, T., Mercado, E., Turner, T.E., Upalekar, R., Walonoski, J.A., Macasek, M.A., and Rasmussen, K.P. (2005). Blending Assessment and Instructional Assisting. In C.K. Looi, G. McCalla, B. Bredeweg, and J. Breuker (Eds.), Artificial Intelligence in Education—Supporting Learning Through Intelligent and Socially Informed Technology (pp. 555-562). Amsterdam: IOS Press.

Razzaq, L., Heffernan, N.T., and Lindeman, R.W. (2007). What Level of Tutor Interaction is Best? In R. Luckin, K. Koedinger, and J. Greer (Eds.), Artificial Intelligence in Education—Building Technology Rich Learning Contexts That Work (pp. 222-229). Amsterdam: IOS Press.

Rose, C., Donmez, P., Gweon, G., Knight, A., Junker, B., Cohen, W., Koedinger, K., and Heffernan, N. (2005). Automatic and Semi-Automatic Skill Coding With a View Towards Supporting On-Line Assessment. In C.K. Looi, G. McCalla, B. Bredeweg, and J. Breuker (Eds.), Artificial Intelligence in Education—Supporting Learning Through Intelligent and Socially Informed Technology (pp. 571-578). Amsterdam: IOS Press.

Turner, T.E., Macasek, M.A., Nuzzo-Jones, G., Heffernan, N.T., and Koedinger, K. (2005). The Assistment Builder: A Rapid Development Tool for ITS. In C.K. Looi, G. McCalla, B. Bredeweg, and J. Breuker (Eds.), Artificial Intelligence in Education—Supporting Learning Through Intelligent and Socially Informed Technology (pp. 929-931). Amsterdam: IOS Press.

Journal article, monograph, or newsletter

Ayers, E., and Junker, B.W. (2008). IRT Modeling of Tutor Performance to Predict End-of-Year Exam Scores. Educational and Psychological Measurement, 68(6): 972-987.

Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., and Koedinger, K.R. (2008). Why Students Engage in "Gaming the System" Behavior in Interactive Learning Environments. Journal of Interactive Learning Research, 19(2): 185-224.

Feng, M., Heffernan, N., Heffernan, C., and Mani, M. (2009). Using Mixed-Effects Modeling to Analyze Different Grain-Sized Skill Models in an Intelligent Tutoring System. IEEE Transactions on Learning Technologies, 2(2): 79-92.

Feng, M., and Heffernan, N.T. (2006). Informing Teachers Live About Student Learning: Reporting in the Assistment System. Technology, Instruction, Cognition, and Learning, 3(1): 115-128.

Feng, M., and Heffernan, N.T. (2007). Towards Live Informing and Automatic Analyzing of Student Learning: Reporting in Assistment System. Journal of Interactive Learning Research, 18(2): 207-230.

Heffernan, N., Koedinger, K., and Razzaq, L. (2008). Expanding the Model-Tracing Architecture: A 3rd Generation Intelligent Tutor for Algebra Symbolization. The International Journal of Artificial Intelligence in Education, 18(2): 153-178.

Koedinger, K.R., McLaughlin, E.A., and Heffernan, N.T. (2010). A Quasi-Experimental Evaluation of an On-Line Formative Assessment and Tutoring System. Journal of Educational Computing Research, 43(4): 489-510.

Ostrow, K.S., Heffernan, N.T., and Williams, J.J. (2017). Tomorrow's EdTech Today: Establishing a Learning Platform as a Collaborative Research Tool for Sound Science. Teachers College Record, 119(3): 1-36.

Nongovernment report, issue brief, or practice guide

Cen, H., Koedinger, K., and Junker, B. (2005). Automating Cognitive Model Improvement by a Search and Logistic Regression. Menlo Park, CA: AAAI Press.

Macasek, M.A., and Heffernan, N.T. (2006). Towards Enabling Collaboration in Intelligent Tutoring Systems (WPI Technical Report #CS-TR-06-07). Worcester, MA: Worcester Polytechnic Institute.

Nuzzo-Jones, G., Macasek, M.A., Walonoski, J., Rasmussen, K.P., and Heffernan, N.T. (2006). Common Tutor Object Platform: An E-Learning Software Development Strategy (WPI Technical Report #CS-TR-06-08). Worcester, MA: Worcester Polytechnic Institute.

Proceeding

Anozie, N.O., and Junker, B.W. (2006). Predicting End-of-Year Accountability Assessment Scores From Monthly Student Records in an Online Tutoring System. In Proceedings of the 21st National Conference on Artificial Intelligence (pp. 1-6). Menlo Park, CA: AAAI Press.

Cen, H., Koedinger, K.R., and Junker, B. (2006). Learning Factors Analysis: A General Method for Cognitive Model Evaluation and Improvement. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 164-175). Berlin, Germany: Springer-Verlag.

Feng, M., Beck, J., and Heffernan, N.T. (2009). Using Learning Decomposition and Bootstrapping With Randomization to Compare the Impact of Different Educational Interventions on Learning. In Proceedings of the 2nd International Conference on Educational Data Mining (pp. 51-60). Cordoba, Spain: Educational Data Mining.

Feng, M., Beck, J., Heffernan, N., and Koedinger, K. (2008). Can We Predict Which Groups of Questions Students Will Learn From? In Proceedings of the 1st International Conference on Educational Data Mining (pp. 218-225). Montreal, Canada: Educational Data Mining.

Feng, M., Heffernan, N.T., and Beck, J. (2009). Using Learning Decomposition to Analyze Instructional Effectiveness in the Assistment System. In Proceedings of the 14th International Conference on Artificial Intelligence in Education (AIED-2009) (pp. 523-530). Amsterdam: IOS Press.

Feng, M., Heffernan, N.T., and Koedinger, K.R. (2006). Addressing the Testing Challenge With a Web-Based E-Assessment System That Tutors as it Assesses. In Proceedings of the 15th International World Wide Web Conference (pp. 307-316). Edinburgh, Scotland: World Wide Web Conference Committee (IW3C2).

Feng, M., Heffernan, N.T., and Koedinger, K.R. (2006). Predicting State Test Scores Better With Intelligent Tutoring Systems: Developing Metrics to Measure Assistance Required. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 31-40). Berlin, Germany: Springer-Verlag.

Kardian, K., and Heffernan, N.T. (2006). Knowledge Engineering for Intelligent Tutoring Systems: Assessing Semi-Automatic Skill Encoding Methods. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 735-737). Berlin, Germany: Springer-Verlag.

Mendicino, M., Heffernan, N., and Razzaq, L. (2008). Comparing Classroom Problem-Solving With No Feedback to Web-Based Homework Assistance. In Proceedings of the 9th International Conference on Intelligent Tutoring Systems (pp. 426-437). Berlin, Germany: Springer-Verlag.

Pardos, Z.A., Heffernan, N.T., Anderson, B., and Heffernan, C. (2006). Using Fine-Grained Skill Models to Fit Student Performance With Bayesian Networks. In On-Line Proceedings of the Workshop on Educational Data Mining at the 8th International Conference on Intelligent Tutoring Systems (pp. 5-12). New York: Springer.

Pardos, Z.A., Heffernan, N.T., Anderson, B., and Heffernan, C.L. (2007). The Effect of Model Granularity on Student Performance Prediction Using Bayesian Networks. In Proceedings of the 11th International Conference on User Modeling (pp. 435-439). Berlin, Germany: Springer.

Razzaq, L., and Heffernan, N.T. (2006). Scaffolding vs. Hints in the Assistment System. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 635-644). Berlin, Germany: Springer-Verlag.

Razzaq, L., and Heffernan, N.T. (2008). Towards Designing a User-Adaptive Web-Based E-Learning System. In Proceedings of the 2008 Conference on Human Factors in Computing Systems (pp. 3525-3530). Florence, Italy: ACM.

Walonoski, J., and Heffernan, N.T. (2006). Detection and Analysis of Off-Task Gaming Behavior in Intelligent Tutoring Systems. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 382-391). Berlin, Germany: Springer-Verlag.

Walonoski, J., and Heffernan, N.T. (2006). Prevention of Off-Task Gaming Behavior in Intelligent Tutoring Systems. In Proceedings of the 8th International Conference on Intelligent Tutoring Systems (pp. 722-724). Berlin, Germany: Springer-Verlag.

Related projects

Making Longitudinal Web-Based Assessments Give Cognitively Diagnostic Reports to Teachers, Parents, and Students While Employing Mastery Learning

R305A070440

An Efficacy Study of Online Mathematics Homework Support: An Evaluation of the ASSISTments Formative Assessment and Tutoring Platform

R305A120125

Evaluating the Effectiveness of ASSISTments for Improving Math Achievement

R305A170243

Efficacy of ASSISTments Online Homework Support for Middle School Mathematics Learning: A Replication Study

R305A170641

Revisions to the ASSISTments Digital Learning Platform to Expand Its Support for Rigorous Education Research

R305N210049

Talking Math: Improving Math Performance and Engagement Through AI-Enabled Conversational Tutoring

R305T240029

Questions about this project?

To answer additional questions about this project or provide feedback, please contact the program officer.

 

Tags

Data and Assessments, Education Technology, K-12 Education, Mathematics, Students
