Education Technology

An Efficacy Study of Online Mathematics Homework Support: An Evaluation of the ASSISTments Formative Assessment and Tutoring Platform

Year: 2012
Name of Institution: SRI International
Goal: Efficacy and Replication
Principal Investigator: Roschelle, Jeremy
Award Amount: $3,498,460
Award Period: 4 years (4/1/2012–3/31/2016)
Award Number: R305A120125

Description:

Co-Principal Investigator: Heffernan III, Neil T.

Purpose: The purpose of this study is to evaluate the efficacy of the fully developed intervention, ASSISTments. ASSISTments is an online formative assessment and tutoring platform for mathematics that provides coached problem-solving practice for students and cognitive diagnostic reports for teachers; supports students' mathematics homework completion; and facilitates differentiated instruction.

Project Activities: Researchers will test the efficacy of the intervention using a randomized experimental design with two cohorts of teachers. Each teacher will participate in the study for two years: the first year is devoted to implementing the intervention and learning how to use it, and student performance data from the second year will be used to evaluate its efficacy. Teachers will receive professional development training and support on the use of ASSISTments both before and throughout the school year, during which they will use the system for homework at least four times per week. ASSISTments enables teachers to assign customized homework aligned to individual student needs, and teachers will also assign "mastery" problem sets that organize practice to facilitate the achievement of proficiency. Students will complete their homework in the system, receiving support that includes immediate feedback on the correctness of their answers and extensive tutoring.

Products: The products of this project will be evidence of the efficacy of the ASSISTments program for students and peer-reviewed publications.

Structured Abstract

Setting: The study will be conducted in middle/junior high schools in the state of Maine.

Sample: The sample for this study includes teachers and students drawn from approximately 104 grade 7 classrooms in 52 public middle/junior high schools. The schools will be recruited in two cohorts, with a one-year lag between cohorts to allow the project team time for recruitment.

Intervention: Teachers and students in grade 7 will use ASSISTments, developed with funding from prior IES Development grants, to support their nightly homework. Teachers will assign nightly homework online and receive cognitive diagnostic reports that facilitate their review of individual students' homework and help them adapt instruction accordingly. In Maine, all middle school students have individual laptops. Students will complete their homework on their laptops and receive (1) immediate feedback on their answers, (2) individualized tutoring and hint messages on difficult problems, (3) mastery problem sets that adapt to their current mastery of each skill, and (4) automatic reassessment of a subset of skills to help them retain previously mastered skills. Teachers will receive intensive professional development to support their use of the reports as a formative assessment tool, and parents will receive reports about their children's progress and homework performance.

Research Design and Methods: The study will use a randomized experimental design, with participation occurring over two years for each teacher. In the first year, teachers will implement and become familiar with the intervention; only the second year's usage of ASSISTments will be used in evaluating the effects of the intervention on student performance. There will be two cohorts of teachers, with Cohort 1 participating over Years 1-2 and Cohort 2 over Years 2-3. The unit of random assignment will be schools. A total of 52 schools will be recruited, and they will be randomly assigned to either the treatment or control condition.
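
As a purely illustrative aside, the sketch below shows one way school-level (cluster) random assignment of the 52 recruited schools to the two conditions could be carried out; the school identifiers, the seed, and the simple 50/50 split are hypothetical placeholders, and the study's actual procedure (e.g., any blocking or stratification) is not described in this abstract.

    # Minimal sketch of school-level random assignment (hypothetical IDs and seed).
    import random

    schools = [f"school_{i:02d}" for i in range(1, 53)]   # 52 recruited schools (placeholder names)
    rng = random.Random(2012)                              # fixed seed so the illustration is reproducible
    rng.shuffle(schools)

    half = len(schools) // 2
    assignment = {sid: ("treatment" if i < half else "control")
                  for i, sid in enumerate(schools)}
    print(sum(v == "treatment" for v in assignment.values()), "schools assigned to treatment")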

Control Condition: Teachers in schools assigned to the control condition will use the instructional practices they are currently using or that are made available to them during the course of the study, including all traditional and formative assessment practices, other than ASSISTments.

Key Measures: Statewide mathematics test scores from the New England Common Assessment Program (NECAP) will be the primary measure. Because the NECAP is administered at the beginning of the following school year to assess learning from the prior year, researchers will also administer a nationally normed mathematics achievement test, the TerraNova, at the end of grade 7 for both cohorts. Additionally, researcher-developed pre- and posttests for two focal units in the grade 7 curriculum will provide proximal measures of achievement.

Data Analytic Strategy: A three-level hierarchical linear regression model (students nested within teachers within schools) will be used to account for the effect of clustering on the variance structure of the data. When Cohort 2 completes participation in the study (i.e., at the end of Year 3), the data from both cohorts will be combined and analyzed. Moderator analyses will examine the impact of the intervention on the learning of students with low baseline math achievement, students receiving special education services, English language learners, and students from different socioeconomic backgrounds. Finally, mediation analyses will examine the link between teachers' use of ASSISTments and students' homework completion rates.
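
To make the nesting concrete, the snippet below is a minimal sketch of how a three-level specification (students within teachers within schools) might be fit as a mixed model using the statsmodels library; the data file and column names (posttest, treatment, pretest, teacher_id, school_id) are hypothetical and are not taken from the study's analysis plan.

    # Minimal three-level sketch: random intercept for schools plus a teacher-within-school
    # variance component; fixed effects for the treatment indicator and a baseline score.
    # All file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("student_outcomes.csv")

    model = smf.mixedlm(
        "posttest ~ treatment + pretest",              # fixed effects
        data=df,
        groups="school_id",                            # level-3 clusters (unit of random assignment)
        re_formula="1",                                # random intercept per school
        vc_formula={"teacher": "0 + C(teacher_id)"},   # level-2 teachers nested within schools
    )
    result = model.fit()
    print(result.summary())

Moderator analyses of the kind described above could be approximated by adding interaction terms (e.g., treatment:pretest) to the fixed-effects formula.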

Related IES Projects: Using Web-based Cognitive Assessment Systems for Predicting Student Performance on State Exams (R305K030140) and Making Longitudinal Web-Based Assessments Give Cognitively Diagnostic Reports to Teachers, Parents, and Students While Employing Mastery Learning (R305A070440)

Publications

Journal article, monograph, or newsletter

Heffernan, N.T., Ostrow, K.S., Kelly, K., Selent, D., Van Inwegen, E.G., Xiong, X., and Williams, J.J. (2016). The Future of Adaptive Learning: Does the Crowd Hold the Key? International Journal of Artificial Intelligence in Education, 26(2): 615–644.

Ostrow, K.S., Wang, Y., and Heffernan, N.T. (2017). How Flexible is Your Data? A Comparative Analysis of Scoring Methodologies Across Learning Platforms in the Context of Group Differentiation. Journal of Learning Analytics, 4(2): 91–112.

Ostrow, K.S., Heffernan, N.T., and Williams, J.J. (2017). Tomorrow's EdTech Today: Establishing a Learning Platform as a Collaborative Research Tool for Sound Science. Teachers College Record, 119(3): 1–36.

Roschelle, J., Feng, M., Murphy, R.F., and Mason, C.A. (2016). Online Mathematics Homework Increases Student Achievement. AERA Open, 2(4).

Proceeding

Adjei, S.A., Botelho, A.F., and Heffernan, N.T. (2016). Predicting Student Performance on Post-Requisite Skills Using Prerequisite Skill Data: An Alternative Method for Refining Prerequisite Skill Structures. In Proceedings of the Sixth International Conference on Learning Analytics and Knowledge (pp. 469–473). New York: ACM.

Feng, M. (2014). Towards uncovering the mysterious world of math homework. In Proceedings of the 7th International Conference on Educational Data Mining (pp. 425–426).

Feng, M., and Roschelle, J. (2016). Predicting student's standardized test score using online homework. In Proceedings of the Third (2016) ACM Conference on Learning @ Scale (pp. 213–216).

Feng, M., Roschelle, J., Bhanot, R., and Mason, C. (2016). Investigating gender differences on homework in middle school mathematics. In Proceedings of the 9th International Conference on Educational Data Mining (pp. 364–369).

Feng, M., Roschelle, J., Heffernan, N., Fairman, J., and Murphy, R. (2014). Implementation of an intelligent tutoring system for online homework support at large scale. In Proceedings of the 12th International Conference on Intelligent Tutoring Systems (pp. 561–566).

Feng, M., Roschelle, J., Murphy, R., and Heffernan, N. (2014). Using analytics for improving implementation fidelity in a large scale efficacy trial. In Learning and Becoming in Practice: The International Conference of the Learning Sciences (ICLS) (pp. 527–534).

Kehrer, P., Kelly, K.M., and Heffernan, N.T. (2013). Does Immediate Feedback While Doing Homework Improve Learning? In Proceedings of the Twenty-Sixth International Florida Artificial Intelligence Research Society Conference (pp. 542–545).

Lang, C., Heffernan, N., Ostrow, K., and Wang, Y. (2015). The Impact of Incorporating Student Confidence Items Into an Intelligent Tutor: A Randomized Controlled Trial. In Proceedings of the 8th International Conference on Educational Data Mining (pp. 144–149). Madrid, Spain: Educational Data Mining.

Ostrow, K., Donnelly, C., Adjei, S., and Heffernan, N. (2015). Improving Student Modeling Through Partial Credit and Problem Difficulty. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (pp. 11–20). New York, NY: ACM.

Ostrow, K., Donnelly, C., and Heffernan, N. (2015). Optimizing Partial Credit Algorithms to Predict Student Performance. In Proceedings of the 8th International Conference on Educational Data Mining (pp. 404–407).

Ostrow, K., Heffernan, N., Heffernan, C., and Peterson, Z. (2015). Blocking vs. Interleaving: Examining Single-Session Effects Within Middle School Math Homework. In Proceedings of the 17th International Conference, AIED 2015 (pp. 338–347). Switzerland: Springer International Publishing.

Ostrow, K.S., Selent, D., Wang, Y., Van Inwegen, E.G., Heffernan, N. T., and Williams, J.J. (2016). The Assessment of Learning Infrastructure (ALI): The Theory, Practice, and Scalability of Automated Assessment. In Proceedings of the Sixth International Conference on Learning Analytics and Knowledge (pp. 279–288). New York, NY: ACM.

Selent, D., Patikorn, T., and Heffernan, N. (2016). ASSISTments Dataset from Multiple Randomized Controlled Experiments. In Proceedings of the Third ACM Conference on Learning at Scale (pp. 181–184). Edinburgh, UK: ACM.

Wang, Y., Heffernan, N.T., and Heffernan, C. (2015). Towards Better Affect Detectors: Effect of Missing Skills, Class Features and Common Wrong Answers. In Proceedings of the Fifth International Conference on Learning Analytics And Knowledge (pp. 31–35). New York: ACM.

Wang, Y., Ostrow, K., Beck, J., and Heffernan, N. (2016). Enhancing the Efficiency and Reliability of Group Differentiation Through Partial Credit. In Proceedings of the Sixth International Conference on Learning Analytics and Knowledge (pp. 454–458). New York: ACM.