Grant Closed

Intelligent Diagnostic Assessment Platform (i-DAP) for High School Statistics Education

NCER
Program: Education Research Grants
Program topic(s): Education Technology
Award amount: $1,399,950
Principal investigator: Ying (Alison) Cheng
Awardee:
University of Notre Dame
Year: 2018
Award period: 5 years 10 months (09/01/2018 - 06/30/2024)
Project type:
Development and Innovation
Award number: R305A180269

Purpose

In this project, researchers will develop an online formative assessment product for non-advanced placement (non-AP) statistics courses that provides real-time feedback to high school students and teachers, and adjusts in difficulty to meet the level of individual students. There is a growing demand for quality statistics education in high school, and for technologies that personalize learning opportunities for students at different skill levels.

Project Activities

Through prior research, the research team developed an item bank and an adaptive testing product for AP statistics that generated formative assessment results for individual students and hints to support learning. In this project, the researchers will develop a formative assessment product for non-AP statistics. The stages of the project will include content development and software prototyping, rounds of data collection to inform iterative refinements until a final version is produced, and a study measuring the product's impact on student engagement and learning.

Structured Abstract

Setting

The research will take place in five high schools located in two suburban, mid-sized towns in Indiana and Ohio.

Sample

The research will include 400 high school students from five schools.

Intervention

Through prior research, the team developed an item bank and a computerized adaptive testing system for AP statistics that generated formative assessment results for individual students along with hints to support learning. In this project, the team will apply these insights by building a system for non-AP high school statistics classes, with assignments mapped to the Common Core standards for statistics and probability. After students complete an assignment, the product will generate formative feedback on both mastery and remaining deficiencies. Teachers will have access to individual diagnostic reports and group-level reports.
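Computerized adaptive testing systems like the one described above typically choose each next question to be maximally informative at the student's current ability estimate. As an illustrative sketch only (not the project's actual algorithm), the following shows maximum-information item selection under a two-parameter logistic IRT model; the item bank and its parameters are invented for the example:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b) at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Return the index of the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *item_bank[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
bank = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.5), (1.5, 0.2)]

# Item 1 was already administered; pick the most informative remaining item
# for a student whose current ability estimate is theta = 0.
print(select_next_item(0.0, bank, administered={1}))  # -> 3
```

Here the high-discrimination item near the student's ability (item 3) wins; after each response, theta would be re-estimated and the selection repeated.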

Research design and methods

The team will first develop content, including assignments, comprehensive assessments, learning materials, and modules. Next, the team will develop the assessment delivery engine. Once development is complete, the team will evaluate the usability and feasibility of the software, validate the measures, and refine the product based on feedback from teachers and students. The researchers will then use a randomized controlled trial to examine whether the product shows promise for improving student learning of statistics and probability, randomly assigning half of the students to use the product and half not to. In the final year of the study, the researchers will complete analyses, disseminate findings, and prepare materials for implementation.
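The random assignment step can be sketched in a few lines. This is a minimal illustration assuming a simple seeded shuffle over anonymized student IDs; the IDs, group labels, and seed are invented, not the project's procedure:

```python
import random

def assign_conditions(student_ids, seed=2018):
    """Randomly split students into equal-sized treatment and control groups."""
    rng = random.Random(seed)           # fixed seed makes the assignment reproducible
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": sorted(shuffled[:half]),
            "control": sorted(shuffled[half:])}

# 400 students, as in the planned sample.
groups = assign_conditions(range(400))
print(len(groups["treatment"]), len(groups["control"]))  # -> 200 200
```

In practice, trials of this kind often randomize within school or classroom strata rather than over the pooled sample, which a real implementation would need to add.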

Control condition

Students in the treatment group will complete all assignments and receive diagnostic feedback and personalized learning materials. Students in the control group will complete the same assignments but receive only a summative score for their performance.

Key measures

The researchers will use think-alouds and user surveys to gather feedback on usability, feasibility, and fidelity of implementation. For the pilot study, the researchers will employ a combination of measures, including a comprehensive learning assessment aligned with the Common Core standards, researcher-developed assessments of content knowledge and student engagement, and student-user log data to monitor individual trajectories on problem sets.

Data analytic strategy

The research team will employ mediation models to isolate the effect of the components of the platform on student learning.
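A mediation model of this kind decomposes the platform's effect into an a-path (treatment to a mediator, such as engagement), a b-path (mediator to the learning outcome, controlling for treatment), and their product as the indirect effect. A minimal sketch on simulated data follows; all variable names, effect sizes, and noise levels are invented for illustration and are not the project's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.integers(0, 2, n).astype(float)        # treatment indicator (0 = control, 1 = i-DAP)
m = 0.5 * x + rng.normal(0, 0.5, n)            # hypothetical mediator, e.g. engagement
y = 0.4 * m + 0.1 * x + rng.normal(0, 0.5, n)  # hypothetical outcome, e.g. learning gain

def ols(predictors, outcome):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(outcome)), *predictors])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

a = ols([x], m)[1]        # a-path: treatment -> mediator
b = ols([x, m], y)[2]     # b-path: mediator -> outcome, controlling for treatment
indirect = a * b          # product-of-coefficients estimate of the mediated effect
print(round(indirect, 3))
```

A real analysis would add standard errors for the indirect effect (e.g., via bootstrapping) rather than report the point estimate alone.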

People and institutions involved

IES program contact(s)

Jennifer Schellinger

Project contributors

Jennifer Kaminski

Co-principal investigator
Wright State University

Cheng Liu

Co-principal investigator

Jaroslaw Nabrzyski

Co-principal investigator

Products and publications

Researchers will produce a fully developed formative assessment product, the Intelligent Diagnostic Assessment Platform (i-DAP), for high school non-AP statistics; produce peer-reviewed publications; and provide data and resources to inform future research.

Publications:

Grantee publications can be found in ERIC.

Denner, M., Xu, X., Ober, T. M., Pei, B., & Cheng, Y. (2024). Predicting Response Latencies on Test Questions Based on Features of the Questions. In Machine Learning in Educational Sciences: Approaches, Applications and Advances (pp. 113-128). Singapore: Springer Nature Singapore.

Kaminski, J. (2024). Preservice teachers’ understanding of mathematical equivalence. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 46).

Le, A. T., Ober, T. M., & Cheng, Y. (2024). Validation of a procrastination scale: A multimethod–multimodal approach. Translational Issues in Psychological Science.

Lu, Y., Ober, T. M., Liu, C., & Cheng, Y. (2022, July). Application of neighborhood components analysis to process and survey data to predict student learning of statistics. In 2022 International Conference on Advanced Learning Technologies (ICALT) (pp. 147-151). IEEE.

Ober, T. M., Carter, M. F., Coggins, M. R., Filonczuk, A., Kim, C., Hong, M. R., & Cheng, Y. (2022). Adaptation to remote teaching during spring 2020 amidst COVID-19: perspectives of advanced placement statistics teachers. Computers in the Schools, 39(4), 342-372.

Ober, T. M., Cheng, Y., Carter, M. F., & Liu, C. (2024). Leveraging performance and feedback‐seeking indicators from a digital learning platform for early prediction of students' learning outcomes. Journal of Computer Assisted Learning, 40(1), 219-240.

Ober, T. M., Cheng, Y., Carter, M. F., & Liu, C. (2023). Disruptiveness of COVID-19: Differences in course engagement, self-appraisal, and learning. AERA Open, 9, 23328584231177967.

Questions about this project?

To answer additional questions about this project or provide feedback, please contact the program officer.


Tags

Education Technology, Mathematics

