
IES Grant

Title: Intelligent Diagnostic Assessment Platform (i-DAP) for High School Statistics Education
Center: NCER Year: 2018
Principal Investigator: Cheng, Ying Awardee: University of Notre Dame
Program: Education Technology
Award Period: 4 years (09/01/2018-06/30/2022) Award Amount: $1,399,950
Type: Development and Innovation Award Number: R305A180269

Co-Principal Investigators: Liu, Cheng; Nabrzyski, Jaroslaw; Kaminski, Jennifer


Purpose: In this project, researchers will develop an online formative assessment product for non-advanced placement (non-AP) statistics courses that provides real-time feedback to high school students and teachers, and adjusts in difficulty to meet the level of individual students. There is a growing demand for quality statistics education in high school, and for technologies that personalize learning opportunities for students at different skill levels.

Project Activities: Through prior research, the research team developed an item bank and an adaptive testing product for AP statistics that generated formative assessment results for individual students and hints to support learning. In this project, the researchers will develop a formative assessment product for non-AP statistics. The stages of the project will include content development and software prototyping, rounds of data collection to inform iterative refinements until a final version is produced, and measuring the impact on student engagement and learning.

Products: Researchers will fully develop a formative assessment product, the Intelligent Diagnostic Assessment Platform (i-DAP), for high school non-AP statistics; produce peer-reviewed publications; and provide data and resources to inform future research.


Setting: The research will take place in five high schools located in suburban and mid-sized towns in Indiana and Ohio.

Sample: The research will include 400 high school students from five schools.

Intervention: Through prior research, the team developed an item bank and a computerized adaptive testing system for AP statistics. The system generated formative assessment results for individual students along with hints to address learning needs. In this project, the team will apply these insights by building a system for non-AP high school statistics classes with assignments mapped to Common Core standards for statistics and probability. After assignments are completed, the product will generate formative feedback on mastery as well as on student deficiencies. Teachers will have access to individual diagnostic reports and group-level reports.
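The abstract does not describe i-DAP's internals, but computerized adaptive testing systems like the one described typically select each next item to be maximally informative at the student's current ability estimate. The sketch below illustrates that general idea under a two-parameter logistic (2PL) IRT model; the item parameters, function names, and selection rule are illustrative assumptions, not details of i-DAP.

```python
import math

# Illustrative 2PL adaptive item selection; the item bank and the
# maximum-information rule are assumptions, not i-DAP's actual design.

def prob_correct(theta, a, b):
    """2PL probability that a student with ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of an item at ability theta."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """Pick the unadministered item most informative at the current ability."""
    pool = [i for i in range(len(items)) if i not in administered]
    return max(pool, key=lambda i: fisher_info(theta, *items[i]))

# (discrimination, difficulty) pairs for a toy item bank
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
theta = 0.0  # provisional ability estimate for one student
choice = next_item(theta, items, administered=set())
```

In a full system the ability estimate would be updated after each response and the loop repeated until a stopping rule (e.g., a precision threshold) is met.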

Research Design and Methods: The team will first develop content, including assignments, comprehensive assessments, and learning materials and modules. Next, the team will develop the assessment delivery engine. Once development is complete, the team will evaluate the usability and feasibility of the software, validate the measures, and refine the product based on feedback from teachers and students. The researchers will then use a randomized controlled trial to examine whether the product shows promise for improving student learning of statistics and probability, randomly assigning half of the students to use the product and the other half to a control condition. In the final year of the study, the researchers will complete analyses, dissemination activities, and materials for implementation.

Control Condition: The treatment group will take all the assignments, and receive diagnostic feedback and personalized learning materials. The control group will take all the assignments and will receive a summative score for their performance.
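The randomized design above splits the sample evenly between the two conditions. A minimal sketch of such an assignment procedure, with hypothetical student IDs and a fixed seed for reproducibility (the actual study's randomization procedure is not specified in the abstract):

```python
import random

def assign(students, seed=42):
    """Randomly split a roster into equal treatment and control halves.

    The seed and the simple 50/50 shuffle-split are illustrative
    assumptions; the study's actual randomization may differ
    (e.g., stratified by school or classroom).
    """
    rng = random.Random(seed)
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# 400 hypothetical student IDs, matching the sample size in the abstract
groups = assign(range(400))
```

In practice, randomization for a school-based trial is often stratified by school or classroom so that conditions are balanced within each site.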

Key Measures: The researchers will use think-alouds and user-provided surveys to generate feedback on usability, feasibility, and fidelity of implementation. For the pilot study, researchers will employ a combination of standardized measures including a comprehensive learning assessment that covers common core standards, researcher developed content knowledge assessments and student engagement, and data from the student-user logs to monitor individual trajectories on problem sets.

Data Analytic Strategies: The research team will employ mediation models to isolate the effect of the components of the platform on student learning.