Project Activities
Through prior research, the research team developed an item bank and an adaptive testing product for AP statistics that generated formative assessment results for individual students along with hints to support learning. In this project, the researchers will develop a formative assessment product for non-AP statistics. The stages of the project include content development and software prototyping, rounds of data collection to inform iterative refinement until a final version is produced, and a study measuring the product's impact on student engagement and learning.
Structured Abstract
Setting
The research will take place in five high schools located in suburban and mid-sized towns in Indiana and Ohio.
Sample
The research will include 400 high school students from five schools.
Intervention
Through prior research, the team developed an item bank and a computerized adaptive testing system for AP statistics that generated formative assessment results for individual students and hints to address their learning needs. In this project, the team will apply these insights to build a system for non-AP high school statistics classes, with assignments mapped to the Common Core standards for statistics and probability. After students complete assignments, the product will generate formative feedback on mastery as well as on areas of deficiency. Teachers will have access to individual diagnostic reports and group-level reports.
Research design and methods
The team will first develop content, including assignments, comprehensive assessments, and learning materials and modules. Next, the team will develop the assessment delivery engine. Once a working prototype is available, the team will evaluate the usability and feasibility of the software, validate the measures, and refine the product based on feedback from teachers and students. After development is complete, the researchers will use a randomized controlled trial to examine whether the product shows promise for improving student learning of statistics and probability, randomly assigning half of the students to use the product and half to a control condition. In the final year of the study, the researchers will complete analyses, dissemination, and materials to support implementation.
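The random assignment of students to the two conditions can be sketched as follows. This is a minimal illustration only; the student IDs, seed, and exact 50/50 split are assumptions for the sketch, not the team's actual procedure:

```python
import random

# Hypothetical student IDs; the real study would use its own roster.
students = [f"S{i:03d}" for i in range(400)]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
shuffled = students[:]
rng.shuffle(shuffled)

# Half the students use the product (treatment); half do not (control).
treatment, control = shuffled[:200], shuffled[200:]
```

In practice, randomization for a school-based trial is often stratified by classroom or school rather than done over a single pooled roster.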
Control condition
The treatment group will complete all the assignments and receive diagnostic feedback and personalized learning materials. The control group will complete the same assignments but receive only a summative score for their performance.
Key measures
The researchers will use think-alouds and user surveys to generate feedback on usability, feasibility, and fidelity of implementation. For the pilot study, researchers will employ a combination of measures, including a comprehensive learning assessment covering the Common Core standards, researcher-developed assessments of content knowledge and student engagement, and data from student-user logs to monitor individual trajectories on problem sets.
Data analytic strategy
The research team will employ mediation models to isolate the effects of individual components of the platform on student learning.
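A mediation analysis of this kind can be sketched with the product-of-coefficients approach. Everything below is a hedged illustration on simulated data: the variable names (treatment, an engagement mediator, a learning outcome) and effect sizes are assumptions for the sketch, not the team's actual model or results:

```python
import numpy as np

def ols(y, X):
    """Least-squares coefficients for y ~ intercept + X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 400  # matches the planned sample size
treatment = rng.integers(0, 2, n).astype(float)          # 0/1 assignment
engagement = 0.5 * treatment + rng.normal(size=n)        # simulated mediator
outcome = 0.4 * engagement + 0.2 * treatment + rng.normal(size=n)

# Path a: treatment -> mediator
a = ols(engagement, treatment)[1]
# Path b: mediator -> outcome, controlling for treatment
b = ols(outcome, np.column_stack([engagement, treatment]))[1]

indirect = a * b  # indirect (mediated) effect of treatment on learning
print(f"estimated indirect effect: {indirect:.2f}")
```

In applied work, the indirect effect's uncertainty is typically assessed with bootstrapped confidence intervals rather than a point estimate alone.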
People and institutions involved
IES program contact(s)
Project contributors
Products and publications
Researchers will produce a fully developed formative assessment product, the Intelligent Diagnostic Assessment Platform (i-DAP), for high school non-AP statistics; produce peer-reviewed publications; and provide data and resources to inform future research.
Publications:
Grantee publications can be found in ERIC.
Denner, M., Xu, X., Ober, T. M., Pei, B., & Cheng, Y. (2024). Predicting Response Latencies on Test Questions Based on Features of the Questions. In Machine Learning in Educational Sciences: Approaches, Applications and Advances (pp. 113-128). Singapore: Springer Nature Singapore.
Kaminski, J. (2024). Preservice teachers’ understanding of mathematical equivalence. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 46).
Le, A. T., Ober, T. M., & Cheng, Y. (2024). Validation of a procrastination scale: A multimethod–multimodal approach. Translational Issues in Psychological Science.
Lu, Y., Ober, T. M., Liu, C., & Cheng, Y. (2022, July). Application of neighborhood components analysis to process and survey data to predict student learning of statistics. In 2022 International Conference on Advanced Learning Technologies (ICALT) (pp. 147-151). IEEE.
Ober, T. M., Carter, M. F., Coggins, M. R., Filonczuk, A., Kim, C., Hong, M. R., & Cheng, Y. (2022). Adaptation to remote teaching during spring 2020 amidst COVID-19: perspectives of advanced placement statistics teachers. Computers in the Schools, 39(4), 342-372.
Ober, T. M., Cheng, Y., Carter, M. F., & Liu, C. (2023). Disruptiveness of COVID-19: Differences in course engagement, self-appraisal, and learning. AERA Open, 9, 23328584231177967.
Ober, T. M., Cheng, Y., Carter, M. F., & Liu, C. (2024). Leveraging performance and feedback‐seeking indicators from a digital learning platform for early prediction of students' learning outcomes. Journal of Computer Assisted Learning, 40(1), 219-240.
Questions about this project?
For additional questions about this project or to provide feedback, please contact the program officer.