Student Reasoning Patterns in Science (SPIN-Science)

NCER
Program: Education Research Grants
Program topic(s): Cognition and Student Learning
Award amount: $1,999,682
Principal investigator: Lei Liu
Awardee: Educational Testing Service (ETS)
Year: 2024
Award period: 4 years (07/01/2024 - 06/30/2028)
Project type: Measurement
Award number: R305A240356

Purpose

In this project, the researchers will collaboratively develop, refine, and validate an artificial intelligence (AI) supported classroom assessment tool to measure middle school students' reasoning patterns as they engage in the practice of scientific argumentation about ecosystem phenomena. The AI-supported assessments will measure the key components of scientific argumentation and capture the variety of reasoning patterns students display when they engage in argumentation. The measure will embed AI models in classroom assessments to provide real-time diagnosis of students' reasoning patterns and immediate feedback. The project aims to address three needs in the field: (1) a validated means of assessing students' scientific argumentation ability since the adoption of the Next Generation Science Standards, (2) tools to measure English learners' (ELs) language use in reasoning and scientific argumentation, and (3) an instrument that measures and tracks in real time how well students make use of classroom instruction to develop argumentation ability over extended periods.

Project Activities

The researchers will iteratively develop artificial intelligence (AI) supported classroom assessments and refine them through a usability study and two rounds of classroom studies (a pilot study and a field study). These assessments will delineate the various reasoning patterns in students' arguments. The researchers will also expand the AI models to capture intended meaning across a broad range of linguistic features, including those of ELs, when students are engaged in scientific argumentation activities. The team will conduct a series of validation studies to investigate the cognitive, inferential, and instructional validity of the AI-supported assessments of student argumentation.

Structured Abstract

Setting

The research will take place in middle schools in California, South Carolina, and New Jersey. The researchers will select schools with significant English learner (EL) student populations to participate in the pilot and field studies.

Sample

Approximately 60 grade 6 to 8 science classrooms will participate in the pilot and field classroom studies. Participating schools will have at least 25 percent minority students and at least 25 percent low-income students.

Assessment

The assessment developed in this project will measure students' scientific argumentation competence, applying natural language processing analysis to measure key argumentation components. An evidence-centered design approach will be applied to ensure the validity of the assessments.
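As a hedged illustration of the kind of natural language processing analysis described above, the sketch below trains a toy supervised text classifier to tag sentences with argumentation components. The sentences, labels, and model choice (TF-IDF features feeding logistic regression, a common baseline) are illustrative assumptions; the abstract does not specify the project's actual NLP models.

```python
# Toy sketch: tagging argumentation components in student writing via
# supervised text classification. All sentences, labels, and the
# TF-IDF + logistic regression baseline are illustrative assumptions,
# not the project's actual models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated sentences (far too few to train a real model).
sentences = [
    "The fish population dropped because the algae died off.",
    "Our data table shows oxygen levels fell after week two.",
    "Some might say temperature caused it, but the readings were stable.",
]
labels = ["claim", "evidence", "rebuttal"]

# Fit TF-IDF features and a linear classifier in one pipeline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# Predict the component label for a new student sentence.
print(model.predict(["The oxygen data shows levels fell after week two."]))
```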

Research design and methods

The researchers will develop the assessment tool through two parallel strands of iterative development, feedback, and refinement. The first strand focuses on developing and refining the AI-based tool; the second focuses on developing assessment tasks that allow students to demonstrate their argumentation skills in science practices. The researchers will first collect cognitive validity evidence in a cognitive lab study with 50 middle school students (including 20 ELs), gathering user-experience data and conducting student and teacher interviews to evaluate the design of the intervention, explore how well the AI-based tool classifies reasoning patterns, and understand how students respond to the AI feedback. To investigate inferential validity, the researchers will conduct two rounds of classroom studies (a pilot study and a field study) to explore how teachers and students use the tool and to determine how reasoning patterns change when students engage with tasks that include AI feedback tools and teacher dashboards. The pilot study will include 2 or 3 middle school teachers and approximately 200 students (including at least 50 ELs); the field study will include approximately 18 science teachers and 1,000 students. To examine instructional validity, the researchers will conduct observations in selected classes from the pilot and field studies to investigate how teachers use the AI-supported assessments and how they use the student data these assessments generate to inform instruction that facilitates students' scientific argumentation. To triangulate with the classroom observation data, the researchers will interview selected science and EL teachers to examine how teachers perceive and use the AI-supported assessments to support their students' scientific argumentation learning, and how well differentiated instructional decisions and actions are supported by these assessment tasks.

Control condition

Due to the nature of the research design, there is no control condition.

Key measures

The AI-supported assessments will measure the key components of scientific argumentation, including claims, grounds or evidence, and rebuttals, and will capture the varied reasoning patterns students use when engaging in argumentation in the context of making sense of ecosystem phenomena. The tool will assess students' constructed responses, using items from the California Science Test (CAST) database, to categorize potential student reasoning patterns as they make sense of specific ecosystem phenomena.

Data analytic strategy

The researchers will use standard natural language processing and psychometric methods to validate the AI models (predictive machine learning models validated against human-annotated data). For the usability study, they will transcribe and code students' think-aloud data for both construct-irrelevant and construct-relevant thinking (here, thinking focused on the components of scientific argumentation). They will use qualitative analysis of the rich interview data to investigate whether the reasoning patterns classified by the AI-supported assessment reflect how students actually reason through the practice of scientific argumentation. To investigate whether the argumentation assessment contextualized in the ecosystem domain is unidimensional, they will perform exploratory factor analysis to evaluate the dimensionality of the assessment. In addition, they will use Rasch models to examine the measurement properties of the assessments. Finally, they will use a multidimensional item response theory framework, the multidimensional random coefficient multinomial logit (MRCML) framework, to investigate the construct validity of the argumentation assessment items.
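As one concrete illustration of validating predictive models against human-annotated data, the sketch below computes exact agreement and quadratic-weighted Cohen's kappa between machine-assigned and human-assigned scores, two statistics commonly reported for automated scoring engines. The score scale and the toy data are assumptions for illustration, not project specifics.

```python
# Minimal sketch of human-machine agreement checks for an automated
# scorer, assuming paired ordinal scores on the same student responses.
# The 0-3 scale and all values below are illustrative placeholders.
from sklearn.metrics import accuracy_score, cohen_kappa_score

def agreement_report(human_labels, machine_labels):
    """Compare machine scores against human-annotated gold labels."""
    acc = accuracy_score(human_labels, machine_labels)
    # Quadratic weighting penalizes large disagreements more heavily,
    # which suits ordinal rubric scores.
    qwk = cohen_kappa_score(human_labels, machine_labels, weights="quadratic")
    return {"exact_agreement": acc, "quadratic_weighted_kappa": qwk}

# Hypothetical scores for one argumentation component (e.g., use of
# evidence) on ten student responses.
human = [2, 3, 1, 0, 2, 3, 1, 2, 0, 3]
machine = [2, 3, 1, 1, 2, 2, 1, 2, 0, 3]

print(agreement_report(human, machine))
```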

Cost analysis strategy

The researchers will conduct a cost analysis using the ingredients method to identify the factors contributing to the total and net costs of implementing the AI-based tool, including server costs, device availability, and usage patterns.
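For readers unfamiliar with the ingredients method, it prices every resource needed to implement a program and sums quantity times unit price into a total cost. A minimal sketch follows; the ingredient list, quantities, and prices are hypothetical placeholders, not project figures.

```python
# Minimal sketch of an ingredients-method cost roll-up: total cost is
# the sum over resources of quantity x unit price. All ingredients,
# quantities, and prices below are hypothetical placeholders.
ingredients = [
    # (name, quantity, unit_price_usd)
    ("server hosting (months)", 12, 300.00),
    ("student devices (units)", 60, 250.00),
    ("teacher training (hours)", 40, 50.00),
]

total_cost = sum(qty * price for _, qty, price in ingredients)
print(f"Total implementation cost: ${total_cost:,.2f}")
```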

People and institutions involved

IES program contact(s)

Lara Faust

Education Research Analyst
NCER

Project contributors

Xiaoming Zhai

Co-principal investigator

Yi Song

Co-principal investigator

Dante Cisterna-Alburquerque

Co-principal investigator
Educational Testing Service (ETS)

Products and publications

This project will result in a fully developed and validated AI-supported classroom assessment tool to measure middle school students' reasoning patterns when engaging in the practice of scientific argumentation about ecosystem phenomena. The project will also result in peer-reviewed publications and presentations, as well as additional dissemination products that reach education stakeholders such as practitioners and policymakers.

Publications:

ERIC citations: Find available citations for this award in ERIC.

Questions about this project?

For additional questions about this project or to provide feedback, please contact the program officer.

Tags

Education Technology, K-12 Education, Science, STEM
