
IES Grant

Title: DAT-CROSS: Developing Assessments and Tools to Support the Teaching and Learning of Science Crosscutting Concepts
Center: NCER Year: 2017
Principal Investigator: Hmelo-Silver, Cindy Awardee: Indiana University
Program: Science, Technology, Engineering, and Mathematics (STEM) Education
Award Period: 3 years (09/29/2017 – 09/28/2020) Award Amount: $1,395,722
Type: Measurement Award Number: R305A170634
Description:

Previous Award Number: R305A170456
Previous Awardee: Purdue University

Co-Principal Investigator: Lei Liu (ETS)

Purpose: The purpose of this project is to develop and validate computer-based summative assessments that focus on the science crosscutting concepts of systems and system models and structure and function. Consistent with the Next Generation Science Standards (NGSS), the assessments developed in this project will integrate these crosscutting concepts with two disciplinary core ideas (e.g., ecosystems or Earth systems) and with science practices (e.g., developing and using models or engaging in argumentation). At the same time, the team will hypothesize and begin to validate learning progressions for the targeted crosscutting concepts of systems and system models, and structure and function.

Project Activities: The researchers will develop and validate computer-based summative assessments that focus on the crosscutting concepts of systems and system models and structure and function, and integrate them into disciplinary core ideas (e.g. ecosystems or Earth systems), as well as science practices (e.g. developing and using models or engaging in argumentation). At the same time, the team will hypothesize and validate learning progressions for the targeted crosscutting concepts of systems and system models, and structure and function. The learning progressions will help guide the analysis of student responses and provide a framework for the teaching and learning of crosscutting concepts.

Products: The products from this project include summative assessments of crosscutting concepts, as well as validated learning progressions on students' mastery of these concepts to help provide a framework for the teaching and learning of science concepts. The research team will also produce peer reviewed publications.

Structured Abstract

Setting: The study will take place in diverse middle and high school classrooms in urban, suburban, and rural schools across the Midwest and East Coast of the U.S.

Sample: For smaller scale usability studies, 15 students each in grades 6, 8, and 10 from suburban areas of Indiana and New Jersey will participate. For the pilot studies in schools, 200 students equally divided among grades 6, 8, and 10 in urban and suburban schools in Indiana will participate. For larger field studies, the research team will administer assessments to 600 students in rural, suburban, and urban schools across the Midwest and East Coast.

Intervention/Assessment: The researchers will develop computer-based tasks with real world scenarios that have three parts: (1) an introduction to the scenario, (2) an exploration of a virtual simulated system (e.g., an ecosystem or an atmospheric system), and (3) construction of a model or argument using data generated from the simulation. Assessments on a single disciplinary core idea will take no more than 15 to 20 minutes in order to make the assessments practical to implement. Parallel forms of the assessment will be developed for each discipline (e.g., earth and life science) and core idea. For life science, there will be 4 different 15–20 minute tasks in total. For earth science, there will be similar parallel forms, but students will explore different earth systems (e.g., the hydrosphere and the geosphere). The team will build a total of 2 earth science tasks, which will take a form and structure similar to the life science tasks. In total, at least 6 new science tasks will be developed. The tasks will be designed to measure three integrated dimensions of science knowledge, consistent with the Next Generation Science Standards: core ideas, science practices, and crosscutting concepts.

Research Design and Methods: The development and testing of the assessments will involve the following phases to ensure validity and reliability: 1) storyboarding, 2) panel review, 3) usability testing, 4) pilot testing, and 5) field studies. During the storyboarding process, the team will design and specify every screen presented to students in a simulation. The researchers will discuss the content and structure of the task with computer programmers, and seek feedback from external reviewers. An external panel of six experts will review the structure, language, directions, navigation, and potential student responses of the assessments. The panel will also review the learning progression documents for appropriateness and plausibility.

The team will carry out usability testing to evaluate the sentence structure, language, task directions, and user interface with students as they interact with the programmed tasks. In this phase, the team will conduct think-aloud studies with 15 students each in grades 6, 8, and 10.

To explore the working functionality of the tasks and to begin the initial validation of scoring rubrics and learning progressions in schools, the team will deploy fully programmed tasks in 4 classes each at grades 6, 8, and 10. Responses from 200 students in grades 6, 8, and 10 will be collected. During this pilot test, the team will administer the assessment over two 60-minute administrations. The researchers will also conduct cognitive interviews using semi-structured protocols with 10 students each from grades 6, 8, and 10 who completed the assessments. In addition, teacher surveys will be administered to obtain formative feedback on the assessments.

Finally, the team will gather more data related to the hypothesized learning progressions during the field test, and will perform more rigorous statistical analyses of student responses to confirm validity and reliability of assessment tasks. The team plans to collect a total of 600 student responses, with 200 responses each at grades 6, 8, and 10. Four tasks will be administered to each student. The field test will consist of two 60-minute administrations, with two tasks presented in each.

Control Condition: There is no control group for this study.

Key Measures: Measures to assess the usability, validity, and reliability of the assessments include teacher and student interviews, as well as student responses to the assessment within the system.

Data Analytic Strategy: The research team will use item response theory based psychometric models in the quantitative analyses of student responses, including polytomous Rasch family models when validating the learning progressions.
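As an illustration of the kind of polytomous Rasch family model mentioned above (a sketch for context only, not part of the project's stated methods or software), the partial credit model gives the probability of each score category for a partial-credit item as a function of a student's ability and the item's step difficulties. The function name and parameter values below are hypothetical.

```python
import math

def pcm_probabilities(theta, deltas):
    """Partial credit model (polytomous Rasch family).

    Returns the probability of each score category k = 0..M for a
    person with ability `theta` on an item whose step difficulties
    are `deltas` = [delta_1, ..., delta_M] (logit scale).
    """
    # Cumulative sums of (theta - delta_j); category 0 contributes 0.
    cum = [0.0]
    for d in deltas:
        cum.append(cum[-1] + (theta - d))
    # Normalize over all categories to get probabilities.
    denom = sum(math.exp(c) for c in cum)
    return [math.exp(c) / denom for c in cum]

# With one step (a dichotomous item), this reduces to the ordinary
# Rasch model: ability equal to difficulty gives a 50/50 split.
print(pcm_probabilities(0.0, [0.0]))  # [0.5, 0.5]
```

Fitting such a model to the pilot and field-test responses would locate students and item steps on a common logit scale, which is one standard way to check whether empirical score categories order themselves consistently with a hypothesized learning progression.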

