
IES Grant

Title: Developing and Testing Multi-Component Computer-Based Assessment Tasks for the Next Generation Science Standards
Center: NCER Year: 2016
Principal Investigator: Wilson, Mark Awardee: University of California, Berkeley
Program: Science, Technology, Engineering, and Mathematics (STEM) Education
Award Period: 4 years (7/1/2016-6/30/2020) Award Amount: $1,400,000
Type: Measurement Award Number: R305A160320

Co-Principal Investigator: Jonathan Osborne (Stanford University)

Purpose: The purpose of this project is to develop, revise, and validate online science assessments of scientific argumentation in the domains of Structure of Matter and Ecology. Although student facility with science practices has been studied, there is limited research on how these practices can be supported in the classroom and, in particular, on how to readily assess students' ability to engage in scientific practice in an online environment. Argumentation is the practice at the core of the model of science presented in the National Research Council's Framework for K–12 Science Education, and it is therefore the focus of the current project's assessments in these two domains.

Project Activities: In a prior IES grant (R305A100692), the researchers developed and tested a learning progression for scientific argumentation, using a set of paper-and-pencil items to assess the constructs, and analyzed the evidence to refine and confirm the model. In this project, the researchers will further develop and validate materials to assess scientific argumentation, a central feature of the Next Generation Science Standards, and further refine the learning progressions for Structure of Matter and Ecology. They will convert the existing paper-and-pencil assessments to online assessments that, wherever possible, can be scored by computer, allowing easier and wider adoption in science classrooms.

Products: The products include an online assessment measuring students' scientific argumentation skills within the domains of Structure of Matter and Ecology, and peer-reviewed publications.

Structured Abstract

Setting: The study will take place in an urban school district in California.

Sample: The sample will include teachers and students from Grades 8 to 10. The students are from a diverse, urban school district in which 27% of students speak English as a second language and 58% receive free or reduced-price lunch.

Assessment: The researchers will use the BEAR Assessment System and its online companion (BASS) to develop and refine assessment materials. The BEAR Assessment System is an integrated approach to developing assessments that provides meaningful interpretations of student work relative to the cognitive and developmental goals of the domain. It is grounded in four building blocks (progress map, item design, outcome space, and measurement model) that guide assessment development. In the current project, the researchers will iterate through these four building blocks to gather high-quality empirical evidence for improving the assessments.

The researchers will take advantage of computerized delivery to create new item types and item templates that combine measurement of student competence in scientific argumentation with the disciplinary core ideas in Structure of Matter and Ecology. Researchers will use the computer-scored items as formative assessments that provide instant feedback to students and teachers, as well as for summative assessment.

Research Design and Methods: In Year 1, the researchers will work with teachers to modify and develop additional items to address any possible gaps in coverage of the learning progressions. An initial pilot of approximately 30 students using a think-aloud interview format will be conducted. By the end of Year 1, the researchers will pilot-test a small set of assessment tasks in electronic format.

In Year 2, revisions will be made to the Year 1 assessment items and materials, and further testing will occur with approximately 300 students. An additional round of data will be collected in Year 2 from a larger group of 1,000-1,500 students, who will take all the available items online under test-like conditions. By the end of Year 2, the tasks that began as paper-and-pencil items will be operational in computer-based form, along with other tasks developed from templates. The researchers will refine methods for computer-based scoring of the items that have been transferred from paper-and-pencil format.

In Year 3, based on feedback from the project's advisory board, the researchers will conduct additional analyses if needed and, if necessary, administer a final trial of the same scope as the Year 2 trial.

Control Condition: There is no control condition for this project.

Key Measures: Key measures include students' performance on the computer-based assessment items, interviews with students and teachers, and samples of classroom instructional materials.

Data Analytic Strategy: The researchers will use a unidimensional item response theory (IRT) model to investigate the quality of the individual constructs. In addition, the team will use the multidimensional random coefficients multinomial logit model (and other models, depending on the data) to investigate the complex relationships between the constructs within each learning progression.
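To illustrate the kind of unidimensional IRT model referred to above, the sketch below implements the Rasch (one-parameter logistic) model, a common unidimensional IRT model, in plain Python. This is an illustration only, not the project's actual estimation code; the item difficulties and response pattern are hypothetical, and operational analyses would use dedicated IRT software rather than a grid search.

```python
import math

def rasch_probability(theta, b):
    """Probability that a student with ability theta answers an item
    of difficulty b correctly, under the Rasch (1PL) IRT model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, difficulties, responses):
    """Log-likelihood of one student's response pattern
    (1 = correct, 0 = incorrect) across several items."""
    ll = 0.0
    for b, x in zip(difficulties, responses):
        p = rasch_probability(theta, b)
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    return ll

def estimate_ability(difficulties, responses, lo=-4.0, hi=4.0, steps=800):
    """Maximum-likelihood ability estimate via a simple grid search
    (illustrative only; real analyses use proper optimizers)."""
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return max(grid, key=lambda t: log_likelihood(t, difficulties, responses))

# A student whose ability equals an item's difficulty has a 50% chance
# of answering it correctly.
print(round(rasch_probability(0.0, 0.0), 2))  # 0.5

# Hypothetical three-item response pattern: correct, correct, incorrect.
theta_hat = estimate_ability([-1.0, 0.0, 1.0], [1, 1, 0])
```

The multidimensional random coefficients multinomial logit model the project plans to use generalizes this single-ability model to several correlated constructs at once, which is what allows the relationships within each learning progression to be examined.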

Examples of the online items for scientific argumentation developed with funding from IES grant Learning Progressions in Middle School Science Instruction and Assessment (R305A100692) are available at