IES Grant

Title: Navigating Scientific and Statistical Reasoning in Adolescence
Center: NCER Year: 2017
Principal Investigator: Shah, Priti Awardee: University of Michigan
Program: Cognition and Student Learning
Award Period: 4 years (07/01/2017 – 06/30/2021) Award Amount: $1,399,758
Type: Exploration Award Number: R305A170489
Description:

Purpose: The purpose of this project is to identify promising strategies for teaching middle and high school students to critically evaluate scientific evidence presented in everyday contexts, along with new knowledge about the factors that affect the development of robust scientific evaluation skills. Individuals must learn to distinguish between good science, bad science, and pseudoscience so that they can make well-informed decisions about health, behavior, and policy. A major goal of K-12 science education is to teach students inquiry skills necessary to critically evaluate scientific evidence; however, the inquiry skills learned in the science classroom do not transfer well to everyday scientific reasoning contexts. The outcome of this project will be the identification of promising instructional strategies for improving everyday scientific reasoning for transfer to a variety of contexts.

Project Activities: Six studies will systematically identify effective instructional methods for teaching students to evaluate scientific evidence. Each study compares one or more instructional factors to a control condition that is similar in all other respects. In Years 1 and 2, the research team will conduct three studies focusing on causality bias as a key threat to validity and will seek to identify effective training examples for critically assessing claims about correlational data. In Years 3 and 4, the research team will conduct three studies that combine the most effective factors from the first three studies. In addition, two of the Year 3 and 4 studies will incrementally incorporate the other two target threats to validity (selection bias and overgeneralization error).

Products: Researchers will produce preliminary evidence of potentially promising instructional methods for teaching students to evaluate scientific evidence in a variety of contexts, as well as peer-reviewed publications.

Structured Abstract

Setting: Participating schools are located in urban and suburban areas of Michigan.

Sample: Approximately 480 seventh- through ninth-grade students will participate in the first four studies (120 in each study), 90 in the fifth study, and 60 in the sixth.

Intervention: Due to the exploratory nature of this project, there is no intervention. Instead, the research team will identify instructional methods for teaching students to evaluate scientific evidence. This work will inform the development of interventions designed to improve students' ability to evaluate scientific evidence.

Research Design and Methods: Six studies will systematically identify effective instructional methods for teaching students to evaluate scientific evidence. Each study compares one or more instructional factors to a control condition that is similar in all other respects. In Years 1 and 2, the research team will conduct three studies focusing on causality bias as a key threat to validity and will seek to identify effective training examples for critically assessing claims about correlational data. In Years 3 and 4, the research team will conduct three studies that combine the most effective factors from the first three studies. In addition, two of the Year 3 and 4 studies will incrementally incorporate the other two target threats to validity (selection bias and overgeneralization error). In all studies, participants will complete a pre-test, learn abstract rules for reasoning about threats to validity, evaluate tutorial "media reports" containing scientific findings, and then complete a post-test. In the final two studies, students will also complete a delayed post-test three months after receiving instruction.

Control Condition: Each study compares one or more instructional factor(s) with a control condition similar in all respects. The control condition differs based on the research question being asked.

Key Measures: The key measure is the change in students' performance between pre-test and post-test on a researcher-developed task in which students must evaluate scientific evidence in fictional media articles.

Data Analytic Strategy: Researchers will use analysis of variance and general linear model methods to compare pre- to post-test changes in performance on evidence evaluation tasks for varied instructional and control conditions.
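To illustrate the general shape of this kind of analysis (not the project's actual code, data, or model specification), a minimal Python sketch using simulated data might compare pre- to post-test gain scores across an instructional condition and a control condition with a one-way ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated pre- and post-test scores for two hypothetical groups of
# 60 students each; the gain parameters below are illustrative only.
n = 60
pre_instr = rng.normal(50, 10, n)
post_instr = pre_instr + rng.normal(8, 5, n)   # assumed larger training gain
pre_ctrl = rng.normal(50, 10, n)
post_ctrl = pre_ctrl + rng.normal(2, 5, n)     # assumed smaller gain

# Gain scores (post minus pre) are one simple way to express
# pre- to post-test change for each student.
gain_instr = post_instr - pre_instr
gain_ctrl = post_ctrl - pre_ctrl

# One-way ANOVA on gain scores; with only two groups this is
# equivalent to an independent-samples t-test (F = t squared).
f_stat, p_value = stats.f_oneway(gain_instr, gain_ctrl)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A full general-linear-model analysis would typically also model condition-by-time interactions and covariates, but the gain-score ANOVA above captures the core pre-post comparison the paragraph describes.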
