Title: Designing Contrasting Cases for Inductive Learning
Principal Investigator: Schwartz, Daniel
Awardee: Stanford University
Program: Cognition and Student Learning
Award Period: 3 years (7/1/2014 - 6/30/2017)
Award Amount: $1,367,916
Purpose: The purpose of this project is to develop and test a theory for how to pick sets of problems within science, technology, engineering, and math (STEM) domains that help students understand the quantitative (functional) structure of empirical phenomena. Based on prior research, problem sets that incorporate contrasting cases can foster an appreciation of deep structure, flexibility, transfer, and preparation for future learning. Contrasting cases exhibit systematic variation, which is useful for inducing similarities and differences. Crucially, a vetted theory for how to pick the right contrasting cases is missing, which means that practitioners’ selection of examples is often based on intuition rather than science. In this work, the researchers will systematically explore the factors that determine optimal sets of contrasting cases. They plan to use the findings from this project to inform guidance for instructional designers in picking contrasting cases for many types of STEM instruction that involve examples, including direct instruction, worked examples, analogy, and guided discovery.
Project Activities: Throughout the project, laboratory studies with community college students will examine basic principles of case selection, and classroom studies will determine whether these principles work in middle school science classrooms. Multiple laboratory studies and at least one classroom study will be conducted in each project year. In Year 1, studies will compare learning from the theoretical minimum number of cases needed to induce the functional relation with learning under conditions that violate these principles. In Year 2, studies will compare the theoretical minimum number of cases to contrasting cases that oversample or accentuate dimensional variability. In Year 3, studies will examine the factors of contrasting cases that help students learn second-order functional relations.
Products: The product of this project will be a tested theory for choosing effective contrasting cases for many forms of instruction that involve examples, including direct instruction, worked examples, analogy, and guided discovery. Peer-reviewed publications will also be produced.
Setting: The studies will take place in a laboratory at a California university and in approximately 5-10 middle school classrooms per year at semi-urban schools within two school districts in California.
Sample: For the nine laboratory studies, participants include approximately 600 students enrolled at a two-year community college. For each study, around 60-70 students will participate. For the classroom studies (of which there will be at least 3), participants include approximately 400 7th-8th grade students from schools with high ethnic and SES diversity. For each study, around 125 students will participate.
Intervention: This project will seek to develop a theory for choosing effective contrasting cases. To do this, researchers will identify the factors of contrasting cases that lead to the best student learning outcomes. The findings from this research may lead to the development of instructional interventions for use in STEM classrooms.
Research Design and Methods: The research method is experimental. Over three years, the research team will conduct approximately nine laboratory studies and three classroom studies to test laboratory results in situ. The experimental comparisons will comprise variations in the cases that students receive. Most studies will use a 2x2 between-subjects factorial design to isolate the principles for the design of contrasting cases. Random assignment will be used for laboratory studies, and stratified random assignment will be used for classroom studies (using science and math achievement as the stratification variable). In general, participants in all conditions will receive similar task demands, with the key treatment being the contrasting cases they receive. Afterwards, all participants will complete post-tests to determine what they learned from the contrasting cases.
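The stratified random assignment described above can be sketched as follows. This is an illustrative Python sketch only, not the project's actual procedure; the stratum labels, condition names, and sample sizes are invented for the example.

```python
# Illustrative sketch: stratified random assignment. Students are grouped
# into achievement strata, then randomized to conditions within each
# stratum so every condition receives a near-equal share of each stratum.
import random
from collections import defaultdict

def stratified_assign(students, strata, conditions, seed=0):
    """Assign each student to a condition, randomizing within strata.

    students   -- list of student ids
    strata     -- dict mapping student id -> stratum label
                  (e.g. a math/science achievement band; invented here)
    conditions -- list of condition labels (e.g. the four 2x2 cells)
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for s in students:
        by_stratum[strata[s]].append(s)

    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)
        # Deal shuffled members round-robin into conditions, balancing
        # each stratum across all conditions.
        for i, s in enumerate(members):
            assignment[s] = conditions[i % len(conditions)]
    return assignment

# Invented example: 12 students, three achievement strata, 2x2 cells.
students = [f"s{i}" for i in range(12)]
strata = {s: ["low", "mid", "high"][i % 3] for i, s in enumerate(students)}
conditions = ["A1B1", "A1B2", "A2B1", "A2B2"]
assignment = stratified_assign(students, strata, conditions)
```

Because the dealing is round-robin within each stratum, conditions stay balanced on the stratification variable even in small classroom samples.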
Control Condition: The control condition will vary as a function of the research question being addressed. In some experiments, there is no formal control, and instead there are multiple comparison conditions. In some laboratory studies, participants in the control condition complete filler tasks, and in other laboratory studies, they are presented with contrasting cases that violate or do not have the principles predicted to improve learning. In some classroom studies, participants in the control condition engage in standard classroom instruction, and in other classroom studies, participants complete a version of the contrasting cases activities that does not include the factor (or factors) predicted to improve learning.
Key Measures: The measures used in these studies will be designed by the research team and will be tailored to the content students are exposed to during learning. One class of measures will evaluate students’ conceptual understanding by assessing their ability to solve new problems, their ability to explain a formula, and their preparation for future learning (i.e., do students exhibit superior subsequent learning on new topics?). The second class of measures will emphasize fluency and will assess students’ abilities to apply the formula and solve word problems. The research team will also collect process measures (students’ solutions to the contrasting cases activities, video recordings of students completing the activities, and students’ explanations of what they are thinking and doing at select points) and engagement measures designed by the research team to ensure the activities are working as planned. Different studies will use different combinations of measures; however, all studies will include pre- and post-tests.
Data Analytic Strategy: The data analytic strategy will vary depending on the research design and data type. In general, the research team will use multivariate analysis of variance, with experimental treatments as between-subjects factors and time of test as a within-subject factor. The research team will use subordinate multivariate analysis of covariance to examine correlations between performance during learning and post-test as well as effects of pre-existing conditions. In the classroom studies, if students participate as part of an intact group, the research team will use mixed models with group and teacher as nesting variables.
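For intuition, the variance decomposition that underlies analyses of a balanced 2x2 between-subjects design can be sketched by hand. The data and code below are an invented illustration, not the project's analysis code or data.

```python
# Illustrative sketch: effect and error sums of squares for a balanced
# 2x2 between-subjects design (the building blocks of an ANOVA F test).
# Post-test scores per cell, (factor A level, factor B level) -> scores.
cells = {
    ("A1", "B1"): [70, 72, 74],
    ("A1", "B2"): [75, 77, 79],
    ("A2", "B1"): [80, 82, 84],
    ("A2", "B2"): [91, 93, 95],
}

def mean(xs):
    return sum(xs) / len(xs)

n = 3  # per-cell sample size (balanced design)
grand = mean([x for xs in cells.values() for x in xs])
a_means = {a: mean([x for (ai, _), xs in cells.items() if ai == a for x in xs])
           for a in ("A1", "A2")}
b_means = {b: mean([x for (_, bi), xs in cells.items() if bi == b for x in xs])
           for b in ("B1", "B2")}

# Main-effect sums of squares: each A level pools 2*n observations.
ss_a = 2 * n * sum((m - grand) ** 2 for m in a_means.values())
ss_b = 2 * n * sum((m - grand) ** 2 for m in b_means.values())
# Between-cells variation; the interaction is what cells add beyond
# the two main effects.
ss_cells = n * sum((mean(xs) - grand) ** 2 for xs in cells.values())
ss_ab = ss_cells - ss_a - ss_b
# Within-cell (error) variation.
ss_error = sum((x - mean(xs)) ** 2 for xs in cells.values() for x in xs)
```

With these sums of squares, each effect's F statistic follows by dividing its mean square by the error mean square (here the error has 4 × (3 − 1) = 8 degrees of freedom). The project's planned analyses extend this logic with within-subject (time of test) factors, covariates, and classroom nesting.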
Publications:

Schwartz, D.L., Tsang, J.M., and Blair, K.P. (2016). The ABCs of How We Learn: 26 Scientifically Proven Approaches, How They Work, and When to Use Them. W.W. Norton & Company, Inc.
Chin, D.B., Chi, M., and Schwartz, D.L. (2016). A Comparison of Two Methods of Active Learning in Physics: Inventing a General Solution Versus Compare and Contrast. Instructional Science, 44(2): 177–195.
Shemwell, J., Chase, C.C., and Schwartz, D.L. (2015). Seeking the General Explanation: A Test of Inductive Activities for Learning and Transfer. Journal of Research in Science Teaching, 52(1): 58–83.