
Impacts of a Problem-Based Instruction Approach to Economics on High School Students

Study design

The sample includes 78 teachers and around 6,400 students (in two cohorts of 3,200) from 66 high schools in California and Arizona. Regional Educational Laboratory (REL) West recruited teachers for the study through district curriculum leads and social studies department chairs. The sample includes nearly three times as many male as female teachers. Teachers in both the treatment and control groups had an average of about 14 years of teaching experience, including 7 years teaching economics. Around 70 percent of the students are in grade 12, and 30 percent are in grade 11. More than half of the sample students are of non-Hispanic descent, with an even distribution by gender. The intervention and data collection phases, in 2007 and 2008, are complete, and analysis is under way. The findings will be disseminated in late 2009.

The study has a randomized experimental design in which teachers are randomly assigned to treatment or control groups. Teachers serve as the unit of randomization, and students, the primary unit of observation, are nested within teachers. The recruitment process required random assignment both within and between schools. When two or more teachers in a school agreed to participate, they were randomly assigned within the school. When only one teacher in a school agreed to participate (the typical case), the teacher was randomly assigned at the school level. Thus, with teacher-level random assignment, the study uses the school as a blocking factor when a school has two or more participating teachers and a constructed stratum as a blocking factor when a school has only one.
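
To make the blocking scheme concrete, the sketch below shows one way such an assignment could be carried out. The function name, the data layout (teacher and school identifiers), and the constructed-stratum size of four are illustrative assumptions, not details of the study's actual procedure.

```python
import random
from collections import defaultdict

def assign_teachers(teachers, seed=0):
    """Block-randomize teachers to treatment or control.

    `teachers` is a list of (teacher_id, school_id) pairs.
    Schools with two or more participating teachers form their own blocks;
    single-teacher schools are pooled into constructed strata.
    """
    rng = random.Random(seed)
    by_school = defaultdict(list)
    for teacher_id, school_id in teachers:
        by_school[school_id].append(teacher_id)

    multi_teacher_blocks = [ids for ids in by_school.values() if len(ids) >= 2]
    singletons = [ids[0] for ids in by_school.values() if len(ids) == 1]

    # Pool single-teacher schools into constructed strata (size 4 is illustrative).
    rng.shuffle(singletons)
    constructed_strata = [singletons[i:i + 4] for i in range(0, len(singletons), 4)]

    assignment = {}
    for block in multi_teacher_blocks + constructed_strata:
        rng.shuffle(block)
        half = len(block) // 2
        for t in block[:half]:
            assignment[t] = "treatment"
        for t in block[half:]:
            assignment[t] = "control"
    return assignment

# Example: one school with two participating teachers, two schools with one each.
print(assign_teachers([("T1", "S1"), ("T2", "S1"), ("T3", "S2"), ("T4", "S3")]))
```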

With this design, teachers in the same school can be assigned to either condition, so they are asked not to collaborate or share materials. Researchers designed the study anticipating between one and three economics teachers per school. Given the pedagogical changes required for full implementation, the study was conducted over one summer (2007) and two consecutive academic semesters (fall 2007 and spring 2008). Treatment teachers received the professional development intervention in the summer and additional support from the developer and master economics teachers during the following two semesters as they used the new instructional approach. Data collection for teacher measures covered a full academic year, while students, in two cohorts, were followed for one academic semester, the full length of the course (table 1).

Treatment teachers received a five-day professional development program in summer 2007; control teachers received a delayed treatment the following summer (2008). The intervention was delivered as the classroom instruction program for a one-semester economics course. In fall 2007 students in grades 11 and 12 received either the problem-based economics curriculum or the typical curriculum, as did a second group of students enrolled in economics in spring 2008. Students in the second cohort had the advantage of teachers who had already practiced the instructional approach for a semester.

A strength of the design is that impact estimates can be calculated for treatment teachers who implemented the curriculum more than once, in fall 2007 and again in spring 2008. But the estimates will also reflect the fact that the second implementation was with second-semester seniors, whose imminent graduation may have distracted them during the posttest administration. This could attenuate some effects of the intervention. In addition, differences in program impacts for student subgroups are examined only in exploratory analyses.

Table 1. Study characteristics and data collection schedule

Group                      | Students: Performance assessment | Students: Test of economic literacy | Students: Surveys    | Teachers: Test of economic literacy | Teachers: Surveys
Fall 2007 student cohort   | December                         | September / December                | September / December | na                                  | na
Spring 2008 student cohort | May                              | January / May                       | January / May        | na                                  | na
Teachers                   | na                               | na                                  | na                   | August 2007 / May 2008              | August 2007 / May 2008

na is not applicable.

With 78 teachers and a minimum of 50 students per teacher per semester (across two class sections), for a total of around 6,400 students, or 3,200 per cohort, the sample size is sufficient for detecting program impacts in the range of 0.19–0.22 standard deviation units on academic outcomes for students. One way to interpret the magnitude of this effect is to compare it with the progress students make during an academic year. Hill et al. (2008) report that grade 10 students' scores on norm-referenced tests increase by 0.19 standard deviation in reading and 0.14 standard deviation in math over a calendar year. For teacher outcomes the sample size is sufficient only for detecting larger impacts (0.57 standard deviation), but impacts at the more proximal teacher level would be expected to produce smaller subsequent impacts at the more distal student level.
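
For readers who want to see where estimates in this range come from, the sketch below applies the standard two-level minimum detectable effect size formula for cluster-randomized designs, with teachers as the clusters. The intraclass correlation of 0.10, the per-teacher count of about 41 students (3,200 students spread over 78 teachers), and the multiplier of 2.8 (80 percent power, two-tailed alpha of .05) are illustrative assumptions rather than figures reported by the study; adjusting for pretest covariates would lower the estimate further.

```python
from math import sqrt

def mdes(n_clusters, students_per_cluster, icc, multiplier=2.8, p_treat=0.5):
    """Approximate minimum detectable effect size (in standard deviation units)
    for a two-level cluster-randomized design:

        MDES = M * sqrt( icc / (P(1-P)J) + (1 - icc) / (P(1-P)Jn) )

    where M is the power/significance multiplier (about 2.8 for 80% power at
    alpha = .05, two-tailed), J is the number of clusters (teachers), n is the
    number of students per cluster, and P is the treatment-assignment share.
    """
    pq = p_treat * (1 - p_treat)
    j, n = n_clusters, students_per_cluster
    return multiplier * sqrt(icc / (pq * j) + (1 - icc) / (pq * j * n))

# Roughly 3,200 students per cohort across 78 teachers (~41 per teacher).
# The ICC of 0.10 is an assumption, not a figure reported by the study.
print(round(mdes(n_clusters=78, students_per_cluster=41, icc=0.10), 2))  # ~0.22
```

Under these assumptions the formula reproduces roughly the upper end of the reported 0.19–0.22 range for student outcomes; with only 78 teachers contributing to teacher-level outcomes, the same logic yields the much larger 0.57 figure.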
