By Thomas Wei, Evaluation Team Leader
As researchers, we take little pleasure when our studies of programs and policies do not find positive effects on student outcomes. But as J.K. Rowling, author of the Harry Potter series, put it: there are “fringe benefits of failure.” In education research, studies that find no effects can still reveal important lessons and inspire new ideas that drive scientific progress.
On November 2, the Institute of Education Sciences (IES) released a new brief synthesizing three recent large-scale random assignment studies of teacher professional development (PD). As a nation, we invest billions of dollars in PD every year, so it is important to assess the impact of those dollars on teaching and learning. These studies are part of an evaluation agenda that IES has developed to advance understanding of how to help teachers improve.
One of the studies focused on second-grade reading teachers, one on fourth-grade math teachers, and one on seventh-grade math teachers. The PD programs in each study emphasized building teachers’ knowledge of content or content-specific pedagogy. The programs combined summer institutes with teacher meetings and coaching during the school year. These programs were compared to the substantially less intensive PD that teachers typically received in study districts.
All three studies found that the PD did not have positive impacts on student achievement. Disappointing? Certainly. But have we at least learned something useful? Absolutely.
For example, the studies found that the PD did have positive impacts on teachers’ knowledge and some instructional practices. This tells us that intensive summer institutes with periodic meetings and coaching during the school year may be a promising format for this kind of professional development. (See the graphic above and chart below, which are from a snapshot of one of the studies.)
But why didn’t the improved knowledge and practice translate to improved student achievement? Educators and researchers have long argued that effective teachers need strong knowledge of the content they teach and of how best to convey that content to their students. The basic logic behind the content-focused PD we studied was to boost both of these skills, which were expected to translate into better student outcomes. But the findings suggest that this translation is actually very complex. For example, at what level do teachers need to know their subjects? Does a third-grade math teacher need to be a mathematician, just really good at third-grade math, or somewhere in between? What knowledge and practices are most important for conveying the content to students? How do we structure PD to ensure that teachers can master the knowledge and practice they need?
When we looked at correlational data from these three studies, we consistently found that most of the measured aspects of teachers’ knowledge and practice were not strongly related to student achievement. This reinforces the idea that we may need to find PD formats that can boost knowledge and practice to an even larger degree, or identify other aspects of knowledge and practice, ones more strongly related to student achievement, for PD to focus on. This is a critical lesson that we hope will inspire researchers, developers, and providers to think more carefully about the logic and design of content-focused PD.
Scientific progress is often incremental and painstaking. It requires continual testing and re-testing of interventions, which sometimes will not have impacts on the outcomes we care most about. But if we are willing to step back and try to connect the dots from various studies, we can still learn a great deal that will help drive progress forward.