
Developing an Evidence Base for Researcher-Practitioner Partnerships

Mark Schneider, Director of IES | July 30, 2018

I recently attended the annual meeting of the National Network of Education Research-Practice Partnerships. I was joined by well over 100 others representing a wide swath of research-practice partnerships (RPPs), most of them supported by IES funds. When it comes to research, academic researchers and practitioners often have different needs and different time frames. On paper, RPPs look like a way to bridge that divide.

Over the last few years, IES has made some large investments in RPPs. The Institute's National Center for Education Research runs an RPP grant competition that has funded over 50 RPPs, with an investment of around $20 million over the last several years. In addition, the Evaluation of State and Local Programs and Policies competition has supported partnerships between researchers and state and local education agencies since 2009.

But the biggest investment in RPPs, by far, has been through the Regional Educational Laboratories (RELs). In the 2012–2017 REL funding cycle, 85 percent of the RELs' work had to go through "alliances," which often coordinated several RPPs and were themselves organized around the research-to-practice partnership model. In the current funding cycle, RELs have created over 100 RPPs, and the bulk of the RELs' work, upwards of 80 percent, is done through them.

Back-of-the-envelope calculations show that IES is currently spending over $40 million per year on REL RPPs. Add to that the hundreds of millions of dollars invested in alliances under the previous REL contract, plus the RPP and state policy grant competitions, and this constitutes a very big bet.
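To see roughly how such a figure might be assembled (the annual REL funding level here is an illustrative assumption, not a published budget number): if total REL funding is on the order of $50 million per year, and, as noted above, roughly 80 percent of the RELs' work flows through RPPs, then 0.80 × $50 million ≈ $40 million per year going to RPPs.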

Yet despite investing so much in RPPs for over half a decade, we have only limited evidence about what they are accomplishing.

Consider the report just released by the National Center for Research in Policy and Practice. Entitled A Descriptive Study of the IES Researcher-Practitioner Partnership Program, it is exactly what it says it is: a descriptive study. Its first research goal focused on the perceived benefits of partnerships; the second focused on partnership contexts.

But neither of these research goals answers the most important question: what did the partnerships change? Not just in terms of research use or service delivery, but in what matters most: improved outcomes for students.

Despite IES' emphasis on evidence-based policy, right now RPPs are mostly hope-based. As noted, some research has documented processes that seem to be associated with better-functioning RPPs, such as building trust among partners and holding consultative meetings. Research has not, however, identified the functions, structures, or processes that work best for increasing the impact of RPPs.

The Institute is planning an evaluation of REL-based RPPs. We know it will be difficult and imperfect. With over $200 million invested in the last REL cycle, over 100 REL-based RPPs currently operating, and more than $40 million a year supporting RPPs, we assume there is wide variation in how they are structured, what they are doing, and ultimately how successful they are in improving student outcomes. With so many RPPs and so much variation, our evaluation will focus on questions of what works, for whom, and under what circumstances: Are certain types of RPPs better at addressing particular types of problems? Are there conditions under which RPPs are more likely to be successful? Are there specific strategies that make some RPPs more successful than others? Are any successful RPP results replicable?

Defining success will not be simple. A recent study by Henrick et al. identifies five dimensions for evaluating RPPs, each with multiple indicators. Since it is unlikely that we can adequately assess all five dimensions, plus any others that our own background research uncovers, we will need to make tough choices. Even focusing on student outcomes, as we will, leaves many problems. For example, different RPPs focus on different topics: how can we map reasonable outcome measures across those areas, many of which could have different time horizons for improvement?

Related to the question of time horizons for improvement is the question of how long it takes for RPPs to gain traction. Consider what are arguably three of the most successful RPPs in the nation: the Chicago Consortium, launched in 1990; the Baltimore consortium, BERC, launched in fall 2006; and the Research Alliance for New York City Schools, launched in 2008. In contrast, IES' big investment in RPPs began in 2012. How much time do RPPs need to change facts on the ground? Since much of the earliest alliances' work focused on high school graduation rates and college access, six years seems a reasonable window for assessing those outcomes, but other alliances were engaged in work that may have longer time frames.

The challenges go on and on. But one thing is clear: we can't continue to bet tens of millions of dollars each year on RPPs without a better sense of what they are doing, what they are accomplishing, and what factors are associated with their success.

The Institute will soon issue a request for comments to solicit ideas from the community on the issues and indicators of success that could inform our evaluation of RPPs. We look forward to working with you to build a stronger evidence base identifying what works for whom in RPPs.

Mark Schneider
Director, IES