The NASEM panel on IES' research centers is entering the home stretch, and we should all be looking forward to its recommendations. I hope that you have attended some of the public meetings or communicated with the panel (I have been following their work carefully and weighing in frequently—perhaps to the chagrin of the panel directors and staff). There are a few issues the panel is dealing with that are on my mind, and I hope the panel comes up with some actionable recommendations—but I am also interested in thoughts from the field and invite input from researchers, educators, policymakers, and other members of our community.
What follows is intended as a prompt to elicit responses from you about three of those issues. Discussion of these topics will help IES think about its response to the NASEM panel, including how we change next year's RFAs.
Involving SEAs/LEAs in IES-sponsored research
For so many reasons, it is hard to imagine that much education research can be done in the absence of strong links between researchers and education agencies. However, the best way to build those links is unclear.
We do know this: given the great variation among states and school districts across the country, there is no "one size fits all" approach. Experimentation and trial and error must therefore be at the core of how we try to increase SEA/LEA involvement. At the same time, as an applied science agency devoted to changing facts on the ground, IES is committed to supporting partnerships that embody its raison d'être: improving learner outcomes.
We have been trying some new approaches—like requiring that education agency personnel serve as PIs on grant applications and trying to ensure that any new networks we set up include SEA/LEA personnel—but we need better ideas about what future collaborations should look like. As we think about needed changes, we keep in mind one of President Obama's oft-quoted lines and the title of this blog: "Better is good."
Peer review panels
As you all know, peer reviews are foundational to how IES decides which grants to fund. To paraphrase Winston Churchill: Peer review is the worst way to judge grant applications except for all the others that have been tried.
I have been intrigued by the grant lottery systems that some funding agencies around the world have been trying. Note that there is a screening process to ensure that only qualified applications enter the lottery, but much of the garment-rending and teeth-gnashing required to get to a final list is avoided. Because any change to our review process requires an extensive formal process, there is no need to worry just yet about experiments with a lottery system.
We have been working hard to diversify the peer review pool—in terms of race/ethnicity and other factors. Academics are the main pool from which we draw reviewers—and they are the most reliable and willing recruits. They get some professional credit for serving on IES panels, they can make the time to do it as a service to the field or as professional development for themselves, and they appreciate the admittedly small stipend we pay. But academics can be very traditional and are trained to be hypercritical—often resulting in a strong status quo bias in evaluations. (When I was head of the political science department at Stony Brook University, a faculty spouse remarked that academics spend 90 percent of their time ripping each other to shreds and 10 percent of their time giving each other awards. Perceptive, but I think the ratio was more like 95 to 5.)
We have tried to bring more state/local education officials into the review process, but a perennial obstacle is that when state legislatures are in session, these officials are on call 24/7. We have tried to bring entrepreneurs into the review process, with only moderate success: they often feel lost when confronted with the rituals of our "religion." One successful attempt to broaden perspectives was the recruitment of panelists for our recent transformative research program. Individuals from foundations headed two of the three panels (Kumar Garg, from Schmidt Futures, and Bror Saxberg, from the Chan Zuckerberg Initiative), and Jan Plass, a professor of learning science at NYU, headed the third. Some panel members came from backgrounds other than American universities. The lessons we learned in putting together the panels for the transformative research initiative should help us change other panels, introducing new and different perspectives into our review process. Better is good, but we'd like to do more.
Scaling up
IES faces multiple challenges in getting the work it supports into widespread use. We will continue to fund basic research, but as an applied science agency, we must focus relentlessly on improving learning outcomes. Scaling up interventions and products that work is essential to getting better outcomes.
There are many challenges to scaling up—and they are not unique to IES. Nor are these problems unique to a single discipline. Reality can be tough on our models and products. When I was an active researcher, I often joked about how much human beings got in the way of my work. I would create powerful, parsimonious models only to watch them be destroyed by the obstinate behavior of real people doing what they wanted rather than what my model demanded.
But even with these reminders to be humble in our ambitions, there are things we can do to improve the chances of successful scaling. First, too few of our projects are replicated, and without replication we cannot learn with any precision what works for whom under what conditions. Without better information about target audiences (and evidence about the effects of interventions on them), scaling is difficult and perhaps doomed to fail. Further, many interventions fail when taken to scale because implementation is hard—something that IES and many others have been trying to address more systematically through implementation science (listen, for example, to this podcast).
Oftentimes, IES-sponsored researchers are simply not interested in scaling up or commercializing their products and interventions. The incentives for academics (with their historic emphasis on publications) do not align well with large-scale adoption of inventions/interventions. We have made some progress on this front by requiring more detailed dissemination plans that put less weight on academic publications and more weight on presentations to practitioners and product development. Scaling and commercialization also require different skills than those most academic researchers possess, so we have included more guidance in our RFAs about bringing people with those skills onto research teams.
We recently engaged SRI Ventures, the venture capital fund of SRI International, to help IES explore approaches to scalability for the work we support. One of the original goals was to develop an approach to embedding entrepreneurs in selected projects to test the viability of such relationships. The project grew into something more interesting. We will soon release the report that SRI Ventures produced for us (a version of which is available here), and we have invited them to share their work at this winter's PI meeting.
I have long been a fan of our SBIR program, run by Ed Metz—and the overlap between Ed's work and SRI's is considerable. We have started to implement some of the ideas from SBIR and SRI Ventures, such as asking grant applicants to think about the market size for their products and to differentiate their proposals from existing products. We need to figure out how to build more of our work on that foundation.
One final "heads up" about scaling. As many of you know, we have been investing in networks, linking together, for example, developers and researchers, state/local policymakers and researchers, and teams of researchers focused on similar problems. Currently, each of these networks has a "network lead" to help coordinate the activities of the various network members. We are thinking about introducing a "scaling team" into a future competition that will provide expertise on how to determine product-user fit, identify stakeholder needs, and scale products in the education sector. More to come!
Clearly, many of these ideas are nascent and will go through more vetting, adding needed specificity and working out both obvious and not-so-obvious kinks. And remember, with the NASEM report coming soon and our 20th anniversary year just around the corner, opportunities for substantial changes in IES' "business as usual" will present themselves. So even more than usual, I welcome your comments. Write to me at email@example.com.