The Path Forward for NCADE, Part I

Mark Schneider, Director of IES | November 28, 2023

With less than six months to the end of my term as director of IES, I have been thinking about what to focus on during my remaining time in office. I believe that setting out guidelines to ensure the success of NCADE is where I can best contribute.

For over two decades, IES has tried to tackle a wide range of R&D with only a limited set of tools and approaches. By adopting and adapting tools already deployed at other ARPAs across government, IES, through NCADE, will be able to invest more systematically in breakthrough ideas using a modern R&D framework—one designed to better handle risk, facilitate the transition from research to practice, support the scaling of successful interventions, and explore innovations that the current education R&D system does not easily support.

We know this transformation is important. The Department of Education has been a consistent and strong proponent of establishing NCADE as a new center within IES. Many outside organizations have been equally strong supporters, in particular the Alliance for Learning Innovation (ALI). The reauthorization of the Education Sciences Reform Act (ESRA), the legislation that governs IES, is a possible avenue for establishing NCADE, as is the NEED Act.

While I have written about NCADE before, ongoing discussions with the department, ALI, Congress, and other stakeholders have helped identify and refine a vision for its future. In a few planned blogs, I will set out my thoughts on the building blocks that will give NCADE the strongest foundation.

A strong foundation for NCADE will require an applied problem-solving focus

IES is a mission-driven science agency housed in the Department of Education. This location and our authorizing legislation (ESRA) make clear that IES must focus on improving education outcomes for learners across the lifespan. Basic research to identify effective programs, practices, and policies is one of the most important avenues for improvement—but it is not enough.

A case in point: IES-funded research on literacy has contributed to our understanding of how students learn to read, but the persistently poor reading performance of American students shows that we need more than basic research to identify what works for whom and under what conditions. We must ensure that the interventions identified or developed through research are feasible and usable in practice and are designed in ways that facilitate scaling up to improve learning outcomes.

Taking the foundational research that IES has invested in over the past 20-plus years and applying it to solve problems of practice will be core to the NCADE mission. NCADE should also have the freedom to invest in basic research and to generate new scientific insights. But to reap the benefits of the growing body of foundational knowledge about how students learn, it must focus more on establishing pathways to adoption and scale than IES's two research centers (NCER and NCSER) do.

Needed: Flexible authorities to take informed risks and bring in new and different R&D teams

One of the most consistent themes in discussions with current and past program managers from DARPA—the granddaddy of all ARPA-like federal agencies—and leaders of the next generation of ARPAs is the need for NCADE to nurture a willingness to take informed risks and learn from failure.

Here's a simple truth: most interventions fail. That's as true of social or medical interventions as it is of educational ones. We need to recognize that failure is a fact of research and policy; how we respond to failure is critical.

I believe that one of the most important changes in how we think about failure is the need to "fail fast." For me, Jim Manzi's writing remains among the best published work describing how and why failing fast is essential to the process of discovering changes that can improve outcomes. But as Manzi and others make clear, it's not failing fast per se that matters—it's how fast you learn from these failures and how you use that information for improvement.

Digital learning platforms, such as those we support through SEERNet, provide a means for rapid cycle testing of interventions—providing opportunities for failing and learning fast. For example, one SEERNet team will test the effectiveness of a set of perceptual cues designed to support math problem solving, leveraging the wide reach and ease of testing multiple versions of student support within the ASSISTments platform. Similarly, the XPRIZE we supported demonstrated the ability of such platforms to execute rapid-cycle experiments followed by replications that help target the students for whom an intervention will most likely succeed.
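The rapid-cycle logic described above—randomize students across several versions of a support, check results at short intervals, and drop trailing versions early—can be sketched in code. This is a minimal, hypothetical illustration, not any platform's actual API: the variant names, the futility gap, and the simulated outcomes are all invented for the example.

```python
import random

def run_rapid_cycle_trial(variants, simulate_outcome, n_per_check=200,
                          max_checks=5, futility_gap=0.05):
    """Randomize students across variants, check results at interim
    points, and drop ('fail fast') any variant whose observed success
    rate trails the current leader by more than `futility_gap`."""
    active = {v: {"n": 0, "successes": 0} for v in variants}
    for _ in range(max_checks):
        # Enroll another batch of students in each surviving variant.
        for v, stats in active.items():
            for _ in range(n_per_check):
                stats["n"] += 1
                stats["successes"] += simulate_outcome(v)
        rates = {v: s["successes"] / s["n"] for v, s in active.items()}
        best = max(rates.values())
        # Futility pruning: keep only variants close to the leader.
        active = {v: s for v, s in active.items()
                  if rates[v] >= best - futility_gap}
        if len(active) == 1:
            break
    return {v: s["successes"] / s["n"] for v, s in active.items()}

# Hypothetical ground truth: variant "cue_B" helps more students.
random.seed(0)
true_rates = {"cue_A": 0.55, "cue_B": 0.65, "cue_C": 0.50}
result = run_rapid_cycle_trial(
    list(true_rates), lambda v: int(random.random() < true_rates[v]))
```

The design choice that matters here is the interim check: a weak variant consumes only a batch or two of student time before it is pruned, which is exactly the "failing and learning fast" that platforms like ASSISTments make cheap.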

Learning fast from failure is critical. Ulrich Boser and his team at the Learning Agency helped start a discussion group focused on "Compiling a list of program failure modes [of ARPA agencies]." Contributors include former DARPA program managers, all of whom bring invaluable perspectives on how to build a more robust learning culture in ARPA agencies. (To join the group, please contact Ulrich Boser at the Learning Agency.)

Here's an example of the kind of information found on that email chain. This comes from Paul Cohen of the University of Pittsburgh, a DARPA program manager from 2013–17, and this is his take on learning from failure:

In AI, where I work, the gulf between our aspirations and our accomplishments is huge. DARPA has funded countless programs in natural language, planning, situational awareness, common sense, machine learning and so on, most of which made incremental progress but didn't "solve the problem." I thought my job in designing programs was to pose challenges that are out of reach but pull people in a good direction. None achieved the stated goals, so they were failures in some sense, but they made more progress than any of us thought possible, so they were successes in some sense . . . Our job is not to say programs succeed or fail — none entirely succeeds or fails — but to identify the valuable nuggets they produce.

Clearly, taking informed risks, failing fast, and learning from failure will be essential to a mindset conducive to the future success of NCADE, but building a system around those precepts requires a systematic and intentional approach that could itself fail.

The importance of program managers

Central to the success of ARPA models is the role of program managers (PMs). ARPAs rely on entrepreneurial PMs to identify, craft, and execute projects. Critically, these PMs are not career employees in the ARPA agency. Rather, they serve defined terms, usually five years, to ensure they stay connected to the latest developments in their area of expertise. They are also hired based on their ability to structure their proposed programs in line with the "Heilmeier Catechism"—the set of questions George Heilmeier introduced at DARPA for vetting proposed programs (What are you trying to do? How is it done today? What's new in your approach? and so on).

This shifts a great deal of power to program managers, and the overall fate of NCADE will depend on its ability to hire strong PMs. It also means that, like other ARPA agencies, NCADE will need hiring authority different from that found in most other federal agencies, including IES. That, in turn, requires specific legislative language creating such authorities for IES (which the NEED Act seems likely to include).

As part of a science agency, NCADE will almost certainly continue to emphasize hiring PhDs. But NCADE will need to consider how to expand the fields it recruits from (especially the learning sciences, data science, and engineering), and it will need to make sure that the program managers it hires understand the importance of the Heilmeier Catechism for structuring what NCADE should be doing.

The importance of timeliness

The lack of timeliness is likely the single most common complaint that IES receives—and, no surprise, those complaints have intensified since the pandemic, when schools and school districts needed advice about how to reverse severe learning loss now.

Most of IES's funding goes to academic researchers or large contract shops (such as RTI or AIR), where a PhD is the coin of the realm. Most of IES's program officers also have PhDs. In this shared worldview, precision is highly valued, and academic researchers spend much of their time producing ever more precise estimates, often using new and obscure estimators. But reaching that level of precision takes time, often a lot of it. I call this the "fifth decimal problem": academic kudos accrue to whoever reaches that level of precision first.

The real world of education practice usually doesn't need such precision—indeed, we mostly don't even need to get to the first decimal point, let alone the fifth. What policy makers need is timely data to inform solutions to the real problems they face.

Increasing timeliness will require some changes in how we support research activities, some of which we are already experimenting with. For example, in our transformative research program, we have multiple "gates" built into continued funding decisions—teams that don't clear those gates can lose their funding. In the same transformative research program, we have expanded the traditional conception of research-practice partnerships (RPPs) to include tech firms as a third partner. We hope that this will inject a concern for scalability into decisions from day 1 of the grant. These changes are experiments designed to increase the timeliness of our work while ensuring its relevance to policy makers.
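The "gates" idea above is essentially a go/no-go checkpoint: at fixed milestones, a team's reported results are compared against pre-registered thresholds, and teams that miss a threshold become candidates for defunding. A minimal sketch, with entirely hypothetical gate names, metrics, and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Gate:
    """One go/no-go checkpoint in a funded program (hypothetical criteria)."""
    name: str
    metric: str       # which reported metric this gate examines
    threshold: float  # minimum value needed to clear the gate

def review_at_gate(gate, reported_metrics):
    """Return True if the team clears the gate; a missing or
    below-threshold metric means the gate is not cleared."""
    value = reported_metrics.get(gate.metric)
    return value is not None and value >= gate.threshold

# Hypothetical milestones for a grant with two review points.
gates = [
    Gate("6-month", "pilot_classrooms", 5),
    Gate("12-month", "effect_size", 0.10),
]
report = {"pilot_classrooms": 8, "effect_size": 0.04}
decisions = {g.name: review_at_gate(g, report) for g in gates}
# decisions → {"6-month": True, "12-month": False}
```

The value of making the criteria explicit up front is the same as in the "fail fast" discussion earlier: a no-go decision at month 12 is a fast, legible failure rather than a slow, ambiguous one.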

NCADE is just beginning a long journey. Given the complexity of that journey, there will be successes and failures. We are fully committed to celebrating success but also to heeding the lessons inherent in our inevitable failures.

I will explore other building blocks of NCADE in future blogs. I will also discuss responses to the Request for Information (RFI) we posted in October on a potential new program, From Seedlings to Scale (S2S). The RFI was issued to help us set future directions for ARPA-like education programs. I have received great feedback so far, and I look forward to engaging more fully with the field's reactions to the directions we proposed in the RFI.

If you have any thoughts to share on how to ensure the success of NCADE, please feel free to reach out to me: mark.schneider@ed.gov