As many of you know, as part of its 20th anniversary, IES has been undertaking an extensive modernization of its approach to R&D. Through this work, we are learning to speed up the testing of innovations, learning how to fail fast, and learning how to replicate successful innovations across an increasingly heterogeneous population—all to learn what works for whom under what conditions. There's the old joke: "cheaper, faster, better—pick two." But a strong, well-built education research infrastructure may allow us to progress on all three.
As many of you also know, one of the prongs of our approach to modernization is to invest in prize competitions. I have written before about the XPRIZE Digital Learning Competition. (A quick update on progress: there are 10 teams now in the pilot phase.) And although it's less glitzy than the XPRIZE, we also recently completed a competition on automated scoring to explore how to transition the scoring of National Assessment of Educational Progress (NAEP) data from its historical reliance on human raters to widely accepted automated scoring.
The Automated Scoring Challenge was a well-defined, quick-turnaround competition. We had half a dozen contestants whose item-specific automated scoring algorithms came within striking distance of the baseline (the scores generated by human raters). The competition told us a great deal about which types of questions and models could best be implemented as NAEP moves to automated scoring. We asked the winning teams for estimated costs for implementing their automated scoring systems. My favorite response was from two (then) students, who asked for just a few thousand dollars so they could buy more powerful computers (a lot less than the usual multi-million-dollar bids we get from other providers). The best part? The prize purse that spurred these innovators was only $50,000.
This summer, IES will launch two new Learning Acceleration Challenges designed to identify and test interventions with the potential to dramatically improve math and science achievement. I have written about these before, so this is an update with specific links rather than a "great reveal." But more importantly, I want the field to know that the schedule for these challenges is fast-paced, so if you are interested in competing, you should start thinking now about assembling a team and about working with schools that can provide the NWEA data required for measuring student success.
The first competition will focus on digital fraction interventions that improve upper elementary math performance for students with disabilities. Students with disabilities are usually among the lowest-achieving student groups reported by NAEP and, by all indications, have suffered the most during the COVID shutdowns.
While we do not yet have post-COVID numbers, we know that in 2019, 54% of 4th graders with disabilities scored Below NAEP Basic in mathematics, compared to only 15% of their peers without disabilities. Success in these upper elementary school years is critical because students who struggle to master whole numbers, rational numbers, and fractions are likely to face significant challenges both in further math coursework (such as algebra) and with employment and life skills.
The second competition is designed to identify highly effective interventions that improve outcomes for the lowest-performing students in middle grades science. The need is great. In the most recent NAEP science assessment, over 40% of all 12th graders—and almost 70% of Black 12th graders, over half of Hispanic 12th graders, and over 70% of 12th graders with disabilities—performed Below NAEP Basic. We must start earlier in a student's academic trajectory to ensure all students are proficient before graduating from high school. Scientific literacy is critical both for the nation's competitiveness and for each individual's capacity to navigate an increasingly scientific and technological world. That so many American students are about to enter adulthood, college, the job market, or the military with less than a basic understanding of science is cause for profound concern.
Registration for the Learning Acceleration Challenges will begin this summer (most likely August), with the goal of deploying interventions during the 2022-23 school year (most likely November). Given SEER's call for using high-quality, widely available measures, the efficacy of those interventions will be determined via a rigorous evaluation using NWEA® MAP® Growth™. Each challenge features a $500,000 grand prize, awarded if an intervention meets specific thresholds. Each challenge anticipates awarding at least $150,000 to a first-prize winner and at least $75,000 to a runner-up. The Science Prize may offer up to $250,000 to recognize out-of-school-time interventions.
Additional details about challenge criteria—developed jointly by IES, challenge administrator Luminary Labs, and subject matter experts—will be posted to Challenge.gov once finalized.
This is an exciting opportunity to test promising interventions in these two areas of national need, and I hope that these competitions will provide further insights into how best to use challenge competitions to modernize our work.
For more information, please visit the IES challenge website, found at 2022 IES Learning Acceleration Challenges. For more information about the challenges, or to be put on our list for regular challenge updates, contact us at Challenges.IES@ed.gov. And, of course, feel free to reach out to me at mark.schneider@ed.gov.