Gathering Speed with SEERNet, Prize Competitions, and More

Mark Schneider, Director of IES | November 9, 2021

On Friday, October 22, we celebrated the launch of the SEERNet digital learning network. This is the latest IES effort to modernize, accelerate, and scale up education research. I was invited to provide some introductory remarks. For those of you who missed the event, this blog captures the gist of my comments, providing an update on many of IES' current and planned initiatives and how SEERNet fits with them.

SEERNet is designed to speed up the pace of research on digital learning. Digital Promise is the lead for this network, which includes ASU Online, ASSISTments, Mathia, OpenStax, and Terracotta. This is an impressive "all-star" cast dedicated to improving the use of digital learning platforms in both K–12 and postsecondary education. SEERNet will focus on the development of tools and processes to support collaborations with researchers, who will be able to implement a wide range of designs to address research questions or conduct replication studies within these platforms. The benefits of such a model are clear: to the extent the network succeeds, research will be faster, cheaper, and more broadly representative of different learning contexts. At the same time, developing informed consent processes and maintaining student privacy will be central to SEERNet's work.

Key to the success of SEERNet will be the number and range of research studies that it supports. Next year, we plan to hold a competition inviting research teams to propose studies using any of the platforms included in the network, with a start date in September 2023.

While SEERNet has the potential to increase the speed and reduce the costs of education research, it should be seen in the context of other changes we are tackling at IES. Below are some of the modernization efforts IES is pursuing. Some have already been launched and others are in various stages of planning.

Transformative research: We are actively funding "high risk, high reward" projects that could change the usual way we conduct education research. We ran one competition last fiscal year and are planning the next one, which will build on the lessons learned from this year's process. We received over 130 applications in response to the transformative research RFA and funded four. This may seem like a low hit rate but is probably about right given what the transformative RFA set out to do.

Replications: Science relies heavily on "direct replications," in which experiments are repeated using the same protocols and ingredients to see if the results can be reproduced. IES is not particularly interested in this type of replication, but rather in ones in which researchers vary the conditions and parameters of an experiment or intervention to assess the extent to which the findings in, for example, population "A" are found again in population "B." This class of replications is central to the work IES funds: by systematically varying the context or components of interventions we hope to get better answers to IES' central question: "what works for whom under what conditions." We have funded separate competitions in NCER and NCSER to focus on these types of replications and will continue to do so.

These replications are often called "conceptual replications," but that term violates all my rules about plain English and hurts my eardrums. I have been searching for a different, more easily understood term. IES often refers to these as "systematic replications," but we have been experimenting with some other possibilities: "modified replication," "adapted replication," or should we look for a term that emphasizes transferability? Bonus points for anyone who is willing to join in this renaming effort and the work that will be needed to spread use of the new term—just a reminder: it was this kind of crowdsourcing that produced the SEER acronym.

Data science capacity: We are working on establishing an IES-wide center for excellence in education data science, which will help modernize our approach to data collection and analysis. We are enjoying the benefits of having a couple of data science fellows on board, with the prospect of several others joining us. We are also planning to invest in the next generation of data scientists by running a high school data science competition modeled after the well-known robotics competitions. I think this is an exciting way to involve students in data science, an area where there are many high-paying jobs. But this effort is about more than high wages. Increasingly, the demands of everyday life and citizenship involve understanding and using data.

Artificial intelligence: We have entered into an agreement with NSF to stand up two AI institutes: one on using AI to accelerate learning in STEM, the other focused on improving education outcomes for students with disabilities. All the details are in place, so get ready for opportunities to participate. If you are interested in learning more, join the webinar being held on November 16 at 1:15 pm.

Prize competitions: Prizes create buzz and encourage people of all backgrounds to use their skills to address some of the toughest challenges in education and education research. We have already announced and are currently running two prize competitions—the XPRIZE digital learning platform competition, which already has over 30 teams registered to compete, and the NAEP automated scoring competition, which has about 15 entrants so far (ranging from the usual suspects to a lone high school student, whom I am rooting for!).

We are also currently planning a middle school science competition that would focus on accelerating learning for the lowest performing students on NWEA's science assessment. I am hoping to create a coalition of philanthropies to support this effort—not because of the money, but because it's important to get science higher up on the national agenda. We will never have a strong, diverse STEM workforce without improving student knowledge of science. A broad coalition of funders would help elevate science as a challenge facing the country.

Common infrastructures: Researchers and educators need access to high quality tools to test their ideas and products—and all the better if we stop reinventing the wheel and build some common resources to meet a broad range of needs. A motivation behind SEERNet was to see if we can create a faster, cheaper, easier way to run high quality experiments. We are also planning to modernize ERIC, which should be viewed as a research infrastructure that makes education research easier and cheaper to access. ERIC is not revolutionary like digital learning platforms, but it is one of the most widely used IES resources, notably by teachers during graduate school. Based on teacher focus groups we conducted a few years ago, getting ERIC right builds goodwill among this important group of stakeholders.

What other infrastructure should IES be supporting? Some people have suggested an AI-oriented data infrastructure with large data sets that could be used to test AI models. Others have suggested a mechanism for matching willing school districts with researchers—but no one has come up with good answers to all the questions that immediately come to mind for this potential project.

Any reactions and thoughts on the idea of research-related infrastructure (or anything else touched on in this blog)? If so, please reach out to me at: mark.schneider@ed.gov.