Inside IES Research

Notes from NCER & NCSER

Developing and Piloting the Special Education Research Accelerator

The traditional approach to research involves individual researchers or small teams independently conducting a large number of relatively small studies. Crowdsourcing research provides an alternative approach that combines resources across researchers to conduct studies that could not be done individually. As such, it has the power to address some challenges of the traditional approach, including limited diversity among both research participants and researchers, small sample sizes, and lack of resources. In 2019, the National Center for Special Education Research funded a grant to the University of Virginia to develop a platform for conducting crowdsourced research with students with or at risk for disabilities—the Special Education Research Accelerator (SERA).

Below, the Principal Investigators of this grant – Bryan Cook, Bill Therrien, and Vivian Wong – tell us more about the problems they intend to address through SERA, its potential, and the activities involved in its development and testing.

What’s the purpose of SERA?

SERA is a platform for conducting research in special education with large and representative study samples across multiple research sites and researchers. We are developing SERA to address some common concerns in education research, such as (a) studies with small, underpowered, and non-representative samples; (b) lack of resources for individual investigators to engage in the high-quality research that they have the skills to conduct; and (c) scarce replication studies. The issue of small, underpowered, and non-representative samples is especially acute in research with students with low-incidence disabilities, with whom few randomized controlled trials have been conducted. SERA seeks to leverage crowdsourcing to flip “research planning from ‘what is the best we can do with the resources we have to investigate our question,’ to ‘what is the best way to investigate our question, so that we can decide what resources to recruit’” (Uhlmann et al., 2019, p. 713). Conducting multiple, concurrent replication studies will allow us not only to estimate average effects across research sites but also to examine how effects vary between sites.
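
To make that last point concrete, here is a minimal sketch of the kind of multilevel model that supports both goals. It illustrates the general approach rather than SERA's actual analysis code: the column names ("site", "treatment", "score") are hypothetical, and the design is simplified to a two-group comparison.

```python
# Illustrative only: a multilevel model that estimates both the average
# treatment effect and how much that effect varies across sites.
# Column names ("site", "treatment", "score") are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pooled_study_data.csv")  # hypothetical pooled file

# Random intercept and random treatment slope for each site: the fixed
# "treatment" coefficient is the average effect across sites, and the
# estimated slope variance indexes between-site variability in effects.
model = smf.mixedlm("score ~ treatment", data=df,
                    groups=df["site"], re_formula="~treatment")
result = model.fit()
print(result.summary())
```

In a multi-site replication, a near-zero slope variance would suggest the effect replicates consistently across sites, while a large one would point to site-level moderators worth investigating.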

How do you plan to develop and test SERA?

To pilot SERA, we are currently developing the infrastructure (project website, training materials, etc.) and procedures—including for data management—to be applied in a study that will be conducted in the 2020/21 academic year. In that study, we will conceptually replicate Scruggs, Mastropieri, and Sullivan (1994) by examining the effects of direct and indirect teaching methods on the acquisition and retention of science facts among elementary-age students with high-functioning autism. Students will be randomly assigned to one of three conditions: (a) control, in which students are told 14 science facts (as an example, frog eggs sink to the bottom of the water); (b) interventionist-provided explanations, in which students are told 14 science facts with explanations from the interventionist (frog eggs sink to the bottom to avoid predators at the top of the water); and (c) student-generated explanations, in which the interventionist provides scaffolds for the student to generate their own explanation of each science fact (frog eggs sink to the bottom. Why do you think they do? What is at the top of the water that could harm the eggs?). Acquisition of facts and explanations will be assessed immediately after the intervention, and retention will be assessed after approximately 10 days. Twenty-three research partners, representing each of the nine U.S. Census divisions, have agreed to conduct the intervention with a minimum of five students each.
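
To illustrate one way the assignment step could be standardized across partners, here is a minimal sketch of balanced random assignment within a site. The function, roster, and seed handling are hypothetical, not SERA's actual procedure.

```python
# Illustrative only: balanced random assignment of a site's students to
# the three study conditions, so each site contributes to each condition.
import random

CONDITIONS = ["control", "interventionist_explanations", "student_explanations"]

def assign_within_site(student_ids, seed):
    """Shuffle the site's roster, then deal students across conditions in
    round-robin order so group sizes stay as balanced as possible."""
    rng = random.Random(seed)  # a per-site seed makes assignment auditable
    ids = list(student_ids)
    rng.shuffle(ids)
    return {sid: CONDITIONS[i % len(CONDITIONS)] for i, sid in enumerate(ids)}

# Example: a site with the minimum of five students.
print(assign_within_site(["s01", "s02", "s03", "s04", "s05"], seed=2021))
```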

[Figure: map of the United States divided into the nine U.S. Census divisions represented by SERA research partners]

One challenge with building an infrastructure platform for conducting replication studies is that the “science” of replication as a method has yet to be fully established. That is, there is no consensus on what replication means, how high-quality replication studies should be conducted in field settings, or what statistical criteria are appropriate for evaluating replication success. To address these concerns, the research team is collaborating with the University of Virginia’s School of Data Science to create the pilot SERA platform to facilitate distributed data collection across independent research sites. The platform is based on the Causal Replication Framework (Steiner, Wong, & Anglin, 2019; Wong & Steiner, 2018) for designing, conducting, and analyzing high-quality replication studies and uses data-science methods to collect and process information efficiently. Subsequent phases of SERA will focus on expanding the platform so that it is available for systematic replication research by the broader education research community.
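
One unglamorous but essential part of distributed data collection is pooling per-site files into a single dataset without silently absorbing errors. As a sketch of what that could look like (the file layout and column names are assumptions for illustration, not SERA's actual schema):

```python
# Illustrative only: merge per-site CSV files with a light schema check so
# problems surface at ingestion rather than during analysis.
from pathlib import Path
import pandas as pd

REQUIRED_COLUMNS = {"site", "student_id", "condition", "acquisition", "retention"}

def load_site_files(data_dir):
    frames = []
    for path in sorted(Path(data_dir).glob("site_*.csv")):
        df = pd.read_csv(path)
        missing = REQUIRED_COLUMNS - set(df.columns)
        if missing:
            raise ValueError(f"{path.name} is missing columns: {sorted(missing)}")
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# pooled = load_site_files("sera_pilot_data/")  # hypothetical directory
```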

How does SERA align with the IES Standards for Excellence in Education Research (SEER)?

With its focus on systematically conducting multiple replication studies across research sites, SERA aligns closely with and will address the following SEER principles.

  • Pre-register studies: To be implemented with fidelity across multiple research partners and sites, crowdsourced study procedures have to be carefully planned and documented, which will facilitate pre-registration. We will pre-register the SERA pilot study in the Registry of Efficacy and Effectiveness Studies.
  • Make findings, methods, and data open: Because the data platform is being developed to merge study results across more than 20 research sites, data will be in a clean, shareable format upon completion of the study. We are committed to the principles of open science and plan to share our data, as well as freely accessible study materials and research reports, on the Open Science Framework.
  • Document treatment implementation and contrast: Using audio transcripts of sessions and fidelity rubrics, SERA will introduce novel ways of using natural language processing methods to evaluate the fidelity and replicability of treatment conditions across sites (see the sketch after this list). These measures will allow the research team to assess and improve intervention delivery while researchers are in the field, as well as to characterize and evaluate treatment contrast in the analysis phase.
  • Analyze interventions' costs: It will be important to examine not only the costs of implementing SERA as a whole but also the costs of the intervention implemented by the individual research teams. To this end, we are adapting and distributing easy-to-use tools and resources that will allow our research partners to collect data on the ingredients and costs involved in implementing a pilot intervention and replicating study results.
  • Facilitate generalization of study findings: Because SERA studies involve large, diverse, and representative samples of research participants; multiple and diverse research locations; and multiple and diverse researchers, results are likely to generalize.
  • Support scaling of promising results: Crowdsourced studies, by their nature, examine scaling by investigating whether and how findings replicate across multiple samples, locations, and researchers.
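
As promised above, here is a minimal sketch of one simple natural language processing signal for fidelity: scoring each session transcript against the scripted protocol with TF-IDF cosine similarity. It is an illustrative stand-in, not SERA's actual fidelity measure, and the script and transcripts are invented.

```python
# Illustrative only: a crude fidelity signal comparing session transcripts
# to the scripted protocol. Low similarity flags sessions for human review;
# it does not by itself establish a fidelity problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def fidelity_scores(protocol_script, transcripts):
    """Return one similarity score in [0, 1] per transcript."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([protocol_script] + list(transcripts))
    return cosine_similarity(matrix[0:1], matrix[1:]).ravel()

script = "Frog eggs sink to the bottom of the water to avoid predators."
sessions = ["We learned that frog eggs sink to the bottom to avoid predators.",
            "We talked about recess and the weather."]
print(fidelity_scores(script, sessions))
```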

Conducting research across multiple sites and researchers raises important questions: What types of interventions can be implemented? What is the most efficient and reliable approach to collecting, transferring, and merging data across sites? It also creates challenges (such as coordinating IRB approval across sites and promoting and assessing fidelity) that we are working to address in our planning and pilot study. Despite these challenges, we believe that crowdsourcing research in education may provide important benefits.

This blog was co-authored by Bryan Cook (bc3qu@virginia.edu), Bill Therrien (wjt2c@virginia.edu), and Vivian Wong (vcw2n@virginia.edu) at the University of Virginia and Katie Taylor (Katherine.Taylor@ed.gov) at IES.
