IES Blog

Institute of Education Sciences

“The How” of “What Works”: The Importance of Core Components in Education Research

Twenty-some odd years ago as a college junior, I screamed in horror watching a friend open a running dishwasher. She wanted to slip in a lightly used fork. I jumped to stop her, yelling “don’t open it, can’t you tell it’s full of water?” She paused briefly, turning to look at me with a “have you lost your mind” grimace, and yanked open the door.

Much to my surprise, nothing happened. A puff of steam. An errant drip, perhaps? But no cascade of soapy water. She slid the fork into the basket, closed the door, and hit a button. The machine started back up with a gurgle, and the kitchen floor was none the wetter.

Until that point in my life, I had no idea how a dishwasher worked. I had been around a dishwasher, but the house I lived in growing up didn’t have one. To me, washing the dishes meant filling the sink with soapy water, something akin to a washer in a laundry. I assumed dishwashers worked on the same principle, using gallons of water to slosh the dishes clean. Who knew?

Lest you think me completely inept, a counterpoint. My first car was a 1979 Ford Mustang. And I quickly learned how that very used car worked when the Mustang’s automatic choke conked out. As it happens, although a choke is necessary to start and run a gasoline engine, that it be “automatic” is not. My father Rube Goldberg-ed up a manual choke in about 15 minutes rather than paying to have it fixed.

My 14-year-old self learned how to tweak that choke “just so” so that I could get to school each morning. First, pull the choke all the way out to start the car, adjusting the fuel-air mixture ever so slightly. Then gingerly slide it back in, micron by micron, as the car warms up and you hit the road. A car doesn’t actually run on liquid gasoline, you see. Cars run on fuel vapor. And before the advent of fuel injection, fuel vapor was courtesy of your carburetor and its choke. Not a soul alive who didn’t know how a manual choke worked could have started that car.

You would be forgiven if, by now, you were wondering where I am going with all of this and how it relates to the evaluation of education interventions. To that end, I offer three thoughts for your consideration:

  1. Knowing that something works is different from knowing how something works.

  2. Knowing how something works is necessary to put that something to its best use.

  3. Most education research ignores the how of interventions, dramatically diminishing the usefulness of research to practitioners.

My first argument—that there is a distinction between knowing what works and how something works—is straightforward. Since it began, the What Works Clearinghouse™ has focused on identifying “what works” for educators and other stakeholders, mounting a full-court press on behalf of internal validity. Taken together, Version 4.1 of the WWC Standards and Procedures Handbooks total some 192 pages. As a result, we have substantially greater confidence today than we did a decade ago that when an intervention developer or researcher reports that something worked for a particular group of students, we know that it actually did.

In contrast, WWC standards do not, and as far as I can tell have not ever, addressed the how of an intervention. By “the how” of an intervention, I’m referring to the parts of it that must be working, sometimes “just so,” if its efficacy claims are to be realized. For a dishwasher, it is something like: “a motor turns a wash arm, which sprays dishes with soapy water.” (It is not, as I had thought, “the dishwasher fills with soapy water that washes the mac and cheese down the drain.”) In the case of my Mustang, it was: “the choke controls the amount of air that mixes with fuel from the throttle, before heading to the cylinders.”

If you have been following the evolution of IES's Standards for Excellence in Education Research, or SEER, and its principles, you recognize “the how” as core components. Most interventions consist of multiple core components that are—and perhaps must be—arrayed in a certain manner if the whole of the thing is to “work.” Depicted visually, core components and their relationships to one another and to the outcomes they are meant to affect form something between a logic model (often too simplistic) and a theory of change (often too complex).

(A word of caution: knowing how something works is also different from knowing why something works. I have been known to ask at work about “what’s in the arrows” that connect various boxes in a logic model. The why lives in those arrows. In the social sciences, those arrows are where theory resides.)

My second argument is that knowing how something works matters, at least if you want to use it as effectively as possible. This isn’t quite as axiomatic as the distinction between “it works” and “how it works,” I realize.

This morning, when starting my car, I didn’t have to think about the complex series of events leading up to me pulling out of the driveway. Key turn, foot down, car go. But when the key turns and the car doesn’t go, then knowing something about how the parts of a car are meant to work together is very, very helpful. Conveniently, most things in our lives, if they work at all, simply do.  

Inconveniently, we don’t have that same confidence when it comes to things in education. There are currently 10,677 individual studies in the What Works Clearinghouse (WWC) database. Of those, only about 11 percent meet the WWC’s internal validity standards. Among them, only 445 have at least one statistically significant positive finding. Because the WWC doesn’t consider results from studies that don’t have strong internal validity, it isn’t quite as simple as saying “only about 4 percent of things work in education.” Instead, we’re left with “89 percent of things aren’t tested rigorously enough to have confidence about whether they work, and when tested rigorously, only about 38 percent do.” Between the “file drawer” problem that plagues research generally and our own review of the results from IES efficacy trials, we have reason to believe the true efficacy rate of “what works” in education is much lower.
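If you want to trace that arithmetic, here is a quick back-of-the-envelope sketch in Python using the rounded figures above (my own illustration, not an official WWC calculation):

```python
# Back-of-the-envelope check of the WWC figures cited above.
# These are rounded counts from this post, not an official WWC tabulation.
total_studies = 10_677        # studies in the WWC database
meets_standards_rate = 0.11   # share meeting WWC internal validity standards
positive_studies = 445        # rigorous studies with at least one significant positive finding

rigorous_studies = total_studies * meets_standards_rate    # roughly 1,175 studies
share_of_all = positive_studies / total_studies            # roughly 4 percent
share_of_rigorous = positive_studies / rigorous_studies    # roughly 38 percent
share_untested = 1 - meets_standards_rate                  # roughly 89 percent

print(f"Positive findings, as a share of all studies: {share_of_all:.0%}")
print(f"Not tested rigorously enough: {share_untested:.0%}")
print(f"Positive findings, as a share of rigorous studies: {share_of_rigorous:.0%}")
```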

Many things cause an intervention to fail. Some interventions are simply wrong-headed. Some interventions do work, but for only some students. And other interventions would work, if only they were implemented well.

Knowing an intervention’s core components and the relationships among them would, I submit, be helpful in at least that third case. If you don’t know that a dishwasher’s wash arm spins, the large skillet on the bottom rack with its handle jutting to the sky might not strike you as the proximate cause of dirty glasses on the top rack. If you don’t know that a core component of multi-tiered systems of support is progress monitoring, you might not connect the dots between a decision to cut back on periodic student assessments and suboptimal student outcomes.

My third and final argument, that most education research ignores the how of interventions, is grounded in at least some empiricism. The argument itself is a bit of a journey, one that starts with a caveat, wends its way to dismay, and ends in disappointment.

Here’s the caveat: My take on the relative lack of how in most education research comes from my recent experience trying to surface “what works” in remote learning. This specific segment of education research may well be an outlier. But I somehow doubt it.

Why dismay? Well, as regular readers might recall, in late March I announced plans to support a rapid evidence synthesis on effective practices in remote learning. It seemed simple enough: crowd-source research relevant to the task, conduct WWC reviews of the highest-quality submissions, and then make those reviews available to meta-analysts and other researchers to surface generalizable principles that could be useful to educators and families.

My stated goal had been to release study reviews on June 1. That date has passed, and the focus of this post is not “New WWC Reviews of Remote Learning Released.” As such, you may have gathered that something about my plan has gone awry. You would be right.

Simply put, things are taking longer than hoped. It is not for lack of effort. Our teams identified more than 930 studies, screened more than 700 of those studies, and surfaced 250 randomized trials or quasi-experiments. We have prioritized 35 of this last group for review. (For those of you who are thinking some version of “wow, it seems like it might be a waste to not look at 96 percent of the studies that were originally located,” I have some thoughts about that. We’ll have to save that discussion, though, for another blog.)
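For anyone checking my math on that parenthetical, here is a quick sketch of the review funnel using the approximate counts above (rounded figures from this post, nothing official):

```python
# Rough arithmetic behind the remote learning review funnel described above.
# Counts are the approximate figures cited in this post.
identified, screened, experiments, prioritized = 930, 700, 250, 35

share_prioritized = prioritized / identified   # roughly 4 percent
share_set_aside = 1 - share_prioritized        # roughly 96 percent

print(f"Funnel: {identified} identified -> {screened} screened -> "
      f"{experiments} RCTs/QEDs -> {prioritized} prioritized")
print(f"Prioritized for WWC review: {share_prioritized:.0%} of studies located")
print(f"Set aside, at least for now: {share_set_aside:.0%}")
```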

Our best guess for when those reviews will be widely available is now August 15. Why things are taking as long as they are is, as they say, “complicated.” The June 1 date was unlikely from the start, dependent as it was upon a series of best-case situations in times that are anything but. And at least some of the delay is driven by our emphasis on rigor and steps we take to ensure the quality of our work, something we would not short-change in any event.  

Not giving in to my dismay, however, I dug into the 930 studies in our remote learning database to see what I might be able to learn in the meantime. I found that 22 of those studies had already been reviewed by the WWC. “Good news,” I said to myself. “There are lessons to be learned among them, I’m sure.”

And indeed, there was a lesson to be learned—just not the one I was looking for. After reviewing the lot, I found virtually no actionable evidence. That’s not entirely fair. One of the 22 records was a duplicate, two were not relevant, two were not locatable, and one was behind a paywall that even my federal government IP address couldn’t get past. Because fifteen of the sixteen remaining studies examined name-brand products, there was one action I could take in most cases: buy the product the researcher had evaluated.

I went through each article, this time making an imperfect determination about whether the researcher described the intervention’s core components and, if so, arrayed them in a logic model. My codes for core components included one “yes,” two “bordering on yes,” six “yes-ish,” one “not really,” and six “no.” Not surprisingly, logic models were uncommon, with two studies earning a “yes” and two more tallied as “yes-ish.” (You can see now why I am not a qualitative researcher.)

In case there’s any doubt, herein lies my disappointment: if an educator had turned to one of these articles to eke out a tip or two about “what works” in remote learning, they would have been, on average, out of luck. If they did luck out and find an article that described the core components of the tested intervention, there was a vanishingly small chance there would be information on how to put those components together to form a whole. As for surfacing generalizable principles for educators and families across multiple studies? Not without some serious effort, I can assure you.

I have never been more convinced of the importance of core components being well-documented in education research than I am today. As they currently stand, the SEER principles for core components ask:

  • Did the researcher document the core components of an intervention, including its essential practices, structural elements, and the contexts in which it was implemented and tested?
  • Did the researcher offer a clear description of how the core components of an intervention are hypothesized to affect outcomes?
  • Did the researcher's analysis help us understand which components are most important in achieving impact?

More often than not, the singular answer to the questions above is a resounding “no.” That is to the detriment of consumers of research, no doubt. Educators, or even other researchers, cannot turn to the average journal article or research report and divine enough information about what was actually studied to draw lessons for classroom practice. (There are many reasons for this, of course. I welcome your thoughts on the matter.) More importantly, though, it is to the detriment of the supposed beneficiaries of research: our students. We must do better. If our work isn’t ultimately serving them, who is it serving, really?  

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance
Agency Evaluation Officer, U.S. Department of Education

Addressing Persistent Disparities in Education Through IES Research


Spring 2020 has been a season of upheaval for students and educational institutions across the country. Just when the conditions around the COVID-19 pandemic began to improve, the longstanding symptoms of a different wave of distress resurfaced. We are seeing and experiencing the fear, distrust, and confusion that are the result of systemic racism and bigotry. For education stakeholders, both the COVID-19 pandemic and the civil unrest unfolding across the country accentuate the systemic inequities in access, opportunities, resources, and outcomes that continue to exist in education.

IES acknowledges these inequities and is supporting rigorous research that is helping to identify, measure, and address persistent disparities in education.

In January (back when large gatherings were a thing), IES hosted its Annual Principal Investigators (PI) Meeting with the theme of Closing the Gaps for All Learners. The theme underscored IES's objective of supporting research that improves equity in education access and outcomes. Presentations from IES-funded projects focusing on diversity, equity, and inclusion were included throughout the meeting and can be found here. In addition, below are highlights of several IES-funded studies that are exploring, developing, or evaluating programs, practices, and policies that education stakeholders can implement to help reduce bias and inequities in schools.

  • The Men of Color College Achievement (MoCCA) Project—This project addresses the problem of low completion rates for men of color at community colleges through an intervention that provides incoming male students of color with a culturally relevant student success course and adult mentors. In partnership with the Community College of Baltimore County, the team is engaged in program development, qualitative data collections to understand student perspectives, and an evaluation of the success course/mentorship intervention. This project is part of the College Completion Network and posts resources for supporting men of color here.

  • Identifying Discrete and Malleable Indicators of Culturally Responsive Instruction and Discipline—The purpose of this project is to use the culturally responsive practices (CRP) framework from a promising intervention, Double Check, to define and specify discrete indicators of CRPs; confirm and refine teacher and student surveys and classroom direct observation tools to measure these discrete indicators; and develop, refine, and evaluate a theory of change linking these indicators of CRPs with student academic and behavioral outcomes.

  • The Early Learning Network (Supporting Early Learning From Preschool Through Early Elementary School Grades Network)—The purpose of this research network is to examine why many children—especially children from low-income households or other disadvantaged backgrounds—experience academic and social difficulties as they begin elementary school. Network members are identifying factors (such as state and local policies, instructional practices, and parental support) that are associated with early learning and achievement from preschool through the early elementary school grades.
    • At the January 2020 IES PI Meeting, Early Learning Network researchers presented on the achievement gaps for early learners. Watch the video here. Presentations, newsletters, and other resources are available on the Early Learning Network website.

  • Reducing Achievement Gaps at Scale Through a Brief Self-Affirmation Intervention—In this study, researchers will test the effectiveness at scale of a low-cost, self-affirmation mindset intervention on the achievement, behavior, and attitudes of 7th grade students, focusing primarily on Black and Hispanic students. These minority student groups are susceptible to the threat of conforming to or being judged by negative stereotypes about the general underperformance of their racial/ethnic group ("stereotype threat"). A prior evaluation of this intervention has been reviewed by the What Works Clearinghouse and met standards without reservations.

IES seeks to work with education stakeholders at every level (for example, students, parents, educators, researchers, funders, and policy makers) to improve education access, equity, and outcomes for all learners, especially those who have been impacted by systemic bias. Together, we can do more.

This fall, IES will be hosting a technical working group on increasing the participation of researchers and institutions that have been historically underutilized in federal education research activities. If you have suggestions for how IES can better support research to improve equity in education, please contact us: NCER.Commissioner@ed.gov.  


Written by Christina Chhin (Christina.Chhin@ed.gov), National Center for Education Research (NCER).  

This is the fourth in a series of blog posts that stems from the 2020 Annual Principal Investigators Meeting. The theme of the meeting was Closing the Gaps for All Learners and focused on IES’s objective to support research that improves equity in access to education and education outcomes. Other posts in this series include Why I Want to Become an Education Researcher, Diversify Education Sciences? Yes, We Can!, and Closing the Opportunity Gap Through Instructional Alternatives to Exclusionary Discipline.

Bar Chart Races: Changing Demographics in K–12 Public School Enrollment

Bar chart races are a useful tool for visualizing long-term trends. The visuals below, which draw on NCES enrollment data and projections, depict changes in U.S. public elementary and secondary school enrollment by race/ethnicity from 1995 to 2029.


Source: U.S. Department of Education, National Center for Education Statistics, Common Core of Data (CCD), “State Nonfiscal Survey of Public Elementary and Secondary Education,” 1995–96 through 2017–18; and National Elementary and Secondary Enrollment by Race/Ethnicity Projection Model, 1972 through 2029.
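For readers curious about the mechanics, a bar chart race simply re-ranks and redraws the bars for each year of data. Here is a minimal sketch using matplotlib's animation API; the enrollment values below are illustrative placeholders (in millions), not the actual CCD counts or NCES projections behind the visual above.

```python
# Minimal bar chart race sketch using matplotlib's FuncAnimation.
# Enrollment values are illustrative placeholders (millions of students),
# not the actual CCD or NCES projection data shown in the visual above.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

years = [1995, 2005, 2017, 2029]
enrollment = {                      # group -> enrollment by year (millions)
    "White":    [29.0, 27.0, 24.1, 22.4],
    "Hispanic": [6.0, 9.8, 13.6, 14.0],
    "Black":    [7.6, 8.4, 7.8, 7.7],
    "Asian/PI": [1.8, 2.3, 2.8, 3.5],
}

fig, ax = plt.subplots(figsize=(8, 4))

def draw(frame):
    ax.clear()
    # Re-rank groups by enrollment each frame; the re-sorting is the "race."
    ranked = sorted(enrollment.items(), key=lambda item: item[1][frame])
    labels = [group for group, values in ranked]
    sizes = [values[frame] for group, values in ranked]
    ax.barh(labels, sizes)
    ax.set_xlim(0, 32)
    ax.set_xlabel("Public school enrollment (millions)")
    ax.set_title(f"Fall {years[frame]}")

anim = FuncAnimation(fig, draw, frames=len(years), interval=1500, repeat=False)
plt.show()
```

Swap in the real series by race/ethnicity and year, and the same loop produces the kind of animation shown above.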


Total enrollment in public elementary and secondary schools has grown since 1995, but it has not grown across all racial/ethnic groups. As such, racial/ethnic distributions of public school students across the country have shifted.

One major change in public school enrollment has been in the number of Hispanic students enrolled. Enrollment of Hispanic students grew from 6.0 million in 1995 to 13.6 million in fall 2017 (the most recent year of available data). Over that period, Hispanic students went from 13.5 percent to 26.8 percent of public school enrollment. NCES projects that Hispanic enrollment will continue to grow, reaching 14.0 million students and 27.5 percent of public school enrollment by fall 2029.

While the number of Hispanic public school students has grown, the number of White public school students has steadily declined, from 29.0 million in 1995 to 24.1 million in fall 2017. NCES projects that enrollment of White public school students will continue to decline, reaching 22.4 million by 2029. The percentage of public school students who were White was 64.8 percent in 1995, and this percentage dropped below 50 percent in 2014 (to 49.5 percent). NCES projects that in 2029, White students will make up 43.8 percent of public school enrollment.

The percentage of public school students who were Black decreased from 16.8 percent in 1995 to 15.2 percent in 2017 and is projected to remain at 15.2 percent in 2029. The number of Black public school students increased from 7.6 million in 1995 to a peak of 8.4 million in 2005 but is projected to decrease to 7.7 million by 2029. Between fall 2017 and fall 2029, the percentage of public school students who were Asian/Pacific Islander is projected to continue increasing (from 5.6 to 6.9 percent), as is the percentage who were of Two or more races (from 3.9 to 5.8 percent). American Indian/Alaska Native students account for about 1 percent of public elementary and secondary enrollment in all years.

For more information about this topic, see The Condition of Education indicator Racial/Ethnic Enrollment in Public Schools.

By Ke Wang and Rachel Dinkes, AIR

Reducing the Burden to Grantees While Increasing the Public’s Access to IES Funded Research

In 2011 the Institute of Education Sciences (IES) adopted the IES Public Access Policy. This policy requires all IES grantees and contractors to submit their final peer-reviewed manuscripts to ERIC. ERIC then makes the work freely available to the public 12 months after publication. Operationally, this has required all grantees and some contractors to submit their work through ERIC’s Online Submission portal. To date, over 1,400 articles have been submitted as a result of this policy.

As part of an effort to minimize burden for our grantees and contractors, ERIC has negotiated agreements with the publishers of over 600 education journals to display publicly funded articles in ERIC 12 months after publication or sooner. If grantees or contractors publish their work in a participating journal, the journal will submit the full text to ERIC on behalf of the grantee. The grantee will not need to submit their work to the ERIC Online Submission portal. This is the same process currently implemented for work published by IES.

To ensure that their work is included, grantees and contractors are responsible for:

  • Including their grant or contract number(s) in the acknowledgements section of the published article.
  • Confirming that the journal title, publisher, and year match ERIC’s list of participating journals.
  • Informing their publishers, when the manuscript is submitted, that the work is subject to the IES Public Access Policy.

This policy applies to work published after January 1, 2020. Grantees who published work prior to 2020 will still need to submit their work through ERIC’s Online Submission portal. Similarly, grantees publishing in journals not participating in this program will need to submit their work through the Online Submission portal. If an article was accepted by a journal participating in this program, but the journal then moved to a publisher that is not participating, the grantee will have to submit the article through the ERIC Online Submission portal.

ERIC is working to expand the list of journals that agree to display the full text of grantee articles. ERIC will update the list of participating journals multiple times a year, as new publishers sign agreements to participate in this program or journals move to a non-participating publisher. Publishers interested in participating should email ERICRequests@ed.gov for more information.

Addressing Mental Health Needs in Schools PreK to 12: An Update

As the month of May draws to a close in this unprecedented time of COVID-19, recognizing May as National Mental Health Awareness Month has taken on new significance. Organizations such as the National Association of School Psychologists (NASP) have long advocated for school-based mental health services to address the lack of access to mental health treatment in the United States for children and youth. In a 2016 blog, we provided a snapshot of the PreK to 12 school-based mental health research that the National Center for Education Research (NCER) had supported up to that point. With schools closed and uncertainty about when they will open, we are keeping an eye on these and more recent projects to see how IES-funded researchers and their school partners have addressed or are addressing mental health needs.

Preschool

  • Jason Downer (University of Virginia) developed the Learning to Objectively Observe Kids (LOOK) protocol to help prekindergarten teachers identify and understand children’s engagement in preschool and choose appropriate techniques to support children’s self-regulation skills.

Elementary School

  • Golda Ginsburg (University of Connecticut) and Kelly Drake (Johns Hopkins University) developed the CALM (Child Anxiety Learning Modules) protocol for elementary school nurses to work with children who have excessive anxiety.
  • Desiree Murray (University of North Carolina, Chapel Hill) is testing the Incredible Years Dina Dinosaur Treatment Program (IY-child) for helping early elementary school students with social-emotional and behavioral difficulties. This study is nearly complete, and findings will be available soon.
  • Gregory Fabiano (SUNY-Buffalo) adapted the Coaching Our Acting Out Children: Heightening Essential Skills (COACHES) program for implementation in schools. COACHES is a clinic-based program that helps fathers of children with or at risk for attention-deficit/hyperactivity disorder (ADHD) become more involved and engaged in their child's school performance.
  • Aaron Thompson (University of Missouri) is testing the Self-Monitoring Training and Regulation Strategy (STARS) intervention to see if it can improve behavior, social emotional learning skills, and academic performance for fifth grade students who engage in disruptive or otherwise challenging classroom behaviors. The pilot study of promise is currently in progress.
  • Karen Bierman (Pennsylvania State University) is testing whether an intensive, individualized social skills training program, the Friendship Connections Program (FCP), can remediate the serious and chronic peer difficulties that 10–15 percent of elementary school students experience. Most of these students have or are at risk for emotional or behavioral disorders and exhibit social skill deficits (for example, poor communication skills, inability to resolve conflict) that alienate peers. This study is almost complete, and findings should be available soon.
  • Linda Pfiffner (UC San Francisco) is completing development of a web-based professional development program for school mental health providers to gain the skills needed to implement evidence-based practices (EBPs) for student attention and behavior problems.

Middle School

  • Joshua Langberg (Virginia Commonwealth University) refined the HOPS (Homework, Organization, and Planning Skills) program for middle school counselors and psychologists to support students with ADHD who need help with organization and time management. Dr. Langberg recently completed an efficacy trial of HOPS. In 2019, an independent research team at Children’s Hospital of Philadelphia received a grant to test the effectiveness of HOPS.
  • William Pelham (Florida International University) and colleagues at SUNY Buffalo are testing the efficacy of adaptive, evidence-based classroom interventions (such as Tier 1 and Tier 2 interventions delivered through a Response to Intervention framework) for children with ADHD in a Sequential Multiple Assignment Randomized Trial (SMART) design framework.
  • Thomas Power (Children’s Hospital of Philadelphia) is testing the efficacy of a school-based organizational skills training program (OST-S) for students in 3rd through 5th grade with deficits in organization, time management, and planning (OTMP), key executive function skills that support success in school.
  • Desiree Murray (UNC Chapel Hill) is completing the development of a self-regulation intervention for middle school students. The intervention will adapt and integrate strategies from existing evidence-based practices that intentionally target self-regulatory processes that develop during early adolescence.
  • Catherine Bradshaw (University of Virginia) is adapting the Early Adolescent Coping Power (EACP) to the rural school context. The Rural-EACP will address the cultural and contextual challenges of providing appropriate supports to help youth with aggressive behavior challenges in rural settings.   

High School

Policy

  • Sandra Chafouleas (University of Connecticut) identified current policies and national practice related to school-based behavioral assessment to determine whether current practice follows recommended best practice, and to develop policy recommendations for behavioral screening in schools. 

Written by Emily Doolittle (Emily.Doolittle@ed.gov), Team Lead for Social and Behavioral Research at IES, National Center for Education Research