NCEE Blog

National Center for Education Evaluation and Regional Assistance

“The How” of “What Works”: The Importance of Core Components in Education Research

Twenty-some odd years ago as a college junior, I screamed in horror watching a friend open a running dishwasher. She wanted to slip in a lightly used fork. I jumped to stop her, yelling “don’t open it, can’t you tell it’s full of water?” She paused briefly, turning to look at me with a “have you lost your mind” grimace, and yanked open the door.

Much to my surprise, nothing happened. A puff of steam. An errant drip, perhaps? But no cascade of soapy water. She slid the fork into the basket, closed the door, and hit a button. The machine started back up with a gurgle, and the kitchen floor was none the wetter.

Until that point in my life, I had no idea how a dishwasher worked. I had been around a dishwasher, but the house I lived in growing up didn’t have one. To me, washing the dishes meant filling the sink with soapy water, something akin to a washer in a laundry. I assumed dishwashers worked on the same principle, using gallons of water to slosh the dishes clean. Who knew?

Lest you think me completely inept, a counterpoint. My first car was a 1979 Ford Mustang. And I quickly learned how that very used car worked when the Mustang’s automatic choke conked out. As it happens, although a choke is necessary to start and run a carbureted gasoline engine, that it be “automatic” is not. My father Rube Goldberg-ed up a manual choke in about 15 minutes rather than paying to have it fixed.

My 14-year-old self learned how to tweak that choke “just so” so that I could get to school each morning. First, pull the choke all the way out to start the car, adjusting the fuel-air mixture ever so slightly. Then gingerly slide it back in, micron by micron, as the car warms up and you hit the road. A car doesn’t actually run on liquid gasoline, you see. Cars run on fuel vapor. And before the advent of fuel injection, fuel vapor was courtesy of your carburetor and its choke. Not a soul alive who didn’t know how a manual choke worked could have started that car.

You would be forgiven if, by now, you were wondering where I am going with all of this and how it relates to the evaluation of education interventions. To that end, I offer three thoughts for your consideration:

  1. Knowing that something works is different from knowing how something works.

  2. Knowing how something works is necessary to put that something to its best use.

  3. Most education research ignores the how of interventions, dramatically diminishing the usefulness of research to practitioners.

My first argument—that there is a distinction between knowing what works and how something works—is straightforward. Since it began, the What Works Clearinghouse™ has focused on identifying “what works” for educators and other stakeholders, mounting a full-court press on behalf of internal validity. Taken together, Version 4.1 of the WWC Standards and Procedures Handbooks total some 192 pages. As a result, we have substantially greater confidence today than we did a decade ago that when an intervention developer or researcher reports that something worked for a particular group of students, we know that it actually did.

In contrast, WWC standards do not address, and as far as I can tell never have addressed, the how of an intervention. By “the how” of an intervention, I’m referring to the parts of it that must be working, sometimes “just so,” if its efficacy claims are to be realized. For a dishwasher, it is something like: “a motor turns a wash arm, which sprays dishes with soapy water.” (It is not, as I had thought, “the dishwasher fills with soapy water that washes the mac and cheese down the drain.”) In the case of my Mustang, it was: “the choke controls the amount of air that mixes with fuel in the carburetor before the mixture heads to the cylinders.”

If you have been following the evolution of IES’ Standards for Excellence in Education Research, or SEER, and its principles, you recognize “the how” as core components. Most interventions consist of multiple core components that are—and perhaps must be—arrayed in a certain manner if the whole of the thing is to “work.” Depicted visually, core components and their relationships to one another and to the outcomes they are meant to affect form something between a logic model (often too simplistic) and a theory of change (often too complex).

(A word of caution: knowing how something works is also different from knowing why something works. I have been known to ask at work about “what’s in the arrows” that connect various boxes in a logic model. The why lives in those arrows. In the social sciences, those arrows are where theory resides.)

My second argument is that knowing how something works matters, at least if you want to use it as effectively as possible. This isn’t quite as axiomatic as the distinction between “it works” and “how it works,” I realize.

This morning, when starting my car, I didn’t have to think about the complex series of events leading up to me pulling out of the driveway. Key turn, foot down, car go. But when the key turns and the car doesn’t go, then knowing something about how the parts of a car are meant to work together is very, very helpful. Conveniently, most things in our lives, if they work at all, simply do.  

Inconveniently, we don’t have that same confidence when it comes to things in education. There are currently 10,677 individual studies in the What Works Clearinghouse (WWC) database. Of those, only about 11 percent meet the WWC’s internal validity standards. Among them, only 445 have at least one statistically significant positive finding. Because the WWC doesn’t consider results from studies that don’t have strong internal validity, it isn’t quite as simple as saying “only about 4 percent of things work in education.” Instead, we’re left with “89 percent of things aren’t tested rigorously enough to have confidence about whether they work, and when tested rigorously, only about 38 percent do.” Between the “file drawer” problem that plagues research generally and our own review of the results from IES efficacy trials, we have reason to believe the true efficacy rate of “what works” in education is much lower.
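
For readers who want to see the arithmetic behind those figures, here is a minimal sketch (mine, not an official WWC calculation) using only the rounded counts reported above; the intermediate count of studies meeting standards is derived from the 11 percent figure and is therefore approximate.

```python
# A quick check of the percentages reported above, using the rounded counts from this post.
total_studies = 10_677          # studies in the WWC database at the time of writing
share_meeting_standards = 0.11  # ~11% meet WWC internal validity standards
positive_findings = 445         # studies with at least one statistically significant positive finding

meets_standards = total_studies * share_meeting_standards  # roughly 1,175 studies

print(f"Positive findings as a share of all studies:      {positive_findings / total_studies:.0%}")    # ~4%
print(f"Positive findings as a share of rigorous studies: {positive_findings / meets_standards:.0%}")  # ~38%
print(f"Share not tested rigorously enough:               {1 - share_meeting_standards:.0%}")           # 89%
```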

Many things cause an intervention to fail. Some interventions are simply wrong-headed. Some interventions do work, but for only some students. And other interventions would work, if only they were implemented well.

Knowing an intervention’s core components and the relationships among them would, I submit, be helpful in at least that third case. If you don’t know that a dishwasher’s wash arm spins, the large skillet on the bottom rack with its handle jutting to the sky might not strike you as the proximate cause of dirty glasses on the top rack. If you don’t know that a core component of multi-tiered systems of support is progress monitoring, you might not connect the dots between a decision to cut back on periodic student assessments and suboptimal student outcomes.

My third and final argument, that most education research ignores the how of interventions, is based in at least some empiricism. The argument itself is a bit of a journey. One that starts with a caveat, wends its way to dismay, and ends in disappointment.

Here’s the caveat: My take on the relative lack of how in most education research comes from my recent experience trying to surface “what works” in remote learning. This specific segment of education research may well be an outlier. But I somehow doubt it.

Why dismay? Well, as regular readers might recall, in late March I announced plans to support a rapid evidence synthesis on effective practices in remote learning. It seemed simple enough: crowd-source research relevant to the task, conduct WWC reviews of the highest-quality submissions, and then make those reviews available to meta-analysts and other researchers to surface generalizable principles that could be useful to educators and families.

My stated goal had been to release study reviews on June 1. That date has passed, and the focus of this post is not “New WWC Reviews of Remote Learning Released.” As such, you may have gathered that something about my plan has gone awry. You would be right.

Simply, things are taking longer than hoped. It is not for lack of effort. Our teams identified more than 930 studies, screened more than 700 of those studies, and surfaced 250 randomized trials or quasi-experiments. We have prioritized 35 of this last group for review. (For those of you who are thinking some version of “wow, it seems like it might be a waste to not look at 96 percent of the studies that were originally located,” I have some thoughts about that. We’ll have to save that discussion, though, for another blog.)
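
(For the curious, the “96 percent” in that parenthetical follows directly from the rounded counts above; here is a quick back-of-the-envelope check.)

```python
# Back-of-the-envelope for the screening funnel described above, using the rounded counts from this post.
identified = 930    # studies identified
prioritized = 35    # studies prioritized for full WWC review

print(f"Prioritized for review: {prioritized / identified:.1%} of studies identified")  # ~3.8%
print(f"Not (yet) reviewed:     {1 - prioritized / identified:.1%}")                    # ~96%
```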

Our best guess for when those reviews will be widely available is now August 15. Why things are taking as long as they are is, as they say, “complicated.” The June 1 date was unlikely from the start, dependent as it was upon a series of best-case situations in times that are anything but. And at least some of the delay is driven by our emphasis on rigor and steps we take to ensure the quality of our work, something we would not short-change in any event.  

Not giving in to my dismay, however, I dug into the 930 studies in our remote learning database to see what I might be able to learn in the meantime. I found that 22 of those studies had already been reviewed by the WWC. “Good news,” I said to myself. “There are lessons to be learned among them, I’m sure.”

And indeed, there was a lesson to be learned—just not the one I was looking for. After reviewing the lot, there was virtually no actionable evidence to be found. That’s not entirely fair. One of the 22 records was a duplicate, two were not relevant, two were not locatable, and one was behind a paywall that even my federal government IP address couldn’t get past. Because fifteen of the sixteen remaining studies reviewed name-brand products, there was one action I could take in most cases: buy the product the researcher had evaluated.

I went through each article, this time making an imperfect determination about whether the researcher described the intervention’s core components and, if so, arrayed them in a logic model. My codes for core components included one “yes,” two “bordering on yes,” six “yes-ish,” one “not really,” and six “no.” Not surprisingly, logic models were uncommon, with two studies earning a “yes” and two more tallied as “yes-ish.” (You can see now why I am not a qualitative researcher.)

In case there’s any doubt, herein lies my disappointment: if an educator had turned to one of these articles to eke out a tip or two about “what works” in remote learning, they would have been, on average, out of luck. If they did luck out and find an article that described the core components of the tested intervention, there was a vanishingly small chance there would be information on how to put those components together to form a whole. As for surfacing generalizable principles for educators and families across multiple studies? Not without some serious effort, I can assure you.

I have never been more convinced of the importance of core components being well-documented in education research than I am today. As they currently stand, the SEER principles for core components ask:

  • Did the researcher document the core components of an intervention, including its essential practices, structural elements, and the contexts in which it was implemented and tested?
  • Did the researcher offer a clear description of how the core components of an intervention are hypothesized to affect outcomes?
  • Did the researcher's analysis help us understand which components are most important in achieving impact?

More often than not, the singular answer to the questions above is a resounding “no.” That is to the detriment of consumers of research, no doubt. Educators, or even other researchers, cannot turn to the average journal article or research report and divine enough information about what was actually studied to draw lessons for classroom practice. (There are many reasons for this, of course. I welcome your thoughts on the matter.) More importantly, though, it is to the detriment of the supposed beneficiaries of research: our students. We must do better. If our work isn’t ultimately serving them, who is it serving, really?  

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance
Agency Evaluation Officer, U.S. Department of Education

An Evidence-Based Response to COVID-19: What We’re Learning

Several weeks ago, I announced the What Works Clearinghouse’s™ first ever rapid evidence synthesis project: a quick look at “what works” in distance education. I asked families and educators to send us their questions about how to adapt to learning at home, from early childhood to adult basic education. I posed a different challenge to researchers and technologists, asking them to nominate high-quality studies of distance and on-line learning that could begin to answer those questions.

Between public nominations and our own databases, we’ve now surfaced more than 900 studies. I was happy to see that the full text of about 300 studies was already available in ERIC, our own bibliographic database—and that many submitters whose work isn’t yet found there pledged to submit to ERIC, making sure it will be freely available to the public in the future. I was a little less happy to learn that only a few dozen of those 900 had already been reviewed by the WWC. This could mean either that (1) there is not a lot of rigorous research on distance learning, or (2) rigorous research exists, but we are systematically missing it. The truth is probably “both-and,” not “either-or.” Rigorous research exists, but more is needed … and the WWC needs to be more planful in capturing it.

The next step for the WWC team is to screen nominated studies to see which are likely to meet our evidence standards. As I’ve said elsewhere, we’ll be lucky if a small fraction—maybe 50—do. Full WWC reviews of the most actionable studies among them will be posted to the WWC website by June 1st, and at that time it is my hope that meta-analysts and technical assistance providers from across the country pitch in to create the products teachers and families desperately need. (Are you a researcher or content producer who wants to join that effort? If so, email me at matthew.soldner@ed.gov.)

Whether this approach actually works is an open question. Will it reduce the time it takes to create products that are both useful and used? All told, our time on the effort will amount to about two months. I had begun this process hoping for something even quicker. My early thinking was that IES would only put out a call for studies, leaving study reviews and product development to individual research teams. My team was convinced, however, that the value of a full WWC review for studies outweighed the potential benefit of quicker products. They were, of course, correct: IES’ comparative advantage stems from our commitment to quality and rigor.

I am willing to stipulate that these are unusual times: the WWC’s evidence synthesis infrastructure hasn’t typically needed to turn on a dime, and I hope that continues to be the case. That said, there may be lessons to be learned from this moment, about both how the WWC does its own work and how it supports the work of the field. To that end, I’d offer a few thoughts.

The WWC could support partners in research and content creation who can act nimbly, maintaining pressure for rigorous work.

Educators have questions that span every facet of their work, every subject, and every age band. And there’s a lot of education research out there, from complex, multi-site RCTs to small, qualitative case studies. The WWC doesn’t have the capacity to either answer every question that deserves answering or synthesize every study we’re interested in synthesizing. (Not to mention the many types of studies we don’t have good methods for synthesizing today.)

This suggests to me there is a potential market for researchers and technical assistance providers who can quickly identify high-quality evidence, accurately synthesize it, and create educator-facing materials that can make a difference in classroom practice. Some folks have begun to fill the gap, including both familiar faces and not-so-familiar ones. Opportunities for collaboration abound, and partners like these can be sources of inspiration and innovation for one another and for the WWC. Where there are gaps in our understanding of how to do this work well that can be filled through systematic inquiry, IES can offer financial support via our Statistical and Research Methodology in Education grant program.   

The WWC could consider adding new products to its mix, including rigorous rapid evidence syntheses.

Anyone who has visited us at whatworks.ed.gov recently knows the WWC offers two types of syntheses: Intervention Reports and Practice Guides. Neither are meant to be quick-turnaround products.

As their name implies, Intervention Reports are systematic reviews of a single, typically brand-name, intervention. They are fairly short, no longer than 15 pages. And they don’t take too long to produce, since they’re focused on a single product. Despite having done nearly 600 of them, we often hear we haven’t reviewed the specific product a stakeholder reports needing information on. Similarly, we often hear from stakeholders that they aren’t in a position to buy a product. Instead, they’re looking for the “secret sauce” they could use in their state, district, building, or classroom.

Practice Guides are our effort to identify generalizable practices across programs and products that can make a difference in student outcomes. Educators download our most popular Guides tens of thousands of times a year, and they are easily the best thing we create. But it is fair to say they are labors of love. Each Guide is the product of the hard work of researchers, practitioners, and other subject matter experts over about 18 months.  

Something seems to be missing from our product mix. What could the WWC produce that is as useful as a Practice Guide but as lean as an Intervention Report? 

Our very wise colleagues at the UK’s Education Endowment Foundation have a model that is potentially promising: Rapid Evidence Assessments based on pre-existing meta-analyses. I am particularly excited about their work because—despite not coordinating our efforts—they are also focusing on Distance Learning and released a rapid assessment on the topic on April 22nd. There are plusses and minuses to their approach, and they do not share our requirement for rigorous peer review. But there is certainly something to be learned from how they do their work.

The WWC could expand its “what works” remit to include “what’s innovative,” adding forward-looking horizon scanning to here-and-now (and sometimes yesterday) meta-analysis.

Meta-analyses play a critical role in efforts to bring evidence to persistent problems of practice, helping to sort through multiple, sometimes conflicting studies to yield a robust estimate of whether an intervention works. The inputs to any meta-analysis are what is already known—or at least what has already been published—about programs, practices, and policies. They are therefore backward-looking by design. Given how slowly most things change in education, that is typically fine.

But what help is meta-analysis when a problem is novel, or when the best solution isn’t a well-studied intervention but instead a new innovation? In these cases, practitioners are craving evidence before it has been synthesized and, sometimes, before it has even been generated. Present experience demonstrates that any of us can be made to grasp for anything that even smacks of evidence, if the circumstances are precarious enough. The challenge to an organization like the WWC, which relies on traditional conceptions of rigorous evidence of efficacy and effectiveness, is a serious one.

How might the WWC become aware of potentially promising solutions to today’s problems before much if anything is known about their efficacy, and how might we surface those problems that are nascent today but could explode across the landscape tomorrow? 

One model I’m intensely interested in is the Health Care Horizon Scanning System at PCORI. In their words, it “provides a systematic process to identify healthcare interventions that have a high potential to alter the standard of care.” Adapted to the WWC use case, this sort of system would alert us to novel solutions: practices that merited monitoring and might cause us to build and/or share early evidence broadly to relevant stakeholders. This same approach could surface innovations designed to solve novel problems that weren’t already the subject of multiple research efforts and well-represented in the literature. We’d be ahead of—or at least tracking alongside—the curve, not behind.  

Wrapping Up

The WWC’s current Rapid Evidence Synthesis focused on distance learning is an experiment of sorts. It represents a new way of interacting with our key stakeholders, a new way to gather evidence, and a new way to see our reviews synthesized into products that can improve practice. To the extent that it has pushed us to try new models and has identified hundreds of “new” (or “new to us”) studies, it is already a success. Of course, we still hope for more.

As I hope you can see from this blog, it has also spurred us to consider other ways we can further strengthen an already strong program. I welcome your thoughts and feedback – just email me at matthew.soldner@ed.gov.

Seeking Your Help in Learning More About What Works in Distance Education: A Rapid Evidence Synthesis

Note: NCEE will continue to accept study nominations after the April 3rd deadline, adding them on a regular basis to our growing bibliography found here. Studies received before the deadline will be considered for the June 1 data release. NCEE will use studies received after the deadline to inform our prioritization of studies for review. Awareness of these studies will also allow NCEE to consider them for future activities related to distance and/or online education and remote learning.

In the midst of the coronavirus crisis, we know that families and educators are scrambling for high-quality information about what works in distance education—a term we use here to include both online learning as well as opportunities for students to use technology or other resources to learn while not physically at school.

Leaders in the education technology ecosystem have already begun to respond to the COVID-19 outbreak by creating websites like techforlearners.org, which as of today lists more than 400 online learning products, resources, and services. But too little information is widely available about what works in distance education to improve student outcomes.

If ever there is a time for citizen science, it is now. Starting today, the What Works Clearinghouse™ (WWC) at the U.S. Department of Education’s Institute of Education Sciences is announcing its first-ever cooperative rapid evidence synthesis.

Here is what we have in mind:

  • Between now and April 3rd, we are asking families and educators to share with us questions they have about effective distance education practices and products. We are particularly interested in questions about practices that seem especially relevant today, in which educators are called to adapt their instruction to online formats or send learning materials home to students, and families, not all of whom have internet access, seek to combine available technology with other resources to create a coherent learning experience for their students. Early education, elementary, postsecondary, and adult basic education practices and products are welcome. Submit all nominations to NCEE.Feedback@ed.gov.
  • During that same time, we are asking that members of the public, including researchers and technologists, nominate any rigorous research they are aware of or have conducted that evaluates the effectiveness of specific distance education practices or products on student outcomes. As above, early education, elementary, postsecondary, and adult basic education practices and products are welcome.
    • Submit all nominations to NCEE.Feedback@ed.gov. Nominations should include links to publicly available versions of studies wherever possible.
    • Study authors are strongly encouraged to nominate studies as described above and simultaneously submit them to ED’s online repository of education research, ERIC. Learn more about the ERIC submission process here.
    • We will post a link to a list of studies on this page and update it on a regular basis.
       
  • By June 1, certified WWC reviewers will have prioritized and screened as many nominated studies as resources allow. Based on the responses received from families, educators, researchers, and technologists, we may narrow the focus of our review; however, nominations will be posted to our website, even those we do not review. Reviews will be entered in the WWC’s Review of Individual Studies Database, which can be downloaded as a flat file.
     
  • After June 1, individual meta-analysts, research teams, or others can download screened studies from the WWC and begin their meta-analytic work. As researchers complete their syntheses, they should submit them through the ERIC online submission system and alert IES. Although we cannot review each analysis or endorse their findings, we will do our best to announce each new review via social media—amplifying your work to educators, families, and other interested stakeholders. Let me know at NCEE.Feedback@ed.gov if this part of the work is of interest to you or your colleagues.

Will you help, joining the WWC’s effort to generate high-quality information about what works in distance education? If so, submit your study today, let me know you or your team are interested in lending your meta-analytic skills to the effort, or just provide feedback on how to make this work more effectively. You can reach me directly at matthew.soldner@ed.gov.

Matthew Soldner

Commissioner, National Center for Education Evaluation and Regional Assistance
Agency Evaluation Officer, U.S. Department of Education

The Role of RELs in Making WWC Practice Guides Actionable for Educators

Earlier this year, I wrote a short blog about how I envisioned the Regional Educational Laboratories (REL) Program, the What Works Clearinghouse™ (WWC), and the Comprehensive Center Program could work together to take discovery to scale. In it, I promised I would follow up with more thoughts on a specific—and critically important—example: making WWC Practice Guides actionable for educators. I do so below. At the end of this blog, I pose a few questions on which I welcome comments.

The challenge. The most important resources the WWC produces are its Practice Guides. Practice Guides evaluate the research on a given topic—say, teaching fractions in elementary and middle school—and boil study findings down to a handful of evidence-based practices for educators. Each practice is given a rating to indicate the WWC’s confidence in the underlying evidence, along with tips for how practices can be implemented in the classroom. In many ways, Practice Guides are IES’s most specific and definitive statements about what works to improve education practice and promote student achievement.

Despite their importance, the amount of effort IES has intentionally dedicated to producing high-quality resources that support educators in implementing Practice Guide recommendations has been uneven. (By most measures, it has been on the decline.) Why? Although we have confidence that the materials we have already produced are high-quality, we cannot prove it. Rigor is part of our DNA, and the absence of systematic efficacy tests demonstrating tools’ contribution to improved teacher practice has made us hesitant to dramatically expand IES-branded resources.

To their credit, several organizations have stepped in to address the “last mile problem” between Practice Guides and classroom practice. Some, like RELs, are IES partners. As a result, we have seen a small number of Practice Guides turned into professional learning community guides, massive open online courses, and other teacher-facing resources. Despite these efforts, similar resources have not been developed for the overwhelming majority of Practice Guides. This means many of our Guides and the dozens of recommendations for evidence-based practice they contain are languishing underused on IES’s virtual bookshelf.

An idea. IES should “back” the systematic transformation of Practice Guide recommendations from words on a page to high-quality materials that support teachers’ use of evidence-based practices in their classrooms. And because we should demonstrate our own practice works, those materials should be tested for efficacy.

From my perspective, RELs are well-suited to this task. This work unambiguously aligns with RELs’ purpose, which is to improve student achievement using scientifically-valid research. It also leverages RELs’ unique value proposition among federal technical assistance providers: the capacity to conduct rigorous research and development activities in partnership with state and local educators. If RELs took on a greater role in supporting Practice Guides in the next REL cycle—which runs from 2022 until 2027—what might it look like in practice?

One model involves RELs collaborating with state and/or district partners to design, pilot, and test a coherent set of resources (a “toolkit”) that help educators bring Practice Guide recommendations to life in the classroom. Potential products might include rubrics to audit current policy or practice, videos of high-quality instructional practice, sample classroom materials, or professional learning community facilitation guides, each linked to one or more Practice Guide recommendations.

Long-time followers of the WWC may recognize the design aspect of this work as similar to the defunct Doing What Works Program. The difference? New resources would not only be developed in collaboration with educators, they must be piloted and tested with them as well. It’s simple, really: if we expect educators to use evidence-based practices in the classroom, we need evidence-based tools to help teachers succeed when implementing them.  

Once vetted, materials must get into the hands of educators who need them. It’s here where the value of the REL-Comprehensive Center partnership becomes clear. With a mission of supporting each state education agency in its school improvement efforts, Regional Comprehensive Centers are in the ideal position to bring resources and implementation supports to state and local education leaders that meet their unique needs. Tools that are developed, piloted, and refined by a REL and educators in a single state can then be disseminated by the national network of Comprehensive Centers to meet other states’ needs.

Extensions. It isn’t hard to imagine other activities that the WWC, RELs, and Comprehensive Centers might take on to maximize this model’s potential effectiveness. Most hinge on building effective feedback loops.

Promoting continuous improvement of Practice Guide resources is an obvious example. RELs could and should be in the business of following Comprehensive Centers as they work with states and districts to implement REL-developed Practice Guide supports, looking for ways to maximize their effectiveness. Similarly, Comprehensive Centers and RELs should be regularly communicating with one another about needs-sensing, identifying areas where support for evidence-based practice is lacking and determining which partners to involve in the solution. When there is a growing body of evidence to support educator best practice, the WWC is in the best position to take the lead and develop a new Practice Guide. When that body of evidence does not exist yet—or when even the practices themselves are underdeveloped—the RELs and other parts of IES, such as the National Centers for Education and Special Education Research, should step in.  

Questions. When the WWC releases a new Practice Guide, its work may be done—at least temporarily. The work of its partners to support take-up of a Guide’s recommendations will, however, have just begun. I’d appreciate your thoughts on how to best accomplish that transition, and offer up the following additional questions for your consideration:

  1. Are we thinking about the problem correctly, and in a helpful way? Are there elements of the problem that should be redefined, and would that lead us to different solutions?

  2. What parts of the problem does this proposed solution address well, and where are its shortcomings? Are there other solutions—even solutions that don’t seem to fit squarely within today’s model of the REL Program—that might be more effective?

  3. If we proceed under a model like the one described above:

     a. What sort of REL partnership models would be most effective in supporting the conceptualization, design, piloting, and testing of teacher-facing “toolkits” aligned to WWC Practice Guides?

     b. What research and evaluation activities—and which outcome measures—should be incorporated into this activity to give IES confidence that the resulting “toolkits” are likely to be associated with changed teacher practice and improved student outcomes?

     c. How does the 5-year limit on REL contracts affect the feasibility of this idea, including its scope and cost? What could be accomplished in 5 years, and what might take longer to see to completion?

     d. How could RELs leverage existing ED-sponsored content, such as that created by Doing What Works, in service of this new effort?

If you have thoughts on these questions or other feedback you would like to share, please e-mail me. I can be reached directly at matthew.soldner@ed.gov. Thanks in advance for the consideration!

by Matthew Soldner, NCEE Commissioner 

In Nebraska, a focus on evidence-based reading instruction

Researchers have learned a lot about reading over the last few decades, but these insights don’t always make their way into elementary school classrooms. In Nebraska, a fresh effort to provide teachers with practical resources on reading instruction could help bridge the research-practice divide.

At the start of the 2019-2020 school year, the Nebraska Department of Education (NDE) implemented NebraskaREADS to help “serve the needs of students, educators, and parents along the journey to successful reading.” As part of a broader effort guided by the Nebraska Reading Improvement Act, the initiative emphasizes the importance of high-quality reading instruction and targeted, individualized support for struggling readers.

As part of NebraskaREADS, NDE is developing an online resource inventory of tools and information to support high-quality literacy instruction for all Nebraska students. After NDE reached out to REL Central for support, experts from both agencies identified the What Works Clearinghouse (WWC) as a prime source of evidence-based information on instructional practices and policies, and they partnered to develop a set of instructional strategies summary documents based on recommendations in eight WWC practice guides.

Each WWC practice guide presents recommendations for educators based on reviews of research by a panel of nationally recognized experts on a particular topic or challenge. NDE’s instructional strategies summary documents align the evidence-based strategies in the practice guides more directly with NebraskaREADS.

REL Central and NDE partners used the WWC practice guides to develop the 35 instructional strategies summary documents, each of which condenses one recommendation from a practice guide. For each recommendation, the instructional strategies summary document provides the associated NebraskaREADS literacy focus, implementation instructions, appropriate grade levels, potential roadblocks and ways to address them, and the strength of supporting evidence.

NDE launched the instructional strategies summary documents in spring 2019, and already educators have found them to be helpful resources. Dee Hoge, executive director of Bennington Public Schools, described the guides as “checkpoints for strong instruction” and noted her district’s plan to use them to adapt materials and provide professional development. Marissa Payzant, English language arts education specialist at NDE and a lead partner in this collaboration, shared that the “districts are excited to incorporate the strategy summaries in a variety of professional learning and other initiatives. They make the information in the practice guides much more accessible, and all the better it’s from a trustworthy source.”

Building on the positive reception and benefits of the instructional strategies summary documents, NDE and REL Central plan to expand the resource inventory into other content areas. Currently, they are working together to develop instructional strategy summaries using the WWC practice guides for math instruction.

by Douglas Van Dine, Regional Educational Laboratory Central

Taking Discovery to Scale

Along with my NCEE colleagues, I was excited to read the recent Notice Inviting Applications for the next cycle of Comprehensive Centers, administered by the Department’s Office of Elementary and Secondary Education.

As you can see in the notice, Regional Comprehensive Centers will “provide high-quality intensive capacity-building services to State clients and recipients to identify, implement, and sustain effective evidence-based programs, practices, and interventions that support improved educator and student outcomes,” with a special emphasis on benefitting disadvantaged students, students from low-income families, and rural populations.

With this focus on supporting implementation, Regional Comprehensive Centers (RCCs) can amplify the work of NCEE’s Regional Educational Laboratories (RELs) and What Works Clearinghouse (WWC). Learning from states, districts, and schools to understand their unique needs, and then being able to support high-quality implementation of evidence-based practices that align with those needs, has the potential to dramatically accelerate the process of improving outcomes for students.

RELs and the WWC already collaborate with today’s Comprehensive Centers, of course. But it’s easy to see how stronger and more intentional relationships between them could increase each program’s impact.

True to its name, the REL program has worked with educators to design and evaluate innovative practices – or identify, implement, and refine existing ones – to meet regional and local needs for more than 50 years. And since its inception in 2002, the WWC has systematically identified and synthesized high-quality evidence about the effectiveness of education programs, policies, and practices so that educators and other instructional leaders can put that information to use improving outcomes for students. But with more than 3.6 million teachers spread across more than 132,000 public and private schools nation-wide, making sure discoveries from education science are implemented at scale and with fidelity is no small feat. RCCs are welcome partners in that work.

Figure: How RELs, the What Works Clearinghouse, and Regional Comprehensive Centers could most effectively collaborate across a continuum from discovery to scale.

RELs, the WWC, and Comprehensive Centers can play critical, complementary roles in taking discovery to scale (see Figure). With their analysis, design, and evaluation expertise, RELs – in partnership with states and districts, postsecondary institutions, and other stakeholders – can begin the process by designing and rigorously evaluating best practices that meet local or regional needs. (Or, as I will discuss in future messages, by developing and rigorously testing materials that support adoption of evidence-based practices.) The WWC follows, vetting causal impact studies, synthesizing their findings to better understand the strength of evidence that supports a practice, and identifying its likely impact. Partners in the Comprehensive Centers can then “pick up” those WWC-vetted practices, aligning them to the needs of state and other clients and supporting and sustaining implementation at scale. Finally, lessons learned from RCCs’ implementation efforts about what worked – and what didn’t – can be fed back to RELs, refining the practice and fueling the next cycle of discovery.

Those that follow the REL-WWC-RCC process know that what I’ve just described isn’t quite how these programs operate today. Sometimes, out of necessity, roles are more “fluid” and efforts are somewhat less well-aligned. The approach of “taking discovery to scale” depicted above provides one way of thinking about how each program can play a unique, but interdependent, role with the other two.

I have every confidence this is possible. After all, the North star of each program is the same: improving outcomes for students. And that means we have a unique opportunity. One we’d be remiss not to seize.

 

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance
Institute of Education Sciences
U.S. Department of Education

 

As always, your feedback is welcome. You can email the Commissioner at matthew.soldner@ed.gov.

 

 

Leading experts provide evidence-based recommendations on using technology to support postsecondary student learning

By Michael Frye and Sarah Costelloe. Both are part of the Abt Associates team working on the What Works Clearinghouse.

Technology is part of almost every aspect of college life. Colleges use technology to improve student retention, offer active and engaging learning, and help students become more successful learners. The What Works Clearinghouse’s latest practice guide, Using Technology to Support Postsecondary Student Learning, offers several evidence-based recommendations to help higher education instructors, instructional designers, and administrators use technology to improve student learning outcomes.

IES practice guides incorporate research, practitioner experience, and expert opinions from a panel of nationally recognized experts. The panel that developed Using Technology to Support Postsecondary Student Learning included five experts with many years of experience leading the adoption, use, and research of technology in postsecondary classrooms. Together, guided by Abt Associates’ review of the rigorous research on the topic, the Using Technology to Support Postsecondary Student Learning practice guide offers five evidence-based recommendations:

Practice Recommendations:

  1. Use communication and collaboration tools to increase interaction among students and between students and instructors (minimal evidence).
  2. Use varied, personalized, and readily available digital resources to design and deliver instructional content (moderate evidence).
  3. Incorporate technology that models and fosters self-regulated learning strategies (moderate evidence).
  4. Use technology to provide timely and targeted feedback on student performance (moderate evidence).
  5. Use simulation technologies that help students engage in complex problem-solving (minimal evidence).

 

Each recommendation is assigned an evidence level of minimal, moderate, or strong. The level of evidence reflects how well the research demonstrates the effectiveness of the recommended practices. For an explanation of how levels of evidence are determined, see the Practice Guide Level of Evidence Video.   The evidence-based recommendations also include research-based strategies and examples for implementation in postsecondary settings. Together, the recommendations highlight five interconnected themes that the practice guide’s authors suggest readers consider:

  • Focus on how technology is used, not on the technology itself.

“The basic act of teaching has actually changed very little by the introduction of technology into the classroom,” said panelist MJ Bishop, “and that’s because simply introducing a new technology changes nothing unless we first understand the need it is intended to fill and how to capitalize on its unique capabilities to address that need.” Because technology evolves rapidly, understanding specific technologies is less important than understanding how technology can be used effectively in college settings. “By understanding how a learning outcome can be enhanced and supported by technologies,” said panelist Jennifer Sparrow, “the focus stays on the learner and their learning.”

  • Technology should be aligned to specific learning goals.

Every recommendation in this guide is based on one idea: finding ways to use technology to engage students and enhance their learning experiences. Technology can engage students more deeply in learning content, activate their learning processes, and provide the social connections that are key to succeeding in college and beyond. To do this effectively, any use of technology suggested in this guide must be aligned with learning goals or objectives. “Technology is not just a tool,” said Panel Chair Nada Dabbagh. “Rather, technology has specific affordances that must be recognized to use it effectively for designing learning interactions. Aligning technology affordances with learning outcomes and instructional goals is paramount to successful learning designs.”

  • Pay attention to potential issues of accessibility.

The Internet is ubiquitous, but many households—particularly low-income households, households of recent immigrants, and those in rural communities—may not be able to afford or otherwise access digital communications. Course materials that rely heavily on Internet access may put these students at a disadvantage. “Colleges and universities making greater use of online education need to know who their students are and what access they have to technology,” said panelist Anthony Picciano. “This practice guide makes abundantly clear that colleges and universities should be careful not to be creating digital divides.”

Instructional designers must also ensure that learning materials on course websites and course/learning management systems can accommodate students who are visually and/or hearing impaired. “Technology can greatly enhance access to education both in terms of reaching a wide student population and overcoming location barriers and in terms of accommodating students with special needs,” said Dabbagh. “Any learning design should take into consideration the capabilities and limitations of technology in supporting a diverse and inclusive audience.”

  • Technology deployments may require significant investment and coordination.

Implementing any new intervention takes training and support from administrators and teaching and learning centers. That is especially true in an environment where resources are scarce. “In reviewing the studies for this practice guide,” said Picciano, “it became abundantly clear that the deployment of technology in our colleges and universities has evolved into a major administrative undertaking. Careful planning that is comprehensive, collaborative, and continuous is needed.”

“Hardware and software infrastructure, professional development, academic and student support services, and ongoing financial investment are testing the wherewithal of even the most seasoned administrators,” said Picciano. “Yet the dynamic and changing nature of technology demands that new strategies be constantly evaluated and modifications made as needed.”

These decisions are never easy. “Decisions need to be made,” said Sparrow, “about investment cost versus opportunity cost. Additionally, when a large investment in a technology has been made, it should not be without investment in faculty development, training, and support resources to ensure that faculty, staff, and students can take full advantage of it.”

  • Rigorous research is limited and more is needed.

Despite technology’s ubiquity in college settings, rigorous research on the effects of technological interventions on student outcomes is rather limited. “It’s problematic,” said Bishop, “that research in the instructional design/educational technology field has been so focused on things, such as technologies, theories, and processes, rather than on the problems we’re trying to solve with those things, such as developing critical thinking, enhancing knowledge transfer, and addressing individual differences. It turns out to be very difficult to cross-reference the instructional design/educational technology literature with the questions the broader field of educational research is trying to answer.”

More rigorous research is needed on new technologies and how best to support instructors and administrators in using them. “For experienced researchers as well as newcomers,” said Picciano, “technology in postsecondary teaching and learning is a fertile ground for further inquiry and investigation.”

Readers of this practice guide are encouraged to adapt the advice provided to the varied contexts in which they work. The five themes discussed above serve as a lens to help readers approach the guide and decide whether and how to implement some or all of the recommendations.

Download Using Technology to Support Postsecondary Student Learning from the What Works Clearinghouse website at https://ies.ed.gov/ncee/wwc/PracticeGuide/25.

 

What Works in STEM Education: Resources for National STEM Day, 2018

Are you celebrating National STEM Day this November 8th by learning more about how to improve student achievement in Science, Technology, Engineering, and Mathematics (STEM)? If so, the Institute of Education Sciences’ (IES’s) What Works Clearinghouse has great resources for educators who want information about the latest evidence-based practices in supporting learners of all ages.

  • Focused on math? If so, check out Improving Mathematical Problem Solving in Grades 4 Through 8. Based on 38 rigorous studies conducted over 20 years, this practice guide includes five recommendations that teachers, math coaches, and curriculum developers can use to improve students’ mathematical problem-solving skills. There’s strong evidence that assisting students in monitoring and reflecting on the problem-solving process and teaching students how to use visual representations (e.g., tables, graphs, and number lines) can improve achievement. Other practice guides focus on Teaching Math to Young Children and Teaching Strategies for Improving Algebra Knowledge in Middle and High School Students.

  • Don’t worry, we won’t leave science out! Encouraging Girls in Math and Science includes five evidence-based recommendations that both classroom teachers and other school personnel can use to encourage girls to choose career paths in math- and science-related fields. A handy 20-point checklist provides suggestions for how those recommendations can be incorporated into daily practice, such as “[teaching] students that working hard to learn new knowledge leads to improved performance” and “[connecting] mathematics and science activities to careers in ways that do not reinforce existing gender stereotypes of those careers.”

  • Looking for specific curricula or programs for encouraging success in STEM? If so, check out the What Works Clearinghouse’s Intervention Reports in Math and Science. Intervention reports are summaries of findings from high-quality research on a given educational program, practice, or policy. There are currently more than 200 intervention reports that include at least one math or science related outcome. (And nearly 600 in total!)

  • Maybe you just want to see the research we’ve reviewed? You can! The What Works Clearinghouse’s Reviews of Individual Studies Database includes nearly 11,000 citations across a wide range of topics, including STEM. Type in your preferred search term and you’re off—from algebra to zoology, we’ve got you covered!

We hope you’ll visit us on November 8th and learn more about evidence-based practices in STEM education. And with practice guides, intervention reports, and individual studies spanning topics from early childhood to postsecondary education and everything in between, we hope you’ll come back whenever you are looking for high-quality research to answer the question “What works in education?”

The WWC Evidence Standards: A Valuable and Accessible Resource for Teaching Validity Assessment of Causal Inferences to Identify What Works

by Herbert Turner, Ph.D., President and Principal Scientist, ANALYTICA, Inc.

 

The WWC Evidence Standards (hereafter, the Standards) provide a detailed description of the criteria used by the WWC to review studies. The Standards were first developed in 2002 by leading methodological researchers using initial concepts from the Study Design and Implementation Assessment Device (DIAD), an instrument for assessing whether the methodological characteristics and implementation of social science research support using that research to draw inferences about causal relationships (Boruch, 1997; Valentine & Cooper, 2008). During the past 16 years, the Standards have gone through four iterations of improvement to keep pace with advances in methodological practice and have been through rigorous peer review. The most recent of these is now codified in the WWC Standards Handbook 4.0 (hereafter, the Handbook).

 

Across the different versions of the Handbook, the methodological characteristics of an internally valid study, designed to causally infer the effect of an intervention on an outcome, have stood the test of time. These characteristics can be summarized as follows: A strong design starts with how the study groups are formed. It continues with the use of reliable and valid measures of outcomes, has low attrition if it is a randomized controlled trial (RCT), shows baseline equivalence (in the analysis sample) if it is a quasi-experimental design (QED), and has no confounds.
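
To make the baseline-equivalence requirement concrete for QEDs, here is a minimal, illustrative sketch of the kind of check Handbook 4.0 describes for group designs: baseline differences of 0.05 standard deviations or less satisfy the requirement, differences between 0.05 and 0.25 require a statistical adjustment for the baseline measure, and larger differences do not satisfy it. The function names and example numbers are mine, and the effect-size formula is simplified relative to the Handbook’s exact computation (which, for example, applies a small-sample correction).

```python
import math

def baseline_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized baseline difference between intervention and comparison groups
    (a simplified Hedges' g-style effect size; the Handbook's formula adds refinements
    this sketch omits)."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def equivalence_status(g):
    """Classify an absolute baseline difference against the Handbook 4.0 thresholds."""
    g = abs(g)
    if g <= 0.05:
        return "satisfies baseline equivalence"
    if g <= 0.25:
        return "requires statistical adjustment for the baseline measure"
    return "does not satisfy baseline equivalence"

# Hypothetical pretest scores for an intervention and a comparison group:
g = baseline_difference(mean_t=102.0, sd_t=14.0, n_t=120, mean_c=100.5, sd_c=15.0, n_c=115)
print(round(g, 3), "->", equivalence_status(g))  # 0.103 -> requires statistical adjustment ...
```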

 

These elements are the critical components of any strong research design – and are the cornerstones of all versions of the WWC’s standards. That fact, along with the transparent description of their logical underpinning, is what motivated me to use Standards 4.0 (for Group Designs) as the organizing framework for understanding study validity in a graduate-level Program Evaluation II course I taught at Boston College’s Lynch School of Education.

 

In spring 2017, nine master’s and four doctoral students participated in this semester-long course. The primary goal was to teach students how to organize their thinking and logically derive internal validity criteria using Standards 4.0—augmented with additional readings from the methodological literature. Students used the Standards (along with the supplemental readings) to design, implement, analyze, and report impact evaluations to determine what interventions work, harm, or have no discernible effect (Mosteller and Boruch, 2002). The Standards Handbook 4.0 and the online course modules were excellent resources to augment the lectures and provide Lynch School students with hands-on learning.

 

At the end of the course, students were offered the choice to complete the WWC Certification Exam for Group Design or take a final exam developed by the instructor. All thirteen students chose to complete the WWC Certification Exam. Approximately half of the students became certified. Many emailed me personally to express their appreciation for (1) the opportunity to learn a systematic approach to organizing their thinking about assessing the validity of causal inferences drawn from data generated by RCTs and QEDs, and (2) the chance to develop design skills that can be used in other graduate courses and beyond. The WWC Evidence Standards and related online resources are valuable, accessible, and free, and they have been rigorously vetted for close to two decades. The Standards have few equals as a resource to help students think systematically, logically, and clearly about designing (and evaluating) a valid research study to make causal inferences about what interventions work in education and related fields.

 

References

Boruch, R. F. (1997). Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage Publications.

Valentine, J. C., & Cooper, H. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13(2), 130-149.

Mosteller, F., & Boruch, R. F. (2002). Evidence matters: Randomized trials in education research. Washington, D.C.: Brookings Institution Press.

Making the WWC Open to Everyone by Moving WWC Certification Online

In December 2016, the What Works Clearinghouse made a version of its online training publicly available through the WWC website. This enabled everyone to access the Version 3.0 Group Design Standards reviewer training and learn about the standards and methods that the WWC uses. While this was a great step to increase access to WWC resources, users still had to go through the 1½-day, in-person training to become a WWC-certified reviewer.

To continue our efforts to promote access and transparency and make our resources available to everyone, the WWC has now moved all of its group design training online. Now everyone will have access to the same training and certification tests. This certification is available free of charge and is open to all users. It is our hope that this effort will increase the number of certified reviewers and help increase general awareness of the WWC.

Why did the WWC make these resources publicly available? As part of IES’s effort to increase access to high-quality education research, we wanted to make it easier for researchers to use our standards. That meant opening up training opportunities, and offering the training online was a way to achieve this goal while using limited taxpayer resources most efficiently.

The online training consists of 9 modules. These videos feature an experienced WWC instructor and use the same materials that we used in our in-person courses, but adapted to Version 4.0 of the Group Design Standards. After completing the modules, users will have the opportunity to download a certificate of completion, take the online certification test, or go through the full certification exam.

Becoming a fully certified reviewer will require users to take a multiple choice online certification test and then use the new Online SRG application to conduct a full review using the same tools that the WWC team uses. The WWC team will then grade your exam to make sure you fully understand how to apply the Standards before certifying you to review for the Clearinghouse.

Not interested in becoming a certified reviewer? Online training still has several benefits. Educators can embed our videos in their course websites and use our training materials in their curricula. Researchers can use our Online SRG tool with their publications to determine a preliminary rating and understand what factors could cause their study to get the highest rating. They could also use the tool when conducting a systematic evidence review.

Have ideas for new resources we could make available? Email your ideas and suggestions to Contact.WWC@ed.gov!

by Erin Pollard, WWC Project Officer