IES Blog

Institute of Education Sciences

IES is Providing Digital Technical Assistance for FY 2021 Research Grant Applicants

Given the many challenges that this year has brought, including the difficulties and uncertainties due to the COVID-19 pandemic, IES is providing a range of resources and options to assist applicants as they prepare their applications. To ensure that program officers can focus their time on project-specific questions, applicants should review these resources before seeking individual feedback.

First, obtain copies of the documents needed to submit an application. Download the relevant request for applications (RFA) and the IES Application Submission Guide; PDFs of these documents are available at https://ies.ed.gov/funding/21rfas.asp. Also, download the application package (search for CFDA 84.305) from https://grants.gov/. Contact Grants.gov (1-800-518-4726; support@grants.gov) if you need help with your electronic grant submission.

 

Next, take advantage of our digital technical assistance options.

  • On-demand webinars. These pre-recorded webinars answer questions about the grant competitions, how to apply, and how to prepare a strong application. You can access them here: https://ies.ed.gov/funding/webinars/.  

 

  • Virtual office hours. This year, we will host a series of drop-in hours during which a program officer will answer questions and provide technical assistance. These office hours can help you determine which competition or project type is the best fit and understand the requirements and recommendations in the RFAs. Please see the schedule and call-in information below.

 

  • Cost analysis/Cost-effectiveness analysis. Many RFAs require a cost analysis plan, and some also require a cost-effectiveness plan. Please refer to our list of resources for developing these plans: https://ies.ed.gov/seer/cost_analysis.asp.

 

 

Finally, please take note of the application due dates (https://ies.ed.gov/funding/futureComp.asp); IES does not accept late applications.

 

Virtual Office Hours

Staff from the research centers will host hour-long drop-in virtual sessions to provide technical assistance on particular competitions, on specific research project types, or on general topics. Applicants are encouraged to join the discussion and ask questions. These sessions are especially helpful if you are unsure which competition or project type is the best match for you or if you are unclear about any changes to the requirements or recommendations. Below is a list of the current sessions and their topics. Please attend as many sessions as you would like.

All office hours will use the same call-in details. The program officer will admit participants into the meeting from the “lobby” at the beginning. We recommend joining without video to help ensure sufficient bandwidth. All times are shown in Eastern Time.

 

Join Microsoft Teams Meeting

+1 202-991-0393   United States, Washington DC (Toll)

Conference ID: 915 412 787#

 

If you would like to request accommodations (e.g., TTY), please email NCER.Commissioner@ed.gov as soon as possible.

You may have to download the free Microsoft Teams mobile app to get the full audio and visual experience from your phone. Clicking the “Join Microsoft Teams Meeting” link above should prompt you to do this. You can also refer to this article for more information: https://support.microsoft.com/en-gb/office/set-up-your-teams-mobile-apps-1ba8dce3-1122-47f4-8db6-00a4f93117e8

 

 

Virtual Office Hours Schedule

 

 

Monday, June 22
  • 12:30 – 1:30 pm ET – Competition fit: this will cover all NCER grant competitions and items such as applicant eligibility, general requirements, submission questions, and the IES review process.
  • 2:00 – 3:00 pm ET – Exploration projects: this will cover characteristics of high-quality projects of this type.

Tuesday, June 23
  • 12:30 – 1:30 pm ET – Efficacy/Follow-Up and Replication: this will cover characteristics of high-quality projects of these types.
  • 2:00 – 3:00 pm ET – Development projects: this will cover characteristics of high-quality projects of this type.

Wednesday, June 24
  • 12:30 – 1:30 pm ET – Exploration projects: this will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET – Is 305A (Education Research Grants) right for me? This will address general questions about CFDA 84.305A.

Thursday, June 25
  • 12:30 – 1:30 pm ET – Development projects: this will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET – Measurement projects: this will cover characteristics of high-quality projects of this type.

Monday, June 29
  • 12:30 – 1:30 pm ET – Development projects: this will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET – Competition fit: this will cover all NCER grant competitions and items such as applicant eligibility, general requirements, submission questions, and the IES review process.

Tuesday, June 30
  • 12:30 – 1:30 pm ET – Exploration projects: this will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET – Systematic Replication: this will focus on the requirements for a 305R or 324R application.

Wednesday, July 1
  • 12:30 – 1:30 pm ET – Measurement projects: this will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET – Efficacy/Follow-Up: this will cover characteristics of high-quality projects of this type.

Thursday, July 2
  • 2:00 – 3:00 pm ET – Pathways to the Education Sciences: this will address common questions about this training program opportunity.

 

“The How” of “What Works”: The Importance of Core Components in Education Research

Twenty-some-odd years ago, as a college junior, I screamed in horror watching a friend open a running dishwasher. She wanted to slip in a lightly used fork. I jumped to stop her, yelling, “Don’t open it, can’t you tell it’s full of water?” She paused briefly, turning to look at me with a “have you lost your mind” grimace, and yanked open the door.

Much to my surprise, nothing happened. A puff of steam. An errant drip, perhaps? But no cascade of soapy water. She slid the fork into the basket, closed the door, and hit a button. The machine started back up with a gurgle, and the kitchen floor was none the wetter.

Until that point in my life, I had no idea how a dishwasher worked. I had been around a dishwasher, but the house I lived in growing up didn’t have one. To me, washing the dishes meant filling the sink with soapy water, something akin to a washer in a laundry. I assumed dishwashers worked on the same principle, using gallons of water to slosh the dishes clean. Who knew?

Lest you think me completely inept, a counterpoint. My first car was a 1979 Ford Mustang. And I quickly learned how that very used car worked when the Mustang’s automatic choke conked out. As it happens, although a choke is necessary to start and run a gasoline engine, that it be “automatic” is not. My father Rube Goldberg-ed up a manual choke in about 15 minutes rather than paying to have it fixed.

My 14-year-old self learned how to tweak that choke “just so” so that I could get to school each morning. First, pull the choke all the way out to start the car, adjusting the fuel-air mixture ever so slightly. Then gingerly slide it back in, micron by micron, as the car warms up and you hit the road. A car doesn’t actually run on liquid gasoline, you see. Cars run on fuel vapor. And before the advent of fuel injection, fuel vapor was courtesy of your carburetor and its choke. Not a soul alive who didn’t know how a manual choke worked could have started that car.

You would be forgiven if, by now, you were wondering where I am going with all of this and how it relates to the evaluation of education interventions. To that end, I offer three thoughts for your consideration:

  1. Knowing that something works is different from knowing how something works.

  2. Knowing how something works is necessary to put that something to its best use.

  3. Most education research ignores the how of interventions, dramatically diminishing the usefulness of research to practitioners.

My first argument—that there is a distinction between knowing what works and how something works—is straightforward. Since it began, the What Works Clearinghouse™ has focused on identifying “what works” for educators and other stakeholders, mounting a full-court press on behalf of internal validity. Taken together, Version 4.1 of the WWC Standards and Procedures Handbooks total some 192 pages. As a result, we have substantially greater confidence today than we did a decade ago that when an intervention developer or researcher reports that something worked for a particular group of students, we know that it actually did.

In contrast, WWC standards do not address, and as far as I can tell have never addressed, the how of an intervention. By “the how” of an intervention, I’m referring to the parts of it that must be working, sometimes “just so,” if its efficacy claims are to be realized. For a dishwasher, it is something like: “a motor turns a wash arm, which sprays dishes with soapy water.” (It is not, as I had thought, “the dishwasher fills with soapy water that washes the mac and cheese down the drain.”) In the case of my Mustang, it was: “the choke controls the amount of air that mixes with fuel from the throttle, before heading to the cylinders.”

If you have been following the evolution of IES’ Standards for Excellence in Education Research, or SEER, and its principles, you will recognize “the how” as core components. Most interventions consist of multiple core components that are—and perhaps must be—arrayed in a certain manner if the whole of the thing is to “work.” Depicted visually, core components and their relationships to one another and to the outcomes they are meant to affect form something between a logic model (often too simplistic) and a theory of change (often too complex).

(A word of caution: knowing how something works is also different from knowing why something works. I have been known to ask at work about “what’s in the arrows” that connect various boxes in a logic model. The why lives in those arrows. In the social sciences, those arrows are where theory resides.)

My second argument is that knowing how something works matters, at least if you want to use it as effectively as possible. This isn’t quite as axiomatic as the distinction between “it works” and “how it works,” I realize.

This morning, when starting my car, I didn’t have to think about the complex series of events leading up to me pulling out of the driveway. Key turn, foot down, car go. But when the key turns and the car doesn’t go, then knowing something about how the parts of a car are meant to work together is very, very helpful. Conveniently, most things in our lives, if they work at all, simply do.  

Inconveniently, we don’t have that same confidence when it comes to things in education. There are currently 10,677 individual studies in the What Works Clearinghouse (WWC) database. Of those, only about 11 percent meet the WWC’s internal validity standards. Among them, only 445 have at least one statistically significant positive finding. Because the WWC doesn’t consider results from studies that don’t have strong internal validity, it isn’t quite as simple as saying “only about 4 percent of things work in education.” Instead, we’re left with “89 percent of things aren’t tested rigorously enough to have confidence about whether they work, and when tested rigorously, only about 38 percent do.” Between the “file drawer” problem that plagues research generally and our own review of the results from IES efficacy trials, we have reason to believe the true efficacy rate of “what works” in education is much lower.
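(For readers who want to trace that arithmetic, the back-of-the-envelope sketch below, written in Python and not part of the WWC’s own reporting, simply reproduces the percentages from the counts quoted above; the 11 percent figure is treated as exact for the purpose of the calculation.)

```python
# Back-of-the-envelope check of the figures cited above.
# Inputs are the counts quoted in this post; derived values are rounded.
total_studies = 10_677        # individual studies in the WWC database
meets_standards_share = 0.11  # about 11 percent meet internal validity standards
positive_studies = 445        # studies with at least one significant positive finding

rigorous_studies = total_studies * meets_standards_share  # roughly 1,174 studies

print(f"Positive findings as a share of all studies:      {positive_studies / total_studies:.1%}")    # ~4.2%
print(f"Studies not tested rigorously enough:             {1 - meets_standards_share:.0%}")           # 89%
print(f"Positive findings as a share of rigorous studies: {positive_studies / rigorous_studies:.1%}") # ~37.9%
```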

Many things can cause an intervention to fail. Some interventions are simply wrong-headed. Some interventions do work, but only for some students. And other interventions would work, if only they were implemented well.

Knowing an intervention’s core components and the relationships among them would, I submit, be helpful in at least that third case. If you don’t know that a dishwasher’s wash arm spins, the large skillet on the bottom rack with its handle jutting to the sky might not strike you as the proximate cause of dirty glasses on the top rack. If you don’t know that a core component of multi-tiered systems of support is progress monitoring, you might not connect the dots between a decision to cut back on periodic student assessments and suboptimal student outcomes.

My third and final argument, that most education research ignores the how of interventions, is based in at least some empiricism. The argument itself is a bit of a journey. One that starts with a caveat, wends its way to dismay, and ends in disappointment.

Here’s the caveat: My take on the relative lack of how in most education research comes from my recent experience trying to surface “what works” in remote learning. This specific segment of education research may well be an outlier. But I somehow doubt it.

Why dismay? Well, as regular readers might recall, in late March I announced plans to support a rapid evidence synthesis on effective practices in remote learning. It seemed simple enough: crowd-source research relevant to the task, conduct WWC reviews of the highest-quality submissions, and then make those reviews available to meta-analysts and other researchers to surface generalizable principles that could be useful to educators and families.

My stated goal had been to release study reviews on June 1. That date has passed, and the focus of this post is not “New WWC Reviews of Remote Learning Released.” As such, you may have gathered that something about my plan has gone awry. You would be right.

Simply, things are taking longer than hoped. It is not for lack of effort. Our teams identified more than 930 studies, screened more than 700 of those studies, and surfaced 250 randomized trials or quasi-experiments. We have prioritized 35 of this last group for review. (For those of you who are thinking some version of “wow, it seems like it might be a waste to not look at 96 percent of the studies that were originally located,” I have some thoughts about that. We’ll have to save that discussion, though, for another blog.)

Our best guess for when those reviews will be widely available is now August 15. Why things are taking as long as they are is, as they say, “complicated.” The June 1 date was unlikely from the start, dependent as it was upon a series of best-case situations in times that are anything but. And at least some of the delay is driven by our emphasis on rigor and steps we take to ensure the quality of our work, something we would not short-change in any event.  

Not giving in to my dismay, however, I dug into the 930 studies in our remote learning database to see what I might be able to learn in the meantime. I found that 22 of those studies had already been reviewed by the WWC. “Good news,” I said to myself. “There are lessons to be learned among them, I’m sure.”

And indeed, there was a lesson to be learned—just not the one I was looking for. After reviewing the lot, I found virtually no actionable evidence. That’s not entirely fair. One of the 22 records was a duplicate, two were not relevant, two were not locatable, and one was behind a paywall that even my federal government IP address couldn’t get past. Because fifteen of the sixteen remaining studies reviewed name-brand products, there was one action I could take in most cases: buy the product the researcher had evaluated.

I went through each article, this time making an imperfect determination about whether the researcher described the intervention’s core components and, if so, arrayed them in a logic model. My codes for core components included one “yes,” two “bordering on yes,” six “yes-ish,” one “not really,” and six “no.” Not surprisingly, logic models were uncommon, with two studies earning a “yes” and two more tallied as “yes-ish.” (You can see now why I am not a qualitative researcher.)

In case there’s any doubt, herein lies my disappointment: if an educator had turned to one of these articles to eke out a tip or two about “what works” in remote learning, they would have been, on average, out of luck. If they did luck out and find an article that described the core components of the tested intervention, there was a vanishingly small chance there would be information on how to put those components together to form a whole. As for surfacing generalizable principles for educators and families across multiple studies? Not without some serious effort, I can assure you.

I have never been more convinced of the importance of core components being well-documented in education research than I am today. As they currently stand, the SEER principles for core components ask:

  • Did the researcher document the core components of an intervention, including its essential practices, structural elements, and the contexts in which it was implemented and tested?
  • Did the researcher offer a clear description of how the core components of an intervention are hypothesized to affect outcomes?
  • Did the researcher's analysis help us understand which components are most important in achieving impact?

More often than not, the singular answer to the questions above is a resounding “no.” That is to the detriment of consumers of research, no doubt. Educators, or even other researchers, cannot turn to the average journal article or research report and divine enough information about what was actually studied to draw lessons for classroom practice. (There are many reasons for this, of course. I welcome your thoughts on the matter.) More importantly, though, it is to the detriment of the supposed beneficiaries of research: our students. We must do better. If our work isn’t ultimately serving them, who is it serving, really?  

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance
Agency Evaluation Officer, U.S. Department of Education