IES Blog

Institute of Education Sciences

IES is Providing Digital Technical Assistance for FY 2021 Research Grant Applicants

Given the many challenges this year has brought, including the difficulties and uncertainties of the COVID-19 pandemic, IES is providing a range of resources and options to assist applicants as they prepare their applications. To ensure that program officers can focus their time on project-specific questions, applicants should review these resources before seeking individual feedback.

First, gather the documents needed to submit an application. Download a copy of the relevant request for applications (RFA) and the IES Application Submission Guide; PDFs of both are available at https://ies.ed.gov/funding/21rfas.asp. Also download the application package (search for CFDA 84.305) from https://grants.gov/. Contact Grants.gov (1-800-518-4726; support@grants.gov) if you need help with your electronic grant submission.


Next, take advantage of our digital technical assistance options.

  • On-demand webinars. These pre-recorded webinars answer questions about the grant competitions, how to apply, and how to prepare a strong application. You can access them here: https://ies.ed.gov/funding/webinars/.  


  • Virtual office hours. This year, we will host a series of drop-in hours during which a program officer will answer questions and provide technical assistance. These office hours can help you determine which competition or project type is the best fit and understand the requirements and recommendations in the RFAs. Please see the schedule and call-in information below.


  • Cost analysis/cost-effectiveness analysis. Many RFAs require a cost analysis plan, and some also require a cost-effectiveness plan. Please refer to our list of resources for developing these plans: https://ies.ed.gov/seer/cost_analysis.asp.


Finally, please note the application due dates (https://ies.ed.gov/funding/futureComp.asp); IES does not accept late applications.


Virtual Office Hours

Staff from the research centers will host hour-long drop-in virtual sessions to provide technical assistance on particular competitions, on research project types, or on general topics. Applicants are encouraged to join the discussion and ask questions. These sessions are especially helpful if you are unsure which competition or project type is the best match for you or if you are unclear about changes to the requirements or recommendations. Below is a list of the current sessions and their topics. Please attend as many sessions as you would like.

All office hours will use the same call-in details, listed below. The program officer will admit participants into the meeting from the “lobby” at the beginning. We recommend joining without video so that there is sufficient bandwidth. All times are Eastern Time (ET).


Join Microsoft Teams Meeting

+1 202-991-0393   United States, Washington DC (Toll)

Conference ID: 915 412 787#


If you would like to request accommodations (e.g., TTY), please email NCER.Commissioner@ed.gov as soon as possible.

You may have to download the free Microsoft Teams mobile app to get the full audio and visual experience on your phone. Clicking the “Join Microsoft Teams Meeting” link above should prompt you to do this. You can also refer to this article for more information: https://support.microsoft.com/en-gb/office/set-up-your-teams-mobile-apps-1ba8dce3-1122-47f4-8db6-00a4f93117e8


Virtual Office Hours Schedule


Monday, June 22
  • 12:30 – 1:30 pm ET: Competition fit. This session will cover all NCER grant competitions and items such as applicant eligibility, general requirements, submission questions, and the IES review process.
  • 2:00 – 3:00 pm ET: Exploration projects. This session will cover characteristics of high-quality projects of this type.

Tuesday, June 23
  • 12:30 – 1:30 pm ET: Efficacy/Follow-Up and Replication. This session will cover characteristics of high-quality projects of these types.
  • 2:00 – 3:00 pm ET: Development projects. This session will cover characteristics of high-quality projects of this type.

Wednesday, June 24
  • 12:30 – 1:30 pm ET: Exploration projects. This session will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET: Is 305A (Education Research Grants) right for me? This session will address general questions about CFDA 84.305A.

Thursday, June 25
  • 12:30 – 1:30 pm ET: Development projects. This session will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET: Measurement projects. This session will cover characteristics of high-quality projects of this type.

Monday, June 29
  • 12:30 – 1:30 pm ET: Development projects. This session will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET: Competition fit. This session will cover all NCER grant competitions and items such as applicant eligibility, general requirements, submission questions, and the IES review process.

Tuesday, June 30
  • 12:30 – 1:30 pm ET: Exploration projects. This session will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET: Systematic Replication. This session will focus on the requirements for a 305R or 324R application.

Wednesday, July 1
  • 12:30 – 1:30 pm ET: Measurement projects. This session will cover characteristics of high-quality projects of this type.
  • 2:00 – 3:00 pm ET: Efficacy/Follow-Up. This session will cover characteristics of high-quality projects of this type.

Thursday, July 2
  • 2:00 – 3:00 pm ET: Pathways to the Education Sciences. This session will address common questions about this training program opportunity. (No 12:30 session this day.)


“The How” of “What Works”: The Importance of Core Components in Education Research

Twenty-some odd years ago as a college junior, I screamed in horror watching a friend open a running dishwasher. She wanted to slip in a lightly used fork. I jumped to stop her, yelling “don’t open it, can’t you tell it’s full of water?” She paused briefly, turning to look at me with a “have you lost your mind” grimace, and yanked open the door.

Much to my surprise, nothing happened. A puff of steam. An errant drip, perhaps? But no cascade of soapy water. She slid the fork into the basket, closed the door, and hit a button. The machine started back up with a gurgle, and the kitchen floor was none the wetter.

Until that point in my life, I had no idea how a dishwasher worked. I had been around a dishwasher, but the house I lived in growing up didn’t have one. To me, washing the dishes meant filling the sink with soapy water, something akin to the washing machine in a laundry room. I assumed dishwashers worked on the same principle, using gallons of water to slosh the dishes clean. Who knew?

Lest you think me completely inept, a counterpoint. My first car was a 1979 Ford Mustang. And I quickly learned how that very used car worked when the Mustang’s automatic choke conked out. As it happens, although a choke is necessary to start and run a gasoline engine, that it be “automatic” is not. My father Rube Goldberg-ed up a manual choke in about 15 minutes rather than paying to have it fixed.

My 14-year-old self learned how to tweak that choke “just so” to get to school each morning. First, pull the choke all the way out to start the car, adjusting the fuel-air mixture ever so slightly. Then gingerly slide it back in, micron by micron, as the car warms up and you hit the road. A car doesn’t actually run on liquid gasoline, you see. Cars run on fuel vapor. And before the advent of fuel injection, fuel vapor came courtesy of your carburetor and its choke. Not a soul alive who didn’t know how a manual choke worked could have started that car.

You would be forgiven if, by now, you were wondering where I am going with all of this and how it relates to the evaluation of education interventions. To that end, I offer three thoughts for your consideration:

  1. Knowing that something works is different from knowing how something works.

  2. Knowing how something works is necessary to put that something to its best use.

  3. Most education research ignores the how of interventions, dramatically diminishing the usefulness of research to practitioners.

My first argument—that there is a distinction between knowing what works and how something works—is straightforward. Since it began, the What Works Clearinghouse™ has focused on identifying “what works” for educators and other stakeholders, mounting a full-court press on behalf of internal validity. Taken together, Version 4.1 of the WWC Standards and Procedures Handbooks total some 192 pages. As a result, we have substantially greater confidence today than we did a decade ago that when an intervention developer or researcher reports that something worked for a particular group of students, we know that it actually did.

In contrast, WWC standards do not address, and as far as I can tell never have addressed, the how of an intervention. By “the how” of an intervention, I’m referring to the parts of it that must be working, sometimes “just so,” if its efficacy claims are to be realized. For a dishwasher, it is something like: “a motor turns a wash arm, which sprays dishes with soapy water.” (It is not, as I had thought, “the dishwasher fills with soapy water that washes the mac and cheese down the drain.”) In the case of my Mustang, it was: “the choke controls the amount of air that mixes with fuel from the throttle, before heading to the cylinders.”

If you have been following the evolution of IES’ Standards for Excellence in Education Research, or SEER, and its principles, you recognize “the how” as core components. Most interventions consist of multiple core components that are—and perhaps must be—arrayed in a certain manner if the whole of the thing is to “work.” Depicted visually, core components and their relationships to one another and to the outcomes they are meant to affect form something between a logic model (often too simplistic) and a theory of change (often too complex).

(A word of caution: knowing how something works is also different from knowing why something works. I have been known to ask at work about “what’s in the arrows” that connect various boxes in a logic model. The why lives in those arrows. In the social sciences, those arrows are where theory resides.)

My second argument is that knowing how something works matters, at least if you want to use it as effectively as possible. This isn’t quite as axiomatic as the distinction between “it works” and “how it works,” I realize.

This morning, when starting my car, I didn’t have to think about the complex series of events leading up to me pulling out of the driveway. Key turn, foot down, car go. But when the key turns and the car doesn’t go, then knowing something about how the parts of a car are meant to work together is very, very helpful. Conveniently, most things in our lives, if they work at all, simply do.  

Inconveniently, we don’t have that same confidence when it comes to things in education. There are currently 10,677 individual studies in the What Works Clearinghouse (WWC) database. Of those, only about 11 percent meet the WWC’s internal validity standards. Among them, only 445 have at least one statistically significant positive finding. Because the WWC doesn’t consider results from studies that don’t have strong internal validity, it isn’t quite as simple as saying “only about 4 percent of things work in education.” Instead, we’re left with “89 percent of things aren’t tested rigorously enough to have confidence about whether they work, and when tested rigorously, only about 38 percent do.” Between the “file drawer” problem that plagues research generally and our own review of the results from IES efficacy trials, we have reason to believe the true efficacy rate of “what works” in education is much lower.
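
If you want to retrace that arithmetic, the 4 percent and 38 percent figures fall directly out of the counts above. Here is a quick back-of-the-envelope check, written as a short Python sketch using only the numbers cited in this paragraph:

```python
# Back-of-the-envelope check of the WWC figures cited above.
# Inputs are the counts from the post; everything else is arithmetic.
total_studies = 10_677                         # studies in the WWC database
meets_standards = round(total_studies * 0.11)  # ~11% meet internal validity standards
positive = 445                                 # studies with >= 1 significant positive finding

print(f"Studies meeting standards: ~{meets_standards:,}")                         # ~1,174
print(f"Positive findings / all studies: {positive / total_studies:.1%}")         # ~4.2%
print(f"Positive findings / rigorous studies: {positive / meets_standards:.1%}")  # ~37.9%
```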

Many things cause an intervention to fail. Some interventions are simply wrong-headed. Some interventions do work, but for only some students. And other interventions would work, if only they were implemented well.

Knowing an intervention’s core components and the relationships among them would, I submit, be helpful in at least that third case. If you don’t know that a dishwasher’s wash arm spins, the large skillet on the bottom rack with its handle jutting to the sky might not strike you as the proximate cause of dirty glasses on the top rack. If you don’t know that a core component of multi-tiered systems of support is progress monitoring, you might not connect the dots between a decision to cut back on periodic student assessments and suboptimal student outcomes.

My third and final argument, that most education research ignores the how of interventions, is grounded in at least some empiricism. The argument itself is a bit of a journey: one that starts with a caveat, wends its way to dismay, and ends in disappointment.

Here’s the caveat: My take on the relative lack of how in most education research comes from my recent experience trying to surface “what works” in remote learning. This specific segment of education research may well be an outlier. But I somehow doubt it.

Why dismay? Well, as regular readers might recall, in late March I announced plans to support a rapid evidence synthesis on effective practices in remote learning. It seemed simple enough: crowd-source research relevant to the task, conduct WWC reviews of the highest-quality submissions, and then make those reviews available to meta-analysts and other researchers to surface generalizable principles that could be useful to educators and families.

My stated goal had been to release study reviews on June 1. That date has passed, and the focus of this post is not “New WWC Reviews of Remote Learning Released.” As such, you may have gathered that something about my plan has gone awry. You would be right.

Simply put, things are taking longer than hoped. It is not for lack of effort. Our teams identified more than 930 studies, screened more than 700 of them, and surfaced 250 randomized trials or quasi-experiments. We have prioritized 35 of this last group for review. (For those of you who are thinking some version of “wow, it seems like it might be a waste to not look at 96 percent of the studies that were originally located,” I have some thoughts about that. We’ll have to save that discussion, though, for another blog.)
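
For anyone checking that 96 percent figure, it comes straight from the funnel. A minimal sketch in Python, using the approximate counts above:

```python
# The review funnel described above (approximate counts from the post).
identified = 930   # studies identified
screened = 700     # studies screened
experiments = 250  # randomized trials or quasi-experiments surfaced
prioritized = 35   # prioritized for WWC review

share_reviewed = prioritized / identified
print(f"Prioritized for review: {share_reviewed:.1%} of studies identified")  # ~3.8%
print(f"Not (yet) reviewed: {1 - share_reviewed:.1%}")                        # ~96.2%
```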

Our best guess for when those reviews will be widely available is now August 15. Why things are taking as long as they are is, as they say, “complicated.” The June 1 date was unlikely from the start, dependent as it was upon a series of best-case situations in times that are anything but. And at least some of the delay is driven by our emphasis on rigor and steps we take to ensure the quality of our work, something we would not short-change in any event.  

Not giving in to my dismay, however, I dug into the 930 studies in our remote learning database to see what I might learn in the meantime. I found that 22 of those studies had already been reviewed by the WWC. “Good news,” I said to myself. “There are lessons to be learned among them, I’m sure.”

And indeed, there was a lesson to be learned—just not the one I was looking for. After reviewing the lot, I found virtually no actionable evidence. That’s not entirely fair. One of the 22 records was a duplicate, two were not relevant, two were not locatable, and one was behind a paywall that even my federal government IP address couldn’t get past. Because fifteen of the sixteen remaining studies reviewed name-brand products, there was one action I could take in most cases: buy the product the researcher had evaluated.

I went through each article, this time making an imperfect determination about whether the researcher described the intervention’s core components and, if so, arrayed them in a logic model. My codes for core components included one “yes,” two “bordering on yes,” six “yes-ish,” one “not really,” and six “no.” Not surprisingly, logic models were uncommon, with two studies earning a “yes” and two more tallied as “yes-ish.” (You can see now why I am not a qualitative researcher.)

In case there’s any doubt, herein lies my disappointment: if an educator had turned to one of these articles to eke out a tip or two about “what works” in remote learning, they would have been, on average, out of luck. If they did luck out and find an article that described the core components of the tested intervention, there was a vanishingly small chance there would be information on how to put those components together to form a whole. As for surfacing generalizable principles for educators and families across multiple studies? Not without some serious effort, I can assure you.

I have never been more convinced of the importance of core components being well-documented in education research than I am today. As they currently stand, the SEER principles for core components ask:

  • Did the researcher document the core components of an intervention, including its essential practices, structural elements, and the contexts in which it was implemented and tested?
  • Did the researcher offer a clear description of how the core components of an intervention are hypothesized to affect outcomes?
  • Did the researcher's analysis help us understand which components are most important in achieving impact?

More often than not, the singular answer to the questions above is a resounding “no.” That is to the detriment of consumers of research, no doubt. Educators, or even other researchers, cannot turn to the average journal article or research report and divine enough information about what was actually studied to draw lessons for classroom practice. (There are many reasons for this, of course. I welcome your thoughts on the matter.) More importantly, though, it is to the detriment of the supposed beneficiaries of research: our students. We must do better. If our work isn’t ultimately serving them, who is it serving, really?  

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance
Agency Evaluation Officer, U.S. Department of Education

Addressing Persistent Disparities in Education Through IES Research


Spring 2020 has been a season of upheaval for students and educational institutions across the country. Just when the conditions around the COVID-19 pandemic began to improve, the longstanding symptoms of a different wave of distress resurfaced. We are seeing and experiencing the fear, distrust, and confusion that are the result of systemic racism and bigotry. For education stakeholders, both the COVID-19 pandemic and the civil unrest unfolding across the country accentuate the systemic inequities in access, opportunities, resources, and outcomes that continue to exist in education.

IES acknowledges these inequities and is supporting rigorous research that is helping to identify, measure, and address persistent disparities in education.

In January (back when large gatherings were a thing), IES hosted its Annual Principal Investigator’s (PI) Meeting with the theme of Closing the Gaps for All Learners. The theme underscored IES's objective of supporting research that improves equity in education access and outcomes. Presentations from IES-funded projects focusing on diversity, equity, and inclusion were included throughout the meeting and can be found here. In addition, below are highlights of several IES-funded studies that are exploring, developing, or evaluating programs, practices, and policies that education stakeholders can implement to help reduce bias and inequities in schools.


  • The Men of Color College Achievement (MoCCA) Project - This project addresses the problem of low completion rates for men of color at community colleges through an intervention that provides incoming male students of color with a culturally relevant student success course and adult mentors. In partnership with the Community College of Baltimore County, the team is engaged in program development, qualitative data collections to understand student perspectives, and an evaluation of the success course/mentorship intervention. This project is part of the College Completion Network and posts resources for supporting men of color here.


  • Identifying Discrete and Malleable Indicators of Culturally Responsive Instruction and Discipline—The purpose of this project is to use the culturally responsive practices (CRP) framework from a promising intervention, Double Check, to define and specify discrete indicators of CRPs; confirm and refine teacher and student surveys and classroom direct observation tools to measure these discrete indicators; and develop, refine, and evaluate a theory of change linking these indicators of CRPs with student academic and behavioral outcomes.


  • The Early Learning Network (Supporting Early Learning From Preschool Through Early Elementary School Grades Network)—The purpose of this research network is to examine why many children—especially children from low-income households or other disadvantaged backgrounds—experience academic and social difficulties as they begin elementary school. Network members are identifying factors (such as state and local policies, instructional practices, and parental support) that are associated with early learning and achievement from preschool through the early elementary school grades.
    • At the January 2020 IES PI Meeting, Early Learning Network researchers presented on the achievement gaps for early learners. Watch the video here. Presentations, newsletters, and other resources are available on the Early Learning Network website.


  • Reducing Achievement Gaps at Scale Through a Brief Self-Affirmation Intervention—In this study, researchers will test the effectiveness at scale of a low-cost, self-affirmation mindset intervention on the achievement, behavior, and attitudes of 7th grade students, focusing primarily on Black and Hispanic students. These minority student groups are susceptible to the threat of conforming to or being judged by negative stereotypes about the general underperformance of their racial/ethnic group ("stereotype threat"). A prior evaluation of this intervention has been reviewed by the What Works Clearinghouse and met standards without reservations.


IES seeks to work with education stakeholders at every level (for example, students, parents, educators, researchers, funders, and policy makers) to improve education access, equity, and outcomes for all learners, especially those who have been impacted by systemic bias. Together, we can do more.

This fall, IES will be hosting a technical working group on increasing the participation of researchers and institutions that have been historically underutilized in federal education research activities. If you have suggestions for how IES can better support research to improve equity in education, please contact us: NCER.Commissioner@ed.gov.  


Written by Christina Chhin (Christina.Chhin@ed.gov), National Center for Education Research (NCER).  

This is the fourth in a series of blog posts that stems from the 2020 Annual Principal Investigators Meeting. The theme of the meeting was Closing the Gaps for All Learners and focused on IES’s objective to support research that improves equity in access to education and education outcomes. Other posts in this series include Why I Want to Become an Education Researcher, Diversify Education Sciences? Yes, We Can!, and Closing the Opportunity Gap Through Instructional Alternatives to Exclusionary Discipline.

Bar Chart Races: Changing Demographics in K–12 Public School Enrollment

Bar chart races are a useful tool for visualizing long-term trends. The visuals below, which use data from the NCES Common Core of Data and NCES enrollment projections, depict the changes in U.S. public elementary and secondary school enrollment from 1995 to 2029 by race/ethnicity.


Source: U.S. Department of Education, National Center for Education Statistics, Common Core of Data (CCD), “State Nonfiscal Survey of Public Elementary and Secondary Education,” 1995–96 through 2017–18; and National Elementary and Secondary Enrollment by Race/Ethnicity Projection Model, 1972 through 2029.
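
For readers curious how a bar chart race is built, below is a minimal sketch using matplotlib’s FuncAnimation. Note that the enrollment series are rough placeholder values for illustration only, not the CCD and projection figures cited above.

```python
# Minimal bar chart race sketch. Enrollment values are illustrative
# placeholders, NOT the CCD/NCES figures cited in the source note above.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

years = list(range(1995, 2030))
# enrollment[group][i] = students in millions in years[i] (placeholder trends)
enrollment = {
    "White": [29.0 - 0.19 * i for i in range(len(years))],
    "Hispanic": [6.0 + 0.23 * i for i in range(len(years))],
    "Black": [7.6 + 0.04 * i - 0.0015 * i ** 2 for i in range(len(years))],
    "Asian/Pacific Islander": [2.5 + 0.05 * i for i in range(len(years))],
}

fig, ax = plt.subplots()

def draw(frame):
    ax.clear()
    # Re-rank groups each year so the bars "race" past one another.
    ranked = sorted(enrollment, key=lambda g: enrollment[g][frame])
    ax.barh(ranked, [enrollment[g][frame] for g in ranked])
    ax.set_xlabel("Enrollment (millions)")
    ax.set_title(f"Public school enrollment by race/ethnicity, {years[frame]}")

anim = FuncAnimation(fig, draw, frames=len(years), interval=200)
plt.show()
```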


Total enrollment in public elementary and secondary schools has grown since 1995, but it has not grown across all racial/ethnic groups. As such, racial/ethnic distributions of public school students across the country have shifted.

One major change in public school enrollment has been the number of Hispanic students enrolled. Enrollment of Hispanic students grew from 6.0 million in 1995 to 13.6 million in fall 2017 (the most recent year of data available). Over that period, Hispanic students went from making up 13.5 percent of public school enrollment to 26.8 percent. NCES projects that Hispanic enrollment will continue to grow, reaching 14.0 million and 27.5 percent of public school enrollment by fall 2029.

While the number of Hispanic public school students has grown, the number of White public school students has steadily declined, from 29.0 million in 1995 to 24.1 million in fall 2017. NCES projects that enrollment of White public school students will continue to decline, reaching 22.4 million by 2029. The percentage of public school students who were White was 64.8 percent in 1995, and this percentage dropped below 50 percent in 2014 (to 49.5 percent). NCES projects that in 2029, White students will make up 43.8 percent of public school enrollment.

The percentage of public school students who were Black decreased from 16.8 percent in 1995 to 15.2 percent in 2017 and is projected to remain at 15.2 percent in 2029. The number of Black public school students increased from 7.6 million in 1995 to a peak of 8.4 million in 2005 but is projected to decrease to 7.7 million by 2029. Between fall 2017 and fall 2029, the percentage of public school students who were Asian/Pacific Islander is projected to continue increasing (from 5.6 to 6.9 percent), as is the percentage who were of Two or more races (from 3.9 to 5.8 percent). American Indian/Alaska Native students account for about 1 percent of public elementary and secondary enrollment in all years.
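
As a rough cross-check, the counts and shares above also imply the overall growth in total enrollment noted earlier. A short sketch, using only the Hispanic figures cited in this post:

```python
# Total enrollment implied by the Hispanic counts and shares cited above.
total_1995 = 6.0 / 0.135   # ~44.4 million public school students, fall 1995
total_2017 = 13.6 / 0.268  # ~50.7 million public school students, fall 2017
print(f"Implied growth: {total_2017 - total_1995:.1f} million students")  # ~6.3
```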

For more information about this topic, see The Condition of Education indicator Racial/Ethnic Enrollment in Public Schools.


By Ke Wang and Rachel Dinkes, AIR