IES Blog

Institute of Education Sciences

Updates on Research Center Efforts to Increase Diversity, Equity, Inclusion, and Accessibility

As we begin a new school year, NCER and NCSER want to share with our community some of the work we have been doing—and are seeking to do more of—in relation to diversity, equity, inclusion, and accessibility (DEIA). We plan to provide occasional updates via this blog to share progress and keep the conversation going.

Actions on Diversity

At the end of 2020, IES convened a Technical Working Group (TWG) to get feedback on ways that the research centers could improve our investments focused on DEIA. Under the leadership of Drs. Katina Stapleton and Christina Chhin, we convened a stellar panel that participated in a robust conversation. That conversation and the recommendations from the panel are available in this summary document. We are already implementing some of the recommendations and wanted to share steps that we have taken and our plans for next steps to advance DEIA in IES-funded research.

  1. One of the first steps that we took in response to the TWG recommendations was to take a close look at our Requests for Applications (RFAs), identify potential barriers to applicants from underrepresented groups, and revise and/or add language that more clearly articulates our commitment to DEIA, both in terms of those who conduct the research and the populations studied. These changes were reflected in our FY 2022 RFAs, and we will continue to revise and improve our application materials.
  2. IES has been committed to building expertise among a broad range of scholars in the education sciences for nearly two decades. The TWG noted, however, that there is a pressing need to provide funds for early career investigators who may be working at minority-serving institutions (MSIs), teaching-intensive institutions, and/or institutions with limited opportunities for research mentorship. In response, IES launched an Early Career Mentoring for Faculty at MSIs research program. This new program extends our FY 2016 training investment in MSIs, which we recompeted in FY 2021: the Pathways to the Education Sciences Training program. Pathways is designed to encourage undergraduate, post-baccalaureate, and master's-level students in groups that are historically underrepresented in doctoral education to pursue graduate study relevant to education research. Currently, there are seven IES-funded Pathways training programs in the United States, hosted by MSIs and their partners. We are excited to see who applied in this first round of the Early Career Mentoring program and anticipate investing in this program in FY 2023 and beyond.
  3. The TWG also recommended that IES intentionally reach out to the MSI community to ensure that they know about the opportunities available at IES. We held our first such event on September 7, 2021, when IES hosted a virtual listening session during HBCU Week. More than 250 scholars attended that session and provided valuable feedback on barriers that HBCU-based researchers face in applying for research funding from IES. We are in the process of scheduling additional listening sessions with other communities of researchers to provide more opportunities for input from diverse stakeholders and underrepresented groups.
  4. The TWG also recommended that IES take a deeper look at the demographic and institutional data of applicants to our grant programs to identify which groups of researchers and institutions are underrepresented. Data indicate that the percentage of applications received from MSIs between 2013 and 2020 was very small—4% of applications to NCER and 1% to NCSER. Of the applications that were funded, 10% of NCER's awards were made to MSIs, while none of NCSER's awards went to MSIs. IES also reviewed the demographic information that FY 2021 NCER and NCSER grant applicants and awardees voluntarily submitted. Among those who reported their demographic information, we found the following:
    • Gender (response rate of approximately 82%) - The majority of principal investigators who applied for (62%) and received (59%) funding from IES identified as female.
    • Race (response rate of approximately 75%) - The majority of principal investigators who applied for (78%) and received (88%) funding from IES identified as White, while 22% of applicants and 13% of awardees identified as non-White or multi-racial.
    • Ethnicity (response rate of approximately 72%) - The majority of principal investigators who applied for (95%) and received (97%) funding identified as non-Hispanic.
    • Disability (response rate of approximately 70%) - The majority of principal investigators who applied for (97%) and received (96%) funding identified as not having a disability.

These data underscore the need for IES to continue to broaden and diversify the education research pipeline, including both institutions and researchers, and to better support the needs of underrepresented researchers in the education community. However, tracking our progress has proven to be a challenge. Responding to the demographic survey was voluntary, so a significant number of applicants chose not to answer particular questions. We strongly encourage all of our grant applicants to respond to the demographic survey so that we can better track our progress in improving diversity in our grant programs.

Addressing Misconceptions that Limit Diversity in IES Applicants

TWG panel members and attendees at the HBCU session highlighted a series of misconceptions that the education sciences community holds about the funding process at IES and recommended that IES identify communication strategies to address these misconceptions. IES hears that message loud and clear and wants to address at least a few of those misconceptions here.

Myth: IES only funds randomized controlled trials, limiting the range of researchers and institutions that can be competitive for IES grants.

Reality: IES funds a range of research, including measurement work, exploratory research, intervention development and testing, and efficacy and replication studies. We also fund a wide range of methods, including various experimental and quasi-experimental designs and mixed methods that combine quantitative and qualitative methods.

Myth: IES doesn’t support course buyout or summer salary.

Reality: IES supports grant personnel time to carry out research-related activities. This can include course buyout and summer salary. Principal investigators coordinate their budget planning with sponsored projects officers to ensure that their budgets comply with institutional as well as federal guidelines.

Myth: IES program officers are too busy to help novice applicants.

Reality: Because IES program officers are not involved in the peer review of applications, they can provide in-depth technical assistance and advice throughout the application process. They can even review drafts of proposals prior to submission! IES program officers can be your best resource in helping you submit a competitive grant proposal.

 

If you’d like to learn more about DEIA at IES, please see our Diversity Statement. You can also subscribe to our Newsflash and follow us on Twitter (@IESResearch) for announcements of future listening sessions. Please send any feedback or suggestions to NCER.Commissioner@ed.gov (National Center for Education Research) or NCSER.Commissioner@ed.gov (National Center for Special Education Research). Also, watch this blog over the next few months to read about the wide range of IES grantees and fellows from diverse backgrounds and career paths. Next up is our Hispanic Heritage Month (Sept. 15-Oct. 15, 2021) blog series.


Christina Chhin (Christina.Chhin@ed.gov), Katina Stapleton (Katina.Stapleton@ed.gov), and Katie Taylor (Katherine.Taylor@ed.gov) assisted Commissioners Albro and McLaughlin in writing this blog.

Working to Understand the Policy Process in the Development of Michigan’s Read by Grade Three Law

In recent decades, there has been an emphasis on quantitative, causal research in education policy. These methods are best suited for answering questions about the effects of a policy and whether it achieved its intended outcomes. While the question "Did it work?" remains critical, there is also a need for research that asks, "Why did it work? For whom? In what contexts?" To answer these types of questions, researchers must incorporate rigorous qualitative methods into their quantitative studies. Education research organizations like the Association for Education Finance and Policy have explicitly invited proposals using qualitative and mixed methodologies in an effort to elevate research addressing a range of critical education policy questions. Funding organizations, including IES, encourage applicants to incorporate qualitative methods into their research. In this guest blog, Amy Cummings, Craig De Voto, and Katharine Strunk discuss how they are using qualitative methods in their evaluation of a state education policy.

 

In our IES-funded study, we use qualitative, survey, and administrative data to understand the implementation and impact of Michigan's early literacy law—the Read by Grade Three Law. Like policies passed in 19 other states, the Read by Grade Three Law aims to improve K-3 student literacy skills and mandates retention for those who do not meet a predetermined benchmark on the state's third-grade English language arts assessment. Although the retention component of these policies remains controversial, similar laws are under consideration in several other states, including Alaska, Kentucky, and New Mexico. Below are some of the ways that we have integrated qualitative methods into our evaluation study to better understand the policy process in the development of the Read by Grade Three Law.

Collecting qualitative data helped us understand how the policy came to be, which in turn shaped our data collection for examining the law's implementation and subsequent effects. For our first working paper stemming from this study, we interviewed 24 state-level stakeholders (policymakers, state department of education officials, and early literacy leaders) involved in the development of the law and coded state policy documents related to early literacy to assess the similarity between Michigan's policy and those of other states. Understanding the various components of the Law and how they ended up in the policy led us to ask educators about their perceptions and implementation of these components in the surveys that are also part of our evaluation. For example, because our interviews made clear how controversial the inclusion of the retention component was during the Law's development, we included survey questions to assess educators' perceptions and intended implementation of this component. This understanding also confirmed the importance of our plan to use state administrative retention and assessment data to evaluate the effect of retention on student literacy outcomes.

To trace the Read by Grade Three Law's conception, development, and passage, we analyzed these qualitative data using two theories of the policy process: the Multiple Streams Framework (MSF) and policy transfer. MSF holds that policy issues emerge on government agendas through three streams: problem, policy, and political. When these streams join, a policy window opens, creating a greater opportunity for passing legislation. Policy transfer, meanwhile, highlights how policies enacted in one place are often used in the development of policies in another.

We found that events in the problem and political streams created conditions ripe for the passage of an early literacy policy in Michigan:

  • A national sentiment around improving early literacy, including a retention-based third-grade literacy policy model that had been deemed successful in Florida
  • A pressing problem, evidenced by the state's consistently below-average fourth-grade reading scores on the National Assessment of Educational Progress
  • A court case addressing persistently low test scores in a Detroit-area district
  • Previous attempts by the state to improve early literacy

As a result of these events, policy entrepreneurs—those willing to invest resources to get their preferred policy passed—took advantage of political conditions in the state and worked with policymakers to advance a retention-based third-grade literacy policy model. The figure below illustrates interviewee accounts of the Read by Grade Three Law’s development. Our policy document analysis further reveals that Michigan’s and Florida’s policies are very similar, only diverging on nine of the 50 elements on which we coded.

[Figure: Interviewee accounts of the development of Michigan's Read by Grade Three Law]

Although this study focuses on the development and passage of Michigan's early literacy law, our findings highlight both practical and theoretical elements of the policy process that can be useful to researchers and policymakers. We show how particular conditions, coupled with the efforts of policy entrepreneurs, spurred Michigan's consideration of such a policy. It is conceivable that many state education policies beyond early literacy have taken shape under similar circumstances: a national sentiment combined with influential brokers outside government. In this way, our mixed-methods study provides a practical model of how such elements can come together to enact policy change more broadly.

From a theoretical standpoint, this research also extends our understanding of the policy process by showing that MSF and the theory of policy transfer can work together. We learned that policy entrepreneurs can play a vital role in transferring policy from one place to another by capitalizing on conditions in a target location and coming with a specific policy proposal at the ready.

There is, of course, more to be learned about the intersection between different theories of the policy process, as well as how external organizations as opposed to individuals operate as policy entrepreneurs. As the number of education advocacy organizations continues to grow and these groups become increasingly active in shaping policy, this will be an exciting avenue for researchers to continue to explore.

This study is just one example of how qualitative research can be used in education policy research and shows how engaging in such work can be both practically and theoretically valuable. The most comprehensive evaluations will use different methodologies in concert with one another to understand education policies, because ultimately, how policies are conceptualized and developed has important implications for their effectiveness.


Amy Cummings is an education policy PhD student and graduate research assistant at the Education Policy Innovation Collaborative (EPIC) at Michigan State University (MSU).

Craig De Voto is a visiting research assistant professor in the Learning Sciences Research Institute at the University of Illinois at Chicago and an EPIC affiliated researcher.

Katharine O. Strunk is the faculty director of EPIC, the Clifford E. Erickson Distinguished Chair in Education, and a professor of education policy and by courtesy economics at MSU.

Data Collection for Cost Analysis in an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

In one of our periodic conversations about addressing cost analysis challenges for an efficacy trial, the Cost Analysis in Practice (CAP) Project and Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) teams took on a number of questions related to data collection. The PACT-L cost analysts have a particularly daunting task with over 100 schools spread across multiple districts participating in a social studies and reading comprehension intervention. These schools will be served over the course of three cohorts. Here, we highlight some of the issues discussed and our advice.

Do we need to collect information about resource use in every district in our study?

For an efficacy study, you should collect data from all districts at least for the first cohort to assess the variation in resource use. If there isn’t much variation, then you can justify limiting data collection to a sample for subsequent cohorts.

Do we need to collect data from every school within each district?

Similar to the previous question, you would ideally collect data from every participating school within each district and assess variability across schools. If funding for cost analysis is limited, you may be able to justify collecting data from a stratified random sample of schools within each district, based on study-relevant characteristics, and presenting a range of costs to reflect differences (see the sketch below). Note that "district" and "school" here reflect one common setup in an educational randomized controlled trial; other blocking and clustering units can stand in for other study designs and contexts.
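If it helps to see the sampling idea in code, here is a minimal sketch of drawing a stratified random sample of schools within each district, written in Python with pandas. The column names, strata, and sampling fraction are illustrative assumptions for the example, not part of the CAP Project's or PACT-L's actual procedures.

```python
import pandas as pd

# Hypothetical roster of participating schools; the columns and values are
# illustrative only (not PACT-L data).
schools = pd.DataFrame({
    "district":        ["A", "A", "A", "A", "B", "B", "B", "B"],
    "school_id":       [101, 102, 103, 104, 201, 202, 203, 204],
    "enrollment_band": ["small", "small", "large", "large",
                        "small", "small", "large", "large"],
})

# Draw roughly half the schools within each district-by-enrollment stratum,
# so the cost sample mirrors the mix of school types in each district.
cost_sample = schools.groupby(["district", "enrollment_band"]).sample(
    frac=0.5, random_state=42
)

print(cost_sample.sort_values(["district", "school_id"]))
```

In practice, the stratifying variables would be whatever study-relevant characteristics matter for resource use (for example, school size, locale, or baseline achievement), and the sampling fraction would reflect the budget available for cost analysis.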

How often should we collect cost data? 

The frequency of data collection depends on the nature of the intervention, the length of implementation, and the types of resources ("ingredients") needed. People's time is usually the most important resource for educational interventions, often accounting for 90% of total costs, so that is where you should spend the most effort collecting data. Unfortunately, people are notoriously bad at reporting their time use, so ask for time use as often as you can (daily, weekly). Make it as easy as possible for people to respond, and offer financial incentives if possible. For efficacy trials in particular, be sure to collect cost data for each year of implementation so that you accurately capture the resources needed to produce the observed effects.

What’s the best way to collect time use data?

There are a few ways to collect time use data. The PACT-L team has had success with a two-question time log (see Table 1) administered at the end of each history lesson during the fall quarter, plus a slightly longer seven-question final log (see Table 2).

 

Table 1. Two-question time log. Copyright © 2021 American Institutes for Research.
1. Approximately, how many days did you spend teaching your [NAME OF THE UNIT] unit?  ____ total days
2. Approximately, how many hours of time outside class did you spend on the following activities for [NAME OF UNIT] unit? 

Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)

   a. Developing lesson plans _____ hour(s)
   b. Grading student assignments _____ hour(s)
   c. Developing curricular materials, student assignments, or student assessments _____ hour(s)
   d. Providing additional assistance to students _____ hour(s)
   e. Other activities (e.g., coordinating with other staff; communicating with parents) related to unit _____ hour(s)

 

Table 2. Additional questions for the final log. Copyright © 2021 American Institutes for Research.
3. Just thinking of summer and fall, to prepare for teaching your American History classes, how many hours of professional development or training did you receive so far this year (e.g., trainings, coursework, coaching)? _____ Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
4. So far this year, did each student receive a school-provided textbook (either printed or in a digital form) for this history class? ______Yes     ______No
5. So far this year, did each student receive published materials other than a textbook (e.g., readings, worksheets, activities) for your American history classes? ______Yes     ______No
6. So far this year, what percentage of class time did you use the following materials for your American History classes? Record the average percentage of time you used each type of material (percentages must add to 100%)
   a. A hardcopy textbook provided by the school _____%
   b. Published materials that were provided to you, other than a textbook (e.g., readings, worksheets, activities) _____%
   c. Other curricular materials that you located/provided yourself _____%
   d. Technology-based curricular materials or software (e.g., books online, online activities) _____%
       Total 100%
7. So far this year, how many hours during a typical week did the following people help you with your American History course? Please answer for all that apply. Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
   a. Teaching assistant _____ hours during a typical week
   b. Special education teacher _____ hours during a typical week
   c. English learner teacher _____ hours during a typical week
   d. Principal or assistant principal _____ hours during a typical week
   e. Other administrative staff _____ hours during a typical week
   f. Coach _____ hours during a typical week
   g. Volunteer _____ hours during a typical week

 

The PACT-L team also provided financial incentives. If you cannot use time logs, interviews with a random sample of participants will likely yield more accurate information than surveys of all participants, because the interviewer can prompt the interviewee and clarify responses that don't make sense (see the CAP Project Template for Cost Analysis Interview Protocol under Collecting and Analyzing Cost Data). In our experience, participants enjoy being interviewed about how they spend their time more than trying to enter time estimates in restrictive survey questions. There is also good precedent for collecting time use through interviews: the American Time Use Survey is administered by trained interviewers who follow a scripted protocol lasting about 20 minutes.

Does it improve accuracy to collect time use in hours or as a percentage of total time?

Both methods of collecting time use can lead to less-than-useful estimates, like the teacher whose percentages of time on various activities added up to 233%, or the coach who miraculously spent 200 hours training teachers in one week. Either way, always be clear about the relevant time period. For example, "Over the last 7 days, how many hours did you spend…" or "Of the 40 hours you worked last week, what percentage was spent on…" Mutually exclusive multiple-choice answers can also help ensure reasonable responses. For example, the answer options could be "no time; less than an hour; 1-2 hours; 3-5 hours; more than 5 hours." Simple consistency checks during data collection can also catch implausible responses early, as in the sketch below.
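To illustrate the kind of consistency check that can flag such responses for follow-up, here is a minimal sketch in Python. The field names, thresholds, and example records are assumptions made for the illustration; they are not drawn from the PACT-L instruments.

```python
# Minimal sketch: flag implausible time-use responses for follow-up.
# Field names, thresholds, and records are illustrative assumptions.
MAX_WEEKLY_HOURS = 60  # generous upper bound on plausible weekly work hours

responses = [
    {"id": "t01", "activity_pcts": [40, 30, 30], "weekly_hours": 12.5},
    {"id": "t02", "activity_pcts": [80, 90, 63], "weekly_hours": 8.0},    # sums to 233%
    {"id": "t03", "activity_pcts": [50, 25, 25], "weekly_hours": 200.0},  # 200 hours in a week
]

for r in responses:
    issues = []
    pct_total = sum(r["activity_pcts"])
    if abs(pct_total - 100) > 1:  # allow a little rounding slack
        issues.append(f"activity percentages sum to {pct_total}%")
    if r["weekly_hours"] > MAX_WEEKLY_HOURS:
        issues.append(f"reported {r['weekly_hours']} hours in one week")
    if issues:
        print(f"Follow up with respondent {r['id']}: " + "; ".join(issues))
```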

What about other ingredients besides time?

Because ingredients such as materials and facilities usually represent a smaller share of total costs for educational interventions and are often more stable over time (for example, the number of hours a teacher spends preparing to deliver an intervention may fluctuate from week to week, but classrooms tend to be available for a consistent amount of time each week), the burden of gathering data on these other resources is often lower. Once or twice per year, you can add a few questions about facilities, materials and equipment, and other resources such as parental time or travel to a survey, or better yet to an interview, or better still to both. One challenge is that even though these resources have less of an impact on bottom-line costs, they can involve quantities that are more difficult for participants to estimate than their own time, such as the square footage of their office. A simple worked example of how ingredient quantities and prices roll up into total costs appears below.
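As a concrete illustration of how the ingredients roll up, here is a minimal sketch that totals cost as quantity times unit price for each ingredient and reports each ingredient's share. Every ingredient, quantity, and price below is made up for the example; none of it comes from the PACT-L study.

```python
# Minimal sketch of the ingredients method: cost = quantity x unit price,
# summed across ingredients. All values below are illustrative only.
ingredients = [
    # (ingredient, quantity, unit price in dollars)
    ("teacher time (hours)",      1200, 45.00),
    ("coach time (hours)",         150, 55.00),
    ("printed materials (sets)",   100, 30.00),
    ("classroom space (hours)",    900,  5.00),
]

total = sum(qty * price for _, qty, price in ingredients)

for name, qty, price in ingredients:
    cost = qty * price
    print(f"{name:28s} ${cost:>10,.2f}  ({cost / total:5.1%} of total)")
print(f"{'Total':28s} ${total:>10,.2f}")
```

In this made-up example, personnel time accounts for roughly 90% of the total, which is why the earlier advice concentrates data collection effort on time use.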

If you have additional questions about collecting data for your own cost analysis and would like free technical assistance from the IES-funded CAP Project, submit a request here. The CAP Project team is always game for a new challenge and happy to help you brainstorm data collection strategies appropriate for your analysis.


Robert D. Shand is an Assistant Professor in the School of Education at American University.

Iliana Brodziak is a senior research analyst at the American Institutes for Research.

Why School-based Mental Health?

In May 2021, we launched a new blog series called Spotlight on School-based Mental Health to unpack the why, what, when, who, and where of providing mental health services in schools. This first post in the series focuses on the why by discussing three IES-funded projects that highlight the importance of these services.

Increasing access to needed services. A primary benefit of school-based mental health is that it can increase access to much-needed services. A 2019 report from the Substance Abuse and Mental Health Services Administration (SAMHSA) indicates that 60% of the nearly 4 million 12- to 17-year-olds who reported a major depressive episode in the past year did not receive any treatment whatsoever. What can be done to address this need? One idea being tested in this 2019 efficacy replication study is whether school counselors, with clinician support, can provide high school students a telehealth version of a tier-2 depression prevention program with prior evidence of efficacy: Interpersonal Psychotherapy-Adolescent Skills Training (IPT-AST). Through individual and group sessions, IPT-AST provides direct instruction in communication and interpersonal problem-solving strategies to decrease conflict, increase support, and improve social functioning.

Improving access to services for Black youth. Social anxiety (SA) is a debilitating fear of negative evaluation in performance and social situations that can make school a particularly challenging environment. The connection between SA and impaired school functioning is likely exacerbated in Black youth who often contend with negative racial stereotypes. In this 2020 development and innovation project, the research team aims to expand Black youth’s access to mental health services by improving the contextual and cultural relevance of a promising school-based social anxiety intervention, the Skills for Academic and Social Success (SASS). Through community partnerships, focus groups, and interviews, the team will make cultural and structural changes to SASS and add strategies to engage Black students in urban high schools who experience social anxiety.

Reducing stigma by promoting well-being. The second most commonly cited barrier to seeking mental health treatment among adolescents involves social factors such as perceived stigma and embarrassment. One way to counteract these barriers is to frame intervention in more positive terms, with a focus on subjective well-being, a central construct in positive psychology. In this 2020 initial efficacy study, the research team is testing the Well-Being Promotion Program in middle schools in Florida and Massachusetts. In 10 core sessions, students low in subjective well-being take part in group activities and complete homework assignments designed to increase gratitude, acts of kindness, use of signature character strengths, savoring of positive experiences, optimism, and hopeful or goal-directed thinking.

These three projects illustrate why we need to carefully consider school-based mental health as a logical and critical part of success in school, particularly as we navigate the road to helping students recover from disengagement and learning loss during the coronavirus pandemic.  

Next in the series, we will look at the what of school-based mental health and highlight several projects that are developing innovative ways to support the mental health of students and staff in school settings.


Written by Emily Doolittle (Emily.Doolittle@ed.gov), NCER Team Lead for Social Behavioral Research at IES

 

Timing is Everything: Collaborating with IES Grantees to Create a Needed Cost Analysis Timeline

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

A few months ago, a team of researchers conducting a large, IES-funded randomized controlled trial (RCT) on the intervention Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) met with the Cost Analysis in Practice (CAP) Project team in search of planning support. The PACT-L team had just received funding for a 5-year systematic replication evaluation and were consumed with planning its execution. During an initial call, Iliana Brodziak, who is leading the cost analysis for the evaluation study, shared, “This is a large RCT with 150 schools across multiple districts each year. There is a lot to consider when thinking about all of the moving pieces and when they need to happen. I think I know what needs to happen, but it would help to have the key events on a timeline.”

Such feelings of overload are common even for experienced cost analysts like Iliana, because conducting a large RCT requires extensive thought and planning. Ideally, planning for a cost analysis at this scale is integrated with the overall evaluation planning at the outset of the study. For example, the PACT-L research team developed a design plan that specified the overall evaluation approach along with the cost analysis. Those who save the cost analysis for the end, or even for the last year of the evaluation, may find they have incomplete data, insufficient time or budget for analysis, and other avoidable challenges. Iliana understood this, and her remark sparked an idea for the CAP Project team: developing a timeline that aligns the steps for planning a cost analysis with RCT planning.

As the PACT-L and CAP Project teams continued to collaborate, it became clear that the PACT-L evaluation would be a great case study for crafting a full cost analysis timeline for rigorous evaluations. The CAP Project team, with input from the PACT-L evaluation team, created a detailed timeline for each year of the evaluation. It captures the key steps of a cost analysis and integrates the challenges and considerations that Iliana and her team anticipated for the PACT-L evaluation and similar large RCTs.

In addition, the timeline provides guidance on the data collection process for each year of the evaluation.

  • Year 1:  The team designs the cost analysis data collection instruments. This process includes collaborating with the broader evaluation team to ensure the cost analysis is integrated in the IRB application, setting up regular meetings with the team, and creating and populating spreadsheets or some other data entry tool.
  • Year 2: Researchers plan to document the ingredients or resources needed to implement the intervention on an ongoing basis. The timeline recommends collecting data, reviewing the data, and revising the data collection instruments in Year 2.
  • Year 3 (and possibly Year 4): The iteration of collecting data and revising instruments continues in Year 3 and, if needed, in Year 4.
  • Year 5: Data collection should be complete, allowing the team to carry out the majority of the analysis.

This is just one example of the year-by-year guidance included in the timeline. The latest version of the Timeline of Activities for Cost Analysis is available to help provide guidance to other researchers as they plan and execute their economic evaluations. As a planning tool, the timeline gathers all the moving pieces in one place. It includes detailed descriptions and notes for consideration for each year of the study and provides tips to help researchers.

The PACT-L evaluation team is still in the first year of the evaluation, leaving time for additional meetings and collective brainstorming. The CAP Project and PACT-L teams hope to continue collaborating over the next few years, using the shared expertise among the teams and PACT-L’s experience carrying out the cost analysis to refine the timeline.

Visit the CAP Project website to find other free cost analysis resources or to submit a help request for customized technical assistance on your own project.


Jaunelle Pratt-Williams is an Education Researcher at SRI International.

Iliana Brodziak is a senior research analyst at the American Institutes for Research.

Katie Drummond is a Senior Research Scientist at WestEd.

Lauren Artzi is a senior researcher at the American Institutes for Research.