Inside IES Research

Notes from NCER & NCSER

Updates on Research Center Efforts to Increase Diversity, Equity, Inclusion, and Accessibility

As we begin a new school year, NCER and NCSER wanted to share with our community some of the work we have been doing—and are seeking to do more of—in relation to diversity, equity, inclusion, and accessibility (DEIA). We plan to provide occasional updates via this blog to share progress and keep the conversations going.

Actions on Diversity

At the end of 2020, IES convened a Technical Working Group (TWG) to get feedback on ways that the research centers could improve our investments focused on DEIA. Under the leadership of Drs. Katina Stapleton and Christina Chhin, we convened a stellar panel that participated in a robust conversation. That conversation and the recommendations from the panel are available in this summary document. We are already implementing some of the recommendations and wanted to share steps that we have taken and our plans for next steps to advance DEIA in IES-funded research.

  1. One of the first steps that we took in response to the TWG recommendations was to take a close look at our Requests for Applications (RFAs), identify potential barriers to applicants from underrepresented groups, and revise and/or add language that more clearly articulated our commitment to DEIA, both in terms of those who conduct the research and the populations studied. These changes were reflected in our FY 2022 RFAs, and we will continue to revise and improve our application materials.
  2. IES has been committed to building expertise among a broad range of scholars in the education sciences for nearly two decades. The TWG noted, however, that there is a pressing need to provide funds for early career investigators who may be working at minority-serving institutions (MSIs), teaching-intensive institutions, and/or institutions with limited opportunities for research mentorship. In response, IES launched an Early Career Mentoring for Faculty at MSIs research program. This new program extends our FY 2016 training investment in MSIs, the Pathways to the Education Sciences Training program, which we recompeted in FY 2021. Pathways is designed to encourage undergraduate, post-baccalaureate, and master's-level students from groups that are historically underrepresented in doctoral education to pursue graduate study relevant to education research. Currently, there are seven IES-funded Pathways training programs in the United States, hosted by MSIs and their partners. We are excited to see who applied in this first round of the Early Career Mentoring program and anticipate investing in this program in FY 2023 and beyond.
  3. The TWG also recommended that IES intentionally reach out to the MSI community to ensure that they know about the opportunities available at IES. We held our first such event on September 7, 2021, when IES hosted a virtual listening session during HBCU Week. More than 250 scholars attended that session and provided valuable feedback on barriers that HBCU-based researchers face in applying for research funding from IES. We are in the process of scheduling additional listening sessions with other communities of researchers to provide more opportunities for input from diverse stakeholders and underrepresented groups.
  4. The TWG also recommended that IES take a deeper look at the demographic and institutional data of applicants to our grant programs to identify which groups of researchers and institutions are underrepresented. Data indicate that the percentage of applications received from MSIs between 2013 and 2020 was very small—4% of applications to NCER and 1% to NCSER. Of the applications that were funded, 10% of NCER's awards were made to MSIs, and none of NCSER's awards were made to MSIs. IES also reviewed the demographic information that FY 2021 NCER and NCSER grant applicants and awardees voluntarily submitted, and among those who reported their demographic information, we found the following:
    • Gender (response rate of approximately 82%) - The majority of principal investigators who applied for (62%) and received (59%) funding from IES identified as female.
    • Race (response rate of approximately 75%) - The majority of principal investigators who applied for (78%) and received (88%) funding from IES identified as White, while 22% of applicants and 13% of awardees identified as non-White or multiracial.
    • Ethnicity (response rate of approximately 72%) - The majority of principal investigators who applied for (95%) and received (97%) funding identified as non-Hispanic.
    • Disability (response rate of approximately 70%) - The majority of principal investigators who applied for (97%) and received (96%) funding identified as not having a disability.

These data underscore the need for IES to continue to broaden and diversify the education research pipeline, including institutions and researchers, and to better support the needs of underrepresented researchers in the education community. However, tracking our progress has proven to be a challenge: because responding to the demographic survey was voluntary, a significant number of applicants chose not to answer particular questions. We strongly encourage all our grant applicants to complete the demographic survey so that we will be better able to track our progress in improving diversity in our grant programs.

Addressing Misconceptions that Limit Diversity in IES Applicants

TWG panel members and attendees at the HBCU session highlighted a series of misconceptions that the education sciences community holds about the funding process at IES and recommended that IES identify communication strategies to address these misconceptions. IES hears that message loud and clear and wants to address at least a few of those misconceptions here.

Myth: IES only funds randomized controlled trials, limiting the range of researchers and institutions that can be competitive for IES grants.

Reality: IES funds a range of research, including measurement work, exploratory research, intervention development and testing, and efficacy and replication studies. We also fund a wide range of methods, including various experimental and quasi-experimental designs and mixed-methods approaches that combine quantitative and qualitative techniques.

Myth: IES doesn’t support course buyout or summer salary.

Reality: IES supports grant personnel time to carry out research-related activities, which can include course buyout and summer salary. Principal investigators on grants coordinate their budget planning with sponsored projects officers to ensure that their budgets comply with institutional as well as federal guidelines.

Myth: IES program officers are too busy to help novice applicants.

Reality: Because IES program officers are not involved in the peer review of applications, they can provide in-depth technical assistance and advice throughout the application process. They can even review drafts of proposals prior to submission! IES program officers can be your best resource in helping you submit a competitive grant proposal.


If you’d like to learn more about DEIA at IES, please see our Diversity Statement. You can also subscribe to our Newsflash and follow us on Twitter (@IESResearch) for announcements of future listening sessions. Please send any feedback or suggestions to NCER.Commissioner@ed.gov (National Center for Education Research) or NCSER.Commissioner@ed.gov (National Center for Special Education Research). Also, watch this blog over the next few months to read about the wide range of IES grantees and fellows from diverse backgrounds and career paths. Next up is our Hispanic Heritage Month (Sept. 15-Oct. 15, 2021) blog series.


Christina Chhin (Christina.Chhin@ed.gov), Katina Stapleton (Katina.Stapleton@ed.gov), and Katie Taylor (Katherine.Taylor@ed.gov) assisted Commissioners Albro and McLaughlin in writing this blog.

Working to Understand the Policy Process in the Development of Michigan’s Read by Grade Three Law

In recent decades, there has been an emphasis on quantitative, causal research in education policy. These methods are best suited for answering questions about the effects of a policy and whether it achieved its intended outcomes. While the question “Did it work?” remains critical, there is also a need for research that asks, “Why did it work? For whom? In what contexts?” To answer these types of questions, researchers must incorporate rigorous qualitative methods into their quantitative studies. Education research organizations like the Association for Education Finance and Policy have explicitly invited proposals using qualitative and mixed methodologies in an effort to elevate research addressing a range of critical education policy questions. Funding organizations, including IES, encourage applicants to incorporate qualitative methods into their research process. In this guest blog, Amy Cummings, Craig De Voto, and Katharine Strunk discuss how they are using qualitative methods in their evaluation of a state education policy.


In our IES-funded study, we use qualitative, survey, and administrative data to understand the implementation and impact of Michigan’s early literacy law—the Read by Grade Three Law. Like policies passed in 19 other states, the Read by Grade Three Law aims to improve K-3 student literacy skills and mandates retention for those who do not meet a predetermined benchmark on the state’s third-grade English language arts assessment. Although the retention component of these policies remains controversial, similar laws are under consideration in several other states, including Alaska, Kentucky, and New Mexico. Below are some of the ways that we have integrated qualitative methods in our evaluation study to better understand the policy process in the development of the Read by Grade Three Law.

Collecting qualitative data helped us understand how the policy came to be, which in turn shaped our data collection for examining the law’s implementation and subsequent effects. For our first working paper stemming from this study, we interviewed 24 state-level stakeholders (policymakers, state department of education officials, early literacy leaders) involved in the development of the law and coded state policy documents related to early literacy to assess the similarity between Michigan’s policy and those of other states. Understanding the Law’s various components and how they ended up in the policy ensured that we asked educators about their perceptions and implementation of those components in the surveys that are also part of our evaluation. For example, because our interviews made clear how controversial the inclusion of the retention component was during the Law’s development, we included survey questions to assess educators’ perceptions and intended implementation of that component. The interviews also confirmed the importance of our plan to use state administrative retention and assessment data to evaluate the effect of retention on student literacy outcomes.

To trace the Read by Grade Three Law’s conception, development, and passage, we analyzed these qualitative data using two theories of the policy process: the Multiple Streams Framework (MSF) and policy transfer. MSF posits that policy issues emerge on government agendas through three streams: problem, policy, and political. When these streams join, a policy window opens, during which there is a greater opportunity for passing legislation. Policy transfer, meanwhile, highlights how policies enacted in one place are often used in the development of policies in another.

We found that events in the problem and political streams created conditions ripe for the passage of an early literacy policy in Michigan:

  • A national sentiment around improving early literacy, including a retention-based third-grade literacy policy model that had been deemed successful in Florida
  • A pressing problem, evidenced by the state’s consistently below-average fourth-grade reading scores on the National Assessment of Educational Progress
  • A court case addressing persistently low test scores in a Detroit-area district
  • Previous attempts by the state to improve early literacy

As a result of these events, policy entrepreneurs—those willing to invest resources to get their preferred policy passed—took advantage of political conditions in the state and worked with policymakers to advance a retention-based third-grade literacy policy model. The figure below illustrates interviewee accounts of the Read by Grade Three Law’s development. Our policy document analysis further reveals that Michigan’s and Florida’s policies are very similar, diverging on only nine of the 50 elements we coded (that is, the two laws matched on 82% of coded elements).

[Figure: Interviewee accounts of the development of Michigan’s Read by Grade Three Law]

Although this study focuses on the development and passage of Michigan’s early literacy law, our findings highlight both practical and theoretical elements of the policy process that can be useful to researchers and policymakers. To this end, we show how particular conditions, coupled with the work of policy entrepreneurs, spurred Michigan’s consideration of such a policy. It is conceivable that many state education policies beyond early literacy have taken shape under similar circumstances: a national sentiment combined with influential brokers outside government. In this way, our mixed-methods study provides a practical model of the conditions under which policy change can occur more broadly.

From a theoretical standpoint, this research also extends our understanding of the policy process by showing that MSF and the theory of policy transfer can work together. We learned that policy entrepreneurs can play a vital role in transferring policy from one place to another by capitalizing on conditions in a target location and coming with a specific policy proposal at the ready.

There is, of course, more to be learned about the intersection of different theories of the policy process, as well as how external organizations, as opposed to individuals, operate as policy entrepreneurs. As the number of education advocacy organizations continues to grow and these groups become increasingly active in shaping policy, this will be an exciting avenue for researchers to continue to explore.

This study is just one example of how qualitative research can be used in education policy research and shows how engaging in such work can be both practically and theoretically valuable. The most comprehensive evaluations will use different methodologies in concert with one another to understand education policies, because ultimately, how policies are conceptualized and developed has important implications for their effectiveness.


Amy Cummings is an education policy PhD student and graduate research assistant at the Education Policy Innovation Collaborative (EPIC) at Michigan State University (MSU).

Craig De Voto is a visiting research assistant professor in the Learning Sciences Research Institute at the University of Illinois at Chicago and an EPIC affiliated researcher.

Katharine O. Strunk is the faculty director of EPIC, the Clifford E. Erickson Distinguished Chair in Education, and a professor of education policy and by courtesy economics at MSU.

Data Collection for Cost Analysis in an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

In one of our periodic conversations about addressing cost analysis challenges for an efficacy trial, the Cost Analysis in Practice (CAP) Project and Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) teams took on a number of questions related to data collection. The PACT-L cost analysts have a particularly daunting task with over 100 schools spread across multiple districts participating in a social studies and reading comprehension intervention. These schools will be served over the course of three cohorts. Here, we highlight some of the issues discussed and our advice.

Do we need to collect information about resource use in every district in our study?

For an efficacy study, you should collect data from all districts at least for the first cohort to assess the variation in resource use. If there isn’t much variation, then you can justify limiting data collection to a sample for subsequent cohorts.
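To make “assess the variation” concrete, here is a minimal sketch in Python, assuming hypothetical first-cohort data on teacher hours per school; the district figures and the 20% cutoff are illustrative assumptions, not CAP Project guidance.

```python
# A minimal sketch: compare average resource use (here, teacher hours per
# school) across districts using a coefficient of variation (CV).
# The data and the 20% cutoff are hypothetical.
from statistics import mean, stdev

hours_by_district = {
    "District A": [40, 42, 38, 41],  # teacher hours per school, cohort 1
    "District B": [39, 43, 40, 44],
    "District C": [41, 37, 42, 40],
}

district_means = [mean(hours) for hours in hours_by_district.values()]
cv = stdev(district_means) / mean(district_means)

if cv < 0.20:
    print(f"CV = {cv:.2f}: little cross-district variation; "
          "sampling districts in later cohorts may be justified")
else:
    print(f"CV = {cv:.2f}: substantial variation; "
          "keep collecting data from all districts")
```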

Do we need to collect data from every school within each district?

Similar to the previous question, you would ideally collect data from every participating school within each district and assess variability across schools. If funding for cost analysis is limited, you may be able to justify collecting data from a stratified random sample of schools within each district, stratified on study-relevant characteristics, and presenting a range of costs to reflect differences. Note that “district” and “school” here reflect one common setup in an educational randomized controlled trial; other blocking and clustering units can stand in under other study designs and contexts.
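As a companion sketch, here is one way to draw a stratified random sample of schools within each district, assuming a hypothetical roster with an enrollment-band characteristic; your own stratification variables would come from the study design.

```python
# A minimal sketch of stratified random sampling of schools within
# districts. The roster and the "enrollment_band" stratifier are
# hypothetical placeholders for study-relevant characteristics.
import pandas as pd

schools = pd.DataFrame({
    "school_id": range(1, 13),
    "district": ["A"] * 6 + ["B"] * 6,
    "enrollment_band": ["small", "small", "large",
                        "large", "small", "large"] * 2,
})

# Sample roughly half of each district-by-band stratum, keeping at least
# one school per stratum so every stratum contributes to the cost range.
parts = [
    stratum.sample(n=max(1, len(stratum) // 2), random_state=42)
    for _, stratum in schools.groupby(["district", "enrollment_band"])
]
sample = pd.concat(parts)
print(sample)
```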

How often should we collect cost data? 

The frequency of data collection depends on the intervention, the length of implementation, and the types of resources (“ingredients”) needed. People’s time is usually the most important resource used in educational interventions, often accounting for 90% of total costs, so that is where you should spend the most effort collecting data. Unfortunately, people are notoriously bad at reporting their time use, so ask for time use as often as you can (daily, weekly). Make it as easy as possible for people to respond, and offer financial incentives if possible. For efficacy trials in particular, be sure to collect cost data for each year of implementation so that you accurately capture the resources needed to produce the observed effects.

What’s the best way to collect time use data?

There are a few ways to collect time-use data. The PACT-L team has had success with 2-question time logs (see Table 1) administered at the end of each history lesson during the fall quarter, plus a slightly longer 7-question final log (see Table 2).


Table 1. Two-question time log. Copyright © 2021 American Institutes for Research.
1. Approximately, how many days did you spend teaching your [NAME OF THE UNIT] unit?  ____ total days
2. Approximately, how many hours of time outside class did you spend on the following activities for [NAME OF UNIT] unit? 

Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)

   a. Developing lesson plans _____ hour(s)
   b. Grading student assignments _____ hour(s)
   c. Developing curricular materials, student assignments, or student assessments _____ hour(s)
   d. Providing additional assistance to students _____ hour(s)
   e. Other activities (e.g., coordinating with other staff; communicating with parents) related to unit _____ hour(s)


Table 2. Additional questions for the final log. Copyright © 2021 American Institutes for Research.
3. Just thinking of summer and fall, to prepare for teaching your American History classes, how many hours of professional development or training did you receive so far this year (e.g., trainings, coursework, coaching)? _____ Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
4. So far this year, did each student receive a school-provided textbook (either printed or in a digital form) for this history class? ______Yes     ______No
5. So far this year, did each student receive published materials other than a textbook (e.g., readings, worksheets, activities) for your American history classes? ______Yes     ______No
6. So far this year, what percentage of class time did you use the following materials for your American History classes? Record the average percentage of class time you used each of these materials (the percentages must add to 100%)
   a. A hardcopy textbook provided by the school _____%
   b. Published materials that were provided to you, other than a textbook (e.g., readings, worksheets, activities) _____%
   c. Other curricular materials that you located/provided yourself _____%
   d. Technology-based curricular materials or software (e.g., books online, online activities) _____%
       Total 100%
7. So far this year, how many hours during a typical week did the following people help you with your American history course? Please answer for all that apply. Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
   a. Teaching assistant _____ hours during a typical week
   b. Special education teacher _____ hours during a typical week
   c. English learner teacher _____ hours during a typical week
   d. Principal or assistant principal _____ hours during a typical week
   e. Other administrative staff _____ hours during a typical week
   f. Coach _____ hours during a typical week
   g. Volunteer _____ hours during a typical week


The PACT-L team also provided financial incentives. If you cannot use time logs, interviews of a random sample of participants will likely yield more accurate information than surveys of all participants because the interviewer can prompt the interviewee and clarify responses that don’t make sense (see the CAP Project Template for Cost Analysis Interview Protocol under Collecting and Analyzing Cost Data). In our experience, participants enjoy interviews about how they spend their time more than trying to enter time estimates into restricted survey questions. There is also good precedent for collecting time use through interviews: the American Time Use Survey is administered by trained interviewers who follow a scripted protocol lasting about 20 minutes.

Does it improve accuracy to collect time use in hours or as a percentage of total time?

Both methods of collecting time use can lead to less-than-useful estimates, like the teacher whose percentage of time on various activities added up to 233%, or the coach who miraculously spent 200 hours training teachers in one week. Either way, always be clear about the relevant time period. For example, “Over the last 7 days, how many hours did you spend…” or “Of the 40 hours you worked last week, what percentage were spent on…” Mutually exclusive multiple-choice answers can also help ensure reasonable responses. For example, the answer options could be “no time; less than an hour; 1-2 hours; 3-5 hours; more than 5 hours.”
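Whichever format you choose, simple intake checks can catch impossible answers like those above before they reach analysis. Here is a minimal sketch, with hypothetical field names and illustrative thresholds rather than CAP Project rules:

```python
# A minimal sketch of validating time-use responses. Field names and
# thresholds are hypothetical, not CAP Project guidance.
def validate_response(percent_by_activity, weekly_hours):
    """Return a list of problems found in one respondent's time-use data."""
    problems = []

    # Percentage allocations should sum to 100, allowing small rounding
    # slack; this catches the 233% respondent.
    total_pct = sum(percent_by_activity.values())
    if abs(total_pct - 100) > 2:
        problems.append(f"percentages sum to {total_pct}%, expected 100%")

    # A week has only 168 hours, so 200 reported hours is impossible;
    # flag anything beyond a plausible working week.
    if weekly_hours > 80:
        problems.append(f"{weekly_hours} hours/week exceeds a plausible bound")

    return problems

# Both implausible responses described above are flagged:
print(validate_response({"planning": 133, "grading": 100}, weekly_hours=200))
```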

What about other ingredients besides time?

Ingredients such as materials and facilities usually represent a smaller share of total costs for educational interventions and are often more stable over time (for example, the number of hours a teacher spends preparing to deliver an intervention may fluctuate from week to week, but the classrooms tend to be available for a consistent amount of time each week), so the burden of gathering data on these other resources is often lower. Once or twice per year, you can add a few questions about facilities, materials and equipment, and other resources such as parental time or travel to a survey, or better yet to an interview, or better still to both. One challenge is that even though these resources may have less of an impact on bottom-line costs, they can involve quantities that are more difficult for participants to estimate than their own time, such as the square footage of their office.

If you have additional questions about collecting data for your own cost analysis and would like free technical assistance from the IES-funded CAP Project, submit a request here. The CAP Project team is always game for a new challenge and happy to help other researchers brainstorm data collection strategies that would be appropriate for your analysis.


Robert D. Shand is Assistant Professor in the School of Education at American University.

Iliana Brodziak is a senior research analyst at the American Institutes for Research.

Why School-based Mental Health?

In May 2021, we launched a new blog series called Spotlight on School-based Mental Health to unpack the why, what, when, who, and where of providing mental health services in schools. This first post in the series focuses on the why by discussing three IES-funded projects that highlight the importance of these services.

Increasing access to needed services. A primary benefit of school-based mental health is that it can increase access to much-needed services. A 2019 report from the Substance Abuse and Mental Health Services Administration (SAMHSA) indicates that 60% of the nearly 4 million 12- to 17-year-olds who reported a major depressive episode in the past year did not receive any treatment whatsoever. What can be done to address this need? One idea being tested in this 2019 efficacy replication study is whether school counselors, with clinician support, can provide high school students a telehealth version of a tier-2 depression prevention program with prior evidence of efficacy, Interpersonal Psychotherapy-Adolescent Skills Training (IPT-AST). Through individual and group sessions, the IPT-AST program provides direct instruction in communication and interpersonal problem-solving strategies to decrease conflict, increase support, and improve social functioning.

Improving access to services for Black youth. Social anxiety (SA) is a debilitating fear of negative evaluation in performance and social situations that can make school a particularly challenging environment. The connection between SA and impaired school functioning is likely exacerbated in Black youth who often contend with negative racial stereotypes. In this 2020 development and innovation project, the research team aims to expand Black youth’s access to mental health services by improving the contextual and cultural relevance of a promising school-based social anxiety intervention, the Skills for Academic and Social Success (SASS). Through community partnerships, focus groups, and interviews, the team will make cultural and structural changes to SASS and add strategies to engage Black students in urban high schools who experience social anxiety.

Reducing stigma by promoting well-being. Social factors such as perceived stigma and embarrassment are the second leading barrier adolescents cite for not seeking mental health treatment. One way to counteract these barriers is to frame intervention in more positive terms, with a focus on subjective well-being, a central construct in positive psychology. In this 2020 initial efficacy study, the research team is testing the Well-Being Promotion Program in middle schools in Florida and Massachusetts. In 10 core sessions, students low in subjective well-being take part in group activities and complete homework assignments designed to increase gratitude, acts of kindness, use of signature character strengths, savoring of positive experiences, optimism, and hopeful or goal-directed thinking.

These three projects illustrate why we need to carefully consider school-based mental health as a logical and critical part of success in school, particularly as we navigate the road to helping students recover from disengagement and learning loss during the coronavirus pandemic.  

Next in the series, we will look at the what of school-based mental health and highlight several projects that are developing innovative ways to support the mental health of students and staff in school settings.


Written by Emily Doolittle (Emily.Doolittle@ed.gov), NCER Team Lead for Social Behavioral Research at IES


Perspective Matters: How Diversity of Background, Expertise, and Cognition Can Lead to Good Science

IES funds cutting-edge researchers who often bring multiple disciplines together. Dr. Maithilee Kunda (Vanderbilt University) is one such researcher who stands at the juncture of multiple fields, using artificial intelligence (AI) to address questions related to cognition and autism spectrum disorder. Recently, Dr. Kunda received an award from the National Center for Special Education Research to develop an educational game that leverages AI to help students with autism spectrum disorder better infer and understand the beliefs, desires, and emotions of others. As a computer scientist and woman of color performing education research, Dr. Kunda exemplifies the value that diverse backgrounds, experiences, and disciplines bring to the field.

Bennett Lunn, a Truman-Albright Fellow at IES, asked Dr. Kunda about her work and background. Her responses are below.

As a woman of color, how have your background and experiences shaped your scholarship and career?


In college, I was a math major on the theory track, which meant that my math classes were really hard! I had been what one might call a “quick study” in high school, so it was a new experience for me to be floating around the bottom quartile of each class. The classes were mostly men, but it happened that there was a woman of color in our cohort—an international student from Colombia—and she was flat-out brilliant. She would ask the professor a question that no one else even understood, but the professor’s eyes would light up, and the two of them would start having some animated and incomprehensible discussion about whatever “mathy” thing it was. That student’s presence bestowed upon me a valuable gift: the ability to assume, without even thinking twice, that women of color quite naturally belong in math and science, even at the top of the heap! I don’t even remember her name, but I wish I could shake her hand. She was a role model for me and for every other student in those classes just by being who she was and doing what she did.

I have been extremely lucky to have seen diverse scientists and academics frequently throughout my career. My very first computer science teacher in high school was a woman. At a high school science camp, my engineering professor was a man who walked with two forearm crutches. Several of my college professors in math, chemistry, and robotics were women. My favorite teaching assistant in a robotics class was a Black man. In graduate school, I remember professors and senior students who were women, LGBTQ people, and people of color. Unfortunately, I know that the vast majority of students do not have access to such a wealth of diverse role models. It is heartening, though, that even a single role model—just by showing up—has so much power to positively shape the perceptions of everyone who sees them in their rightful place, be it in STEM, academia, or whatever context they inhabit.

What got you interested in a career in education science?

I read a lot of science fiction and fantasy growing up, and in high school, I was wrestling with why I liked these genres so much. I came up with a pet theory about fiction writing. All works of fiction are like extended thought experiments; the author sets up some initial conditions—characters, setting, etc.—and they run the experiment via writing about it. In general fiction, the experiments mostly involve variables at the people scale. In sci-fi and fantasy, on the other hand, authors are trying to run experiments at civilization or planetary scales, and that’s why they have to create whole new worlds to write about. I realized that was why I loved those genres so much: they allowed me to think about planetary-scale experiments! 

This “what if” mindset has continued to weave itself throughout my scholarship and career.

How did it ever become possible for humans to imagine things that don’t exist? Why do some people think differently from others, and how can we redesign the workings of our societies to make sure that everyone is supported, enriched, and empowered to contribute to their fullest potential? These kinds of questions fuel my scientific passions and have led me to pursue a variety of research directions on visual thinking, autism, AI, and education.

How does your research contribute to a better understanding of the importance of neurodiversity and inclusion in education?

Early in graduate school, and long before I heard the term neurodiversity, the first big paper I wrote was a re-analysis of several research studies on cognition in autism. This research taught me there can be significant individual variation in how people think. Even if 99 other people with similar demographic characteristics happen to solve a problem one particular way, that does not mean that the hundredth person from the same group is also going to solve the problem that way.

I realized much later that this research fits very well into the idea of neurodiversity, which essentially observes that atypical patterns of thinking should be viewed more as differences than as being inherently wrong or inadequate. Like any individual characteristics you have, the way you think brings with it a particular set of strengths and weaknesses, and different kinds of thinking come with different strengths and weaknesses.

Much of my team’s current research is a continuation of this theme. For example, in one project, we are developing new methods for assessing spatial skills that dig down into the processes people use to solve problems. This view of individual differences is probably one that teachers know intuitively from working one-on-one with students. One of the challenges for today’s education research is to continue to bring this kind of intuitive expertise into our research studies to describe individual differences more systematically across diverse learner populations.

In your area of research, what do you see as the greatest research needs or recommendations to address diversity and equity and improve the relevance of education research for diverse communities of students and families?

For the past 3 years, I have been leading an IES project to create a new educational game called Film Detective to help students with autism spectrum disorder improve their theory of mind (ability to take another’s perspective) and social reasoning skills. This was my first experience doing research on an interactive application of this kind. I was a newcomer to the idea of participatory design, which basically means that instead of just designing for some particular group of users, you bring their voices in as active contributors early in the design process. Our amazing postdoc Dr. Roxanne Rashedi put together a series of early studies using participatory methods, so we had the opportunity to hear directly from middle schoolers on the spectrum, their parents, and their teachers about what they needed and wanted to see in this kind of technology.

In one of these studies, we had students try out a similar education game and then give us feedback. One young man, about 11 or 12 years old, got frustrated in the middle of the session and had a bit of a meltdown. After he calmed down, we asked him about the game and what he would like to see taught in similar games. He told us that he would really like some help in learning how to handle his frustration better so that he could avoid having those kinds of meltdowns. Impressed by his self-awareness and courage in talking to us about his personal challenges, we ended up designing a whole new area in our game called the Relaxatron arcade. This is where students can play mini-games that help them learn about strategies for self-regulation, like deep breathing or meditation. This whole experience reinforced for me the mindset of participatory design: we are all on a team—researchers, students, parents, and teachers—working collaboratively to find new solutions for education.

We are also proud to work with Vanderbilt’s Frist Center for Autism and Innovation to make our research more inclusive and participatory. One of the many excellent programs run by this center is a software internship program for college students or recent graduates on the spectrum. This summer, we are pleased to be welcoming three Frist Center interns who will be helping us on our Film Detective project.

What has been the biggest challenge you have encountered and how did you overcome the challenge?

Throughout my career, I seem to have gravitated towards questions that not many other people are asking, using methods that not many other people are using. For example, I am a computer scientist who studies autism. My research investigates visual thinking, but not vision. I work in AI, but mostly in areas out of the mainstream.

I get a lot of personal and intellectual satisfaction out of my research, but I do face some steep challenges that I believe are common for researchers working in not-so-mainstream areas. For instance, it is sometimes harder to get our papers published in the big AI conferences because our work does not always follow standard patterns for how studies are designed and implemented. And I do experience my share of impostor syndrome (feeling unqualified for your job even when you are performing well) and FOMO (fear of missing out), especially when I come across some trendy paper that already has a thousand citations in 3 months and I think to myself, “Why am I not doing that? Should I be doing that?”

I try to remember to apply the very lessons that my research has produced, and I am fortunate to have friends and colleagues who help lift me out of self-doubt. I actively remind myself about the importance to our species of having diverse forms of thinking and how my own individual view of things is a culmination of my unique lifetime of educational and intellectual experiences. That particular perspective—my perspective—is irreplaceable, and, more than any one paper or grant or citation, it is the true value I bring to the world as a scientist.

How can the broader education research community better support the careers and scholarship of researchers from underrepresented groups?

I think research communities in general need to recognize that inclusion and diversity are everybody’s business, regardless of what someone’s specific research topic is. For example, we assume that every grant proposal and paper follows principles of rigorous and ethical research design, no matter the specific methodology. While some researchers in every discipline specialize in thinking about research design from a scholarly perspective, everyone has a baseline responsibility for knowing about it and for doing it.

Similarly, while we will always want and need researchers who specialize in research on inclusion and diversity, these topics should not be considered somehow peripheral to “real science.” They are just as much core parts of a discipline as anything else is. As I constantly remind my students, science is a social enterprise! The pool of individual minds that make our discoveries for us is just as important as any piece of equipment or research method.

What advice would you give to emerging scholars from underrepresented, minoritized groups who are pursuing a career in education research?

A few years ago, when I was a newly minted assistant professor, I went to a rather specialized AI symposium where I found myself to be one of only two women there—out of over 70 attendees! The other woman was a senior researcher whom I had long admired but never met, and I felt a bit star-struck at the idea of meeting her. During one of the coffee breaks, I saw her determinedly heading my way. I said to myself as she approached, “Be cool, Maithilee, be cool, don’t mention the women thing…”  I was gearing myself up to have a properly research-focused discussion, but when she arrived, the very first words out of her mouth were, “So, there’s only the two of us, huh!” We both burst out laughing, and over the next couple of days, we talked about our research as well as about the lack of diversity at the symposium and in the research area more broadly.

The lesson I learned from this wonderful role model was that taking your rightful place in the research community does not mean papering over who you are. Certain researchers are going to be rarities, at least for a while, because of aspects of who we are, but that is nothing to hide. The value we bring as scientists comes from our whole selves and we should not just accept that but embrace and celebrate it.

This blog is part of a series of interviews showcasing a diverse group of IES-funded education researchers who are making significant contributions to education research, policy, and practice. For the first blog in the series, please see Representation Matters: Exploring the Role of Gender and Race on Educational Outcomes.

Dr. Maithilee Kunda is the director of the Laboratory for Artificial Intelligence and Visual Analogical Systems and founding investigator for the Frist Center for Autism and Innovation at Vanderbilt University. This interview was produced and edited by Bennett Lunn, Truman-Albright Fellow for the National Center for Education Research and the National Center for Special Education Research.