Inside IES Research

Notes from NCER & NCSER

Perspective Matters: How Diversity of Background, Expertise, and Cognition Can Lead to Good Science

IES funds cutting-edge researchers who often bring multiple disciplines together. Dr. Maithilee Kunda (Vanderbilt University) is one such researcher who stands at the juncture of multiple fields, using artificial intelligence (AI) to address questions related to cognition and autism spectrum disorder. Recently, Dr. Kunda received an award from the National Center for Special Education Research to develop an educational game that leverages AI to help students with autism spectrum disorder better infer and understand the beliefs, desires, and emotions of others. As a computer scientist and woman of color performing education research, Dr. Kunda exemplifies the value that diverse backgrounds, experiences, and disciplines bring to the field.

Bennett Lunn, a Truman-Albright Fellow at IES, asked Dr. Kunda about her work and background. Her responses are below.

As a woman of color, how have your background and experiences shaped your scholarship and career?


In college, I was a math major on the theory track, which meant that my math classes were really hard! I had been what one might call a “quick study” in high school, so it was a new experience for me to be floating around the bottom quartile of each class. The classes were mostly men, but it happened that there was a woman of color in our cohort—an international student from Colombia—and she was flat-out brilliant. She would ask the professor a question that no one else even understood, but the professor’s eyes would light up, and the two of them would start having some animated and incomprehensible discussion about whatever “mathy” thing it was. That student’s presence bestowed upon me a valuable gift: the ability to assume, without even thinking twice, that women of color quite naturally belong in math and science, even at the top of the heap! I don’t even remember her name, but I wish I could shake her hand. She was a role model for me and for every other student in those classes just by being who she was and doing what she did.

I have been extremely lucky to have seen diverse scientists and academics frequently throughout my career. My very first computer science teacher in high school was a woman. At a high school science camp, my engineering professor was a man who walked with two forearm crutches. Several of my college professors in math, chemistry, and robotics were women. My favorite teaching assistant in a robotics class was a Black man. In graduate school, I remember professors and senior students who were women, LGBTQ people, and people of color. Unfortunately, I know that the vast majority of students do not have access to such a wealth of diverse role models. It is heartening, though, that even a single role model—just by showing up—has so much power to positively shape the perceptions of everyone who sees them in their rightful place, be it in STEM, academia, or whatever context they inhabit.

What got you interested in a career in education science?

I read a lot of science fiction and fantasy growing up, and in high school, I was wrestling with why I liked these genres so much. I came up with a pet theory about fiction writing. All works of fiction are like extended thought experiments; the author sets up some initial conditions—characters, setting, etc.—and runs the experiment by writing about it. In general fiction, the experiments mostly involve variables at the people scale. In sci-fi and fantasy, on the other hand, authors are trying to run experiments at civilization or planetary scales, and that’s why they have to create whole new worlds to write about. I realized that was why I loved those genres so much: they allowed me to think about planetary-scale experiments!

This “what if” mindset has continued to weave itself throughout my scholarship and career.

How did it ever become possible for humans to imagine things that don’t exist? Why do some people think differently from others, and how can we redesign the workings of our societies to make sure that everyone is supported, enriched, and empowered to contribute to their fullest potential? These kinds of questions fuel my scientific passions and have led me to pursue a variety of research directions on visual thinking, autism, AI, and education.

How does your research contribute to a better understanding of the importance of neurodiversity and inclusion in education?

Early in graduate school, and long before I heard the term neurodiversity, the first big paper I wrote was a re-analysis of several research studies on cognition in autism. This research taught me there can be significant individual variation in how people think. Even if 99 other people with similar demographic characteristics happen to solve a problem one particular way, that does not mean that the hundredth person from the same group is also going to solve the problem that way.

I realized much later that this research fits very well into the idea of neurodiversity, which essentially observes that atypical patterns of thinking should be viewed more as differences than as being inherently wrong or inadequate. Like any individual characteristics you have, the way you think brings with it a particular set of strengths and weaknesses, and different kinds of thinking come with different strengths and weaknesses.

Much of my team’s current research is a continuation of this theme. For example, in one project, we are developing new methods for assessing spatial skills that dig down into the processes people use to solve problems. This view of individual differences is probably one that teachers know intuitively from working one-on-one with students. One of the challenges for today’s education research is to continue to bring this kind of intuitive expertise into our research studies to describe individual differences more systematically across diverse learner populations.

In your area of research, what do you see as the greatest research needs or recommendations to address diversity and equity and improve the relevance of education research for diverse communities of students and families?

For the past 3 years, I have been leading an IES project to create a new educational game called Film Detective to help students with autism spectrum disorder improve their theory of mind (ability to take another’s perspective) and social reasoning skills. This was my first experience doing research on an interactive application of this kind. I was a newcomer to the idea of participatory design, which basically means that instead of just designing for some particular group of users, you bring their voices in as active contributors early in the design process. Our amazing postdoc Dr. Roxanne Rashedi put together a series of early studies using participatory methods, so we had the opportunity to hear directly from middle schoolers on the spectrum, their parents, and their teachers about what they needed and wanted to see in this kind of technology.

In one of these studies, we had students try out a similar education game and then give us feedback. One young man, about 11 or 12 years old, got frustrated in the middle of the session and had a bit of a meltdown. After he calmed down, we asked him about the game and what he would like to see taught in similar games. He told us that he would really like some help in learning how to handle his frustration better so that he could avoid having those kinds of meltdowns. Impressed by his self-awareness and courage in talking to us about his personal challenges, we ended up designing a whole new area in our game called the Relaxatron arcade. This is where students can play mini-games that help them learn about strategies for self-regulation, like deep breathing or meditation. This whole experience reinforced for me the mindset of participatory design: we are all on a team—researchers, students, parents, and teachers—working collaboratively to find new solutions for education.

We are also proud to work with Vanderbilt’s Frist Center for Autism and Innovation to make our research more inclusive and participatory. One of the many excellent programs run by this center is a software internship program for college students or recent graduates on the spectrum. This summer, we are pleased to be welcoming three Frist Center interns who will be helping us on our Film Detective project.

What has been the biggest challenge you have encountered and how did you overcome the challenge?

Throughout my career, I seem to have gravitated towards questions that not many other people are asking, using methods that not many other people are using. For example, I am a computer scientist who studies autism. My research investigates visual thinking, but not vision. I work in AI, but mostly in areas out of the mainstream.

I get a lot of personal and intellectual satisfaction out of my research, but I do face some steep challenges that I believe are common for researchers working in not-so-mainstream areas. For instance, it is sometimes harder to get our papers published in the big AI conferences because our work does not always follow standard patterns for how studies are designed and implemented. And I do experience my share of impostor syndrome (feeling unqualified for your job even when you are performing well) and FOMO (fear of missing out), especially when I come across some trendy paper that already has a thousand citations in 3 months and I think to myself, “Why am I not doing that? Should I be doing that?”

I try to remember to apply the very lessons that my research has produced, and I am fortunate to have friends and colleagues who help lift me out of self-doubt. I actively remind myself about the importance to our species of having diverse forms of thinking and how my own individual view of things is a culmination of my unique lifetime of educational and intellectual experiences. That particular perspective—my perspective—is irreplaceable, and, more than any one paper or grant or citation, it is the true value I bring to the world as a scientist.

How can the broader education research community better support the careers and scholarship of researchers from underrepresented groups?

I think research communities in general need to recognize that inclusion and diversity are everybody’s business, regardless of what someone’s specific research topic is. For example, we assume that every grant proposal and paper follows principles of rigorous and ethical research design, no matter the specific methodology. While some researchers in every discipline specialize in thinking about research design from a scholarly perspective, everyone has a baseline responsibility for knowing about it and for doing it.

Similarly, while we will always want and need researchers who specialize in research on inclusion and diversity, these topics should not be considered somehow peripheral to “real science.” They are just as much core parts of a discipline as anything else is. As I constantly remind my students, science is a social enterprise! The pool of individual minds that make our discoveries for us is just as important as any piece of equipment or research method.

What advice would you give to emerging scholars from underrepresented, minoritized groups who are pursuing a career in education research?

A few years ago, when I was a newly minted assistant professor, I went to a rather specialized AI symposium where I found myself to be one of only two women there—out of over 70 attendees! The other woman was a senior researcher whom I had long admired but never met, and I felt a bit star-struck at the idea of meeting her. During one of the coffee breaks, I saw her determinedly heading my way. I said to myself as she approached, “Be cool, Maithilee, be cool, don’t mention the women thing…”  I was gearing myself up to have a properly research-focused discussion, but when she arrived, the very first words out of her mouth were, “So, there’s only the two of us, huh!” We both burst out laughing, and over the next couple of days, we talked about our research as well as about the lack of diversity at the symposium and in the research area more broadly.

The lesson I learned from this wonderful role model was that taking your rightful place in the research community does not mean papering over who you are. Some of us are going to be rarities, at least for a while, because of aspects of who we are, but that is nothing to hide. The value we bring as scientists comes from our whole selves, and we should not just accept that but embrace and celebrate it.

This blog is part of a series of interviews showcasing a diverse group of IES-funded education researchers who are making significant contributions to education research, policy, and practice. For the first blog in the series, please see Representation Matters: Exploring the Role of Gender and Race on Educational Outcomes.

Dr. Maithilee Kunda is the director of the Laboratory for Artificial Intelligence and Visual Analogical Systems and founding investigator for the Frist Center for Autism and Innovation at Vanderbilt University. This interview was produced and edited by Bennett Lunn, Truman-Albright Fellow for the National Center for Education Research and the National Center for Special Education Research.

 

Timing is Everything: Collaborating with IES Grantees to Create a Needed Cost Analysis Timeline

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

A few months ago, a team of researchers conducting a large, IES-funded randomized controlled trial (RCT) on the intervention Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) met with the Cost Analysis in Practice (CAP) Project team in search of planning support. The PACT-L team had just received funding for a 5-year systematic replication evaluation and were consumed with planning its execution. During an initial call, Iliana Brodziak, who is leading the cost analysis for the evaluation study, shared, “This is a large RCT with 150 schools across multiple districts each year. There is a lot to consider when thinking about all of the moving pieces and when they need to happen. I think I know what needs to happen, but it would help to have the key events on a timeline.”

Comments like these, and the accompanying sense of overload, are common even for experienced cost analysts like Iliana because conducting a large RCT requires extensive thought and planning. Ideally, planning for a cost analysis at this scale is integrated with the overall evaluation planning at the outset of the study. For example, the PACT-L research team developed a design plan that specified the overall evaluation approach along with the cost analysis. Those who save the cost analysis for the end, or even for the last year of the evaluation, may find they have incomplete data, insufficient time or budget for analysis, and other avoidable challenges. Iliana understood this, and her remark sparked an idea for the CAP Project team: developing a timeline that aligns the steps for planning a cost analysis with RCT planning.

As the PACT-L and CAP Project teams continued to collaborate, it became clear that the PACT-L evaluation would be a great case study for crafting a full cost analysis timeline for rigorous evaluations. The CAP Project team, with input from the PACT-L evaluation team, created a detailed timeline for each year of the evaluation. It captures the key steps of a cost analysis and integrates the challenges and considerations that Iliana and her team anticipated for the PACT-L evaluation and similar large RCTs.

In addition, the timeline provides guidance on the data collection process for each year of the evaluation.

  • Year 1: The team designs the cost analysis data collection instruments. This process includes collaborating with the broader evaluation team to ensure the cost analysis is integrated into the IRB application, setting up regular meetings with the team, and creating and populating spreadsheets or some other data entry tool (a simple illustrative sketch of such a tool appears below).
  • Year 2: Researchers plan to document the ingredients or resources needed to implement the intervention on an ongoing basis. The timeline recommends collecting data, reviewing the data, and revising the data collection instruments in Year 2.
  • Year 3 (and maybe Year 4): The iteration of collecting data and revising instruments continues in Year 3 and, if needed, in Year 4.
  • Year 5: Data collection should be complete, allowing the majority of the analysis to take place.

This is just one example of the year-by-year guidance included in the timeline. The latest version of the Timeline of Activities for Cost Analysis is available to help provide guidance to other researchers as they plan and execute their economic evaluations. As a planning tool, the timeline gathers all the moving pieces in one place. It includes detailed descriptions and notes for consideration for each year of the study and provides tips to help researchers.
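To make the Year 1 and Year 2 activities more concrete, here is a minimal, hypothetical sketch of what an ingredients-style data-entry structure and cost roll-up could look like in code. The resource names, categories, quantities, and prices are invented for illustration and are not drawn from the PACT-L study or from the CAP Project’s actual templates.

from dataclasses import dataclass

@dataclass
class Ingredient:
    name: str          # what the resource is, e.g., "Teacher time for training"
    category: str      # personnel, facilities, materials/equipment, etc.
    quantity: float    # amount used per year (hours, items, square feet, ...)
    unit_price: float  # price per unit, in national or local dollars

# Invented example entries; a real study would populate these from
# interviews, surveys, time logs, and purchase records.
ingredients = [
    Ingredient("Teacher time for training sessions", "personnel", 12, 45.0),
    Ingredient("Coach time for classroom support", "personnel", 30, 55.0),
    Ingredient("Student workbooks", "materials/equipment", 150, 8.0),
]

total_cost = sum(i.quantity * i.unit_price for i in ingredients)

cost_by_category = {}
for i in ingredients:
    cost_by_category[i.category] = (
        cost_by_category.get(i.category, 0.0) + i.quantity * i.unit_price
    )

print(f"Total annual cost: ${total_cost:,.2f}")
for category, cost in cost_by_category.items():
    print(f"  {category}: ${cost:,.2f}")

In practice, the same structure can live in a spreadsheet; the point is simply that each resource is recorded with a category, a quantity, and a unit price so that costs can be totaled and broken out by category when the analysis begins.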

The PACT-L evaluation team is still in the first year of the evaluation, leaving time for additional meetings and collective brainstorming. The CAP Project and PACT-L teams hope to continue collaborating over the next few years, using the shared expertise among the teams and PACT-L’s experience carrying out the cost analysis to refine the timeline.

Visit the CAP Project website to find other free cost analysis resources or to submit a help request for customized technical assistance on your own project.


Jaunelle Pratt-Williams is an Education Researcher at SRI International.

Iliana Brodziak is a senior research analyst at the American Institutes for Research.

Katie Drummond is a Senior Research Scientist at WestEd.

Lauren Artzi is a senior researcher at the American Institutes for Research.

English Learners with or at Risk for Disabilities


English learners (ELs) are the fastest growing group of students in U.S. public schools. They are disproportionately at risk for poor academic outcomes and are more likely than non-ELs to be classified as having specific learning disabilities and speech/language impairment. Data collected by the U.S. Department of Education in school year 2018-2019 (Common Core of Data, Individuals with Disabilities Education Act (IDEA) data) indicate that approximately 14.1% of students in classrooms across the country received services through IDEA Part B. Nationally, 11.3% of students with disabilities were ELs, a little higher than the percentage of total student enrollment who were ELs (10.2%). However, it is important to distinguish between language and literacy struggles that are due to learning English as a second language and those due to a language or reading disability. For ELs who have or are at risk for a disability and need intervention, it is also important that the interventions be linguistically and culturally appropriate.

Since the first round of competitions in 2006, the National Center for Special Education Research (NCSER) has funded research on ELs with or at risk for disabilities. The projects are in broad topic areas, including early childhood; reading, writing, and language development; cognition and learning; and social and behavioral skill development. They vary with respect to the types of research conducted (such as exploration, development, efficacy, measurement) as well as the extent to which they focus on ELs, from ELs as the exclusive or primary population of interest to a secondary focus as a student group within the general population.

As an example, David Francis (University of Houston) explored factors related to the identification and classification of reading and language disabilities among Spanish-speaking ELs. The aim was to provide schools with clearer criteria and considerations for identifying learning disabilities among these students in kindergarten through grade 2. Analyzing data from previous studies, the team found that narrative measures (measures in which narrative responses were elicited, transcribed, and scored) were more sensitive to identifying EL students with disabilities than standardized measures that did not include a narrative component. They also found that the differences in student language growth depended on the language used in the instruction and the language used to measure outcomes. Specifically, language growth was greatest for Spanish-instructed students on Spanish reading and language outcomes, followed by English outcomes for English-instructed students, English outcomes for Spanish-instructed students, and with the lowest growth, Spanish outcomes for English-instructed students.

A number of these projects are currently in progress. For example, Ann Kaiser (Vanderbilt University) and her team are using a randomized controlled trial to test the efficacy of a cultural and linguistic adaptation of Enhanced Milieu Teaching (EMT). EMT en Español aims to improve the language and related school readiness skills of Spanish-speaking toddlers with receptive and expressive language delays who may be at risk for language impairment. In another study, Nicole Schatz (Florida International University) and her team will be using a randomized controlled trial to compare the efficacy of a language-only, behavior-only, or combination language and behavior intervention for students in early elementary school who are English language learners with or at risk for ADHD.

Overall, NCSER has funded 12 research grants that focus specifically on English learners, dual-language learners, and/or Spanish-speaking children with or at risk for disabilities.

In addition to the research focused specifically on English learners, many other projects include ELs as a large portion of their sample and/or focus some of their analyses specifically on ELs with or at risk for disabilities. A few recently completed studies show encouraging results with few differences between ELs and non-ELs. For example, Nathan Clemens (University of Texas, Austin) investigated the adequacy of six early literacy measures and validated their use for monitoring the reading progress of kindergarten students at risk for reading disabilities. As part of this project, the research team conducted subgroup analyses indicating that ELs do not necessarily demonstrate lower initial scores and rates of growth over time than non-ELs and that there are few differences between ELs and non-ELs in the extent to which initial performance or rate of growth differentially predicts later reading skills. As another example, Jeanne Wanzek (Vanderbilt University) examined the efficacy of an intensive multicomponent reading intervention for fourth graders with severe reading difficulties. The team found that those in the intervention group outperformed their peers in word reading and word fluency, but not reading fluency or comprehension; importantly, there was no variation in outcomes based on English learner status.

NCSER continues to value and support research projects that focus on English learners with or at risk for disabilities throughout its various programs of research funding.

This blog was written by Amy Sussman, NCSER Program Officer.

How Remote Data Collection Enhanced One Grantee’s Classroom Research During COVID-19

Under an IES grant, Michigan State University, in collaboration with the Michigan Department of Education, the Michigan Center for Educational Performance and Information, and the University of Michigan, is assessing the implementation, impact, and cost of the Michigan “Read by Grade 3” law intended to increase early literacy outcomes for Michigan students. In this guest blog, Dr. Tanya Wright and Lori Bruner discuss how they were able to quickly pivot to a remote data collection plan when COVID-19 disrupted their initial research plan.  

The COVID-19 pandemic began while we were planning a study of early literacy coaching for the 2020-2021 academic year. It soon became abundantly clear that restrictions to in-person research would pose a major hurdle for our research team. We had planned to enter classrooms and record videos of literacy instruction in the fall. As such, we found ourselves faced with a difficult choice: we could pause our study until it became safer to visit classrooms and miss the opportunity to learn about literacy coaching and in-person classroom instruction during the pandemic, or we could quickly pivot to a remote data collection plan.

Our team chose the second option. We found multiple technologies available for remote data collection and chose one of them, a device known as the Swivl. It includes a robotic mount where a tablet or smartphone can be placed to record the video, a 360-degree rotating platform that works in tandem with a handheld or wearable tracker, and an app that allows videos to be instantly uploaded to a cloud-based storage system for easy access.

Over the course of the school year, we captured over 100 hours of elementary literacy instruction in 26 classrooms throughout our state. While remote data collection looks and feels very different from visiting a classroom to record video, we learned that it offers many benefits to researchers and educators alike. We also learned a few important lessons along the way.

First, we learned remote data collection provides greater flexibility for both researchers and educators. In our original study design, we planned to hire data collectors to visit classrooms, which restricted our recruitment of schools to a reasonable driving distance from Michigan State University (MSU). However, recording devices allow us to capture video anywhere, including rural areas of our state that are often excluded from classroom research due to their remote location. Furthermore, we found that the cost of purchasing and shipping equipment to schools is significantly less than paying for travel and people’s time to visit classrooms. In addition, using devices in place of data collectors allowed us to easily adapt to last-minute schedule changes and offer teachers the option to record video over multiple days to accommodate shifts in instruction due to COVID-19.

Second, we discovered that we could capture more classroom talk than when using a typical video camera. After some trial and error, we settled on a device with three external wireless microphones: one for the teacher and two additional microphones to place around the classroom. Not only did the extra microphones record audio beyond what the teacher was saying, but we learned that we could also isolate each microphone during data analysis to hear what was happening in specific areas of the classroom (even when the teacher and children were wearing masks). We also purchased an additional wide-angle lens, which clipped over the camera on our tablet and allowed us to capture a wider video angle.

Third, we found remote data collection to be less intrusive than sending a research team into schools. The device is compact and can be placed on any flat surface in the classroom or be mounted on a basic tripod. The teacher has the option to wear the microphone on a lanyard to serve as a hands-free tracker that signals the device to rotate to follow the teacher’s movements automatically. At the end of the lesson, the video uploads to a password-protected storage cloud with one touch of a button, making it easy for teachers to share videos with our research team. We then download the videos to the MSU server and delete them from our cloud account. This set-up allowed us to collect data with minimal disruption, especially when compared to sending a person with a video camera to spend time in the classroom.

As with most remote work this year, we ran into a few unexpected hurdles during our first round of data collection. After gathering feedback from teachers and members of our research team, we were able to make adjustments that led to a better experience during the second round of data collection this spring. We hope the following suggestions might help others who are considering such a device to collect classroom data in the future:

  1. Consider providing teachers with a brief informational video or offering after-school training sessions to help answer questions and address concerns ahead of your data collection period. We initially provided teachers with a detailed user guide, but we found that the extra support was key to ensuring teachers had a positive experience with the device. You might also consider appointing a member of your research team to serve as a contact person who can answer questions about remote data collection while it is under way.
  2. It is important for the research team to remember that its members will not be collecting the data themselves, so it is critical to provide teachers with clear directions ahead of time: what exactly do you want them to record? Our team found it helpful to send teachers a brief two-minute video outlining our goals and then follow up with a printable checklist they could use on the day they recorded instruction.
  3. Finally, we found it beneficial to scan the videos for content at the end of each day. By doing so, we were able to spot a few problems, such as missing audio or a device that stopped rotating during a lesson. While these instances were rare, it was helpful to catch them right away, while teachers still had the device in their schools so that they could record missing parts the next day.

Although restrictions to in-person research are beginning to lift, we plan to continue using remote data collection for the remaining three years of our project. Conducting classroom research during the COVID-19 pandemic has proven challenging at every turn, but as we adapted to remote video data collection, we were pleased to find unanticipated benefits for our research team and for our study participants.


This blog is part of a series focusing on conducting education research during COVID-19. For other blog posts related to this topic, please see here.

Tanya S. Wright is an Associate Professor of Language and Literacy in the Department of Teacher Education at Michigan State University.

Lori Bruner is a doctoral candidate in the Curriculum, Instruction, and Teacher Education program at Michigan State University.

Overcoming Challenges in Conducting Cost Analysis as Part of an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

Educational interventions come at a cost—and no, it is not just the price tag, but the personnel time and other resources needed to implement them effectively. Having both efficacy and cost information is essential for educators to make wise investments. However, including cost analysis in an efficacy study comes with its own costs.

Experts from the Cost Analysis in Practice (CAP) Project recently connected with the IES-funded team studying Promoting Accelerated Reading Comprehension of Text - Local (PACT-L) to discuss the challenges of conducting cost analysis and cost-effectiveness analysis as part of an efficacy trial. PACT-L is a social studies and reading comprehension intervention with a train-the-trainer professional development model. Here, we share some of the challenges we discussed and the solutions that surfaced.

 

Challenge 1: Not understanding the value of a cost analysis for educational programs

Some people may not understand the value of a cost analysis, focusing only on whether they have the budget to cover program expenses. If stakeholders are reluctant to invest in a cost analysis, ask them to consider how a thorough look at implementation in practice (as opposed to implementation “as intended”) might help support planning for scale-up of a local program or adoption at different sites.

For example, take Tennessee’s Student/Teacher Achievement Ratio (STAR) project, a class size reduction experiment, which was implemented successfully with a few thousand students. California tried to scale up the approach for several million students but failed to anticipate the difficulty of finding enough qualified teachers and building more classrooms to accommodate smaller classes. A cost analysis would have supplied key details to support decision-makers in California in preparing for such a massive scale-up, including an inventory of the type and quantity of resources needed. For decision-makers seeking to replicate an effective intervention even on a small scale, success is much more likely if they can anticipate whether they have the requisite time, staff, facilities, materials, and equipment to implement the intervention with fidelity.

 

Challenge 2: Inconsistent implementation across cohorts

Efficacy studies often involve two or three cohorts of participants, and the intervention may be adapted from one to the next, leading to varying costs across cohorts. This issue has been particularly acute for studies running prior to the COVID-19 pandemic, then during COVID-19, and into post-COVID-19 times. You may have in-person, online, and hybrid versions of the intervention delivered, all in the course of one study. While such variation in implementation may be necessary in response to real-world circumstances, it poses problems for the effectiveness analysis because it’s hard to draw conclusions about exactly what was or wasn’t effective.

The variation in implementation also poses problems for the cost analysis because substantially different types and amounts of resources might be used across cohorts. At worst, this leads to the need for three cost analyses funded by the study budget intended for one! In the case of PACT-L, the study team modified part of the intervention to be delivered online due to COVID-19 but plans to keep this change consistent through all three cohorts.

For other interventions, if the differences in implementation among cohorts are substantial, perhaps they should not be combined and analyzed as if all participants are receiving a single intervention. Cost analysts may need to focus their efforts on the cohort for which implementation reflects how the intervention is most likely to be used in the future. For less substantial variations, cost analysts should stay close to the implementation team to document differences in resource use across cohorts, so they can present a range of costs as well as an average across all cohorts.

 

Challenge 3: Balancing accuracy of data against burden on participants and researchers

Data collection for an efficacy trial can be burdensome—add a cost analysis and researchers worry about balancing the accuracy of the data against the burden on participants and researchers. This is something that the PACT-L research team grappled with when designing the evaluation plan. If you plan in advance and integrate the data collection for cost analysis with that for fidelity of implementation, it is possible to lower the additional burden on participants. For example, include questions related to time use in interviews and surveys that are primarily designed to document the quality of the implementation (as the PACT-L team plans to do), and ask observers to note the kinds of facilities, materials, and equipment used to implement the intervention. However, it may be necessary to conduct interviews dedicated solely to the cost analysis and to ask key implementers to keep time logs. We’ll have more advice on collecting cost data in a future blog.

 

Challenge 4: Determining whether to use national and/or local prices

Like many other RCTs, the PACT-L team’s study will span multiple districts and geographical locations, so the question arises about which prices to use. When deciding whether to use national or local prices—or both—analysts should consider the audience for the results, the availability of relevant prices from national or local sources, the number of different sets of local prices that would need to be collected, and their research budget. Salaries and facilities prices may vary significantly from location to location. Local audiences may be most interested in costs estimated using local prices, but it would be a lot of work to collect local price information from each district or region. The cost analysis research budget would need to reflect the work involved. Furthermore, for cost-effectiveness analysis, prices must be standardized across geographical locations, which means applying regional price parities to adjust prices to a single location or to a national average equivalent.

It may be more feasible to use national average prices from publicly available sources for all sites. However, that comes with a catch too: national surveys of personnel salaries don't include a wide variety of school or district personnel positions. Consequently, the analyst must look for a similar-enough position or make some assumptions about how to adjust a published salary for a different position.

If the research budget allows, analysts could present costs using both national and local prices. This might be especially helpful for an intervention targeting schools in a rural area or an urban area, which are likely to have lower and higher costs, respectively, than the national average. The CAP Project’s cost analysis Excel template is set up to allow for both national prices and local prices. You can find the template and other cost analysis tools here: https://capproject.org/resources.
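As a concrete illustration of the price-standardization step mentioned above, here is a minimal sketch of converting locally observed prices to national-average equivalents using regional price parities (RPPs). The salaries and RPP values are invented for illustration; an actual analysis would take RPPs from a published source such as the Bureau of Economic Analysis, where the national average is indexed to 100.

# Minimal sketch: standardizing local prices with regional price parities (RPPs).
# The RPP values and salaries below are invented for illustration only.

def to_national_equivalent(local_price: float, local_rpp: float) -> float:
    """Convert a locally observed price to its national-average equivalent,
    assuming RPPs are indexed so that the national average equals 100."""
    return local_price * (100.0 / local_rpp)

observed_salaries = [
    {"site": "Urban district", "salary": 68_000, "rpp": 112.0},  # above-average price level
    {"site": "Rural district", "salary": 52_000, "rpp": 91.0},   # below-average price level
]

for record in observed_salaries:
    adjusted = to_national_equivalent(record["salary"], record["rpp"])
    print(f'{record["site"]}: observed ${record["salary"]:,} '
          f'-> national-average equivalent ${adjusted:,.0f}')

The same ratio can be applied in reverse to express national average prices at a particular site’s price level, which is one way to report national and local estimates side by side.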


The CAP Project team is interested in learning about new challenges and figuring out how to help. If you are encountering similar or other challenges and would like free technical assistance from the IES-funded CAP Project, submit a request here. You can also email us at helpdesk@capproject.org or tweet us @The_CAP_Project

 

Fiona Hollands is a Senior Researcher at Teachers College, Columbia University who focuses on the effectiveness and costs of educational programs, and how education practitioners and policymakers can optimize the use of resources in education to promote better student outcomes.

Iliana Brodziak is a senior research analyst at the American Institutes for Research who focuses on statistical analysis of achievement data, resource allocation data, and survey data, with a special focus on English Learners and early childhood.

Jaunelle Pratt-Williams is an Education Researcher at SRI who uses mixed methods approaches to address resource allocation, social and emotional learning and supports, school finance policy, and educational opportunities for disadvantaged student populations.

Robert D. Shand is an Assistant Professor in the School of Education at American University with expertise in teacher improvement through collaboration and professional development and in how schools and teachers use data from economic evaluation and accountability systems to make decisions and improve over time.

Katie Drummond, a Senior Research Scientist at WestEd, has designed and directed research and evaluation projects related to literacy, early childhood, and professional development for over 20 years. 

Lauren Artzi is a senior researcher with expertise in second language education PK-12, intervention research, and multi-tiered systems of support.