Inside IES Research

Notes from NCER & NCSER

Data Collection for Cost Analysis in an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

In one of our periodic conversations about addressing cost analysis challenges for an efficacy trial, the Cost Analysis in Practice (CAP) Project and Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) teams took on a number of questions related to data collection. The PACT-L cost analysts have a particularly daunting task with over 100 schools spread across multiple districts participating in a social studies and reading comprehension intervention. These schools will be served over the course of three cohorts. Here, we highlight some of the issues discussed and our advice.

Do we need to collect information about resource use in every district in our study?

For an efficacy study, you should collect data from all districts at least for the first cohort to assess the variation in resource use. If there isn’t much variation, then you can justify limiting data collection to a sample for subsequent cohorts.
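As a rough illustration of that variation check, one could compute a coefficient of variation (CV) across first-cohort districts and pre-register a threshold below which sampling is justified. This is a minimal sketch; the district names, hour values, and any threshold are hypothetical, not from the PACT-L study.

```python
from statistics import mean, stdev

# Hypothetical per-teacher weekly resource-use estimates from the first cohort
district_hours = {"District A": 14.2, "District B": 15.1,
                  "District C": 13.8, "District D": 14.6}

values = list(district_hours.values())
cv = stdev(values) / mean(values)  # coefficient of variation (sample SD / mean)
print(f"CV = {cv:.2%}")
# A low CV (relative to a threshold you pre-register) could help justify
# collecting data from only a sample of districts in later cohorts.
```

Here the CV comes out under 5%, suggesting fairly homogeneous resource use; a large CV would argue for continuing to collect data from all districts.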

Do we need to collect data from every school within each district?

Similar to the previous question, you would ideally collect data from every participating school within each district and assess variability across schools. If funding for cost analysis is limited, you may be able to justify collecting data from a stratified random sample of schools within each district, stratified on study-relevant characteristics, and presenting a range of costs to reflect differences. Note that “district” and “school” here reflect one common setup for an educational randomized controlled trial; other blocking and clustering units can stand in for other study designs and contexts.
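The stratified sampling idea above can be sketched in a few lines. This is an illustrative example only; the stratification characteristic ("size"), the sampling fraction, and the school list are hypothetical stand-ins for whatever study-relevant characteristics a team would actually use.

```python
import random
from collections import defaultdict

def stratified_sample(schools, strata_key, fraction, seed=0):
    """Draw a stratified random sample of schools.

    schools: list of dicts, each carrying a stratum label (e.g., enrollment size).
    strata_key: the study-relevant characteristic to stratify on.
    fraction: share of each stratum to sample (always at least 1 per stratum).
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = defaultdict(list)
    for school in schools:
        strata[school[strata_key]].append(school)
    sample = []
    for label, members in strata.items():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical district of 8 schools stratified by enrollment size
schools = [
    {"name": f"School {i}", "size": "large" if i < 3 else "small"}
    for i in range(8)
]
picked = stratified_sample(schools, "size", fraction=0.5)
print(len(picked), "schools sampled across", len({s['size'] for s in picked}), "strata")
```

Because every stratum contributes at least one school, the resulting cost estimates can be reported as a range across strata rather than a single average.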

How often should we collect cost data? 

The frequency of data collection depends on the intervention, the length of implementation, and the types of resources (“ingredients”) needed. People’s time is usually the most important resource used for educational interventions, often accounting for 90% of total costs, so that is where you should spend the most effort collecting data. Unfortunately, people are notoriously bad at reporting their time use, so ask for time use as often as you can (daily, weekly). Make it as easy as possible for people to respond, and offer financial incentives if possible. For efficacy trials in particular, be sure to collect cost data for each year of implementation so that you accurately capture the resources needed to produce the observed effects.

What’s the best way to collect time use data?

There are a few ways to collect time use data. The PACT-L team has had success with 2-question time logs (see Table 1) administered at the end of each history lesson during the fall quarter, plus a slightly longer 7-question final log (see Table 2).

 

Table 1. Two-question time log. Copyright © 2021 American Institutes for Research.
1. Approximately, how many days did you spend teaching your [NAME OF THE UNIT] unit?  ____ total days
2. Approximately, how many hours of time outside class did you spend on the following activities for [NAME OF UNIT] unit? 

Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)

   a. Developing lesson plans _____ hour(s)
   b. Grading student assignments _____ hour(s)
   c. Developing curricular materials, student assignments, or student assessments _____ hour(s)
   d. Providing additional assistance to students _____ hour(s)
   e. Other activities (e.g., coordinating with other staff; communicating with parents) related to unit _____ hour(s)

 

Table 2. Additional questions for the final log. Copyright © 2021 American Institutes for Research.
3. Just thinking of summer and fall, to prepare for teaching your American History classes, how many hours of professional development or training did you receive so far this year (e.g., trainings, coursework, coaching)? _____ Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
4. So far this year, did each student receive a school-provided textbook (either printed or in a digital form) for this history class? ______Yes     ______No
5. So far this year, did each student receive published materials other than a textbook (e.g., readings, worksheets, activities) for your American history classes? ______Yes     ______No
6. So far this year, what percentage of class time did you use the following materials for your American History classes? Record the average percentage of time you used these materials (responses must total 100%)
   a. A hardcopy textbook provided by the school _____%
   b. Published materials that were provided to you, other than a textbook (e.g., readings, worksheets, activities) _____%
   c. Other curricular materials that you located/provided yourself _____%
   d. Technology-based curricular materials or software (e.g., books online, online activities) _____%
       Total 100%
7. So far this year, how many hours during a typical week did the following people help you with your American history course? Please answer for all that apply. Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
   a. Teaching assistant _____ hours during a typical week
   b. Special education teacher _____ hours during a typical week
   c. English learner teacher _____ hours during a typical week
   d. Principal or assistant principal _____ hours during a typical week
   e. Other administrative staff _____ hours during a typical week
   f. Coach _____ hours during a typical week
   g. Volunteer _____ hours during a typical week

 

They also provided financial incentives. If you cannot use time logs, interviews of a random sample of participants will likely yield more accurate information than surveys of all participants because the interviewer can prompt the interviewee and clarify responses that don’t make sense (see CAP Project Template for Cost Analysis Interview Protocol under Collecting and Analyzing Cost Data). In our experience, participants enjoy interviews about how they spend their time more than trying to enter time estimates in restricted survey questions. There also is good precedent for collecting time use through interviews: the American Time Use Survey is administered by trained interviewers who follow a scripted protocol lasting about 20 minutes.

Does it improve accuracy to collect time use in hours or as a percentage of total time?

Both methods of collecting time use can lead to less-than-useful estimates, like the teacher whose percentages of time on various activities added up to 233%, or the coach who miraculously spent 200 hours training teachers in one week. Either way, always be clear about the relevant time period. For example, “Over the last 7 days, how many hours did you spend…” or “Of the 40 hours you worked last week, what percentage was spent on…” Mutually exclusive multiple-choice answers can also help ensure reasonable responses. For example, the answer options could be “no time; less than an hour; 1-2 hours; 3-5 hours; more than 5 hours.”
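Implausible responses like those above can be flagged automatically before analysis. The sketch below shows one way to screen both response formats; the respondent IDs, activity labels, and the 80-hour weekly cap are hypothetical choices, not rules from the CAP Project.

```python
def flag_time_use(responses, max_weekly_hours=80):
    """Flag implausible time-use survey responses for follow-up.

    responses: list of dicts with an 'id' and either
      'percents' (activity shares of time) or 'hours' (weekly hours by activity).
    Returns (respondent_id, reason) tuples for entries worth double-checking.
    """
    flags = []
    for r in responses:
        if "percents" in r:
            total = sum(r["percents"].values())
            if abs(total - 100) > 1:  # allow a little rounding slack
                flags.append((r["id"], f"percentages total {total}%"))
        if "hours" in r:
            total = sum(r["hours"].values())
            if total > max_weekly_hours:
                flags.append((r["id"], f"{total} hours in one week"))
    return flags

responses = [
    {"id": "T1", "percents": {"textbook": 133, "other": 100}},  # totals 233%
    {"id": "C1", "hours": {"training teachers": 200}},          # 200 hrs/week
    {"id": "T2", "percents": {"textbook": 60, "other": 40}},    # plausible
]
for rid, reason in flag_time_use(responses):
    print(rid, reason)
```

Flagged responses could then feed a short follow-up interview rather than being silently dropped or averaged in.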

What about other ingredients besides time?

Ingredients such as materials and facilities usually represent a smaller share of total costs for educational interventions and are often more stable over time (for example, the number of hours a teacher spends preparing to deliver an intervention may fluctuate from week to week, but classrooms tend to be available for a consistent amount of time each week), so the burden of gathering data on these resources is often lower. Once or twice per year, you can add a few questions about facilities, materials and equipment, and other resources such as parental time or travel to a survey, or better yet to an interview, or better still, to both. One challenge is that even though these resources may have less of an impact on bottom-line costs, they can involve quantities that are more difficult for participants to estimate than their own time, such as the square footage of their office.
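The ingredients-method arithmetic behind this point (multiply each ingredient's quantity by its price, then sum) can be illustrated in a few lines. All quantities and prices below are invented for illustration; they are not PACT-L figures.

```python
# Hypothetical ingredients-method tally: quantity x unit price per ingredient
ingredients = [
    ("teacher time (hours)",     400, 45.00),  # personnel usually dominates
    ("coach time (hours)",        60, 55.00),
    ("printed materials (sets)",  30, 12.00),
    ("classroom use (hours)",    120,  8.00),
]

total = sum(qty * price for _, qty, price in ingredients)
personnel = sum(qty * price for name, qty, price in ingredients
                if "time" in name)
print(f"total cost: ${total:,.2f}; personnel share: {personnel / total:.0%}")
```

Even with made-up numbers, the pattern described above emerges: personnel time accounts for the vast majority of total cost, while materials and facilities contribute a small remainder.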

If you have additional questions about collecting data for your own cost analysis and would like free technical assistance from the IES-funded CAP Project, submit a request here. The CAP Project team is always game for a new challenge and happy to help other researchers brainstorm data collection strategies that would be appropriate for your analysis.


Robert D. Shand is Assistant Professor in the School of Education at American University

Iliana Brodziak is a senior research analyst at the American Institutes for Research

Why School-based Mental Health?

In May 2021, we launched a new blog series called Spotlight on School-based Mental Health to unpack the why, what, when, who, and where of providing mental health services in schools. This first post in the series focuses on the why by discussing three IES-funded projects that highlight the importance of these services.

Increasing access to needed services. A primary benefit of school-based mental health is that it can increase access to much-needed services. A 2019 report from the Substance Abuse and Mental Health Services Administration (SAMHSA) indicates that 60% of the nearly 4 million 12- to 17-year-olds who reported a major depressive episode in the past year did not receive any treatment whatsoever. What can be done to address this need? One idea being tested in this 2019 efficacy replication study is whether school counselors with clinician support can provide high school students a telehealth version of a tier-2 depression prevention program with prior evidence of efficacy, Interpersonal Psychotherapy-Adolescent Skills Training (IPT-AST). Through individual and group sessions, the IPT-AST program provides direct instruction in communication and interpersonal problem-solving strategies to decrease conflict, increase support, and improve social functioning.

Improving access to services for Black youth. Social anxiety (SA) is a debilitating fear of negative evaluation in performance and social situations that can make school a particularly challenging environment. The connection between SA and impaired school functioning is likely exacerbated in Black youth who often contend with negative racial stereotypes. In this 2020 development and innovation project, the research team aims to expand Black youth’s access to mental health services by improving the contextual and cultural relevance of a promising school-based social anxiety intervention, the Skills for Academic and Social Success (SASS). Through community partnerships, focus groups, and interviews, the team will make cultural and structural changes to SASS and add strategies to engage Black students in urban high schools who experience social anxiety.

Reducing stigma by promoting well-being. Social factors such as perceived stigma and embarrassment are the second leading barrier adolescents cite for not seeking mental health treatment. One way to counteract these barriers is to frame intervention in more positive terms with a focus on subjective well-being, a central construct in positive psychology. In this 2020 initial efficacy study, the research team is testing the Well-Being Promotion Program in middle schools in Florida and Massachusetts. In 10 core sessions, students low in subjective well-being take part in group activities and complete homework assignments designed to increase gratitude, acts of kindness, use of signature character strengths, savoring of positive experiences, optimism, and hopeful or goal-directed thinking.

These three projects illustrate why we need to carefully consider school-based mental health as a logical and critical part of success in school, particularly as we navigate the road to helping students recover from disengagement and learning loss during the coronavirus pandemic.  

Next in the series, we will look at the what of school-based mental health and highlight several projects that are developing innovative ways to support the mental health of students and staff in school settings.


Written by Emily Doolittle (Emily.Doolittle@ed.gov), NCER Team Lead for Social Behavioral Research at IES

 

Timing is Everything: Collaborating with IES Grantees to Create a Needed Cost Analysis Timeline

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

A few months ago, a team of researchers conducting a large, IES-funded randomized controlled trial (RCT) on the intervention Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) met with the Cost Analysis in Practice (CAP) Project team in search of planning support. The PACT-L team had just received funding for a 5-year systematic replication evaluation and were consumed with planning its execution. During an initial call, Iliana Brodziak, who is leading the cost analysis for the evaluation study, shared, “This is a large RCT with 150 schools across multiple districts each year. There is a lot to consider when thinking about all of the moving pieces and when they need to happen. I think I know what needs to happen, but it would help to have the key events on a timeline.”

The comments and the feeling of overload are common even for experienced cost analysts like Iliana because conducting a large RCT requires extensive thought and planning. Ideally, planning for a cost analysis at this scale is integrated with the overall evaluation planning at the outset of the study. For example, the PACT-L research team developed a design plan that specified the overall evaluation approach along with the cost analysis. Those who save the cost analysis for the end, or even for the last year of the evaluation, may find they have incomplete data, insufficient time or budget for analysis, and other avoidable challenges. Iliana understood this, and her remark sparked an idea for the CAP Project team: developing a timeline that aligns the steps for planning a cost analysis with RCT planning.

As the PACT-L and CAP Project teams continued to collaborate, it became clear that the PACT-L evaluation would be a great case study for crafting a full cost analysis timeline for rigorous evaluations. The CAP Project team, with input from the PACT-L evaluation team, created a detailed timeline for each year of the evaluation. It captures the key steps of a cost analysis and integrates the challenges and considerations that Iliana and her team anticipated for the PACT-L evaluation and similar large RCTs.

In addition, the timeline provides guidance on the data collection process for each year of the evaluation.

  • Year 1:  The team designs the cost analysis data collection instruments. This process includes collaborating with the broader evaluation team to ensure the cost analysis is integrated in the IRB application, setting up regular meetings with the team, and creating and populating spreadsheets or some other data entry tool.
  • Year 2: Researchers plan to document the ingredients or resources needed to implement the intervention on an ongoing basis. The timeline recommends collecting data, reviewing the data, and revising the data collection instruments in Year 2.
  • Year 3 (and maybe Year 4): The iteration of collecting data and revising instruments continues in Year 3 and, if needed, in Year 4.
  • Year 5: Data collection should be complete, allowing the team to carry out the majority of the analysis.

This is just one example of the year-by-year guidance included in the timeline. The latest version of the Timeline of Activities for Cost Analysis is available to help provide guidance to other researchers as they plan and execute their economic evaluations. As a planning tool, the timeline gathers all the moving pieces in one place. It includes detailed descriptions and notes for consideration for each year of the study and provides tips to help researchers.

The PACT-L evaluation team is still in the first year of the evaluation, leaving time for additional meetings and collective brainstorming. The CAP Project and PACT-L teams hope to continue collaborating over the next few years, using the shared expertise among the teams and PACT-L’s experience carrying out the cost analysis to refine the timeline.

Visit the CAP Project website to find other free cost analysis resources or to submit a help request for customized technical assistance on your own project.


Jaunelle Pratt-Williams is an Education Researcher at SRI International.

Iliana Brodziak is a senior research analyst at the American Institutes for Research.

Katie Drummond is a Senior Research Scientist at WestEd.

Lauren Artzi is a senior researcher at the American Institutes for Research.

How Remote Data Collection Enhanced One Grantee’s Classroom Research During COVID-19

Under an IES grant, Michigan State University, in collaboration with the Michigan Department of Education, the Michigan Center for Educational Performance and Information, and the University of Michigan, is assessing the implementation, impact, and cost of the Michigan “Read by Grade 3” law intended to increase early literacy outcomes for Michigan students. In this guest blog, Dr. Tanya Wright and Lori Bruner discuss how they were able to quickly pivot to a remote data collection plan when COVID-19 disrupted their initial research plan.  

The COVID-19 pandemic began while we were planning a study of early literacy coaching for the 2020-2021 academic year. It soon became abundantly clear that restrictions to in-person research would pose a major hurdle for our research team. We had planned to enter classrooms and record videos of literacy instruction in the fall. As such, we found ourselves faced with a difficult choice: we could pause our study until it became safer to visit classrooms and miss the opportunity to learn about literacy coaching and in-person classroom instruction during the pandemic, or we could quickly pivot to a remote data collection plan.

Our team chose the second option. Several technologies are available for remote data collection. We chose a device known as the Swivl: a robotic mount that holds a tablet or smartphone to record video, sits on a 360-degree rotating platform that works in tandem with a handheld or wearable tracker, and connects to an app that instantly uploads videos to cloud-based storage for easy access.

Over the course of the school year, we captured over 100 hours of elementary literacy instruction in 26 classrooms throughout our state. While remote data collection looks and feels very different from visiting a classroom to record video, we learned that it offers many benefits to researchers and educators alike. We also learned a few important lessons along the way.

First, we learned remote data collection provides greater flexibility for both researchers and educators. In our original study design, we planned to hire data collectors to visit classrooms, which restricted our recruitment of schools to a reasonable driving distance from Michigan State University (MSU). However, recording devices allow us to capture video anywhere, including rural areas of our state that are often excluded from classroom research due to their remote location. Furthermore, we found that the cost of purchasing and shipping equipment to schools is significantly less than paying for travel and people’s time to visit classrooms. In addition, using devices in place of data collectors allowed us to easily adapt to last-minute schedule changes and offer teachers the option to record video over multiple days to accommodate shifts in instruction due to COVID-19.

Second, we discovered that we could capture more classroom talk than when using a typical video camera. After some trial and error, we settled on a device with three external wireless microphones: one for the teacher and two additional microphones to place around the classroom. Not only did the extra microphones record audio beyond what the teacher was saying, but we learned that we can also isolate each microphone during data analysis to hear what is happening in specific areas of the classroom (even when the teacher and children were wearing masks). We also purchased an additional wide-angle lens, which clipped over the camera on our tablet and allowed us to capture a wider video angle.  

Third, we found remote data collection to be less intrusive than sending a research team into schools. The device is compact and can be placed on any flat surface in the classroom or be mounted on a basic tripod. The teacher has the option to wear the microphone on a lanyard to serve as a hands-free tracker that signals the device to rotate to follow the teacher’s movements automatically. At the end of the lesson, the video uploads to a password-protected storage cloud with one touch of a button, making it easy for teachers to share videos with our research team. We then download the videos to the MSU server and delete them from our cloud account. This set-up allowed us to collect data with minimal disruption, especially when compared to sending a person with a video camera to spend time in the classroom.

As with most remote work this year, we ran into a few unexpected hurdles during our first round of data collection. After gathering feedback from teachers and members of our research team, we were able to make adjustments that led to a better experience during the second round of data collection this spring. We hope the following suggestions might help others who are considering such a device to collect classroom data in the future:

  1. Consider providing teachers with a brief informational video or offering after-school training sessions to help answer questions and address concerns ahead of your data collection period. We initially provided teachers with a detailed user guide, but we found that this extra support was key to ensuring teachers had a positive experience with the device. You might also consider appointing a member of your research team to serve as a contact person who can answer questions during data collection periods.
  2. As a research team, it is important to remember that team members will not be collecting the data, so it is critical to provide teachers with clear directions ahead of time: what exactly do you want them to record? Our team found it helpful to send teachers a brief two-minute video outlining our goals and then follow up with a printable checklist they could use on the day they recorded instruction. 
  3. Finally, we found it beneficial to scan the videos for content at the end of each day. By doing so, we were able to spot a few problems, such as missing audio or a device that stopped rotating during a lesson. While these instances were rare, it was helpful to catch them right away, while teachers still had the device in their schools so that they could record missing parts the next day.

Although restrictions to in-person research are beginning to lift, we plan to continue using remote data collection for the remaining three years of our project. Conducting classroom research during the COVID-19 pandemic has proven challenging at every turn, but as we adapted to remote video data collection, we were pleased to find unanticipated benefits for our research team and for our study participants.


This blog is part of a series focusing on conducting education research during COVID-19. For other blog posts related to this topic, please see here.

Tanya S. Wright is an Associate Professor of Language and Literacy in the Department of Teacher Education at Michigan State University.

Lori Bruner is a doctoral candidate in the Curriculum, Instruction, and Teacher Education program at Michigan State University.

Overcoming Challenges in Conducting Cost Analysis as Part of an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

Educational interventions come at a cost—and no, it is not just the price tag, but the personnel time and other resources needed to implement them effectively. Having both efficacy and cost information is essential for educators to make wise investments. However, including cost analysis in an efficacy study comes with its own costs.

Experts from the Cost Analysis in Practice (CAP) Project recently connected with the IES-funded team studying Promoting Accelerated Reading Comprehension of Text - Local (PACT-L) to discuss the challenges of conducting cost analysis and cost-effectiveness analysis as part of an efficacy trial. PACT-L is a social studies and reading comprehension intervention with a train-the-trainer professional development model. Here, we share some of the challenges we discussed and the solutions that surfaced.

 

Challenge 1: Not understanding the value of a cost analysis for educational programs

Some people may not understand the value of a cost analysis and focus only on needing to know whether they have the budget to cover program expenses. For those who may be reluctant to invest in a cost analysis, ask them to consider how a thorough look at implementation in practice (as opposed to “as intended”) might help support planning for scale-up of a local program or adoption at different sites.

For example, take Tennessee’s Student/Teacher Achievement Ratio (STAR) project, a class size reduction experiment, which was implemented successfully with a few thousand students. California tried to scale up the approach for several million students but failed to anticipate the difficulty of finding enough qualified teachers and building more classrooms to accommodate smaller classes. A cost analysis would have supplied key details to support decision-makers in California in preparing for such a massive scale-up, including an inventory of the type and quantity of resources needed. For decision-makers seeking to replicate an effective intervention even on a small scale, success is much more likely if they can anticipate whether they have the requisite time, staff, facilities, materials, and equipment to implement the intervention with fidelity.

 

Challenge 2: Inconsistent implementation across cohorts

Efficacy studies often involve two or three cohorts of participants, and the intervention may be adapted from one to the next, leading to varying costs across cohorts. This issue has been particularly acute for studies that began before the COVID-19 pandemic, continued through it, and extended into post-pandemic times. You may have in-person, online, and hybrid versions of the intervention delivered, all in the course of one study. While such variation in implementation may be necessary in response to real-world circumstances, it poses problems for the effectiveness analysis because it is hard to draw conclusions about exactly what was or wasn’t effective.

The variation in implementation also poses problems for the cost analysis because substantially different types and amounts of resources might be used across cohorts. At worst, this leads to the need for three cost analyses funded by the study budget intended for one! In the case of PACT-L, the study team modified part of the intervention to be delivered online due to COVID-19 but plans to keep this change consistent through all three cohorts.

For other interventions, if the differences in implementation among cohorts are substantial, perhaps they should not be combined and analyzed as if all participants are receiving a single intervention. Cost analysts may need to focus their efforts on the cohort for which implementation reflects how the intervention is most likely to be used in the future. For less substantial variations, cost analysts should stay close to the implementation team to document differences in resource use across cohorts, so they can present a range of costs as well as an average across all cohorts.

 

Challenge 3: Balancing accuracy of data against burden on participants and researchers

Data collection for an efficacy trial can be burdensome—add a cost analysis and researchers worry about balancing the accuracy of the data against the burden on participants and researchers. This is something that the PACT-L research team grappled with when designing the evaluation plan. If you plan in advance and integrate the data collection for cost analysis with that for fidelity of implementation, it is possible to lower the additional burden on participants. For example, include questions related to time use in interviews and surveys that are primarily designed to document the quality of the implementation (as the PACT-L team plans to do), and ask observers to note the kinds of facilities, materials, and equipment used to implement the intervention. However, it may be necessary to conduct interviews dedicated solely to the cost analysis and to ask key implementers to keep time logs. We’ll have more advice on collecting cost data in a future blog.

 

Challenge 4: Determining whether to use national and/or local prices

Like many other RCTs, the PACT-L team’s study will span multiple districts and geographical locations, so the question arises of which prices to use. When deciding whether to use national or local prices, or both, analysts should consider the audience for the results, the availability of relevant prices from national or local sources, the number of different sets of local prices that would need to be collected, and their research budget. Salaries and facilities prices may vary significantly from location to location. Local audiences may be most interested in costs estimated using local prices, but it would be a lot of work to collect local price information from each district or region, and the cost analysis research budget would need to reflect the work involved. Furthermore, for cost-effectiveness analysis, prices must be standardized across geographical locations, which means applying regional price parities to adjust prices to a single location or to a national-average equivalent.
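The price-parity adjustment works by rescaling a locally observed price by the region's price level, conventionally indexed so the national average equals 100. The sketch below shows the arithmetic; the district labels, salaries, and parity values are hypothetical, not actual published figures.

```python
def to_national_equivalent(local_price, regional_price_parity):
    """Convert a locally observed price to a national-average equivalent.

    regional_price_parity: the region's overall price level, with the
    national average indexed at 100 (the convention used for published
    regional price parities).
    """
    return local_price * 100 / regional_price_parity

# Hypothetical teacher salaries observed in two study districts
salaries = {"urban_district": (75_000, 115.0),   # high-cost region
            "rural_district": (48_000, 88.0)}    # low-cost region

for district, (salary, rpp) in salaries.items():
    adjusted = to_national_equivalent(salary, rpp)
    print(f"{district}: ${adjusted:,.0f} national-equivalent")
```

After adjustment, the two districts' salaries are expressed on a common basis, so a cost-effectiveness ratio computed across sites compares resource use rather than regional price differences.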

It may be more feasible to use national average prices from publicly available sources for all sites. However, that comes with a catch too: national surveys of personnel salaries don't include a wide variety of school or district personnel positions. Consequently, the analyst must look for a similar-enough position or make some assumptions about how to adjust a published salary for a different position.

If the research budget allows, analysts could present costs using national prices and local prices. This might be especially helpful for an intervention targeting schools in a rural area or an urban area which, respectively, are likely to have lower and higher costs than the national average. The CAP Project’s cost analysis Excel template is set up to allow for both national prices and local prices. You can find the template and other cost analysis tools here: https://capproject.org/resources.


The CAP Project team is interested in learning about new challenges and figuring out how to help. If you are encountering similar or other challenges and would like free technical assistance from the IES-funded CAP Project, submit a request here. You can also email us at helpdesk@capproject.org or tweet us @The_CAP_Project.

 

Fiona Hollands is a Senior Researcher at Teachers College, Columbia University who focuses on the effectiveness and costs of educational programs, and how education practitioners and policymakers can optimize the use of resources in education to promote better student outcomes.

Iliana Brodziak is a senior research analyst at the American Institutes for Research who focuses on statistical analysis of achievement data, resource allocation data and survey data with special focus on English Learners and early childhood.

Jaunelle Pratt-Williams is an Education Researcher at SRI who uses mixed methods approaches to address resource allocation, social and emotional learning and supports, school finance policy, and educational opportunities for disadvantaged student populations.

Robert D. Shand is Assistant Professor in the School of Education at American University with expertise in teacher improvement through collaboration and professional development and how schools and teachers use data from economic evaluation and accountability systems to make decisions and improve over time.

Katie Drummond, a Senior Research Scientist at WestEd, has designed and directed research and evaluation projects related to literacy, early childhood, and professional development for over 20 years. 

Lauren Artzi is a senior researcher with expertise in second language education PK-12, intervention research, and multi-tiered systems of support.