Inside IES Research

Notes from NCER & NCSER

How Remote Data Collection Enhanced One Grantee’s Classroom Research During COVID-19

Under an IES grant, Michigan State University, in collaboration with the Michigan Department of Education, the Michigan Center for Educational Performance and Information, and the University of Michigan, is assessing the implementation, impact, and cost of Michigan’s “Read by Grade 3” law, which is intended to improve early literacy outcomes for Michigan students. In this guest blog, Dr. Tanya Wright and Lori Bruner discuss how they quickly pivoted to a remote data collection plan when COVID-19 disrupted their initial research plan.

The COVID-19 pandemic began while we were planning a study of early literacy coaching for the 2020-2021 academic year. It soon became clear that restrictions on in-person research would pose a major hurdle for our research team. We had planned to enter classrooms and record videos of literacy instruction in the fall. We therefore faced a difficult choice: we could pause our study until it became safer to visit classrooms, missing the opportunity to learn about literacy coaching and in-person classroom instruction during the pandemic, or we could quickly pivot to a remote data collection plan.

Our team chose the second option. Multiple technologies are available for remote data collection. We chose a device known as the Swivl, which includes a robotic mount that holds a tablet or smartphone to record video, a 360-degree rotating platform that works in tandem with a handheld or wearable tracker, and an app that instantly uploads videos to cloud-based storage for easy access.

Over the course of the school year, we captured over 100 hours of elementary literacy instruction in 26 classrooms throughout our state. While remote data collection looks and feels very different from visiting a classroom to record video, we learned that it offers many benefits to researchers and educators alike. We also learned a few important lessons along the way.

First, we learned that remote data collection provides greater flexibility for both researchers and educators. In our original study design, we planned to hire data collectors to visit classrooms, which restricted our recruitment of schools to a reasonable driving distance from Michigan State University (MSU). Recording devices, however, allowed us to capture video anywhere, including rural areas of our state that are often excluded from classroom research because of their remote locations. We also found that purchasing and shipping equipment to schools cost significantly less than paying for travel and for data collectors’ time in classrooms. In addition, using devices in place of data collectors allowed us to adapt easily to last-minute schedule changes and to offer teachers the option of recording video over multiple days to accommodate shifts in instruction due to COVID-19.

Second, we discovered that we could capture more classroom talk than with a typical video camera. After some trial and error, we settled on a device with three external wireless microphones: one for the teacher and two additional microphones placed around the classroom. Not only did the extra microphones record audio beyond what the teacher was saying, but we could also isolate each microphone during data analysis to hear what was happening in specific areas of the classroom (even when the teacher and children were wearing masks). We also purchased a wide-angle lens that clipped over the camera on our tablet and allowed us to capture a wider field of view.

Third, we found remote data collection to be less intrusive than sending a research team into schools. The device is compact and can be placed on any flat surface in the classroom or be mounted on a basic tripod. The teacher has the option to wear the microphone on a lanyard to serve as a hands-free tracker that signals the device to rotate to follow the teacher’s movements automatically. At the end of the lesson, the video uploads to a password-protected storage cloud with one touch of a button, making it easy for teachers to share videos with our research team. We then download the videos to the MSU server and delete them from our cloud account. This set-up allowed us to collect data with minimal disruption, especially when compared to sending a person with a video camera to spend time in the classroom.

As with most remote work this year, we ran into a few unexpected hurdles during our first round of data collection. After gathering feedback from teachers and members of our research team, we were able to make adjustments that led to a better experience during the second round of data collection this spring. We hope the following suggestions might help others who are considering such a device to collect classroom data in the future:

  1. Consider providing teachers with a brief informational video or offering after-school training sessions to answer questions and address concerns ahead of your data collection period. We initially provided teachers with only a detailed user guide, but we found that this extra support was key to ensuring teachers had a positive experience with the device. You might also appoint a member of your research team to serve as a contact person for questions during data collection periods.
  2. Remember that research team members will not be collecting the data themselves, so it is critical to provide teachers with clear directions ahead of time: what exactly do you want them to record? Our team found it helpful to send teachers a brief two-minute video outlining our goals and then follow up with a printable checklist they could use on the day they recorded instruction.
  3. Finally, we found it beneficial to scan the videos for content at the end of each day. By doing so, we were able to spot a few problems, such as missing audio or a device that stopped rotating during a lesson. While these instances were rare, it was helpful to catch them right away, while teachers still had the device in their schools so that they could record missing parts the next day.

Although restrictions to in-person research are beginning to lift, we plan to continue using remote data collection for the remaining three years of our project. Conducting classroom research during the COVID-19 pandemic has proven challenging at every turn, but as we adapted to remote video data collection, we were pleased to find unanticipated benefits for our research team and for our study participants.


This blog is part of a series focusing on conducting education research during COVID-19. For other blog posts related to this topic, please see here.

Tanya S. Wright is an Associate Professor of Language and Literacy in the Department of Teacher Education at Michigan State University.

Lori Bruner is a doctoral candidate in the Curriculum, Instruction, and Teacher Education program at Michigan State University.

Overcoming Challenges in Conducting Cost Analysis as Part of an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

Educational interventions come at a cost—and no, it is not just the price tag, but the personnel time and other resources needed to implement them effectively. Having both efficacy and cost information is essential for educators to make wise investments. However, including cost analysis in an efficacy study comes with its own costs.

Experts from the Cost Analysis in Practice (CAP) Project recently connected with the IES-funded team studying Promoting Accelerated Reading Comprehension of Text - Local (PACT-L) to discuss the challenges of conducting cost analysis and cost-effectiveness analysis as part of an efficacy trial. PACT-L is a social studies and reading comprehension intervention with a train-the-trainer professional development model. Here, we share some of the challenges we discussed and the solutions that surfaced.

 

Challenge 1: Not understanding the value of a cost analysis for educational programs

Some people may not understand the value of a cost analysis and focus only on whether they have the budget to cover program expenses. For those who are reluctant to invest in a cost analysis, ask them to consider how a thorough look at implementation in practice (as opposed to implementation “as intended”) might support planning for scale-up of a local program or adoption at different sites.

For example, take Tennessee’s Student/Teacher Achievement Ratio (STAR) project, a class size reduction experiment, which was implemented successfully with a few thousand students. California tried to scale up the approach for several million students but failed to anticipate the difficulty of finding enough qualified teachers and building more classrooms to accommodate smaller classes. A cost analysis would have supplied key details to support decision-makers in California in preparing for such a massive scale-up, including an inventory of the type and quantity of resources needed. For decision-makers seeking to replicate an effective intervention even on a small scale, success is much more likely if they can anticipate whether they have the requisite time, staff, facilities, materials, and equipment to implement the intervention with fidelity.
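The resource inventory described above can be sketched as a simple ingredients-style calculation, the standard approach in education cost analysis: list each resource, its quantity, and a unit price, then sum. The ingredients and figures below are hypothetical, not drawn from the STAR or PACT-L studies.

```python
# Hypothetical ingredients-style inventory: each entry is
# (resource, quantity needed, price per unit in dollars).
ingredients = [
    ("teacher time (hours)",        120, 40.0),
    ("coach time (hours)",           30, 55.0),
    ("curriculum materials (sets)",  25, 18.0),
]

# Total cost = sum of quantity x unit price over all ingredients
total = sum(qty * price for _, qty, price in ingredients)
print(f"Total cost: ${total:,.2f}")  # → Total cost: $6,900.00
```

Even a rough inventory like this makes visible which resources dominate the cost and how costs scale with the number of students or sites served.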

 

Challenge 2: Inconsistent implementation across cohorts

Efficacy studies often involve two or three cohorts of participants, and the intervention may be adapted from one cohort to the next, leading to varying costs across cohorts. This issue has been particularly acute for studies that began before the COVID-19 pandemic, continued during it, and extended into post-pandemic times: in-person, online, and hybrid versions of the intervention may all be delivered in the course of one study. While such variation in implementation may be necessary in response to real-world circumstances, it poses problems for the effectiveness analysis because it is hard to draw conclusions about exactly what was or was not effective.

The variation in implementation also poses problems for the cost analysis because substantially different types and amounts of resources might be used across cohorts. At worst, this leads to the need for three cost analyses funded by the study budget intended for one! In the case of PACT-L, the study team modified part of the intervention to be delivered online due to COVID-19 but plans to keep this change consistent through all three cohorts.

For other interventions, if the differences in implementation among cohorts are substantial, perhaps they should not be combined and analyzed as if all participants are receiving a single intervention. Cost analysts may need to focus their efforts on the cohort for which implementation reflects how the intervention is most likely to be used in the future. For less substantial variations, cost analysts should stay close to the implementation team to document differences in resource use across cohorts, so they can present a range of costs as well as an average across all cohorts.

 

Challenge 3: Balancing accuracy of data against burden on participants and researchers

Data collection for an efficacy trial can be burdensome—add a cost analysis and researchers worry about balancing the accuracy of the data against the burden on participants and researchers. This is something that the PACT-L research team grappled with when designing the evaluation plan. If you plan in advance and integrate the data collection for cost analysis with that for fidelity of implementation, it is possible to lower the additional burden on participants. For example, include questions related to time use in interviews and surveys that are primarily designed to document the quality of the implementation (as the PACT-L team plans to do), and ask observers to note the kinds of facilities, materials, and equipment used to implement the intervention. However, it may be necessary to conduct interviews dedicated solely to the cost analysis and to ask key implementers to keep time logs. We’ll have more advice on collecting cost data in a future blog.

 

Challenge 4: Determining whether to use national and/or local prices

Like many other RCTs, the PACT-L team’s study will span multiple districts and geographical locations, so the question arises about which prices to use. When deciding whether to use national or local prices—or both—analysts should consider the audience for the results, the availability of relevant prices from national or local sources, the number of different sets of local prices that would need to be collected, and their research budget. Salaries and facilities prices may vary significantly from location to location. Local audiences may be most interested in costs estimated using local prices, but it would be a lot of work to collect local price information from each district or region, and the cost analysis research budget would need to reflect the work involved. Furthermore, for cost-effectiveness analysis, prices must be standardized across geographical locations, which means applying regional price parities to adjust prices to a single location or to a national-average equivalent.
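As a rough sketch of that standardization step, a regional price parity (RPP, indexed so that 100 = the national average) can be used to convert a locally observed price into a national-average equivalent. The RPP value and salary below are hypothetical, for illustration only.

```python
# Hypothetical sketch: standardizing a local price to a national-average
# equivalent using a regional price parity (RPP, 100 = national average).

def to_national_equivalent(local_price: float, rpp: float) -> float:
    """Convert a locally observed price to its national-average equivalent."""
    return local_price * 100 / rpp

# A salary observed in a high-cost district (hypothetical RPP of 115)
print(to_national_equivalent(69_000, 115))  # → 60000.0
```

The same formula run in reverse (multiplying by rpp / 100) adjusts a national-average price to a particular location, which is useful when presenting results to a local audience.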

It may be more feasible to use national average prices from publicly available sources for all sites. However, that comes with a catch too: national surveys of personnel salaries don't include a wide variety of school or district personnel positions. Consequently, the analyst must look for a similar-enough position or make some assumptions about how to adjust a published salary for a different position.

If the research budget allows, analysts could present costs using both national and local prices. This may be especially helpful for an intervention targeting schools in rural or urban areas, which are likely to have lower and higher costs than the national average, respectively. The CAP Project’s cost analysis Excel template is set up to allow for both national and local prices. You can find the template and other cost analysis tools here: https://capproject.org/resources.


The CAP Project team is interested in learning about new challenges and figuring out how to help. If you are encountering similar or other challenges and would like free technical assistance from the IES-funded CAP Project, submit a request here. You can also email us at helpdesk@capproject.org or tweet us @The_CAP_Project.

 

Fiona Hollands is a Senior Researcher at Teachers College, Columbia University who focuses on the effectiveness and costs of educational programs, and how education practitioners and policymakers can optimize the use of resources in education to promote better student outcomes.

Iliana Brodziak is a senior research analyst at the American Institutes for Research who focuses on statistical analysis of achievement data, resource allocation data and survey data with special focus on English Learners and early childhood.

Jaunelle Pratt-Williams is an Education Researcher at SRI who uses mixed methods approaches to address resource allocation, social and emotional learning and supports, school finance policy, and educational opportunities for disadvantaged student populations.

Robert D. Shand is Assistant Professor in the School of Education at American University with expertise in teacher improvement through collaboration and professional development and how schools and teachers use data from economic evaluation and accountability systems to make decisions and improve over time.

Katie Drummond, a Senior Research Scientist at WestEd, has designed and directed research and evaluation projects related to literacy, early childhood, and professional development for over 20 years. 

Lauren Artzi is a senior researcher with expertise in second language education PK-12, intervention research, and multi-tiered systems of support. 

Cost Analysis in Practice: Resources for Cost Analysis Studies

IES supports rigorous research that can provide scientific evidence on how best to address our nation’s most pressing education needs. As part of the Standards for Excellence in Education Research (SEER) principles, IES-funded researchers are encouraged, and in some cases required, to conduct a cost analysis for their projects with the intended goal of supporting education agencies’ decision-making around the adoption of programs, policies, or practices. 

 

The Cost Analysis in Practice (CAP) Project is a 3-year initiative funded by IES to support researchers and practitioners who are planning or conducting a cost analysis of educational programs and practices. This support includes the following freely available resources.

  • Resources developed by the CAP Project
    • Introductory resources on cost analysis including Standards and Guidelines 1.1, an infographic, a video lecture, and FAQs.
    • Tools for planning your cost analysis, collecting and analyzing cost data, and reporting your results.
    • A Help Desk for you to submit inquiries about conducting a cost analysis with a response from a member of the CAP Project Team within two business days.
  • Other resources recommended by the CAP Project
    • Background materials on cost analysis
    • Guidance on carrying out a cost analysis
    • Standards for the Economic Evaluation of Educational and Social Programs
    • Cost analysis software

 

The CAP Project is also involved in longer-term collaborations with IES-funded evaluation projects to better understand their cost analysis needs. As part of this work, the CAP Project will be producing a set of three blogs to discuss practical details regarding cost studies based on its collaboration with a replication project evaluating an intervention that integrates literacy instruction into the teaching of American history. These blogs will discuss the following:

  • Common cost analysis challenges that researchers encounter and recommendations to address them
  • The development of a timeline resource for planning a cost study
  • Data collection for a cost study

 

The CAP Project is interested in your feedback on any of the CAP Project resources and welcomes suggestions for additional resources to support cost analysis. If you have any feedback, please fill out a suggestion form at the bottom of the Resources web page.

CTE Research Is Flourishing at IES!

Since its inception in 2017, the CTE portfolio in the National Center for Education Research (NCER) at IES has grown to 11 research grants and a research network! Several other CTE-related grants have been funded under other topics, such as “Postsecondary/Adult Education” and “Improving Education Systems” in the education research grants program, and in other grant programs such as “Using SLDS to Support State Policymaking.” Two CTE-related grants under the latter program were awarded in FY21—

The newest grants funded in FY21 in the CTE topic of the Education Research Grants program include—

As a causal impact study, the last project (on Virtual Enterprises) has been invited to join NCER’s CTE Research Network as its sixth and final member. Funded in 2018 to expand the evidence base for CTE, the CTE Research Network (led by PI Kathy Hughes at the American Institutes for Research) includes five other CTE impact studies (one project’s interim report by MDRC was recently reviewed by the What Works Clearinghouse and was found to meet standards without reservations). You can read more about the network’s mission and each of its member projects here.  

On AIR’s CTE Research Network website, you can find several new resources and reports, such as: 

The CTE Research Network has also been conducting training, including workshops in causal design for CTE researchers and online modules on data and research for CTE practitioners, shared widely with the field by a Network Lead partner, the Association for Career and Technical Education (ACTE). 

Last but certainly not least, if you are interested in getting your CTE project funded by IES, see the new FY22 research grant opportunities on the IES funding page. To apply to the CTE topic in the Education Research Grants program specifically, click on the PDF Request for Applications (ALN 84.305A). Contact Corinne Alfeld with any questions you might have.


Written by Corinne Alfeld (Corinne.Alfeld@ed.gov), NCER Program Officer 

 

IES Announces a New Research and Development Center for Self-Directed Learning Skills in Online College Courses

In response to a call from IES for research on how best to support postsecondary teachers and students to thrive in online environments, NCER is establishing a new research and development (R&D) center. This center, led by SRI International (SRI) and the Community College Research Center (CCRC) at Teachers College, Columbia University, aims to help faculty embed support for self-directed learning skills into their online and hybrid courses.

This R&D center will support postsecondary instructors in making optimal use of technology features often available in online course tools to bolster student self-management strategies. Through its research and capacity-building programs, the center aims to strengthen teaching and learning, improve student outcomes, and ensure all students—regardless of race, ethnicity, or socioeconomic status—have equitable learning opportunities and attainment in broad-access institutions.

“Lack of self-directed learning skills can hinder a student’s success in any college course,” says SRI’s Rebecca Griffiths, a lead researcher in the new center, “but the challenge is greater in online courses, which typically place more responsibility on students to manage their own learning.”

 

Self-directed learning skills, also known as self-regulated learning skills, encompass three interrelated and mutually reinforcing areas:

  • Affect, which includes self-efficacy and the motivation to learn
  • Strategic actions, which include planning, goal setting, and strategies to organize, code, and rehearse information
  • Metacognition, which includes self-monitoring, self-evaluation, and self-correction

 

These three areas can form a virtuous cycle. When students believe that studying helps them learn important and useful knowledge, they are more likely to study strategically and effectively. Effective study habits in turn enhance academic performance and build positive mindsets including confidence, motivation around learning, and a sense of personal growth.

SRI and CCRC will partner with Achieving the Dream and nine broad-access public colleges and universities across the U.S. to conduct these research program activities.

 

The research goals of the R&D center are to—

  • Generate new knowledge about how faculty can effectively use technology features and instructional practices in online STEM courses to create a positive feedback loop for students
  • Shed light on institutional policies and practices and instructional environments needed to support a coherent, intentional, and sustainable approach to helping students build self-directed learning skills across their coursework
  • Develop and pilot a technology-enabled, skills development model that will use technology features already widely available in learning management systems, adaptive courseware, and mobile apps to deliver instruction on these skills
  • Use research findings to inform the development of a rich, interactive toolkit that supports institutions and faculty as they implement self-directed learning skills instruction at scale in online programs

 

In addition to carrying out the research activities, the center will provide national leadership and capacity-building activities for postsecondary teaching and learning. Through partnerships with Achieving the Dream, technology developers, researchers, education equity advocates, and others, the center will establish the critical importance of integrating self-directed learning into instruction to improve teaching, learning, and equity in postsecondary student outcomes. The center will also engage faculty, instructional designers, and educational technology developers to share knowledge and to co-develop and disseminate capacity-building resources that support teaching these skills and strategies.


The center is led by Dr. Deborah Jonas (PI, SRI International, top photo), Dr. Nicole Edgecombe (Co-PI, Teachers College, Columbia University), Dr. Rebecca Griffiths (Co-PI, SRI International), and Dr. Neil Seftor (Co-PI, SRI International).

This blog was written by the R&D center team. For further information about the grant, contact the program officer: Dr. Meredith Larson.