IES Blog

Institute of Education Sciences

Using Cost Analysis to Inform Replicating or Scaling Education Interventions

A key challenge when conducting cost analysis as part of an efficacy study is producing information that can be useful for addressing questions related to replicability or scale. When the study is a follow-up conducted many years after implementation, the need to collect data retrospectively introduces additional complexities. As part of a recent follow-up efficacy study, Maya Escueta and Tyler Watts of Teachers College, Columbia University worked with the IES-funded Cost Analysis in Practice (CAP) project team to plan a cost analysis that would meet these challenges. This guest blog describes their process and lessons learned and provides resources for other researchers.

What was the intervention for which you estimated costs retrospectively?

We estimated the costs of a pre-kindergarten intervention, the Chicago School Readiness Project (CSRP), which was implemented in nine Head Start Centers in Chicago, Illinois, for two cohorts of students in 2004-05 and 2005-06. CSRP was an early childhood intervention that targeted child self-regulation by attempting to overhaul teacher approaches to behavioral management. The intervention placed licensed mental health clinicians in classrooms, and these clinicians worked closely with teachers to reduce stress and improve the classroom climate. CSRP showed signs of initial efficacy on measures of preschool behavioral and cognitive outcomes, but more recent results from the follow-up study showed mainly null effects for the participants in late adolescence.

The IES research centers require a cost study for efficacy projects, so we faced the distinct challenge of conducting a cost analysis for an intervention nearly 20 years after it was implemented. Our goal was to render the cost estimates useful for education decision-makers today to help them consider whether to replicate or scale such an intervention in their own context.

What did you learn during this process?

When enumerating costs and considering how to implement an intervention in another context or at scale, we learned four distinct lessons.

1. Consider how best to scope the analysis to render the findings both credible and relevant given data limitations.

In our case, because we were conducting the analysis 20 years after the intervention was originally implemented, the limited availability of reliable data (a common challenge in retrospective cost analysis) constrained the analysis in two ways. We had to consider what data we could reasonably obtain and what that meant for the type of analysis we could credibly conduct. First, because no comprehensive cost analysis was conducted at the time of the intervention’s original implementation (to our knowledge), we could not accurately collect costs for the counterfactual condition. Second, we lacked reliable measures of key outcomes over time, such as grade retention or special education placement, that would be required for a complete cost-benefit analysis. This meant we were limited in both the costs and the effects we could reliably estimate. Given these data limitations, we could credibly conduct only a cost analysis, rather than a cost-effectiveness analysis or cost-benefit analysis, which generally produce more useful evidence to aid in decisions about replication or scale.

Because of this limitation, and to provide useful information for decision-makers who are considering implementing similar interventions in their current contexts, we decided to develop a likely present-day implementation scenario informed by the historical information we collected from the original implementation. We’ll expand on how we did this and the decisions we made in the following lessons.

2. Consider how to choose prices to improve comparability and to account for availability of ingredients at scale.

We used national average prices for all ingredients in this cost analysis to make the results more comparable to other cost analyses of similar interventions that also use national average prices. This involved some careful thought about how to price ingredients that were unique to the time or context of the original implementation, specific to the intervention, or in low supply. For example, when identifying prices for personnel, we either used current prices (national average salaries plus fringe benefits) for personnel with equivalent professional experience, or we inflation-adjusted the original consulting fees charged by personnel in highly specialized roles. This approach assumes that personnel who are qualified to serve in specialized roles are available on a wider scale, which may not always be the case.
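To make the inflation-adjustment step concrete, a historical fee can be converted to current dollars with a ratio of consumer price index (CPI) values. The sketch below illustrates that arithmetic only; the function and CPI figures are our illustrative assumptions, not values from the study or official statistics:

```python
def inflation_adjust(original_cost, cpi_original, cpi_current):
    """Convert a historical cost to current dollars using a CPI ratio."""
    return original_cost * (cpi_current / cpi_original)

# Hypothetical example: a $5,000 specialist consulting fee from the
# original implementation year, adjusted with illustrative CPI values
# (not official BLS figures).
adjusted_fee = inflation_adjust(5000, cpi_original=195.3, cpi_current=296.8)
print(round(adjusted_fee, 2))  # about 7598.57
```

The same two-line calculation applies to any ingredient priced from historical records; only the index values change.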

In the original implementation of CSRP, spaces were rented for teacher behavior management workshops, stress reduction workshops, and the initial training of the mental health clinicians. For our cost analysis, we assumed that using existing school facilities would be more likely and more tenable when implementing CSRP at large scale. Instead of using rental prices, we valued the physical space needed to implement CSRP using amortized construction costs of school facilities (for example, a cafeteria, gym, or classroom). We obtained these from the CAP Project’s Cost of Facilities Calculator.
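Amortization spreads a one-time construction outlay into an equivalent annual cost over a facility's lifetime. The sketch below uses the standard annualization (annuity) formula common in education cost analysis; the cost, interest rate, and lifetime are assumed values for illustration, not CAP Calculator defaults:

```python
def annualized_cost(total_cost, interest_rate, lifetime_years):
    """Amortize a one-time construction cost into an equivalent
    annual cost using the standard annualization (annuity) formula."""
    r = interest_rate
    return total_cost * r / (1 - (1 + r) ** -lifetime_years)

# Illustrative example: a $1,000,000 facility amortized at 3% interest
# over a 30-year lifetime (assumed values).
annual = annualized_cost(1_000_000, 0.03, 30)  # about $51,019 per year
```

Dividing the annual figure by the share of the year (or day) the space is used by the intervention yields the facilities cost to charge to the program.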

3. Consider how to account for ingredients that may not be possible to scale.

Some resources are simply not available in similar quality at large scale. For example, the Principal Investigator (PI) for the original evaluation oversaw the implementation of the intervention, was highly invested in the fidelity of implementation, was willing to dedicate significant time, and created a culture that was supportive of the pre-K instructors to encourage buy-in for the intervention. In such cases, it is worth considering what her equivalent role would be in a non-research setting and how scalable this scenario would be. A potential proxy for the PI in this case may be a school principal or leader, but how much time could this person reasonably dedicate, and how similar would their skillset be?  

4. Consider how implementation might work in institutional contexts required for scale.

Institutional settings might necessarily change when taking an intervention to scale. In larger-scale settings, there may be other ways of implementing the intervention that might change the quantities of personnel and other resources required. For example, a pre-K intervention such as CSRP at larger scale may need to be implemented in various types of pre-K sites, such as public schools or community-based centers in addition to Head Start centers. In such cases, the student/teacher ratio may vary across different institutional contexts, which has implications for the per-student cost. If delivered in a manner where the student/teacher ratio is higher than in the original implementation, the intervention may be less costly, but may also be less impactful. This highlights the importance of the institutional setting in which implementation occurs, and how this might affect the use and costs of resources.

How can other researchers get assistance in conducting a cost analysis?

In conducting this analysis, we found the following CAP Project tools to be especially helpful (found on the CAP Resources page and the CAP Project homepage):

  • The Cost of Facilities Calculator: A tool that helps estimate the cost of physical spaces (facilities).
  • Cost Analysis Templates: Semi-automated Excel templates that support cost analysis calculations.
  • CAP Project Help Desk: Real-time guidance from a member of the CAP Project team. You will receive help troubleshooting challenging issues from experts who can share specific resources. Submit a request through the CAP Project Help Desk page.

Maya Escueta is a Postdoctoral Associate in the Center for Child and Family Policy at Duke University where she researches the effects of poverty alleviation policies and parenting interventions on the early childhood home environment.

Tyler Watts is an Assistant Professor in the Department of Human Development at Teachers College, Columbia University. His research focuses on the connections between early childhood education and long-term outcomes.

For questions about the CSRP project, please contact the NCER program officer, Corinne.Alfeld@ed.gov. For questions about the CAP project, contact Allen.Ruby@ed.gov.

 

Technical Working Group of Education Policy Leaders and Researchers Advises NCER on Education Policy Research Priorities and Strategies

The National Center for Education Research convened a virtual technical working group (TWG) of education policy leaders and researchers to discuss strategies for improving K-12 education systems and policy research funded by NCER. The official summary is now posted on the IES website. This blog post provides a snapshot of that summary.

 

What are the most pressing education systems and policy issues in need of research?

Prior to the meeting, TWG members were asked to identify what policy topics need evidence to inform their decision making, and they identified over 20 issues of pressing concern. The top-identified education policy issues, with at least 3 nominations across both policy leader and researcher groups, are listed below (with the number of nominations in parentheses).

  • Equitable access to high quality instruction (13)
  • Education technology and online instruction (11)
  • Equity in school and student funding (10)
  • Diversity of teachers and equity of pay (8)
  • Socio-emotional learning (SEL), relationships and student engagement (8)
  • Leader professional development (6)
  • Teacher professional development (5)
  • Better capacity for data analysis, management, and communication (5)
  • Early childhood education (5)
  • Career preparation/advising/transitions (5)
  • Culturally relevant pedagogy/anti-racism/anti-bias (3)
  • Special education/English Learners (3)
  • Covid-related learning loss (3)

The discussion during the meeting focused on a broad range of policy, research, and dissemination issues, which did not always align with the above issues. NCER staff organized the themes from the day’s discussion below to highlight where the TWG members expressed general consensus. More detail about each area can be found in the full TWG summary on the IES website.

  • Understanding and Addressing Inequity in Education Systems: The TWG members agreed that the most pressing issue is equity. The COVID-19 pandemic exacerbated and revealed systemic inequity, which affects students who are from racial/ethnic minority backgrounds, low-income households, or other marginalized groups.
  • Improving Use of and Access to Education Technology and High-Quality Online Instruction: With the heavy reliance on online instruction throughout the pandemic, COVID-19 has created new urgency for closing the digital divide. In addition to understanding how systems support or impede access to education technology, TWG members also noted a need for education research focused on questions of teaching and learning in remote and hybrid environments, including professional development and systems-level support for both teachers and students for using and engaging with technology.
  • Recruiting and Retaining a Diverse Teaching Workforce: TWG members noted that teacher shortages, especially in high poverty districts, have been a concern for years, and that there is a need for evidence on how to make access to high quality teachers more equitable. Researchers could examine how recruitment strategies can diversify the teacher workforce through strategies such as incentives specifically for teacher candidates of color and adopting culturally relevant practices.
  • Providing Access to Student and Educator Mental Health Supports: TWG members observed that COVID-19 has been stressful for students, parents, and educators. Remote learning appears to cut many students off from the connections that support emotional and behavioral health and to increase the vulnerability of students in high-risk home environments. Research and research syntheses are needed to guide policymakers on what works best in allocating resources to meet student and educator mental health needs, how to connect with community resources, and how to improve systems of support beyond the school.
  • Engaging and Re-Engaging Students: Chronic absenteeism is a major problem in many schools, exacerbated by the pandemic. TWG members pointed out that a key policy question for many LEAs and SEAs is how to re-engage these “missing” students. Researchers can help education leaders understand what evidence-based interventions are available to re-engage students and begin to address disparities in pandemic learning loss.
  • Preparing K-12 Students for Careers: TWG members agreed that researchers, policymakers, and educators should engage and collaborate more frequently with employers to inform what career-aligned experiences should be offered to students in school. Research could contribute to understanding how best to support local communities of practice that include schools, businesses, intermediaries, and community-based organizations with the shared goal of preparing students for careers.
  • Modernizing Assessments: TWG members agreed that education systems should be modernized to assess and address basic skills and learning needs quickly, such as with interim or formative assessments. Research is needed to understand how to use assessment for both accountability purposes as well as to support diagnosis and student progress monitoring. Additionally, research is needed to guide educators on authentic and performance-based assessments.
  • Improving Data-Driven Decision-Making in Schools: TWG members noted that education researchers could provide guidance on identifying a set of core variables, systematically collecting data and metrics, and building data sharing platforms and data agreements. TWG members pointed out that, in many cases, education agencies and educators do not necessarily need more data but more training to build capacity to analyze and use the data they have.
  • Examining Education Finance: TWG members noted that research can generate high-quality information to help policymakers understand how school systems could leverage financial resources to help the most underserved learners and communities. TWG participants noted that a key question for which additional evidence is needed is how much funding is needed to provide the breadth of services, including wraparound services, required to support learning in the poorest schools.
  • Creating Adaptive Education Systems: COVID-19 has shown that education systems must better prepare for emergencies, and TWG members worried that the pandemic will have a lasting effect on student achievement and attainment. The COVID-19 pandemic has provided the education and research community with an opportunity to learn from what went well, what did not, and to propose strategies to put in place to ensure rapid responses to future emergencies and moments of crisis.

How can NCER better support research on education systems/policy issues?

NCER staff asked for recommendations on improving its engagement with education systems and policy work. TWG member recommendations are organized according to five themes.

  • Support a Systems Approach to Systems and Policy Research: TWG members recommended that NCER encourage researchers to untangle and understand broader, dynamic education systems and processes, and to develop methods that capture and account for changing contexts. The TWG encouraged an interdisciplinary approach with different stakeholder perspectives, methods, and measures to move the field forward.
  • Encourage Partnerships with Key Stakeholders: TWG members felt that the relevance of research proposals could be increased with more collaboration between researchers and education leaders. Partnership between researchers and practitioners is one strategy for increasing the local relevance of research and its applicability to specific local questions.
  • Support Rapid Research to Practice Efforts: TWG members agreed that education policy research results should be disseminated to the field quickly. Rapid-cycle evaluation methods, such as plan-do-study-act continuous quality improvement approaches, can help inform policy solutions. While NCER funding is not nimble enough to support quick-turnaround studies, it may be appropriate for continuous improvement methods applied within a longer-term research project.
  • Disseminate Information that is Useful to Policymakers: TWG members agreed that NCER research results should be relevant and presented in easy-to-read formats tailored for specific stakeholder audiences. To address this issue, IES can create easy-to-understand research syntheses, as education leaders do not have time or training to comb through the results of individual studies. In addition, TWG members identified a need for research on what is needed for practitioners to translate research to practice, to support decision making, and to address barriers to implementation.
  • Attend to Equity in Grantmaking and Research Focus: TWG members were concerned about equity in grantmaking and broadening participation in the research process. One way to address this is to provide more structured technical assistance to ensure applicants new to IES funding develop competitive research proposals. Active outreach can also help to encourage experts most likely to address equity research questions to apply. TWG members also pointed out that interdisciplinary research teams could help unearth embedded inequities in data collection, measures, and models.

In addition to the ideas discussed above, TWG members suggested specific ideas for how NCER could support research that leads to policy and systems improvement. These are too numerous to include here, but they are described in full in the TWG summary report.


For questions about this blog or the TWG summary, please contact Corinne.Alfeld@ed.gov, NCER Program Officer for the Improving Education Systems topic.

Do Underrepresented Students Benefit From Gifted Programs?

Recent studies of gifted and talented programs indicate that the extent and quality of services available to gifted students vary from state to state, district to district, and even from school to school within school districts. In a project titled “Are Gifted Programs Beneficial to Underserved Students?” (PI: William Darity, Duke University), IES-funded researchers are examining the variability of Black and Hispanic students’ access to gifted programs in North Carolina and the potential impact of participation in these gifted programs on Black and Hispanic student outcomes. In this interview blog, we asked co-PIs Malik Henfield and Kristen Stephens to discuss the motivation for their study and preliminary findings.

What motivated your team to study the outcomes of Black and Hispanic students in gifted programs?

The disproportionality between the representation of white students and students of color in gifted education programs is both persistent and pervasive. For decades, we’ve both been working with teachers and school counselors seeking to increase the number of students of color in gifted education programs, but what happens once these students are placed in these programs? We know very little about the educational, social, and emotional impact that participation (or non-participation) has on students. Gifted education programs are widely believed to provide the best educational opportunity for students, but given the impacts race and socioeconomic status have on student success factors, this may not be a sound assumption. In fact, there is negligible (and often contradictory) published research that explores whether gifted programs contribute to beneficial academic and social-emotional outcomes for the underserved students who participate in them. Resolving this question will have tremendous implications for future gifted education policies.

Please tell us about your study. What have you learned so far?

With funding from IES, researchers from Duke University and Loyola University Chicago are collaborating to describe how gifted education policies in North Carolina are interpreted, implemented, and monitored at the state, district, and school levels. We are also estimating how these policies are related to Black, Hispanic, and economically disadvantaged students’ academic and social-emotional outcomes. We hope our examination of individual student characteristics, sociocultural contexts, and environmental factors will help improve the ways school systems identify and serve gifted students from traditionally underrepresented groups.

Although preliminary, there are several interesting findings from our study. Our analysis of district-level gifted education plans highlights promising equity practices (for example, using local norms to determine gifted program eligibility) as well as potential equity inhibitors (for example, relying predominantly on teacher referral). Our secondary data analysis reveals that the majority of school districts do not have equitable representation of Black and Hispanic students in gifted programs. Disproportionality was calculated using the Relative Difference in Composition Index (RDCI). The RDCI represents the difference between a group’s composition in gifted education programs and their composition across the school district expressed as a discrepancy percentage.
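Based on the description above, the RDCI can be sketched as the relative percentage gap between a group's share of gifted-program enrollment and its share of district enrollment. This is our reading of that definition, not code from the study:

```python
def rdci(pct_of_gifted, pct_of_district):
    """Relative Difference in Composition Index: the gap between a group's
    composition in gifted programs and its composition districtwide,
    expressed as a percentage of the districtwide composition."""
    return (pct_of_gifted - pct_of_district) / pct_of_district * 100

# Illustrative: a group making up 25% of a district but only 10% of its
# gifted-program enrollment is underrepresented by 60%.
index = rdci(pct_of_gifted=10, pct_of_district=25)  # -60.0
```

A negative value indicates underrepresentation relative to district composition; zero indicates proportional representation.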

What’s Next?

In North Carolina, districts are allowed to interpret state policy and implement programs and support services in ways they deem appropriate. Our next step is to conduct an in-depth qualitative exploration of variations in policy within and across North Carolina school districts. In these forthcoming analyses, we will look only at youth identified as underserved along the racial/ethnic minority dimension. In each district, we plan to interview four distinct groups to better understand their greatest assets, needs, and challenges, as well as the resources they would find most valuable for facilitating successful academic and social-emotional outcomes: (1) high-achieving underserved students identified as gifted, (2) high-achieving underserved students not identified as gifted, (3) teachers, and (4) school counselors.

For example, we are interested in learning—

  • How educators interpret identification processes from policies
  • How educators perceive recruitment and retention processes and their role in them
  • How ethnic minority students identified as gifted perceive recruitment and retention processes
  • How ethnic minority students not selected for participation in gifted education programming perceive the recruitment process
  • How both student groups make sense of their racial identity

We will then combine what we learned from studies 1-3 (using secondary data) with Study 4 (research in schools) and share the results with policymakers, educators, and the research community.

What advice would you like to share with other researchers who are studying access to gifted programs?

There are three recommendations we would like to share:

  • Investigate instructional interventions that impact short- and long-term academic and social-emotional outcomes for gifted students. The field of gifted education has spent significant time and resources attempting to determine the best methods for identifying gifted students across all racial/ethnic groups. Nonetheless, disparities in representation still exist, and this hyper-focus on identification has come at the expense of increasing our understanding of what types of interventions work, for whom, and under what conditions.
  • Conduct more localized research studies. Since gifted education programs are largely decentralized, there is considerable variance in how policies are created and implemented across states, districts, and schools. For example, eligibility criteria for participation in gifted programs can differ significantly across school systems. In North Carolina, “cut score” percentages on achievement and aptitude tests can range from the 85th to the 99th percentile. This makes it difficult to generalize research findings across contexts when participant samples aren’t adequately comparable.
  • Extend beyond the identification question and consider both generalizability and transferability when designing the research methodology. For generalizability, this entails carefully selecting the sample population and the methods for developing causal models. For transferability, this means providing a detailed account of the ecosystem in which the research is taking place so that practitioners can see the utility of the findings and recommendations within their own contexts. Mixed methods studies would certainly help bridge the relationship between the two. 

 


Dr. Malik S. Henfield is a full professor and founding dean of the Institute for Racial Justice at Loyola University Chicago. His scholarship situates Black students' lived experiences in a broader ecological milieu to critically explore how their personal, social, academic, and career success is impeded and enhanced by school, family, and community contexts. His work to date has focused heavily on the experiences of Black students formally identified as gifted/high achieving.

Dr. Kristen R. Stephens is an associate professor of the Practice in the Program in Education at Duke University. She studies legal and policy issues related to gifted education at the federal, state, and local levels--particularly around how such policies contribute to beneficial academic, social-emotional, and behavioral outcomes for traditionally underserved gifted students.

This interview blog is part of a larger IES blog series on diversity, equity, inclusion and accessibility (DEIA) in the education sciences. It was produced by Katina Stapleton (Katina.Stapleton@ed.gov), co-chair of the IES Diversity and Inclusion Council. For more information about the study, please contact the program officer, Corinne Alfeld (Corinne.Alfeld@ed.gov).

 

Assessing Social Emotional Strengths in Schools to Protect Youth Mental Health

The transition into high school is characterized by growing academic demands, more diverse and complex social interactions, and increasing pressure associated with the looming transition into adult life and responsibilities. As part of an IES-funded measurement project, Drs. Michael Furlong, Erin Dowdy, and Karen Nylund-Gibson refined and validated the Social Emotional Health Survey-Secondary (SEHS-S-2020). The SEHS-S-2020 assesses the social-emotional assets of high school students and fits within multi-tiered systems of support and response-to-intervention frameworks schools regularly employ for the identification and care of students with learning or social-emotional needs. We asked the research team that developed the SEHS-S-2020 to tell us more about the development of the measure and how it is being used in schools.

What inspired you to develop the Social Emotional Health Survey-Secondary?

We were motivated by two events between 2008 and 2013. First, while we were serving as local evaluators of two Safe Schools/Healthy Students (SSHS) projects in Santa Barbara County, our project school administrators and mental health professionals challenged us to consider alternative ways to assess social-emotional health and the impacts of these projects. Second, around the same time, Michael Furlong was editing the first edition of the Handbook of Positive Psychology in Schools. Examining various positive psychological mindsets for the SSHS projects, we recognized that many of these constructs—such as hope, self-efficacy, and grit—had overlapping content. Based on this, we wanted to see if we could develop an efficient measure of positive psychology mindsets in adolescents.

The traditional mental health disorder literature uses comorbidity to describe the poor psychosocial outcomes for individuals experiencing more than one psychological disorder. We wondered whether students who report multiple social and psychological assets have enhanced developmental outcomes. The term we use for this "whole is greater than the sum of its parts" construct is covitality. Building on this concept, a significant effort of our work at the University of California, Santa Barbara has been to develop measures for schools to monitor social-emotional wellness. We created measures for primary and secondary schools and higher education institutions because fostering social-emotional health is ongoing and responsive to emerging developmental tasks.

How does the Social Emotional Health Survey-Secondary measure covitality?

Through an IES Measurement grant, we refined and validated the SEHS-Secondary form, which measures psychosocial strengths derived from the social emotional learning (SEL) and positive youth development (PYD) literature. SEHS-S-2020 assesses four related general positive social and emotional health domains that contribute to covitality. 

  • Belief in Self consists of three subscales grounded in constructs from self-determination theory literature: self-efficacy, self-awareness, and persistence. 
  • Belief in Others comprises three subscales derived from constructs found in childhood resilience literature: school support, peer support, and family support. 
  • Emotional Competence consists of three subscales: emotion regulation, empathy, and behavioral self-control. 
  • Engaged Living comprises three subscales grounded in constructs derived from the positive youth psychology literature: gratitude, zest, and optimism.

What did you find during the validation study?

The validation project involved a cross-sectional sample of more than 100,000 California secondary school students in partnership with the California State Department of Education and WestEd. We also collected three years of longitudinal data with two collaborating school districts. Our goal was to develop a valid measure to support educator efforts to foster positive development. We wanted to document how the number of developmental assets was associated with mental well-being. This chart shows that students reporting many SEHS-S-2020 assets were substantially more likely to report flourishing well-being. Adolescents with more SEHS-S-2020 assets were less likely to report chronic sadness or past-year suicidal ideation (see the covitality advantage).

Figure: Bar chart showing associations between student reports on the SEHS-S-2020 and their mental wellness.

Did you have any unanticipated project outcomes?

Data collection immediately predated the COVID-19 pandemic, providing a baseline for assessing the effects of the pandemic, and of broader social divisiveness in the United States, on student well-being. An important unanticipated outcome is that, relative to this pre-pandemic baseline, students’ social well-being declined substantially during and after remote learning.

Our project began collecting longitudinal data from middle and high school students in October 2019, before the COVID-19 pandemic. One participating school district asked us to administer the survey in October 2020 during remote learning and in October 2021 after the students returned to school, in order to understand remote learning's impacts on students' well-being. They also provided support for specific students who were not coping well. Our preliminary findings (paper in progress) showed that the students reported some diminished emotional well-being and global life satisfaction, but their social well-being decreased substantially from 2019 to 2021, about one-half of a standard deviation. Two macro-social items in particular declined markedly. One asks the students to express how often (in the past month) "they felt that society was a good place or becoming a better place for all people." A second asks them, "if the way that society works makes sense." Students reporting the steepest social well-being declines also reported substantial increases in chronic sadness and diminished global life satisfaction. These declines suggest that the broader impacts of the pandemic took a toll on the students.
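The "one-half of a standard deviation" figure above is a standardized mean difference: the raw change in an outcome divided by its baseline standard deviation. A minimal sketch with made-up numbers, not the study's data:

```python
def standardized_change(mean_before, mean_after, sd_baseline):
    """Express a change in means in baseline standard-deviation units
    (a standardized mean difference, akin to Cohen's d)."""
    return (mean_after - mean_before) / sd_baseline

# Hypothetical values: a social well-being scale mean falling from
# 3.0 to 2.5, with a baseline SD of 1.0.
effect = standardized_change(3.0, 2.5, 1.0)  # -0.5
```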

How are schools using the resources your project developed?

Schools now place greater emphasis on evaluating social and emotional health and well-being than before. The SEHS-S-2020 is now a core component of the California Healthy Kids Survey (CHKS), a biennial survey used by most California schools that provides information about student wellness and risk-related behaviors. In addition, several California school districts have adopted the SEHS-S-2020 and other project-developed measures for their Tier 1 universal wellness screening, following up with counseling services and supports for students who need them.

We are eager to see more schools using the resources from our project, and use continues to spread. Researchers in more than 20 countries have adapted the SEHS-S-2020 to explore cross-cultural aspects of well-being, and an app to administer, score, report, and track social and emotional wellness with the SEHS-S-2020 now supports Tier 1 wellness monitoring.


Michael Furlong, Ph.D., is a Distinguished Professor Emeritus of School Psychology and holds a 2021-2022 Edward A. Dickson Emeritus Professorship at the University of California Santa Barbara.

Erin Dowdy, Ph.D., is a Professor in the Department of Counseling, Clinical, and School Psychology at the University of California Santa Barbara. She is a licensed psychologist and a nationally certified school psychologist.

Karen Nylund-Gibson, Ph.D., is an Associate Professor of Quantitative Methods in the Department of Education at the University of California, Santa Barbara.

This blog was produced by NCER Program Officer, Corinne Alfeld. Please contact Corinne.Alfeld@ed.gov for more information.

Powering Our Future: How Service-Learning Aligned with Next Generation Science Standards Can Promote Science Learning, Social and Emotional Skills, and Civic Engagement

Each generation faces its own societal challenges. Two prominent issues—the climate crisis and America’s political divide—are heavy burdens for today’s youth. Without explicit focus in schools, it is hard to imagine how children will learn to work across differences and collaborate with others to solve complex environmental problems. Youth are very capable people, and school comes alive when they feel agency and see how their efforts matter in the community. Service-learning can help teachers make instruction feel relevant and teach skills that lead to civic engagement as youth learn to design, implement, and evaluate solutions to problems that are important to them. In this interview blog, the Connect Science project team explains how they developed curriculum and professional development to support teachers to engage their students in service-learning experiences.

Can you tell us about Connect Science and what it looks like in action?

Fueled by an IES Development and Innovation grant, our team developed and evaluated a science-based service-learning approach for the upper-elementary school years. In doing so, we addressed a need that teachers and schools face as they strive to create engaging experiences aligned with the Next Generation Science Standards (NGSS).

Connect Science is a 12-week project-based learning unit for upper elementary students. Early on, teachers and students explore topics of energy and natural resources using lessons aligned with the NGSS. Teachers guide student learning on what it means to be an engaged citizen and on the social and collaborative skills needed to take action in the community. To prepare, teachers receive five days of professional development and follow-up coaching. Teachers also receive a Connect Science manual, related books, and science materials.

But what does Connect Science actually look like in action? Imagine fourth graders engaged in a science unit on renewable and non-renewable resources. The students learn about different energy sources and then discuss the pros and cons of each. They become aware that non-renewable energy resources are rapidly diminishing and will not always be available to generate electricity. Awareness of this problem energizes them to promote energy conservation. Toward that goal, the students decide to educate other students and families at their school about energy use. At the next open house night, they turn their cafeteria into an energy fair where they share important information. For example, one group of students teaches about the types of energy sources used in their state to produce electricity, and another group teaches ways that people can save energy at home. Before and after the energy fair, the students administer a pre- and post-survey on energy facts to gauge what their visitors learned.

How did the IES grant support the development and pilot testing of Connect Science?

In the first two years of this grant, we developed and tested materials with teachers. In the third year, we conducted a randomized controlled trial of Connect Science involving 41 classrooms, with 20 assigned to Connect Science and 21 to a waitlist comparison group. This yielded a sample of 868 students, 423 of whom participated in the intervention.

We found that Connect Science impacted teacher practices and student outcomes. Teachers in the Connect Science group were more effective at engaging in the two NGSS practices that we measured: eliciting and building on prior knowledge, and creating opportunities for student critique, explanation, and argument. Further, students in the intervention condition showed higher science achievement and more positive energy attitudes and behaviors than students in the control condition. The social skill results hinged on fidelity of implementation: when teachers used more Connect Science practices, students showed improved communication and social competence. As a result of these findings, Connect Science is designated as a Promising Program by the Collaborative for Academic, Social, and Emotional Learning (CASEL).

What are the implications of your findings?

Too few projects integrate academic and social learning in schools. Often, high-quality NGSS materials are developed with little thought about the social skills students need to engage in that instruction. Likewise, social and emotional learning is often taught separately from academic content. Service-learning is a framework that bridges these two areas and allows students to engage in authentic, science-based work. Given our experiences, we have a few recommendations for educators eager to use service-learning.

  • Teach social, emotional, and collaborative skills with intention before launching into group work. In elementary school, children thrive in supportive, caring classrooms, and they respond well to lessons on active listening, respectful communication, and understanding multiple perspectives.
  • Leverage the existing curriculum and build in service-learning experiences. Rather than adding one more new topic, look at existing curricular topics and use service-learning to facilitate deep learning in content areas that are already part of the curriculum.
  • Amplify youth voice. Teachers need to work with students to identify a relevant community problem and generate solutions to that problem. We carefully developed the Connect Science materials to be more teacher-directed toward the beginning of the unit and more student-directed toward the end. This approach was based on both theoretical and empirical work supporting the importance of student autonomy.



Sara Rimm-Kaufman is the Commonwealth Professor of Education at the University of Virginia School of Education and Human Development. Her recent book for teachers, SEL from the Start, is based on the Connect Science work.

Eileen Merritt is a Research Scientist in the College of Natural Resources and the Environment at Virginia Tech. Her research and teaching focus on environmental and sustainability education.

Tracy Harkins is the owner of Harkins Consulting, LLC in Maine. Her focus is providing professional development and resources to engage and motivate student learners through service-learning. She will be offering an upcoming Connect Science Institute in Summer 2022.

For questions about this project, please contact Corinne.Alfeld@ed.gov, NCER program officer.