Inside IES Research

Notes from NCER & NCSER

Unlocking Opportunities: Understanding Connections Between Noncredit CTE Programs and Workforce Development in Virginia

With rapid technological advances, the U.S. labor market exhibits a growing need for more frequent and ongoing skill development. Community college noncredit career and technical education (CTE) programs that allow students to complete workforce training and earn credentials play an essential role in providing workers with the skills they need to compete for jobs in high-demand fields. Yet, there is a dearth of research on these programs because noncredit students are typically not included in state and national postsecondary datasets. In this guest blog for CTE Month, researchers Di Xu, Benjamin Castleman, and Betsy Tessler discuss their IES-funded exploration study in which they build on a long-standing research partnership with the Virginia Community College System and leverage a variety of data sources to investigate the Commonwealth’s FastForward programs. These programs are noncredit CTE programs designed to lead to an industry-recognized credential in one of several high-demand fields identified by the Virginia Workforce Board.

In response to the increasing demand for skilled workers in the Commonwealth, the Virginia General Assembly passed House Bill 66 in 2016 to establish the New Economy Workforce Credential Grant Program (WCG) with the goal of providing a pay-for-performance model for funding noncredit training. The WCG specifically funds FastForward programs that lead to an industry-recognized credential in a high-demand field in the Commonwealth. Under this model, funding is shared between the state, students, and training institutions based on student performance, with the goal of ensuring workforce training is affordable for Virginia residents. An important implication of WCG is that it led to systematic, statewide collection of student-level data on FastForward program enrollment, program completion, industry credential attainment, and labor market performance. Drawing on these unique data, coupled with interviews with key stakeholders, we generated findings on the characteristics of FastForward programs, as well as the academic and labor market outcomes of students enrolled in these programs. We describe the preliminary descriptive findings below.

FastForward programs enroll a substantially different segment of the population from credit-bearing programs and offer a vital alternative route to skill development and workforce opportunities, especially for demographic groups often underrepresented in traditional higher education. FastForward programs in Virginia enroll a substantially higher share of Black students, male students, and older students than short-duration, credit-bearing programs at community colleges that typically require one year or less to complete. Focus groups conducted with FastForward students at six colleges indicate that the students were a mix of workers sent by their employers to learn specific new skills and students who signed up for a FastForward program on their own. Among the latter group were older career changers and recent high school graduates, many of whom had no prior college experience and were primarily interested in landing their first job in their chosen field. Moreover, 61% of FastForward participants have neither prior nor subsequent enrollment in credit-bearing programs, highlighting the program’s unique role in broadening access to postsecondary education and career pathways.

FastForward programs offer an alternative path for students who are unsuccessful in credit-bearing programs. The vast majority of students (78%) enrolled in only one FastForward program, with an average enrollment duration of 1.5 quarters, which is notably shorter than most traditional credit-bearing programs. While 36% had prior credit-bearing enrollment, fewer than 20% of those students earned a degree or certificate from that enrollment, and less than 12% of FastForward enrollees transitioned to credit-bearing training afterward. Interviews with administrators and staff indicated that while some colleges facilitate noncredit-to-credit pathways by granting credit for prior learning, others prioritize employment-focused training and support over stackable academic pathways because their students are primarily interested in seeking employment post-training.

FastForward programs have a remarkable completion rate and are related to high industry credential attainment rates. Over 90% of students complete their program, with two-thirds of students obtaining industry credentials. Student focus groups echoed this success. They praised the FastForward program and colleges for addressing both their tuition and non-tuition needs. Many students noted that they had not envisioned themselves as college students and credited program staff, financial aid, and institutional support with helping them to be successful.

Earning an industry credential through FastForward increases quarterly earnings by approximately $1,000 on average. Industry credentials also increase the probability of being employed by 2.4 percentage points on average. We find substantial heterogeneity in economic returns across fields of study: transportation (for example, commercial driver’s license) and precision production (for example, gas metal arc welding) are associated with particularly pronounced earnings premiums. Within programs, we do not observe significant heterogeneity in economic returns across student subgroups.

What’s Next?

In view of the strong economic returns associated with earning an industry credential and the noticeable variation in credential attainment across training institutions and programs, our future work aims to unpack the sources of variation in program-institution credential attainment rates and to identify program-level factors that are within an institution’s control and associated with higher credential rates and smaller equity gaps. Specifically, we will collect additional survey data from the 10 most highly enrolled programs in the Virginia Community College System (VCCS) to obtain more nuanced program-level information and identify which malleable program factors predict higher credential attainment rates, better labor market outcomes, and smaller equity gaps in these outcomes.


Di Xu is an associate professor in the School of Education at the University of California, Irvine, and the faculty director of UCI’s Postsecondary Education Research & Implementation Institute.

Ben Castleman is the Newton and Rita Meyers Associate Professor in the Economics of Education at the University of Virginia.

Betsy Tessler is a senior associate at MDRC in the Economic Mobility, Housing, and Communities policy area.

Note: A team of researchers, including Kelli Bird, Sabrina Solanki, and Michael Cooper, contributed jointly to the quantitative analyses of this project. The MDRC team, including Hannah Power, Kelsey Brown, and Mark van Dok, contributed to qualitative data collection and analysis. The research team is grateful to the Virginia Community College System (VCCS) for providing access to their high-quality data. Special thanks are extended to Catherine Finnegan and her team for their valuable guidance and support throughout our partnership.

This project was funded under the Postsecondary and Adult Education research topic; questions about it should be directed to program officer James Benson (James.Benson@ed.gov).

This blog was produced by Corinne Alfeld (Corinne.Alfeld@ed.gov), NCER program officer for the CTE research topic.

Developing the Vanderbilt Assessment of Leadership in Education (VALED)

As education accountability policies continue to hold school leaders responsible for the success of their schools, it is crucial to assess and develop leadership throughout the school year. In honor of the IES 20th Anniversary, we are highlighting NCER’s investment in leadership measures. This guest blog discusses the Vanderbilt Assessment of Leadership in Education (VALED). The VALED team was led by Andy Porter and included Ellen Goldring, Joseph Murphy, and Steve Elliott, all at Vanderbilt University at the time. Other important contributors to the work are Xiu Cravens, Morgan Polikoff, Beth Minor Covay, and Henry May. The VALED was initially developed with funding from the Wallace Foundation and then further developed and validated with funding from IES.

What motivated your team to develop VALED?

There is widespread agreement that school principals have a major impact on schools and student achievement. However, at the time we developed VALED, there were few research-based instruments for measuring principal leadership effectiveness that were both aligned to licensure standards and rooted in the evidence base. Prior to the VALED, principal leadership evaluation focused primarily on managerial tasks. In contrast, we believed that principal leadership centered on improving teaching and learning, school culture, and community and parent engagement (often called learning-centered leadership) is at the core of leadership effectiveness.

What does VALED measure?

The VALED is a multi-rater assessment of learning-centered leadership behaviors. The principal, their supervisor, and teachers in the school all complete it, which is why VALED is sometimes referred to as a 360 assessment or multi-source feedback.

VALED measures six core components and six key processes that define learning-centered leadership. The core components are high standards for student learning, rigorous curriculum, quality instruction, culture of learning and professional behavior, connections to external communities, and performance accountability. The key processes are planning, implementing, supporting, communicating, monitoring, and advocating.

How is the VALED different from other school leadership assessments?

The VALED is unique because it focuses on school leadership behaviors aligned to school improvement and school effectiveness, incorporates feedback and input from those who collaborate closely with the principal, includes a self-assessment, acknowledges the distributed work of leadership in a school, and has strong psychometric properties. We think several elements contribute to the uniqueness of the instrument.

First, VALED is grounded in scholarship and academic research rather than in less robust sources such as personal opinions or unrepresentative samples. Its items were crafted from concepts that this research base identifies as important, and the model’s grounding in what is known about the connections between leadership and learning provides much of the support needed for the accuracy, viability, and stability of the instrument.

Second, principals rarely receive data-based feedback, even though feedback is essential for growth and improvement. The rationale behind multi-source or 360-degree feedback is that information about leadership efficacy resides in the shared experiences of the teachers and supervisors who collaborate with the principal rather than in any one source alone. Data that pinpoint gaps between principals’ own self-assessments and their teachers’ and supervisors’ ratings of their leadership effectiveness can serve as powerful motivators for change.

Finally, in contrast to some other leadership measures, VALED has undergone extensive psychometric development and testing. We conducted a sorting study to investigate content validity, a pilot study in which we addressed ceiling effects, and cognitive interviews to refine wording. We also conducted a known-groups study that demonstrated the tool’s ability to reliably distinguish among principals, and we examined test-retest reliability, convergent-divergent validity, and principal value-added to student achievement. As part of this testing, we identified several key properties of VALED. The measure—

  • Works well in a variety of settings and circumstances
  • Is construct valid
  • Is reliable
  • Is feasible for widespread use
  • Provides accurate and useful reporting of results
  • Is unbiased
  • Yields a diagnostic profile for summative and formative purposes
  • Can be used to measure progress over time in the development of leadership
  • Predicts important outcomes
  • Is part of a comprehensive assessment of the effectiveness of a leader's behaviors

What is the influence of VALED on education leadership research and practice?

VALED is used in schools and districts across the US and internationally for both formative and evaluative purposes to support school leadership development. For example, Baltimore City Public Schools uses VALED as a component of their School Leader Evaluations. VALED has also spurred studies on principal evaluation, including the association between evaluation, feedback and important school outcomes, the implementation of principal evaluation, and its uses to support principal growth and development. In addition, it provides a reliable and valid instrument for scholars to use in their studies as a measure of leadership effectiveness.


Andy Porter is professor emeritus of education at the Pennsylvania State University. He has published widely on psychometrics, student assessment, education indicators, and research on teaching.

Ellen Goldring is Patricia and Rodes Hart Chair, professor of education and leadership at Vanderbilt University. Her research interests focus on the intersection of education policy and school improvement with emphases on education leadership.

Joseph Murphy is an emeritus professor of education and the former Frank W. Mayborn Chair of Education at Peabody College, Vanderbilt University. He has published widely on school improvement, with special emphasis on leadership and policy, and has led national efforts to develop leadership standards.

Produced by Katina Stapleton (Katina.Stapleton@ed.gov), program officer for NCER’s education leadership portfolio.


The Comprehensive Assessment of Leadership for Learning: How We Can Support School Leaders to Improve Learning for All Students

As educational accountability policies continue to hold school leaders responsible for the success of their schools, it is crucial to assess and develop leadership throughout the school year. In honor of School Principals’ Day and the IES 20th Anniversary, we are highlighting NCER’s investment in formative leadership measures. In this guest blog, researchers Rich Halverson and Carolyn Kelley from the University of Wisconsin-Madison and Mark Blitz from the Wisconsin Center for Education Products and Services discuss the development and evolution of their IES-funded Comprehensive Assessment of Leadership for Learning (CALL).

What is CALL?

CALL is a survey tool based on a distributed leadership model that emphasizes the work of leaders rather than their positions or identities. In 2008, we led a team of researchers at the University of Wisconsin-Madison to identify the key leadership tasks necessary for school improvement, regardless of who made the tasks happen. The CALL survey invites each educator in a school to assess the degree to which these core tasks are carried out, then aggregates the responses to provide a school-level portrait of the state of leadership practice in the school.

How was CALL developed?

Our CALL team relied on over 30 years of research on leadership for school improvement to name about 100 key tasks in five domains of practice. The team then worked over a year with expert educators and leaders to articulate these tasks into survey items phrased in language that teachers would readily understand as describing the work that happens every day in their schools. We designed each item to assess the presence and quality of leadership practices, policies, and programs known to improve school quality and student learning. We validated the survey with qualitative and quantitative analyses of survey content, structure, and reliability.

What inspired you to develop CALL?

We believed a measure like CALL was necessary in the era of data-driven decision-making. Educators are inundated with accountability and contextual data about their schools, but they are left on their own to find data that help them understand how to develop and implement the strategies, policies, and programs that support student success. Traditional school data systems leave a hole where feedback matters most for educators: at the practice level, where the work of leaders and educators unfolds. That is the hole that CALL is designed to fill.

How is the CALL different from other leadership surveys?

Traditional surveys include items that invite educators to rate their leaders on important tasks using Likert-scale measures. The results of these surveys produce scores that allow leaders to be rated and compared. But, as a school leader, it is hard to know what to do with a 3.5 score on an item like “My principal is an effective instructional leader.” CALL items are designed differently. Each CALL item response represents a distinct level of practice, so respondents can learn about optimal practices simply by taking the survey. If the collected responses from educators in your school average a “2” on one of the items, the description of the next level of practice (“3”) clearly articulates an improvement goal.

In addition, our online CALL reporting tools provide formative feedback by allowing users to compare item and domain scores between academic departments and grade levels, as well as across schools. The reports name specific areas of strength and improvement, and also suggest research-driven strategies and resources leaders can use to improve specific aspects of leadership.

How did CALL transition into a commercial measure?

The CALL project provides a model of how IES-funded research can have broad impact in schools around the country. We are thrilled that CALL developed into that rare educational survey embraced both by the people who tested it and by the research community. Many of our development partners asked whether they could continue with CALL as the survey took on new life as a commercial product after our grant ended.

The Wisconsin Center for Education Products and Services (WCEPS) provided us with the business services and the support to bring CALL to market. CALL became a WCEPS partner in 2014 and has since developed into a successful leadership and school improvement resource. Under the leadership of WCEPS’s Mark Blitz, the CALL model became a framework to build successful collaborations with learning and research organizations across the country.

Leading professional learning groups such as WestEd, WIDA, the Southern Regional Education Board, and the Georgia Leadership Institute for School Improvement worked with Mark and the WCEPS team to build customized CALL-based formative feedback systems for their clients. Research partners at East Carolina University, Teachers College, and the University of Illinois at Chicago used CALL to collect baseline data on leadership practices for school improvement and principal preparation projects. CALL has also developed customized versions of the survey to support leadership for personalized learning (CALL PL) and virtual learning (Long Distance CALL). These partnerships have provided opportunities for hundreds of schools and thousands of educators to experience the CALL model of formative feedback to improve teaching and learning in schools.

What’s the next step for CALL?

In 2021, the CALL project entered a new era of leadership for equity. With the support of the Wallace Foundation, we created CALL for Equity Centered Leadership (CALL-ECL) to provide school districts with feedback on the leadership practices that create more equitable schools. CALL-ECL is part of a $100 million+ Wallace Foundation initiative to transform how districts across the country develop partnerships to prepare and support a new generation of equity-centered leaders. According to Wallace Research Director Bronwyn Bevan, “The foundation is excited about CALL-ECL because it will help leaders identify the organizational routines that sustain inequality and replace them with routines that help all students thrive.”

Our $8 million, six-year CALL-ECL project will document the development of these new preparation and support programs and will create a new CALL survey as an information tool to describe and assess equity-centered leadership practices. We believe that by 2027, CALL-ECL will be able to share the practices of equity-centered leadership developed through the Wallace initiatives with districts and schools around the world. Our hope is that CALL-ECL will give school leaders and leadership teams the data they need to continually evolve toward better opportunities and outcomes for all young people.


Richard Halverson is the Kellner Family Chair of Urban Education and Professor of Educational Leadership and Policy Analysis in the UW-Madison School of Education. He is also a co-director of the Comprehensive Assessment of Leadership for Learning and leads the Wallace Foundation Equity-Centered Leadership Pipeline research project.


Carolyn Kelley is a distinguished professor in the Department of Educational Leadership and Policy Analysis. Dr. Kelley’s research focuses on strategic human resources management in schools, including teacher compensation, principal and teacher evaluation, and leadership development.


Mark Blitz is the project director of the Comprehensive Assessment of Leadership for Learning (CALL) at the Wisconsin Center for Education Products & Services.


This blog was produced by Katina Stapleton (Katina.Stapleton@ed.gov), program officer for NCER’s education leadership portfolio.


Why Doesn't Everyone Get to Ride the Bus? Reflections on Studying (In)Equity in School Busing

In celebration of IES’s 20th anniversary, we are highlighting NCER’s investments in field-initiated research on equity in education. In this guest blog interview, researchers Amy Ellen Schwartz and Sarah Cordes share the equity-related implications of their IES-funded research on school busing. The research team conducted four related studies as part of their IES grant. First, researchers examined the individual and school factors that may explain why some students ride the bus and others do not. Next, they explored the relationship between bus use and school choice, examining whether students who use the bus to attend a choice school attend a higher quality school than their zoned school. The final two studies explored the link between taking the bus and academic outcomes.

What motivated your research on school busing?

Both of us are very interested in how factors outside the classroom matter for students. The school bus is a critical school service; however, at the start of our research, we knew very little about ridership, commutes, or the relationships between school bus ridership and student outcomes. Given what we know about inequities in other school services and the geography of schooling, it seemed natural for us to explore whether sociodemographic disparities exist in access to and provision of school bus service. Although NYC, like many other urban districts, also provides passes for use on public transit, we chose to focus specifically on the school bus because districts have significantly more discretion to set policies around the school bus.


What were your findings about the relationship(s) between school busing and student outcomes?

Despite the popular images of the iconic yellow school bus as a fundamental part of American public education, there is wide variation in the availability and cost of school bus service across schools, districts, and states. As part of our IES-funded research, we examined the relationship between bus access/characteristics of the bus ride in New York City (NYC) and various outcomes including the likelihood that students attend a choice school, the quality of school attended, attendance, and test scores. Our research revealed four key findings:

  1. Among NYC students who attend choice schools, those who use transportation, especially the school bus, are more likely to attend a school that is significantly better than their zoned school.
  2. Transportation plays a particularly important role for Black and Hispanic students in NYC. Black and Hispanic students who use the bus to attend a choice school are 30-40 percentage points more likely to attend a significantly better school than Black or Hispanic students who attend a choice school but do not use transportation.
  3. Access to the school bus in NYC is associated with higher attendance—bus riders are absent approximately one day less than non-riders and are about four percentage points less likely to be chronically absent. However, most of this gap is explained by differences in the schools that bus riders attend, as within-school disparities in attendance are small.
  4. Although long bus rides (over 45 minutes) are relatively uncommon in NYC, students with long bus rides are disproportionately Black and more likely to attend charter or district choice schools. Further, long bus rides have negative effects on attendance and chronic absenteeism of district choice students and may have small negative effects on test scores among charter school students.

What does equity (or lack thereof) look like in the NYC school bus system?

This is a complicated question, and the answer is largely context specific. For example, equity in the school bus system of a choice-rich district like NYC looks different than equity in a district where most students attend their zoned schools. In NYC, the main determinant of school bus eligibility is the distance between a student’s home and school, with thresholds that vary by grade level. For example, students in K-2 are eligible for free transportation (MetroCard or school bus) if they attend a school that is more than half a mile from home. That said, “eligibility” for school bus transportation does not mean that students will be assigned to a school bus. This creates the potential for inequities.

Among students who attend the same school, we find no strong evidence of racial/ethnic disparities in bus access. This is not the case when we compare students who attend different schools. We found that while Black students are significantly more likely than any other racial/ethnic group to be eligible for the bus, eligible Black students are also less likely than any other group to be assigned to a bus. Specifically, among students who live far enough from school to be eligible for the bus, Black students are 4.3 percentage points less likely than White students and 4.8 percentage points less likely than Asian students to be assigned bus service. Hispanic students are least likely to be eligible for the bus based on how far they live from school. However, Hispanic students who are eligible for bus service are also less likely to receive it than White or Asian students.  

We identified two possible explanations for these disparities—routing restrictions and whether a school offers the bus. Bus routes in NYC cannot exceed 5 miles and cannot cross certain administrative boundaries. For example, a student cannot take a school bus from one borough to another. Due to these restrictions, there are some students who are eligible for the bus but cannot be placed on a route that follows these restrictions, so they receive a MetroCard instead. The second and main explanation for these disparities is that Black and Hispanic students are significantly less likely to attend a school that provides bus service, as the decision of whether to provide bus service is at the discretion of individual principals.

What potential policy implications does your research have?

Based on our findings, there are three important policy implications to consider. First, districts should consider mandating school bus service in all schools. Second, in the absence of universal bus service, districts should increase transparency about school-level bus provision so that families can factor this into their decisions about where to send their children to school. Finally, districts should consider the consequences of policies around school bus provision, such as route restrictions.


Amy Ellen Schwartz is the dean of the Joseph R. Biden, Jr. School of Public Policy and Administration, University of Delaware. Her research spans a broad range of topics in education policy and urban economics, focusing on the nexus of schools, neighborhoods and public services and the causes and consequences of children’s academic, social and health outcomes. Dr. Schwartz is currently a co-PI and director of transportation research for the IES-funded National Center for Research on Education Access and Choice.

Sarah A. Cordes is an associate professor of policy, organizational and leadership studies within Temple University’s College of Education and Human Development and former IES Predoctoral Fellow. Her research focuses on the ways in which the urban context, including neighborhoods, housing, and charter schools, affect student outcomes.

This blog was produced by Katina Stapleton (Katina.Stapleton@ed.gov) and Virtual Student Federal Service Intern Audrey Im. It is part of a larger series on DEIA in Education Research.


Measuring In-Person Learning During the Pandemic

Some of the most consequential COVID-19-related decisions for public education were those that modified how much in-person learning students received during the 2020-2021 school year. As part of an IES-funded research project in collaboration with the Virginia Department of Education (VDOE) on COVID’s impact on public education in Virginia, researchers at the University of Virginia (UVA) collected data to determine how much in-person learning students in each grade in each division (what Virginia calls its school districts) were offered over the year. In this guest blog, Erica Sachs, an IES predoctoral fellow at UVA, shares brief insights into this work.

Our Process

COVID-19 has caused uncertainty and disruptions in public education for nearly three years. The purpose of the IES-funded study is to describe how Virginia’s response to COVID-19 may have influenced access to instructional opportunities and equity in student outcomes over multiple time periods. This project is a key source of information for the VDOE and Virginia schools’ recovery efforts. An important first step of this work was to uncover how the decisions divisions made impacted student experiences during the 2020-21 school year. This blog focuses on the processes that were undertaken to identify how much in-person learning students could access.

During 2020-21, students were offered school in three learning modalities: fully remote (no in-person learning), fully in-person (only in-person learning), and hybrid (all students could access some in-person learning). Hybrid learning often occurred when schools split a grade into groups and assigned attendance days to each group. For the purposes of the project, we used the term “attendance rotations” to identify whether and which student group(s) could access in-person school on each day of the week. Each attendance rotation is associated with a learning modality.

Most divisions posted information about learning modality and attendance rotations on their official websites, social media, or board meeting documents. In June and July of 2021, our team painstakingly scoured these sites and collected detailed data on the learning modality and attendance rotations of every grade in every division on every day of the school year. We used these data to create a division-by-grade-by-day dataset.

A More Precise Measure of In-Person Learning

An initial examination of the dataset revealed that the commonly used approach of characterizing student experiences by time in each modality masked potentially important variations in the amount of in-person learning accessible in the hybrid modality. For instance, a division could offer one or four days of in-person learning per week, and both would be considered hybrid. To supplement the modality approach, we created a more precise measure of in-person learning using the existing data on attendance rotations. The new variable counts all in-person learning opportunities across the hybrid and fully in-person modalities, and, therefore, captures the variation obscured in the modality-only approach. To illustrate, when looking only at the time in each modality, just 6.7% of the average student’s school year was in the fully in-person modality. However, using the attendance rotations data revealed that the average student had access to in-person learning for one-third of their school year.
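The difference between the two approaches can be sketched with a toy example. The record layout below is hypothetical (the field names and data are illustrative, not the study’s actual schema); it shows how a hybrid week that contributes nothing to the modality-only measure still counts toward in-person access when attendance rotations are tracked at the day level.

```python
from collections import namedtuple

# Hypothetical record for a division-by-grade-by-day dataset.
# "in_person_offered" flags whether any rotation group could attend that day.
Record = namedtuple("Record", ["division", "grade", "day", "modality", "in_person_offered"])

# One hybrid week: in-person access offered Monday and Tuesday only.
records = [
    Record("A", 3, "Mon", "hybrid", True),
    Record("A", 3, "Tue", "hybrid", True),
    Record("A", 3, "Wed", "hybrid", False),
    Record("A", 3, "Thu", "hybrid", False),
    Record("A", 3, "Fri", "hybrid", False),
]

def share_fully_in_person(recs):
    """Modality-only approach: share of days in the fully in-person modality."""
    return sum(r.modality == "in_person" for r in recs) / len(recs)

def share_in_person_access(recs):
    """Rotation-based approach: share of days with any in-person access,
    including hybrid days on which a rotation group could attend."""
    return sum(r.in_person_offered for r in recs) / len(recs)

print(share_fully_in_person(records))   # 0.0 — the hybrid week is invisible
print(share_in_person_access(records))  # 0.4 — two of five days offered access
```

In this toy week, the modality-only measure reports zero in-person learning while the rotation-based measure credits the two hybrid in-person days, mirroring the gap between the 6.7% and one-third figures reported above.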

Lessons Learned

One of the biggest lessons I learned working on this project was that we drastically underestimated the scope of the data collection and data management undertaking. I hope that sharing some of the lessons I learned will help others doing similar work.

  • Clearly define terminology and keep records of all decisions with examples in a shared file. It will help prevent confusion and resolve disagreements within the team or with partners. Research on COVID-19 in education was relatively new when we started this work. We encountered two terminology-related issues. First, sources used the same term for different concepts, and second, sources used different terms for the same concept. For instance, the VDOE’s definition of the “in-person modality” required four or more days of access to in-person learning weekly, but our team classified four days of access as hybrid because we define “fully in-person modality” as five days of access to in-person learning weekly. Without agreed-upon definitions, people could categorize the same school week under different modalities. Repeated confusion in discussions necessitated a long meeting to hash out definitions, examples, and non-examples of each term and compile them in an organized file.
  • Retroactively collecting data from documents can be difficult if divisions have removed information from their web pages. We found several sources especially helpful in our data collection, including the Wayback Machine, a digital archive of the internet, to access archived division web pages, school board records, including the agenda, meeting minutes, or presentation materials, and announcements or letters to families via divisions’ Facebook or Twitter accounts.
  • To precisely estimate in-person learning across the year, collect data at the division-by-grade-by-day level. Divisions sometimes changed attendance rotations midweek, and the timing of these changes often differed across grades. Consequently, we found that collecting data at the day level was critical to capture all rotation changes and accurately estimate the amount of in-person learning divisions offered students.

What’s Next?

The research brief summarizing our findings can be downloaded from the EdPolicyWorks website. Our team is currently using the in-person learning data as a key measure of division operations during the reopening year to explore how division operations may have varied depending on division characteristics, such as access to high-speed broadband. Additionally, we will leverage the in-person learning metric to examine COVID’s impact on student and teacher outcomes and assess whether trends differed by the amount of in-person learning divisions offered students.


Erica N. Sachs is an MPP/PhD student, IES predoctoral fellow, and graduate research assistant at UVA’s EdPolicyWorks.

This blog was produced by Helyn Kim (Helyn.Kim@ed.gov), Program Officer, NCER.