Under an IES grant, the RAND Corporation, in collaboration with NWEA, is developing strategies for schools and districts to address the impacts of COVID-19 disruptions on student assessment programs. The goal is to provide empirical evidence of the strengths and limitations of strategies for making decisions in the absence of assessment data. Jonathan Schweig, Andrew McEachin, and Megan Kuhfeld describe early findings from surveys and structured interviews regarding key concerns of districts and schools.
As a first step, we surveyed assessment and research coordinators from 23 school districts (from a sample of 100 districts) and completed follow-up interviews with seven of them on a variety of topics, including the re-entry scenario for their district, the planning activities they were unable to perform this year because of coronavirus-related disruptions to spring 2020 assessments, and the strategies they were employing to support instructional planning in the absence of assessment data. While the research is preliminary and the sample of respondents is not nationally representative, the survey and interview responses identified two key concerns arising from the lack of spring 2020 assessment data, which has made it challenging to examine student or school status and change over time, especially because COVID-19 has had differential impacts on student subgroups:
- Making course placement decisions. Administrators typically rely on spring assessment scores—often in conjunction with other assessment information, course grades, and teacher recommendations—to make determinations for course placements, such as who should enroll in accelerated or advanced mathematics classes.
- Evaluating programs or district-wide initiatives. Many districts monitor the success of these programs internally by looking at year-to-year change or growth for schools or subgroups of interest.
How are school systems responding to these challenges? Not surprisingly, the responses vary depending on local contexts and resources. Where online assessments were not feasible in spring 2020, some school districts used older testing data to make course recommendations, drawn either from the winter or from the previous school year. Some districts relaxed typical practice and gave individual schools more autonomy, relying on school staff to exercise local judgment around course placements using metrics like grades and teacher recommendations. Other districts reported projecting student scores based on students' assessment histories. Relatedly, some districts were already prepared for this situation: they had recently experienced difficulties while adopting an online assessment system and had addressed similar problems caused by large numbers of missing or invalid tests.
School districts also raised concerns about whether assessments administered during the 2020-21 school year would be valid and comparable enough to inform student placement and program evaluation decisions. These concerns included the following:
- Several respondents raised concerns about the trustworthiness of remote assessment data collected this fall and the extent to which results could be interpreted as valid indicators of student achievement or understanding.
- Particularly for districts that started the 2020-21 school year remotely, respondents were concerned about student engagement and motivation and the possibility of students rushing assessments, running into technological or internet barriers, or seeking assistance from guardians or other resources.
- Respondents raised questions about the extent to which available assessment scores are representative of school or district performance as a whole. Given that vulnerable students (for example, students with disabilities, students experiencing homelessness) may be the least likely to have access to remote instruction and assessments, it is likely that the students who are not assessed this year are different from students who are able to be assessed.
- Other respondents noted that they encountered resistance from parents around fall assessment because parents prioritized student well-being (for example, safety, sense of community, and social and emotional well-being) over academics. This perspective resonates with recent findings from a nationally representative sample of teachers and school leaders drawn from RAND’s American Educator Panel (AEP).
In the next phase of the work, the research team plans to:
- Conduct a series of simulation and empirical studies regarding the most common strategies that the district respondents indicated they were using to make course placement decisions and to evaluate programs or district-wide initiatives.
- Provide a framework to help guide local research on the intended (and unintended) consequences for school and school system decision making when standardized test scores are not available.
We welcome individuals to reach out to RAND with additional recommendations or considerations. We are also interested in hearing how districts are approaching course placement, accountability, and program evaluation across the country. Connect with the research team via email at firstname.lastname@example.org.
Jonathan Schweig is a social scientist at the nonprofit, nonpartisan RAND Corporation.
Andrew McEachin is a senior policy researcher at the nonprofit, nonpartisan RAND Corporation.
Megan Kuhfeld is a researcher at NWEA.