IES Blog

Institute of Education Sciences

New Data Reveal Public School Enrollment Decreased 3 Percent in 2020–21 School Year

NCES recently released revised Common Core of Data (CCD) Preliminary Files, which are the product of the school year (SY) 2020–21 CCD data collection. CCD, the Department of Education’s primary database on public elementary and secondary education in the United States, provides comprehensive annual data on enrollment, school finances, and student graduation rates.

Here are a few key takeaways from the newly released data files:

Public school enrollment in SY 2020–21 was lower than it was in SY 2019–20.

Overall, the number of students enrolled in public schools decreased by 3 percent from SY 2019–20 to SY 2020–21. Note that Illinois did not submit data in time to be included in this preliminary report. The SY 2019–20 and SY 2020–21 total enrollment counts for California, Oregon, American Samoa, and the Bureau of Indian Education do not include prekindergarten counts.

The rate of decline in public school enrollment in SY 2020–21 was not consistent across all states.

Among states, the largest decreases were in Mississippi and Vermont (5 percent each), followed by Washington, New Mexico, Kentucky, New Hampshire, and Maine (each between 4 and 5 percent) (figure 1). Eighteen states had decreases of 3 percent or more; 29 states had decreases between 1 and 3 percent; and the District of Columbia, South Dakota, and Utah had changes of less than 1 percent.



Lower grade levels experienced a greater rate of decline in public school enrollment than did higher grade levels in SY 2020–21.

Public school enrollment decreased by 13 percent for prekindergarten and kindergarten and by 3 percent for grades 1–8. Public school enrollment increased by 0.4 percent for grades 9–12.

Most other jurisdictions experienced declines in public school enrollment in SY 2020–21.

Public school enrollment decreased in Puerto Rico (6 percent), Guam (5 percent), and American Samoa (2 percent). The Virgin Islands, however, experienced an increase of less than 1 percent.

To access the CCD preliminary data files and learn more about public school enrollment in SY 2020–21, visit the CCD data files webpage.

Overcoming Challenges in Conducting Cost Analysis as Part of an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

 

Educational interventions come at a cost—and no, it is not just the price tag, but the personnel time and other resources needed to implement them effectively. Having both efficacy and cost information is essential for educators to make wise investments. However, including cost analysis in an efficacy study comes with its own costs.

Experts from the Cost Analysis in Practice (CAP) Project recently connected with the IES-funded team studying Promoting Accelerated Reading Comprehension of Text - Local (PACT-L) to discuss the challenges of conducting cost analysis and cost-effectiveness analysis as part of an efficacy trial. PACT-L is a social studies and reading comprehension intervention with a train-the-trainer professional development model. Here, we share some of the challenges we discussed and the solutions that surfaced.

 

Challenge 1: Not understanding the value of a cost analysis for educational programs

Some people may not understand the value of a cost analysis and focus only on whether they have the budget to cover program expenses. If stakeholders are reluctant to invest in a cost analysis, ask them to consider how a thorough look at implementation in practice (as opposed to implementation “as intended”) might help support planning for scale-up of a local program or adoption at different sites.

For example, take Tennessee’s Student/Teacher Achievement Ratio (STAR) project, a class size reduction experiment, which was implemented successfully with a few thousand students. California tried to scale up the approach for several million students but failed to anticipate the difficulty of finding enough qualified teachers and building more classrooms to accommodate smaller classes. A cost analysis would have supplied key details to support decision-makers in California in preparing for such a massive scale-up, including an inventory of the type and quantity of resources needed. For decision-makers seeking to replicate an effective intervention even on a small scale, success is much more likely if they can anticipate whether they have the requisite time, staff, facilities, materials, and equipment to implement the intervention with fidelity.

 

Challenge 2: Inconsistent implementation across cohorts

Efficacy studies often involve two or three cohorts of participants, and the intervention may be adapted from one to the next, leading to varying costs across cohorts. This issue has been particularly acute for studies running prior to the COVID-19 pandemic, then during COVID-19, and into post-COVID-19 times. You may have in-person, online, and hybrid versions of the intervention delivered, all in the course of one study. While such variation in implementation may be necessary in response to real-world circumstances, it poses problems for the effectiveness analysis because it’s hard to draw conclusions about exactly what was or wasn’t effective.

The variation in implementation also poses problems for the cost analysis because substantially different types and amounts of resources might be used across cohorts. At worst, this leads to the need for three cost analyses funded by the study budget intended for one! In the case of PACT-L, the study team modified part of the intervention to be delivered online due to COVID-19 but plans to keep this change consistent through all three cohorts.

For other interventions, if the differences in implementation among cohorts are substantial, perhaps they should not be combined and analyzed as if all participants are receiving a single intervention. Cost analysts may need to focus their efforts on the cohort for which implementation reflects how the intervention is most likely to be used in the future. For less substantial variations, cost analysts should stay close to the implementation team to document differences in resource use across cohorts, so they can present a range of costs as well as an average across all cohorts.
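To make that range and average concrete, below is a minimal Python sketch; the cohort labels, total costs, and participant counts are entirely hypothetical and are not PACT-L figures. It simply computes a per-participant cost for each cohort, the range across cohorts, and a participant-weighted average.

```python
# Hypothetical per-cohort cost data (illustrative only; not PACT-L figures).
# Each cohort lists total ingredient costs (personnel, materials, facilities)
# and the number of participants served.
cohorts = {
    "Cohort 1 (in person)": {"total_cost": 182_000, "participants": 240},
    "Cohort 2 (online)":    {"total_cost": 151_000, "participants": 235},
    "Cohort 3 (hybrid)":    {"total_cost": 168_500, "participants": 250},
}

# Per-participant cost for each cohort.
per_participant = {
    name: data["total_cost"] / data["participants"]
    for name, data in cohorts.items()
}

for name, cost in per_participant.items():
    print(f"{name}: ${cost:,.0f} per participant")

# Report a range across cohorts as well as an overall (participant-weighted) average.
low, high = min(per_participant.values()), max(per_participant.values())
total_cost = sum(d["total_cost"] for d in cohorts.values())
total_participants = sum(d["participants"] for d in cohorts.values())
print(f"Range: ${low:,.0f} to ${high:,.0f} per participant")
print(f"Weighted average: ${total_cost / total_participants:,.0f} per participant")
```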

 

Challenge 3: Balancing accuracy of data against burden on participants and researchers

Data collection for an efficacy trial can be burdensome—add a cost analysis and researchers worry about balancing the accuracy of the data against the burden on participants and researchers. This is something that the PACT-L research team grappled with when designing the evaluation plan. If you plan in advance and integrate the data collection for cost analysis with that for fidelity of implementation, it is possible to lower the additional burden on participants. For example, include questions related to time use in interviews and surveys that are primarily designed to document the quality of the implementation (as the PACT-L team plans to do), and ask observers to note the kinds of facilities, materials, and equipment used to implement the intervention. However, it may be necessary to conduct interviews dedicated solely to the cost analysis and to ask key implementers to keep time logs. We’ll have more advice on collecting cost data in a future blog.

 

Challenge 4: Determining whether to use national and/or local prices

Like many other RCTs, the PACT-L team’s study will span multiple districts and geographical locations, so the question arises of which prices to use. When deciding whether to use national or local prices—or both—analysts should consider the audience for the results, the availability of relevant prices from national or local sources, the number of different sets of local prices that would need to be collected, and their research budget. Salaries and facilities prices may vary significantly from location to location. Local audiences may be most interested in costs estimated using local prices, but it would be a lot of work to collect local price information from each district or region, and the cost analysis research budget would need to reflect the work involved. Furthermore, for cost-effectiveness analysis, prices must be standardized across geographical locations, which means applying regional price parities to adjust prices to a single location or to a national-average equivalent.
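As a rough illustration of that adjustment, here is a minimal Python sketch; the regional price parity values and salaries are hypothetical placeholders, not published statistics. Dividing a local price by its regional price parity (expressed relative to 100) restates it in national-average terms so that costs can be compared or combined across sites.

```python
# Minimal sketch of standardizing local salaries to a national-average equivalent
# using regional price parities (RPPs). The RPP values and salaries below are
# hypothetical placeholders, not published figures.
regional_price_parity = {       # index where 100 = national average price level
    "District A (urban)": 112.4,
    "District B (rural)":  91.7,
}

local_teacher_salary = {        # locally reported annual salaries (hypothetical)
    "District A (urban)": 68_000,
    "District B (rural)": 52_000,
}

# Dividing by (RPP / 100) expresses each local price in national-average terms.
for district, salary in local_teacher_salary.items():
    adjusted = salary / (regional_price_parity[district] / 100)
    print(f"{district}: local ${salary:,} -> national-equivalent ${adjusted:,.0f}")
```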

It may be more feasible to use national average prices from publicly available sources for all sites. However, that comes with a catch too: national surveys of personnel salaries don't include a wide variety of school or district personnel positions. Consequently, the analyst must look for a similar-enough position or make some assumptions about how to adjust a published salary for a different position.

If the research budget allows, analysts could present costs using both national and local prices. This might be especially helpful for an intervention targeting schools in rural or urban areas, which are likely to have lower or higher costs, respectively, than the national average. The CAP Project’s cost analysis Excel template is set up to allow for both national and local prices. You can find the template and other cost analysis tools here: https://capproject.org/resources.


The CAP Project team is interested in learning about new challenges and figuring out how to help. If you are encountering similar or other challenges and would like free technical assistance from the IES-funded CAP Project, submit a request here. You can also email us at helpdesk@capproject.org or tweet us at @The_CAP_Project.

 

Fiona Hollands is a Senior Researcher at Teachers College, Columbia University who focuses on the effectiveness and costs of educational programs, and how education practitioners and policymakers can optimize the use of resources in education to promote better student outcomes.

Iliana Brodziak is a senior research analyst at the American Institutes for Research who specializes in statistical analysis of achievement, resource allocation, and survey data, with a particular focus on English Learners and early childhood.

Jaunelle Pratt-Williams is an Education Researcher at SRI who uses mixed methods approaches to address resource allocation, social and emotional learning and supports, school finance policy, and educational opportunities for disadvantaged student populations.

Robert D. Shand is an Assistant Professor in the School of Education at American University. His expertise includes teacher improvement through collaboration and professional development, as well as how schools and teachers use data from economic evaluation and accountability systems to make decisions and improve over time.

Katie Drummond, a Senior Research Scientist at WestEd, has designed and directed research and evaluation projects related to literacy, early childhood, and professional development for over 20 years. 

Lauren Artzi is a senior researcher with expertise in second language education PK-12, intervention research, and multi-tiered systems of support. 

Assessing Math Understanding of Students with Disabilities During a Pandemic

For almost two decades, IES/NCSER has funded Brian Bottge and his teams at the University of Kentucky and the University of Wisconsin-Madison to develop and test the efficacy of a teaching method called Enhanced Anchored Instruction (EAI), which helps low-achieving middle school students with math disabilities develop their problem-solving skills by solving meaningful problems anchored in real-world scenarios. The research findings support the efficacy of EAI, especially for students with math disabilities. Most recently, Bottge and his team have been researching innovative forms of assessment that more adequately capture what students with disabilities know, both conceptually and procedurally, when solving math problems. With supplemental funding, IES/NCSER extended Dr. Bottge’s latest grant to test the use of oral assessment to measure student knowledge and compare it with the knowledge demonstrated on a pencil-and-paper test. The COVID-19 pandemic introduced added challenges to this work when schools closed and students shifted to online education.

Below we share a recent conversation with Dr. Bottge about the experience of conducting research during a pandemic and what he and his team were still able to learn about the value of oral assessment in mathematics for students with disabilities.

What changes did you observe in the intervention implementation by teachers due to the COVID-related shift to online learning?


The shift to online learning created changes in class size and structure. For 38 days (22 days in the classroom, 16 days online through a virtual meeting platform), the middle school special education teacher first taught concepts through a widely used video-based anchored problem, the Kim’s Komet episode of the Jasper Project, in which characters compete in a “Grand Pentathlon.” The teacher then engaged the students in a hands-on application of the concepts by running a live Grand Pentathlon. In the Grand Pentathlon, students make their own cars, race them on a full-size ramp, time them at various release points on the ramp, and graph the information to estimate the speed of the cars. The purpose of both units was to help students develop their informal understanding of pre-algebraic concepts such as linear function, line of best fit, variables, rate of change (slope), reliability, and measurement error. Midway through the study, in-person instruction was suspended and moved online. Instead of working with groups of three to four students in the resource room throughout the day, the teacher provided online instruction to 14 students at one time and scheduled one-on-one sessions with students who needed extra help.

What challenges did you observe in the students interacting with the activities and their learning once they shifted to online learning?

All students had access to a computer at home and they were able to use the online platform without much confusion because they had used it in other classes. The screen share feature enabled students to interact with much of the curriculum by viewing the activities, listening to the teacher, and responding to questions, although they could not fully participate in the hands-on part of the lessons. Class attendance and student behavior were unexpectedly positive during the days when students were online. For example, one student had displayed frequent behavioral outbursts in school but became a positive and contributing member of the online class. The ability to mute mics in the platform gave the teacher the option of allowing only one student to talk at a time.

Were students still able to participate in the hands-on activities that are part of the intervention?

For the hands-on activities related to the Grand Pentathlon competition, the teacher taught online and a research staff member manipulated the cars, track, and electronic timers from campus. Students watched their computer screens waiting for their turn to time their cars over the length of the straightaway. The staff member handled each student’s cars and one by one released them from the height on the ramp as indicated by each student. After students had recorded the times, the teacher asked students to calculate and share the speeds of their cars for each time trial height.
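The calculation the students shared is simple arithmetic: speed equals distance divided by time. Below is a minimal Python sketch with a made-up straightaway length and trial times, rounding to the nearest tenth as item 4 in Figure 1 asks; the specific numbers are illustrative, not taken from the study.

```python
# Hypothetical Grand Pentathlon time trials: speed = distance / time,
# rounded to the nearest tenth (as in item 4 of Figure 1).
# The straightaway length and elapsed times below are made up for illustration.
straightaway_feet = 20.0

trial_times_seconds = {   # release point on the ramp -> elapsed time (hypothetical)
    "release point 1": 1.8,
    "release point 2": 1.4,
    "release point 3": 1.1,
}

for release_point, seconds in trial_times_seconds.items():
    speed_fps = round(straightaway_feet / seconds, 1)  # feet per second
    print(f"{release_point}: {speed_fps} feet per second")
```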

Do you have any other observations about the impact of COVID-19 on your intervention implementation?

One of the most interesting observations was parent participation in the lessons. Several parents went beyond simply monitoring how their child was doing during the units to actively working out the problems. Some were surprised by the difficulty level of the math problems. One mother jokingly remarked, “I thought the math they were going to do was as easy as 5 + 5 = 10. The next time my son might have to be the parent and I might have to be the student. You all make the kids think and I like that.”

When COVID-19 shut down your participating schools, how were you able to adjust your data collection to continue with your research?

We used the same problem-solving test that we have administered in several previous studies (Figure 1 shows two of the items). On Day 1 of the study (pre-COVID), students took the math pretest in their resource rooms with pencil and paper. Due to COVID-19 school closures, we mailed the posttest and test administration instructions to student homes. On the scheduled testing day during an online class session, students removed the test from the envelope and followed directions for answering the test questions while we observed remotely. On Days 2 and 3 of the study (pre-COVID), an oral examiner (OE) pretested individual students in person. The OE asked the student questions, prompting the student to describe the overall problem, identify the information needed for solving the problem, indicate how the information related to their problem-solving plan, and provide an answer. Due to COVID-19, students took the oral posttests online. The teacher set up a breakout room in the platform where the OE conducted the oral assessments and a second member of the research team took notes.

Figure 1. Sample Items from the Problem-Solving Test. The first item shows a graph of two running paths with the text, "3. The total distance covered by two runners is shown in the graph below. a. How much time did it take runner 1 to go 1 mile? b. About how much time after the start of the race did one runner pass the other?" The second item shows a marble at the top of a ramp with the question, "What is the speed of a marble (feet per second) let go from the top of the ramp? (Round your answer to the nearest tenth.)"

During the testing sessions, the OE projected each item on the students’ computer screens. Then she asked the student to read the problem aloud and describe how to solve it. The OE used the same problem-solving prompts as were used on the pretests. For problems that involved graphs or charts, the OE used the editing tools to make notations on the screen as the students directed. One challenge was that oral testing online made it more difficult to monitor behavior and keep students on task. For example, sometimes students became distracted and talked to other people in their house.

What were the results of this study of oral assessment in mathematics for students with disabilities?

Our results suggest that allowing students to describe their understanding of problems in multiple ways added depth and detail to their answers. We learned from the oral assessment that most students knew how to transfer the data from the table to an approximate location on the graph; however, there was a lack of precision due to a weak understanding of decimals. For item 4 in Figure 1, the use of decimals confused students who had not had much exposure to decimals prior to or during the study. We also found that graphics that were meant to help students understand the text-based items were in some cases misleading. The representation in item 4 differed from the actual ramp and model car activity students experienced virtually. We have used this math test several times in our research and regrettably had no idea that elements of the graphics contributed to misunderstanding.

Unfortunately, our findings suggest that the changes made in response to COVID-19 may have depressed student understanding. Performance on two items (including item 4 in Figure 1) that assessed the main points of the intervention was disappointing compared to results from prior studies. The increase in class size from 3–4 students to 14 after the shift to online learning may have reduced the opportunity for repetition and practice. Students also had fewer opportunities to participate in the hands-on activities and in conversations about their thinking with other students.

We acknowledge the limitations of this small pilot study comparing students’ knowledge as assessed in a pencil-and-paper format with their knowledge as assessed orally. We are optimistic about the potential of oral assessments to reveal the problem-solving insights of students with math disabilities. The information gained from oral assessment is valuable if teachers use it to individualize their instruction. As we learned, oral assessment can also point to areas where graphics or other information are misleading. More research is needed to determine whether the value of oral assessment for students with math disabilities justifies the additional time it adds to data collection. This experience highlights some of the positive experiences of students learning virtually at home during COVID-19, as well as some of the challenges and risks of reduced outcomes from these virtual learning experiences, especially for students with disabilities.

This blog was written by Sarah Brasiel, program officer for NCSER’s Science, Technology, Engineering, and Math program.

Cost Analysis in Practice: Resources for Cost Analysis Studies

IES supports rigorous research that can provide scientific evidence on how best to address our nation’s most pressing education needs. As part of the Standards for Excellence in Education Research (SEER) principles, IES-funded researchers are encouraged, and in some cases required, to conduct a cost analysis for their projects with the intended goal of supporting education agencies’ decision-making around the adoption of programs, policies, or practices. 

 

The Cost Analysis in Practice (CAP) Project is a 3-year initiative funded by IES to support researchers and practitioners who are planning or conducting a cost analysis of educational programs and practices. This support includes the following freely available resources.

  • Resources developed by the CAP Project
    • Introductory resources on cost analysis, including Standards and Guidelines 1.1, an infographic, a video lecture, and FAQs.
    • Tools for planning your cost analysis, collecting and analyzing cost data, and reporting your results.
    • A Help Desk for you to submit inquiries about conducting a cost analysis with a response from a member of the CAP Project Team within two business days.
  • Other resources recommended by the CAP Project
    • Background materials on cost analysis
    • Guidance on carrying out a cost analysis
    • Standards for the Economic Evaluation of Educational and Social Programs
    • Cost analysis software

 

The CAP Project is also involved in longer-term collaborations with IES-funded evaluation projects to better understand their cost analysis needs. As part of this work, the CAP Project will be producing a set of three blogs to discuss practical details regarding cost studies based on its collaboration with a replication project evaluating an intervention that integrates literacy instruction into the teaching of American history. These blogs will discuss the following:

  • Common cost analysis challenges that researchers encounter and recommendations to address them
  • The development of a timeline resource for planning a cost study
  • Data collection for a cost study

 

The CAP Project is interested in your feedback on any of the CAP Project resources and welcomes suggestions for additional resources to support cost analysis. If you have any feedback, please fill out a suggestion form at the bottom of the Resources web page.

CTE Research Is Flourishing at IES!

Since its inception in 2017, the CTE portfolio in the National Center for Education Research (NCER) at IES has grown to 11 research grants and a research network! Several other CTE-related grants have been funded under other topics, such as “Postsecondary/Adult Education” and “Improving Education Systems” in the education research grants program, and in other grant programs such as “Using SLDS to Support State Policymaking.” Two CTE-related grants under the latter program were awarded in FY21—

The newest grants funded in FY21 in the CTE topic of the Education Research Grants program include—

As a causal impact study, the last project (on Virtual Enterprises) has been invited to join NCER’s CTE Research Network as its sixth and final member. Funded in 2018 to expand the evidence base for CTE, the CTE Research Network (led by PI Kathy Hughes at the American Institutes for Research) includes five other CTE impact studies (one project’s interim report by MDRC was recently reviewed by the What Works Clearinghouse and was found to meet standards without reservations). You can read more about the network’s mission and each of its member projects here.  

On AIR’s CTE Research Network website, you can find several new resources and reports, such as: 

The CTE Research Network has also been conducting training, including workshops in causal design for CTE researchers and online modules on data and research for CTE practitioners, shared widely with the field by a Network Lead partner, the Association for Career and Technical Education (ACTE). 

Last but certainly not least, if you are interested in getting your CTE project funded by IES, see the new FY22 research grant opportunities on the IES funding page. To apply to the CTE topic in the Education Research Grants program specifically, click on the PDF Request for Applications (ALN 84.305A). Contact Corinne Alfeld with any questions you might have.


Written by Corinne Alfeld (Corinne.Alfeld@ed.gov), NCER Program Officer