IES Blog

Institute of Education Sciences

Calculating the Costs of School Internet Access

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

Internet access has become an indispensable element of many education and social programs. However, researchers conducting cost analyses of education programs often don't capture these costs because of a lack of publicly available information on what school districts pay for internet service. EducationSuperHighway, a nonprofit organization, now collects information about the internet bandwidth and monthly internet costs for each school district in the United States. The information is published on the Connect K-12 website. While Connect K-12 provides a median cost per Mbps in schools nationwide, its applicability in cost analyses is limited because the per-student cost varies widely with school district size.

As customers, we often save money by buying groceries in bulk. One of the reasons that larger sizes offer better value is that the ingredient we consume is sometimes only a small part of the total cost of the whole product; the rest of the cost goes into the process that makes the product accessible, such as packaging, transportation, and rent.

The same is true of the internet. Making internet available in schools requires facilities and equipment including, but not limited to, web servers, ethernet cables, and Wi-Fi routers. Large school districts, which are often in urban locations, usually pay much less per student than small districts, which are often in rural areas. Costs of infrastructure adaptations need to be considered when new equipment and facilities are required for high-speed internet delivery. Fiber-optic and satellite internet services have high infrastructure costs. Old-fashioned DSL internet uses existing phone lines and thus carries less overhead, but it is much slower, often making it difficult to meet the Federal Communications Commission's current recommended bandwidth of 1 Mbps per student.

In short, there is no one-price-for-all when it comes to costs of school internet access. To tackle this challenge, we used the data available on Connect K-12 for districts in each of the 50 U.S. states to calculate some useful metrics for cost analyses. First, we categorized the districts with internet access according to MDR's definition of small, medium, and large school districts (Small: 0-2,499 students; Medium: 2,500-9,999 students; Large: 10,000+ students). For each category, we calculated the following metrics, which are shown in Table 1:

  1. median cost per student per year
  2. median cost per student per hour

 

Table 1: Internet Access Costs

| District size (# of students) | Median Mbps per student | Median cost per Mbps per month | Median cost per student per month | Cost per student per year | Cost per student per hour |
|---|---|---|---|---|---|
| Small (0-2,499) | 1.40 | $1.75 | $2.45 | $29.40 | $0.02 |
| Medium (2,500-9,999) | 0.89 | $0.95 | $0.85 | $10.15 | $0.007 |
| Large (10,000+) | 0.83 | $0.61 | $0.50 | $6.03 | $0.004 |
| National median | 1.23 | $1.36 | $1.67 | $20.07 | $0.014 |

 

Note: Cost per student per hour is computed based on the assumption that schools are open for 1,440 hours (40 hours per week for 36 weeks) per year; for example, for a small district the cost per student per hour is $29.40/1,440 ≈ $0.02. See methods here.
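The note's derivation can be sketched in a few lines. This is an illustration only: the variable names are ours, and the input figures are the published small-district medians from Table 1.

```python
# Sketch of how the Table 1 metrics follow from the Connect K-12 medians.
# Figures are the published medians for small districts; names are illustrative.

MONTHS_PER_YEAR = 12
SCHOOL_HOURS_PER_YEAR = 40 * 36  # 1,440 hours: 40 hours/week for 36 weeks

median_mbps_per_student = 1.40     # small districts
median_cost_per_mbps_month = 1.75  # dollars

cost_per_student_month = median_mbps_per_student * median_cost_per_mbps_month
cost_per_student_year = cost_per_student_month * MONTHS_PER_YEAR
cost_per_student_hour = cost_per_student_year / SCHOOL_HOURS_PER_YEAR

print(f"${cost_per_student_month:.2f}/student/month")  # $2.45
print(f"${cost_per_student_year:.2f}/student/year")    # $29.40
print(f"${cost_per_student_hour:.3f}/student/hour")    # $0.020
```

The medium, large, and national rows follow the same arithmetic; small rounding differences in the published monthly figures explain why, for example, $0.85 × 12 appears as $10.15 rather than $10.20.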

 

Here’s an example of how you might determine an appropriate portion of the costs to attribute to a specific program or practice:  

Sunnyvale School is in a school district of 4,000 students. It offers an afterschool program in the library in which 25 students work online with remote math tutors. The program runs for 1.5 hours per day on 4 days per week for 36 weeks. Internet costs would be:

 

1.5 hours × 4 days × 36 weeks × 25 students × $0.007 = $37.80
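The arithmetic above generalizes to a small helper for attributing internet costs to any program. A minimal sketch, with a function name and signature of our own invention:

```python
def program_internet_cost(hours_per_day, days_per_week, weeks,
                          n_students, cost_per_student_hour):
    """Internet cost attributable to a program: total student-hours of use
    multiplied by the district's per-student-per-hour internet cost."""
    student_hours = hours_per_day * days_per_week * weeks * n_students
    return student_hours * cost_per_student_hour

# Sunnyvale example: medium-district rate of $0.007 per student per hour
cost = program_internet_cost(1.5, 4, 36, 25, 0.007)
print(f"${cost:.2f}")  # $37.80
```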

 

The cost per student per hour might seem tiny, but the totals add up. Take New York City Public Schools: the cost per Mbps per month is only $0.13, yet the district pays $26,000 each month for internet. For one education program or intervention, internet costs may represent only a small fraction of the overall costs and may hardly seem worth estimating in comparison to personnel salaries and fringe benefits. However, it is critical for a rigorous cost analysis to identify all the resources needed to implement a program.


Yuan Chang is a research assistant in the Department of Education Policy & Social Analysis at Teachers College, Columbia University and a researcher on the CAP Project.

 Anna Kushner is a doctoral student in the Department of Education Policy & Social Analysis at Teachers College, Columbia University and a researcher for the CAP Project.

Using Cost Analysis to Inform Replicating or Scaling Education Interventions

A key challenge when conducting cost analysis as part of an efficacy study is producing information that can be useful for addressing questions related to replicability or scale. When the study is a follow up conducted many years after the implementation, the need to collect data retrospectively introduces additional complexities. As part of a recent follow-up efficacy study, Maya Escueta and Tyler Watts of Teachers College, Columbia University worked with the IES-funded Cost Analysis in Practice (CAP) project team to plan a cost analysis that would meet these challenges. This guest blog describes their process and lessons learned and provides resources for other researchers.

What was the intervention for which you estimated costs retrospectively?

We estimated the costs of a pre-kindergarten intervention, the Chicago School Readiness Project (CSRP), which was implemented in nine Head Start Centers in Chicago, Illinois for two cohorts of students in 2004-5 and 2005-6. CSRP was an early childhood intervention that targeted child self-regulation by attempting to overhaul teacher approaches to behavioral management. The intervention placed licensed mental health clinicians in classrooms, and these clinicians worked closely with teachers to reduce stress and improve the classroom climate. CSRP showed signs of initial efficacy on measures of preschool behavioral and cognitive outcomes, but more recent results from the follow-up study showed mainly null effects for the participants in late adolescence.

The IES research centers require a cost study for efficacy projects, so we faced the distinct challenge of conducting a cost analysis for an intervention nearly 20 years after it was implemented. Our goal was to render the cost estimates useful for education decision-makers today to help them consider whether to replicate or scale such an intervention in their own context.

What did you learn during this process?

When enumerating costs and considering how to implement an intervention in another context or at scale, we learned four distinct lessons.

1. Consider how best to scope the analysis to render the findings both credible and relevant given data limitations.

In our case, because we were conducting the analysis 20 years after the intervention was originally implemented, the limited availability of reliable data—a common challenge in retrospective cost analysis—posed two challenges. We had to consider the data we could reasonably obtain and what that would mean for the type of analysis we could credibly conduct. First, because no comprehensive cost analysis was conducted at the time of the intervention’s original implementation (to our knowledge), we could not accurately collect costs on the counterfactual condition. Second, we also lacked reliable measures of key outcomes over time, such as grade retention or special education placement that would be required for calculating a complete cost-benefit analysis. This meant we were limited in both the costs and the effects we could reliably estimate. Due to these data limitations, we could only credibly conduct a cost analysis, rather than a cost-effectiveness analysis or cost-benefit analysis, which generally produce more useful evidence to aid in decisions about replication or scale.

Because of this limitation, and to provide useful information for decision-makers who are considering implementing similar interventions in their current contexts, we decided to develop a likely present-day implementation scenario informed by the historical information we collected from the original implementation. We’ll expand on how we did this and the decisions we made in the following lessons.

2. Consider how to choose prices to improve comparability and to account for availability of ingredients at scale.

We used national average prices for all ingredients in this cost analysis to make the results more comparable to other cost analyses of similar interventions that also use national average prices. This involved some careful thought about how to price ingredients that were unique to the time or context of the original implementation, specific to the intervention, or in low supply. For example, when identifying prices for personnel, we either used current prices (national average salaries plus fringe benefits) for personnel with equivalent professional experience, or we inflation-adjusted the original consulting fees charged by personnel in highly specialized roles. This approach assumes that personnel who are qualified to serve in specialized roles are available on a wider scale, which may not always be the case.

In the original implementation of CSRP, spaces were rented for teacher behavior management workshops, stress reduction workshops, and initial training of the mental health clinicians. For our cost analysis, we assumed that using available school facilities would be more likely and tenable when implementing CSRP at large scale. Instead of using rental prices, we valued the physical space needed to implement CSRP by using amortized construction costs of school facilities (for example, cafeteria/gym/classroom). We obtained these from the CAP Project's Cost of Facilities Calculator.

3. Consider how to account for ingredients that may not be possible to scale.

Some resources are simply not available in similar quality at large scale. For example, the Principal Investigator (PI) for the original evaluation oversaw the implementation of the intervention, was highly invested in the fidelity of implementation, was willing to dedicate significant time, and created a culture that was supportive of the pre-K instructors to encourage buy-in for the intervention. In such cases, it is worth considering what the PI's equivalent role would be in a non-research setting and how scalable this scenario would be. A potential proxy for the PI in this case may be a school principal or leader, but how much time could this person reasonably dedicate, and how similar would their skillset be?

4. Consider how implementation might work in institutional contexts required for scale.

Institutional settings might necessarily change when taking an intervention to scale. In larger-scale settings, there may be other ways of implementing the intervention that change the quantities of personnel and other resources required. For example, a pre-K intervention such as CSRP at larger scale may need to be implemented in various types of pre-K sites, such as public schools or community-based centers in addition to Head Start centers. In such cases, the student/teacher ratio may vary across institutional contexts, which has implications for the per-student cost. If delivered with a higher student/teacher ratio than in the original implementation, the intervention may be less costly, but it may also be less impactful. This highlights the importance of the institutional setting in which implementation occurs, and how it can affect the use and costs of resources.

How can other researchers get assistance in conducting a cost analysis?

In conducting this analysis, we found the following CAP Project tools to be especially helpful (found on the CAP Resources page and the CAP Project homepage):

  • The Cost of Facilities Calculator: A tool that helps estimate the cost of physical spaces (facilities).
  • Cost Analysis Templates: Semi-automated Excel templates that support cost analysis calculations.
  • CAP Project Help Desk: Real-time guidance from a member of the CAP Project team. You will receive help in troubleshooting challenging issues with experts who can share specific resources. Submit a help desk request by visiting this page.

Maya Escueta is a Postdoctoral Associate in the Center for Child and Family Policy at Duke University where she researches the effects of poverty alleviation policies and parenting interventions on the early childhood home environment.

Tyler Watts is an Assistant Professor in the Department of Human Development at Teachers College, Columbia University. His research focuses on the connections between early childhood education and long-term outcomes.

For questions about the CSRP project, please contact the NCER program officer, Corinne.Alfeld@ed.gov. For questions about the CAP project, contact Allen.Ruby@ed.gov.

 

Unexpected Value from Conducting Value-Added Analysis

This is the second of a two-part blog series from an IES-funded partnership project. The first part described how the process of cost-effectiveness analysis (CEA) provided useful information that led to changes in practice for a school nurse program and restorative practices at Jefferson County Public Schools (JCPS) in Louisville, KY. In this guest blog, the team discusses how the process of conducting value-added analysis provided useful program information over and above the information they obtained via CEA or academic return on investment (AROI).

Since we know you loved the last one, it’s time for another fun thought experiment! Imagine that you have just spent more than a year gathering, cleaning, assembling, and analyzing a dataset of school investments for what you hope will be an innovative approach to program evaluation. Now imagine the only thing your results tell you is that your proposed new application of value-added analysis (VAA) is not well-suited for these particular data. What would you do? Well, sit back and enjoy another round of schadenfreude at our expense. Once again, our team of practitioners from JCPS and researchers from Teachers College, Columbia University and American University found itself in a very unenviable position.

We had initially planned to use the rigorous VAA (and CEA) to evaluate the validity of a practical measure of academic return on investment for improving school budget decisions on existing school- and district-level investments. Although the three methods—VAA, CEA, and AROI—vary in rigor and address slightly different research questions, we expected that their results would be both complementary and comparable for informing decisions to reinvest, discontinue, expand/contract, or make other implementation changes to an investment. To that end, we set out to test our hypothesis by comparing results from each method across a broad spectrum of investments. Fortunately, as with CEA, the process of conducting VAA provided additional, useful program information that we would not have otherwise obtained via CEA or AROI. This unexpected information, combined with what we’d learned about implementation from our CEAs, led to even more changes in practice at JCPS.

Data Collection for VAA Unearthed Inadequate Record-keeping, Mission Drift, and More

Our AROI approach uses existing student and budget data from JCPS’s online Investment Tracking System (ITS) to compute comparative metrics for informing budget decisions. Budget request proposals submitted by JCPS administrators through ITS include information on target populations, goals, measures, and the budget cycle (1-5 years) needed to achieve the goals. For VAA, we needed similar, but more precise, data to estimate the relative effects of specific interventions on student outcomes, which required us to contact schools and district departments to gather the necessary information. Our colleagues provided us with sufficient data to conduct VAA. However, during this process, we discovered instances of missing or inadequate participant rosters; mission drift in how requested funds were actually spent; and mismatches between goals, activities, and budget cycles. We suspect that JCPS is not alone in this challenge, so we hope that what follows might be helpful to other districts facing similar scenarios.

More Changes in Practice 

The lessons learned during the school nursing and restorative practice CEAs discussed in the first blog, and the data gaps identified through the VAA process, informed two key developments at JCPS. First, we formalized our existing end-of-cycle investment review process by including summary cards for each end-of-cycle investment item (each program or personnel position in which district funds were invested) indicating where insufficient data (for example, incomplete budget requests or unavailable participation rosters) precluded AROI calculations. We asked specific questions about missing data to elicit additional information and to encourage more diligent documentation in future budget requests. 

Second, we created the Investment Tracking System 2.0 (ITS 2.0), which now requires budget requesters to complete a basic logic model. The resources (inputs) and outcomes in the logic model are auto-populated from information entered earlier in the request process, but requesters must manually enter activities and progress monitoring (outputs). Our goal is to encourage and facilitate development of an explicit theory of change at the outset and continuous evidence-based adjustments throughout the implementation. Mandatory entry fields now prevent requesters from submitting incomplete budget requests. The new system was immediately put into action to track all school-level Elementary and Secondary School Emergency Relief (ESSER)-related budget requests.

Process and Partnership, Redux

Although we agree with the IES Director’s insistence that partnerships between researchers and practitioners should be a means to (eventually) improving student outcomes, our experience shows that change happens slowly in a large district. Yet, we have seen substantial changes as a direct result of our partnership. Perhaps the most important change is the drastic increase in the number of programs, investments, and other initiatives that will be evaluable as a result of formalizing the end-of-cycle review process and creating ITS 2.0. We firmly believe these changes could not have happened apart from our partnership and the freedom our funding afforded us to experiment with new approaches to addressing the challenges we face.   


Stephen M. Leach is a Program Analysis Coordinator at JCPS and PhD Candidate in Educational Psychology Measurement and Evaluation at the University of Louisville.

Dr. Robert Shand is an Assistant Professor at American University.

Dr. Bo Yan is a Research and Evaluation Specialist at JCPS.

Dr. Fiona Hollands is a Senior Researcher at Teachers College, Columbia University.

If you have any questions, please contact Corinne Alfeld (Corinne.Alfeld@ed.gov), IES-NCER Grant Program Officer.

 

Unexpected Benefits of Conducting Cost-Effectiveness Analysis

This is the first of a two-part guest blog series from an IES-funded partnership project between Teachers College, Columbia University, American University, and Jefferson County Public Schools in Kentucky. The purpose of the project is to explore academic return on investment (AROI) as a metric for improving decision-making around education programs that lead to improvements in student education outcomes. In this guest blog entry, the team showcases cost analysis as an integral part of education program evaluation.

Here’s a fun thought experiment (well, at least fun for researcher-types). Imagine you just discovered that two of your district partner’s firmly entrenched initiatives are not cost-effective. What would you do? 

Now, would your answer change if we told you that the findings came amidst a global pandemic and widespread social unrest over justice reform, and that those two key initiatives were a school nurse program and restorative practices? That’s the exact situation we faced last year in Jefferson County Public Schools (JCPS) in Louisville, KY. Fortunately, the process of conducting rigorous cost analyses of these programs unearthed critical evidence to help explain mostly null impact findings and inform very real changes in practice at JCPS.

Cost-Effectiveness Analysis Revealed Missing Program Components

Our team of researchers from Teachers College, Columbia University and American University, and practitioners from JCPS had originally planned to use cost-effectiveness analysis (CEA) to evaluate the validity of a practical measure of academic return on investment for improving school budget decisions. With the gracious support of JCPS program personnel in executing our CEAs, we obtained a treasure trove of additional quantitative and qualitative cost and implementation data, which proved to be invaluable.

Specifically, for the district's school nurse program, the lack of an explicit theory of change, of standardized evidence-based practices across schools, and of a monitoring plan was identified as a potential explanation for our null impact results. In one of our restorative practices cost interviews, we discovered that a key element of the program, restorative conferences, was not being implemented at all due to time constraints and staffing challenges, which may help explain the disappointing impact results.

Changes in Practice

In theory, our CEA findings indicated that JCPS should find more cost-effective alternatives to school nursing and restorative practices. In reality, however, both programs were greatly expanded: school nursing in response to COVID, and restorative practices because JCPS leadership has committed to moving away from traditional disciplinary practices. Nevertheless, our findings regarding implementation lead us to believe that key changes can lead to improved student outcomes for both.

In response to recommendations from the team, JCPS is developing a training manual for new nurses, a logic model illustrating how specific nursing activities can lead to better outcomes, and a monitoring plan. For restorative practices, while we still have a ways to go, the JCPS team is continuing to work with program personnel to improve implementation.

One encouraging finding from our CEA was that, despite imperfect implementation, suspension rates for Black students were lower in schools that had implemented restorative practices for two years compared to Black students in schools implementing the program for one year. Our hope is that further research will identify the aspects of restorative practices most critical for equitably improving school discipline and climate.

Process and Partnership

Our experience highlights unexpected benefits that can result when researchers and practitioners collaborate on all aspects of cost-effectiveness analysis, from collecting data to applying findings to practice. In fact, we are convinced that the ongoing improvements discussed here would not have been possible apart from the synergistic nature of our partnership. While the JCPS team included seasoned evaluators and brought front-line knowledge of program implementation, information systems, data availability, and district priorities, our research partners brought additional research capacity, methodological expertise, and a critical outsider’s perspective.

Together, we discovered that the process of conducting cost-effectiveness analysis can provide valuable information normally associated with fidelity of implementation studies. Knowledge gained during the cost analysis process helped to explain our less-than-stellar impact results and led to key changes in practice. In the second blog of this series, we’ll share how the process of conducting CEA and value-added analysis led to changes in practice extending well beyond the specific programs we investigated.


Stephen M. Leach is a Program Analysis Coordinator at JCPS and PhD Candidate in Educational Psychology Measurement and Evaluation at the University of Louisville.

Dr. Fiona Hollands is a Senior Researcher at Teachers College, Columbia University.

Dr. Bo Yan is a Research and Evaluation Specialist at JCPS.

Dr. Robert Shand is an Assistant Professor at American University.

If you have any questions, please contact Corinne Alfeld (Corinne.Alfeld@ed.gov), IES-NCER Grant Program Officer.

Data Collection for Cost Analysis in an Efficacy Trial

This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.

In one of our periodic conversations about addressing cost analysis challenges for an efficacy trial, the Cost Analysis in Practice (CAP) Project and Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) teams took on a number of questions related to data collection. The PACT-L cost analysts have a particularly daunting task with over 100 schools spread across multiple districts participating in a social studies and reading comprehension intervention. These schools will be served over the course of three cohorts. Here, we highlight some of the issues discussed and our advice.

Do we need to collect information about resource use in every district in our study?

For an efficacy study, you should collect data from all districts at least for the first cohort to assess the variation in resource use. If there isn’t much variation, then you can justify limiting data collection to a sample for subsequent cohorts.

Do we need to collect data from every school within each district?

Similar to the previous question, you would ideally collect data from every participating school within each district and assess variability across schools. You may be able to justify collecting data from a stratified random sample of schools within each district, stratified on study-relevant characteristics, and presenting a range of costs to reflect differences. You might consider this option if funding for cost analysis is limited. Note that "district" and "school" here reflect one common setup for an educational randomized controlled trial; in other study designs and contexts, other blocking and clustering units can stand in.
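One way to draw such a sample is to group schools by a study-relevant characteristic and sample a fixed fraction within each group. A minimal sketch; the function name, field names, and sampling fraction are ours, purely for illustration:

```python
import random
from collections import defaultdict

def stratified_sample(schools, stratum_key, frac, seed=0):
    """Stratified random sample: group schools by a study-relevant
    characteristic, then draw a fraction of each group at random."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for school in schools:
        groups[school[stratum_key]].append(school)
    sample = []
    for members in groups.values():
        k = max(1, round(frac * len(members)))  # keep at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

# Illustrative roster: 6 small schools and 4 large ones
schools = [{"name": f"School {i}", "size": "small" if i < 6 else "large"}
           for i in range(10)]
picked = stratified_sample(schools, "size", frac=0.5)
print(len(picked))  # 3 small + 2 large = 5
```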

How often should we collect cost data? 

The frequency of data collection depends on the intervention, the length of implementation, and the types of resources ("ingredients") needed. People's time is usually the most important resource in educational interventions, often accounting for 90% of total costs, so that is where you should spend the most effort collecting data. Unfortunately, people are notoriously bad at reporting their time use, so ask for it as often as you can (daily, weekly). Make it as easy as possible for people to respond and offer financial incentives, if possible. For efficacy trials in particular, be sure to collect cost data for each year of implementation so that you accurately capture the resources needed to produce the observed effects.

What’s the best way to collect time use data?

There are a few ways to collect time use data. The PACT-L team has had success with 2-question time logs (see Table 1) administered at the end of each history lesson during the fall quarter, plus a slightly longer 7-question final log (see Table 2).

 

Table 1. Two-question time log. Copyright © 2021 American Institutes for Research.
1. Approximately, how many days did you spend teaching your [NAME OF THE UNIT] unit?  ____ total days
2. Approximately, how many hours of time outside class did you spend on the following activities for [NAME OF UNIT] unit? 

Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)

   a. Developing lesson plans _____ hour(s)
   b. Grading student assignments _____ hour(s)
   c. Developing curricular materials, student assignments, or student assessments _____ hour(s)
   d. Providing additional assistance to students _____ hour(s)
   e. Other activities (e.g., coordinating with other staff; communicating with parents) related to unit _____ hour(s)

 

Table 2. Additional questions for the final log. Copyright © 2021 American Institutes for Research.
3. Just thinking of summer and fall, to prepare for teaching your American History classes, how many hours of professional development or training did you receive so far this year (e.g., trainings, coursework, coaching)? _____ Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
4. So far this year, did each student receive a school-provided textbook (either printed or in a digital form) for this history class? ______Yes     ______No
5. So far this year, did each student receive published materials other than a textbook (e.g., readings, worksheets, activities) for your American history classes? ______Yes     ______No
6. So far this year, what percentage of class time did you use the following materials for your American History classes? Record the average percentage of time you used these materials (must total 100%)
   a. A hardcopy textbook provided by the school _____%
   b. Published materials that were provided to you, other than a textbook (e.g., readings, worksheets, activities) _____%
   c. Other curricular materials that you located/provided yourself _____%
   d. Technology-based curricular materials or software (e.g., books online, online activities) _____%
       Total 100%
7. So far this year, how many hours during a typical week did the following people help you with your American history course? Please answer for all that apply. Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5)
   a. Teaching assistant _____ hours during a typical week
   b. Special education teacher _____ hours during a typical week
   c. English learner teacher _____ hours during a typical week
   d. Principal or assistant principal _____ hours during a typical week
   e. Other administrative staff _____ hours during a typical week
   f. Coach _____ hours during a typical week
   g. Volunteer _____ hours during a typical week

 

They also provided financial incentives. If you cannot use time logs, interviews of a random sample of participants will likely yield more accurate information than surveys of all participants because the interviewer can prompt the interviewee and clarify responses that don’t make sense (see CAP Project Template for Cost Analysis Interview Protocol under Collecting and Analyzing Cost Data). In our experience, participants enjoy interviews about how they spend their time more than trying to enter time estimates in restricted survey questions. There also is good precedent for collecting time use through interviews: the American Time Use Survey is administered by trained interviewers who follow a scripted protocol lasting about 20 minutes.

Does it improve accuracy to collect time use in hours or as a percentage of total time?

Both methods of collecting time use can lead to less than useful estimates, like the teacher whose percentages of time on various activities added up to 233%, or the coach who miraculously spent 200 hours training teachers in one week. Either way, always be clear about the relevant time period. For example, "Over the last 7 days, how many hours did you spend…" or "Of the 40 hours you worked last week, what percentage were spent on…" Mutually exclusive multiple-choice answers can also help ensure reasonable responses. For example, the answer options could be "no time; less than an hour; 1-2 hours; 3-5 hours; more than 5 hours."
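Simple validity checks on returned responses can catch implausible answers like those before analysis begins. A sketch under our own assumptions: the function name, record layout, and 80-hour threshold are illustrative, not part of any survey instrument described above.

```python
def flag_implausible(responses, max_weekly_hours=80):
    """Flag time-use responses that cannot be right: percentage breakdowns
    that don't total ~100%, or weekly hours beyond what one person could
    plausibly work (threshold is an illustrative assumption)."""
    flags = []
    for r in responses:
        if "percentages" in r and abs(sum(r["percentages"]) - 100) > 1:
            flags.append((r["id"], f"percentages total {sum(r['percentages'])}%"))
        if r.get("hours_per_week", 0) > max_weekly_hours:
            flags.append((r["id"], f"{r['hours_per_week']} hours in a week"))
    return flags

responses = [
    {"id": "teacher_1", "percentages": [120, 80, 33]},  # totals 233%
    {"id": "coach_1", "hours_per_week": 200},           # 200 hours in a week
    {"id": "teacher_2", "percentages": [50, 30, 20]},   # plausible
]
print(flag_implausible(responses))  # flags the first two respondents
```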

What about other ingredients besides time?

Because ingredients such as materials and facilities usually represent a smaller share of total costs for educational interventions and are often more stable over time (for example, the number of hours a teacher spends preparing to deliver an intervention may fluctuate from week to week, but classrooms tend to be available for a consistent amount of time each week), the burden of gathering data on these resources is often lower. You can add a few questions about facilities, materials and equipment, and other resources such as parental time or travel to a survey once or twice per year, or better yet to an interview, or better still, to both. One challenge is that even though these resources may have less of an impact on bottom-line costs, they can involve quantities that are more difficult for participants to estimate than their own time, such as the square footage of their office.

If you have additional questions about collecting data for your own cost analysis and would like free technical assistance from the IES-funded CAP Project, submit a request here. The CAP Project team is always game for a new challenge and happy to help other researchers brainstorm data collection strategies that would be appropriate for your analysis.


Robert D. Shand is Assistant Professor in the School of Education at American University

Iliana Brodziak is a senior research analyst at the American Institutes for Research