
Data analytics have dramatically transformed many sports. For example, analytics in basketball have demonstrated that three-point shots are typically better than mid-range two-point shots despite being less likely to go in the hoop: the higher value of the three-point shot more than compensates for its lower success rate. Teams have changed their shot selection to reflect this understanding, with the number of three-point attempts rising dramatically and a corresponding decline in mid-range two-point shots. The shot chart below (showing each made shot with an O and each missed shot with an X) is typical, with most shots close to the hoop or just outside the three-point line. And the analytics have gone further, identifying the offensive plays that produce the best shots as well as the defensive plays that take away those shots.
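The expected-value logic behind this claim can be made concrete with a few lines of arithmetic. The shooting percentages below are hypothetical, chosen to be roughly in line with typical NBA averages, not figures from the original analysis:

```python
# Hypothetical make rates, roughly in line with league-wide averages.
three_point_pct = 0.36   # assumed make rate on three-point attempts
mid_range_pct = 0.42     # assumed make rate on mid-range two-point attempts

# Expected points per attempt = point value x probability of making the shot.
ev_three = 3 * three_point_pct
ev_mid = 2 * mid_range_pct

print(f"Expected points per three-point attempt: {ev_three:.2f}")  # 1.08
print(f"Expected points per mid-range attempt:   {ev_mid:.2f}")   # 0.84
```

Even though the three misses more often, it produces more points per attempt on average, which is why shot selection has shifted so dramatically.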

In short, data analytics have identified the actions that a player can take to maximize the probability of a favorable outcome.
Analytics have also changed the narrative about games. Coaches (including the coach of my beloved Boston Celtics) now routinely talk in postgame interviews about "making the right play"--even if the play did not produce a win. Consider: the world's best three-point shooters miss more than they make, which means that the right play often fails to produce the desired outcome. Smart coaches recognize when their team made the right play even when they lose, because consistently making the right decision increases the probability of overall success.
How is this relevant to school accountability? It serves as a reminder that student outcomes are not entirely under the control of schools. Schools can affect outcomes, of course, but outcomes are also shaped by family and neighborhood factors. Even in a school where external factors depress student outcomes, state education agencies can still expect schools to "make the right play."
As I've argued in this space before, accountability doesn't have to be exclusively about student outcomes (or a school's impact on those outcomes). It can also measure what schools are doing to promote better student outcomes. A comprehensive and robust accountability system would include accountability for educational processes alongside outcomes and impacts (as I described in School Administrator magazine). Such a system recognizes that even when student outcomes can't be directly attributed to a school's performance, we can still expect schools to create the conditions for effective teaching and rich learning--to make the right play.
Identifying "the right play" in education is harder than in basketball.
In sports, analytics serve a well-defined, straightforward goal: winning the game. In contrast, teaching and learning are extraordinarily complex, with goals that are broadly defined and hard to measure. The right play is not always obvious; there is nothing as simple as a shot chart to guide educators.
Nonetheless, research has identified features of the school environment and classroom instruction that correlate with improved student outcomes. We may not have a definitive playbook, but we know enough to identify well-coached schools and effective classrooms. Most states and districts have clear standards for effective instruction, and decades of research have shown what a constructive learning environment looks like. As I describe below, we can measure these things.
Agencies can measure school processes through climate surveys, inspections, and administrative data analysis.
School climate surveys
A handful of states (including Maryland) include processes in accountability by surveying students and staff about school climate. (Some also survey parents, but parental response rates are typically low, undermining the validity of their results.)
Many districts and schools administer climate surveys for diagnostic purposes, creating transparency that can foster accountability even if no stakes are attached to the measures. Research (including a study conducted in Pittsburgh by REL Mid-Atlantic) has shown that student survey responses at the classroom level correlate with other measures of instructional quality. At the school level, climate measures derived from teacher and school surveys have likewise been shown to be related to improvements in student outcomes.
School inspections
Some school districts and state agencies--emulating a practice used in England for over a century--assess school processes through inspections. School inspections involve classroom observations, interviews or focus groups, and document reviews.
State education agencies in Vermont and Maryland conduct school inspections, as do many district central offices. New York City, for example, has a highly detailed School Quality Review process that begins with the collection of documents including a self-evaluation completed by school staff. Reviewers work with school leaders to co-create a schedule for a daylong school visit that includes meetings with school leaders, teachers, and students; classroom observations; and the examination of student work. The inspection is guided by a detailed rubric of indicators and sub-indicators addressing the instructional core (including curriculum, pedagogy, and assessment), the school's culture (including the learning environment and high expectations), and systems for improvement (including resource use, goals and plans, and the support and supervision of teachers). The reviewer gives the school a rating on each indicator that ranges from underdeveloped to well developed.
Administrative data analysis
Existing administrative data systems can also provide useful information on school processes. Some states (such as Delaware) scrutinize data on school discipline practices, flagging schools that suspend students at high rates and schools that disproportionately suspend students of color or students with disabilities. While suspension records can highlight the excessive use of exclusionary discipline, learning management system records can illuminate aspects of student engagement. Since the pandemic moved school assignments online, districts now have enormous amounts of data on the extent to which students are completing assignments (or not). In Pittsburgh, we analyzed data on assignment completion to examine trends in student engagement; districts across the country could do the same.
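The flagging analysis described above is computationally simple. A minimal sketch follows; the school names, enrollment counts, and the 10 percent threshold are all hypothetical illustrations, not Delaware's actual data or criteria:

```python
# Hypothetical discipline records: one dict per school.
schools = [
    {"name": "School A", "enrolled": 500, "suspended": 60},
    {"name": "School B", "enrolled": 400, "suspended": 12},
    {"name": "School C", "enrolled": 300, "suspended": 45},
]

# Assumed flag threshold: more than 10% of students suspended.
THRESHOLD = 0.10

# Flag any school whose suspension rate exceeds the threshold.
flagged = [
    s["name"]
    for s in schools
    if s["suspended"] / s["enrolled"] > THRESHOLD
]

print(flagged)  # → ['School A', 'School C']
```

A fuller version would compute the same rates separately by student subgroup to surface disproportionality, but the core operation is just this: a rate, a threshold, and a list of schools that warrant a closer look.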
Process measures, alongside outcomes and impacts, add value to school performance frameworks.
In summary, districts and state agencies have various tools to assess whether schools are making the right play in terms of instruction, climate, or disciplinary action. And these process measures can complement the measures of outcomes and impacts that are more routinely included in accountability systems in two important ways.
First, process measures are diagnostically crucial. Student outcome measures (such as proficiency rates) can tell a state agency that a school's students need help; impact measures (such as student growth measures) can go a step further inferentially, telling a state agency that outcomes are attributable to the performance of the school. But neither outcome measures nor impact measures provide any information about why a school is not performing well, and therefore they offer no guidance about what should be done to improve the school's performance.
To return to our sports analogy: Imagine trying to coach a basketball team based only on the box score from the previous game, without actually watching the game. It won't do much good to tell your players to "score more points." Nor would it do much good to tell a school staff to "teach better."
Process measures are a way to watch the game and examine shot charts to boot. They can begin to fill the information gap left by outcome and impact measures. Surveys, inspections, and administrative data can reveal whether teachers are struggling to provide high-quality instruction, shed light on school leadership practices, and illuminate student engagement. That kind of process data might or might not be formally incorporated into accountability metrics, but either way, it is needed to inform school improvement.
The second reason that process measures are useful is that they can honor and promote a richer, more holistic understanding of school performance. For this, there is no sports analogy. In basketball (or any other sport), there is a single outcome that is explicitly defined by the rules of the game. In schooling, measures of student outcomes are imperfect proxies for broader and richer goals. Standardized tests are useful, but even the best of them don't measure everything that schools are trying to teach and that kids need to learn. The skills, knowledge, and attitudes that contribute to economic success, personal well-being, and effective citizenship are unlikely ever to be fully measurable.
We can't measure those outcomes directly, but we can try to measure whether the school is conducive to them--with a safe and respectful learning environment, a culture of high expectations among staff and students alike, instructional practices informed by evidence, a commitment to inclusion over exclusion, and a professional ethic promoting continuous improvement. That's a school that is making the right play, even if we'll never know the full scope of student outcomes that result.