Researchers are, by nature, creative. They devise new interventions, tools, and methods as they try to identify ways to improve learning outcomes across the life course. But there is one crucial area in which their creativity has complicated our ability to identify what works: the failure to use commonly accepted and well-understood outcome measures.
Researchers develop their own measures for many reasons. For example, available measures may not cover an aspect of a learner's experience that is the target of an intervention, or they may not be sensitive enough to pick up subtle changes in knowledge or behavior. But without common measures, we have little ability to look across interventions to determine what works and what is most cost effective.
Bob Slavin has written extensively about this issue, critiquing the use of researcher/developer-created measures as the sole basis for making statements about "what works." His research has shown that effect sizes for interventions using bespoke measures are usually higher (often far higher) than when outcomes are measured using commonly accepted measures. These inflated claims can heighten expectations about the effectiveness of interventions and inhibit our ability to compare true effectiveness across interventions, each of which often uses its own non-standard measures. Educators and policymakers attempting to judge the relative impact and cost effectiveness of any given intervention are out of luck.
IES will be increasing its support for the use of common measures. As one of the world's largest investors in education research, IES needs to push the field forward, fulfilling its mission of identifying what works for whom and under what circumstances. We must also acknowledge that for school leaders making difficult spending decisions, the question is not just "what works?" but "what gives me the best return on my choice of programs?" Neither question can be addressed without common measures.
To encourage the use of common measures, we must begin by identifying the best measures already in wide use. We are pursuing this in several ways. First, we are searching the What Works Clearinghouse to identify the measures most often found in studies catalogued in its large library. Second, we are identifying the measures that are widely used across the close to 2,000 research grants we have funded over the years. Third, we are forming a panel of experts to help identify a set of common metrics defined by grade/subject band (for example, early reading, middle school algebra). Fourth, we are closely following Susanna Loeb's work at EdInstruments.com and looking for other compendia similar to her growing library of metrics. Finally, we will be soliciting input from the field (including your responses to this post) about how to proceed.
Our goal is to identify measures that (a) are widely enough used that they have name recognition and will not impose additional tests on too many schools, districts, or students; (b) have clear implications, so teachers and schools can use the results to better inform instruction and the selection of interventions that work; and (c) can give policymakers and regulators a better sense of the relative payoff of interventions.
Since IES's strongest tool for shaping education research is its RFAs, we will be piloting the use of a set of common measures in next year's RFAs (not the ones being released over the next month). While researchers and developers will still be free to create and use their own measures, we will not consider any application that does not also include the recommended common measure(s) for the relevant grade/subject bands. This will apply to development, efficacy, and replication grants.
IES is in the business of identifying what works for whom under what conditions. Use of common measures will help clear the path for answering these questions.
As always, I encourage you to write to me with your comments, suggestions, and ideas on this topic: email@example.com.