This blog is part 1 of a three-year anniversary card to myself—and hopefully not an overly self-indulgent one. Here, I look back at some of the high points of the three years I have served as director of IES. In part 2, I will outline some of the challenges IES will be taking on for the remaining three years of my term.
Living by the Standards for Excellence in Education Research
I think perhaps the most important accomplishment of my tenure to date has been the articulation of SEER principles. Indeed, much of what I think IES has accomplished during these last few years can be understood through the lens of SEER.
SEER grew out of my desire to understand better how IES could ensure that the work we support moves the education sciences forward in a way that affects conditions on the ground. I also wanted to understand better how IES could increase the cumulative effects of its investments. As IES staff helped work through strategies to accomplish those goals, it became clear that there were several basic principles to guide us. We did not set out to create a list, but these principles became identifiable touchstones around which we organized much of the work during the last three years and will continue to do so in the future. These principles are embodied in SEER.
Like every parent, I have said to my daughters more than once or twice: "No, I don't love your sister more than I love you. I love all my children the same." By extension, I love all the SEER principles equally, even if some are more "challenging," "difficult," or "complicated" than others. (Parents often have a complex vocabulary when comparing their children.)
Among the SEER principles, we knew that cost analysis was going to be a big lift for the field, but we also knew how important it is to make cost information available. We are still clarifying which types of cost analyses best fit which types of studies and articulating clear standards for guiding cost work. Overall, the field has moved far in a relatively short time.
IES has progressed in making replication core to our work. IES was already beginning to support more replication work before I took office, but I think my description of the status of our science was (and still is) unfortunately on target: "The education sciences don't have a replication crisis, since we don't replicate anything." We are working to fix this with a growing emphasis on replication of findings to accelerate evidence about what works for whom, under what conditions.
We have now run several rounds of systematic replication competitions. However, those first rounds were tied to our traditional approach of long-lived RCTs. In mid-March, IES announced an XPRIZE competition for using digital learning platforms to deliver and replicate experiments. Clearly, many education outcomes cannot be changed in weeks or even months, but many can—and many changes that can be achieved and measured in the short term provide the foundation for longer-term outcomes. I am eager to see how the XPRIZE plays out—and I hope that many platforms will take up the challenge so that we will learn far more about how to deliver rigorous tests of ideas more quickly and across a wide range of demographics and geographies.
Identifying longer-term outcomes is part of the SEER principle "Focus on meaningful outcomes." We launched an initiative to use the Statewide Longitudinal Data Systems (SLDS) grant program to identify such outcomes and recently announced awards to seven states to help them use these data in decision-making. In 2020, NCSER and NCER were also able to use existing grant regulations to fund several projects to conduct long-term follow-up studies of intervention impacts. We will continue to support more work focused on long-term outcomes.
I have great hope for the latest addition to SEER—common measures. We know that one way to register big effect sizes is to assess the effects of an intervention using measures that are "over-aligned" to the intervention itself. Several researchers, including colleagues here at IES' What Works Clearinghouse, have noted that this problem may be particularly acute when researchers design their own measures. Often, the best solution is requiring the use of existing, high-quality measures. These common measures standardize the measurement of outcomes and make it easier to compare effects across studies.
Because we believe in the importance of common, comparable, and high-quality measures, we are exploring how to use our greatest point of leverage—our annual RFAs—to encourage their use. In the meantime, our What Works Clearinghouse team will continue to explore how we can make the best use of effects derived from measures of all sorts.
I wish that "core components" was further along—but, sticking with the analogy to children, everyone develops at their own pace. And sometimes, we tend to neglect those kids who show up on time, do their homework, and eat their broccoli without complaint. I will provide updates on other SEER principles in a later blog (this one is already long).
Progress at the National Center for Education Statistics
I all too often leave NCES out of my updates. NCES is a venerable federal statistical agency that has been hard to move—in part, because NCES already does so much good work. Many of the changes that have taken place in NCES are managerial and not the stuff of blogs or anniversary cards. NCES is a large contracting shop—that is, outside contractors do most of its work. Modernizing their contracting processes is fundamental to the Center's long-term success and has been a major focus over the past three years.
NCES is one of our test beds for moving IES into the modern era of data science. This includes expanding our capacity to merge multiple data sets while protecting privacy—a sine qua non of all data work. For example, NCES has partnered with the Coleridge Initiative to modernize how researchers access our restricted data, making that access easier.
NCES is also leading IES' work in the use and visual display of geocoded data. The Education Demographic and Geographic Estimates (EDGE) program is an exemplar: the program develops data resources about the social and spatial context of local schools and school systems by merging our data with data from the Census Bureau's American Community Survey. A simple indicator illustrates the potential value of data science applied to IES' education data: there are over 25,000 requests for EDGE data every day.
For nearly 30 years, the National Assessment of Educational Progress (NAEP) focused on measuring and reporting the percentage of students reaching the NAEP Proficient achievement level. But this narrow focus, which the No Child Left Behind Act raised to even greater prominence, meant that the growing population of students who performed below the NAEP Basic level did not receive the laser-like attention they needed. Equally important was a recognition that NAEP's basic technology did not allow the nation to adequately measure what its lowest-performing students knew and could do. The transition of NAEP over the last few years from paper-based to digital-based assessments will help shift the focus to the lowest-performing students and will allow more adaptive testing. During the last few years, other technological innovations have laid the foundation for a better—and hopefully less expensive—NAEP that should be more useful to policymakers as they seek to address systematic inequalities in student performance.
Broadening Who and What IES Supports
Over the course of the last year, especially since late 2020, IES has made a strong and ongoing effort to broaden the pool of researchers we support and to study how we can bring more and different universities into our research community. For example, we brought together over a dozen experts to advise us on how to broaden participation in our discretionary grant programs. And we launched an Institute-wide Diversity & Inclusion Council to provide input to IES leadership on how to increase diversity, equity, and inclusion across the entire organization. Our ongoing efforts also include identifying and reducing barriers for researchers, communities, and institutions that IES has not historically funded and systematically examining and diversifying the pool of peer reviewers who largely determine who receives IES research grants.
We also continue to invest in broadening the pipeline of researchers into the field of education research. We currently require all our training programs to recruit participants from diverse backgrounds so that, over time, the field can benefit from new ideas, approaches, and perspectives to address long-standing education issues. In 2016, we launched the Pathways to the Education Sciences Research Training Program to encourage students to consider education research as a career path. The Pathways program awards grants to minority-serving institutions (MSIs) and their partners to prepare undergraduate, post-baccalaureate, and master's-level students for doctoral study in the education sciences or in fields relevant to education research. The program places special emphasis on recruiting students from groups underrepresented in the education sciences, including racial/ethnic minorities, first-generation college students, economically disadvantaged students, veterans, and students with disabilities. Our six existing Pathways programs provide year-long training experiences that include a research apprenticeship, mentoring, and career development. Results are promising: since 2016, approximately 59 percent of Pathways alumni have already gone on to graduate school.
Supporting Knowledge Use and Evidence Building
The best education research accomplishes little if it's not used to change education itself. That task is the primary charge of the National Center for Education Evaluation and Regional Assistance. As its name implies, NCEE fills a particular niche within IES and the whole of the Department. First, it is the Department's independent evaluator. Second, through the Regional Educational Laboratories (REL) program and What Works Clearinghouse, it is the synthesizer and broker of high-quality research developed both within and outside IES.
As many of you know, I have made a commitment to shorten IES reports and use clear, concise language, with the goal of making the data and analyses we report more accessible to more people. You may remember IES' style guide: "Short sentences. Strong verbs." Because of its charge to meet the applied research needs of stakeholders across the nation, the REL program is responsible for more reports than any other part of IES. NCES is not far behind, with NCEE's What Works Clearinghouse and Evaluation Division bringing up the rear. My colleagues across NCEE and NCES have worked hard to meet this new mandate, and in the process, hopefully, IES has made education research and statistics more accessible and usable.
The work of NCEE's Evaluation Division has grown dramatically. Although NCEE was always responsible for evaluating federal education programs, a provision of the Every Student Succeeds Act of 2015 made more resources available for high-quality research on the effects of the Department's K–12 programs. In FY 2018, NCEE used this authority to support six evaluations. In our most recent biennial report to Congress, that number had grown to more than two dozen. (And this is only K–12 work and does not include studies that focus on students with disabilities; adult, career, or technical education; or postsecondary education.) Separately, NCEE has written about its work to implement another new law, the Foundations for Evidence-Based Policymaking Act of 2018. The Evidence Act, as it is known, has agency-wide implications—and two of the law's three pillars, evidence building and statistical collections, are squarely within IES' purview.
Finally, I want to return to a topic I have written and spoken about many times: my desire that the REL program put the full weight of its expertise to work in helping state and local stakeholders solve their most vexing problems. During the past three years, the REL team and each of the regional labs have focused on two related issues: intentionality and impact. We refined logic models and measurement plans for each of our more than 100 partnerships, revised stakeholder feedback surveys, analyzed the resulting data, and launched a new competition for the REL 2022–2027 cycle.
There have been so many more accomplishments, and IES staff deserve kudos for their good humor and goodwill as we continue to transform so much of what we do and how we think. If there are any other IES accomplishments you would like to remind me about, feel free to contact me: email@example.com