IES Blog

Institute of Education Sciences

Meeting the Moment: The Role of Evidence-Based Practice in Innovation and Improvement

On March 5, 2024, the U.S. Department of Education convened state education agency (SEA) leaders and their teams for an event called “Meeting the Moment: How State Leaders are Using Innovation for Impact.” As part of this meeting, my colleagues and I presented IES initiatives that support the use of evidence-based practices in states and districts. Below are my remarks introducing this important work.

Thank you – and good morning, everyone. It truly is a privilege to be with you today.

If you don’t know IES, we are the Department’s independent research, statistics, and evaluation office. Perhaps many of you know us better by two of our signature programs: the What Works Clearinghouse and the Regional Educational Labs.

I’m excited to be here to learn alongside each of you as you share the good work your states are doing.

In just a few minutes, we’re going to hear from my IES colleagues about work they’re doing, and resources they’re creating, on improving teaching and learning. I’m hopeful that you’ll see they can be used across many of the initiatives you’re pioneering.

As you might imagine, coming from an outfit named IES, what they’ll be sharing falls into the category of “evidence-based practices.” That is, practices that high-quality research has shown to have improved student outcomes in the past, or at the very least has suggested are likely to do so in the future.

Before they do, I want to talk briefly about evidence-based practice more generally. In this cone of silence that is a ballroom filled with 250 people, I’m going to offer one statement that might be a little provocative, and then I’m going to make an ask of each of you.

Here’s the provocative bit: Increasingly, I worry that “evidence-based practice” is becoming a buzzword. And that, like many buzzwords, people are translating it into “yada, yada, yada” when they hear the phrase. Or, worse, the phrase “evidence-based practice” is evoking skepticism such that people dismiss the concept when they hear it. Often, this is paired with concerns that the available evidence is misaligned with what educators and policymakers most need, or that it doesn’t feel like a fit for their context.

Unfortunately, not leaning into high-quality evidence is, I think, a huge risk for us all.

It’s my absolute belief that, as a nation, we will not reach the goal of every student having the opportunity to achieve their full potential and then realizing that potential, and we will not ensure that every educator has the preparation and support to do their best work and then see that work done, without a sustained commitment to evidence-based practice. Without a commitment to supporting high-quality evidence use for the long haul and a commitment to learning from that use by building high-quality evidence.

Absent a belief in, the use of, and efforts to sustain evidence-based practice, I do not believe we will “meet the moment” or “raise the bar” as the Secretary has just challenged us to do.  

This brings me to my ask of you. Here it is: use high-quality evidence, like that which my colleagues are about to show you, to ground your initiatives, to support the professional development of the educators in your state, and to make smart policy. Indeed, I know many of you already do.

But as you do, expect, dare I say demand (in a collegial way), more and better from the process of evidence building and use, so that it truly meets your needs and the needs of the districts and communities you serve. You can do so in at least five ways:

  • Expect that you should be a full partner in the process of identifying the issues around which you need more evidence;
  • Expect that as we build new evidence we do so together, and with the authentic engagement of the communities that we hope to benefit;
  • Expect that, together, we find ways to make meaning of what we’re learning, and then to communicate those learnings, in ways that aren’t just “right” when viewed through the lens of rigorous research but also “rich” in the depth of content and the nuance needed by those who we expect to use that evidence;
  • Expect that we’re collaborating to place what we’ve learned into the hands of educators, policymakers, and those who support them ... and in ways they can most effectively use those learnings; and finally
  • Expect that you will be able to get the assistance you need implementing and sustaining evidence-based practices—and, if you’re willing, get assistance building evidence about your own programs.

I believe unfailingly in the potential of evidence-based practice, and of our work together, to transform students’ lives. As a leader of evidence builders, let me say we will expect these same five things of ourselves in service of that belief becoming a reality. We would welcome accountability from you, as users of that evidence, as we move forward.

With that, let me cede the floor to my colleagues. Thank you again for the opportunity to be here.

Have questions about these remarks? Please email me at matthew.soldner@ed.gov.

Celebrating the ECLS-K:2024: Providing Key National Data on Our Country’s Youngest Learners

It’s time to celebrate!

This spring, the Early Childhood Longitudinal Study, Kindergarten Class of 2023–24 (ECLS-K:2024) is wrapping up its first school year of data collection with tens of thousands of children in hundreds of schools across the nation. You may not know this, but NCES is congressionally mandated to collect data on early childhood. We meet that charge by conducting ECLS program studies like the ECLS-K:2024 that follow children through the early elementary grades. Earlier studies looked at children in the kindergarten classes of 1998–99 and 2010–11. We also conducted a study, the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B), that followed children from birth through kindergarten entry.

As the newest ECLS program study, the ECLS-K:2024 will collect data from both students and adults in these students’ lives (e.g., parents, teachers, school administrators) to help us better understand how different factors at home and at school relate to children’s development and learning. In fact, the ECLS-K:2024 allows us to provide data not only on the children in the cohort but also on kindergarten teachers and the schools that educate kindergartners.

What we at NCES think is worthy of celebrating is that the ECLS-K:2024—like other ECLS program studies—

  • provides the statistics policymakers need to make data-driven decisions to improve education for all;
  • contributes data that researchers need to answer today’s most pressing questions related to early childhood and early childhood education; and
  • allows us to produce resources for parents, families, teachers, and schools to better inform the public at large about children’s education and development.

Although smaller-scale studies can answer numerous questions about education and development, the ECLS-K:2024 allows us to provide answers at a national level. For example, you may know that children arrive at kindergarten with different skills and abilities, but have you ever wondered how those skills and abilities vary for children who come from different areas of the country? How they vary for children who attended prekindergarten programs versus those who did not? How they vary for children who come from families of different income levels? The national data from the ECLS-K:2024 allow us to dive into these—and other—issues.

The ECLS-K:2024 is unique in that it’s the first of our early childhood studies to provide data on a cohort of students who experienced the coronavirus pandemic. How did the pandemic affect these children’s early development, and how did it change the schooling they received? By comparing the experiences of the ECLS-K:2024 cohort to those of children who were in kindergarten nearly 15 and 25 years ago, we’ll be able to answer these questions.

What’s more, the ECLS-K:2024 will provide information on a variety of topics not fully examined in previous national early childhood studies. The study includes new items on families’ kindergarten selection and choice; availability and use of home computers and other digital devices; parent-teacher association/organization contributions to classrooms; equitable school practices; and a myriad of other constructs.

Earlier ECLS program studies have had a huge impact on our understanding of child development and early education, with hundreds of research publications produced using their data (on topics such as academic skills and school performance; family activities that promote learning; and children’s socioemotional development, physical health, and well-being). ECLS data have also been referenced in media outlets and in congressional and state legislative reports. With the launch of the ECLS-K:2024, we cannot wait to see the impact of research using the new data.

Want to learn more? Be on the lookout late this spring for the next ECLS blog post celebrating the ECLS-K:2024, which will highlight children in the study. Future blog posts will focus on parents and families and on teachers and schools. Stay tuned!

 

By Jill McCarroll and Korrie Johnson, NCES

Using IPEDS Data: Available Tools and Considerations for Use

The Integrated Postsecondary Education Data System (IPEDS) contains comprehensive data on postsecondary institutions. IPEDS gathers information from every college, university, and technical and vocational institution that participates in federal student financial aid programs. The Higher Education Act of 1965, as amended, requires institutions that participate in federal student aid programs to report data on enrollments, program completions, graduation rates, faculty and staff, finances, institutional prices, and student financial aid.

These data are made available to the public in a variety of ways via the IPEDS Use the Data webpage. This blog post provides a description of available IPEDS data tools as well as considerations for determining the appropriate tool to use.


Available Data Tools

College Navigator

College Navigator is a free consumer information tool designed to help students, parents, high school counselors, and others access information about postsecondary institutions.

Note that this tool can be found on the Find Your College webpage (under "Search for College"), along with various other resources to help users plan for college.

IPEDS provides data tools for a variety of users that are organized into three general categories: (1) Search Existing Data, (2) Create Custom Data Analyses, and (3) Download IPEDS Data.

Search Existing Data

Users can search for aggregate tables, charts, publications, or other products related to postsecondary education using the Data Explorer or access IPEDS data via NCES publications like the Digest of Education Statistics or the Condition of Education.

Create Custom Data Analyses

Several data tools allow users to create their own custom analyses with frequently used and derived variables (Data Trends) or all available data collected within IPEDS (Statistical Tables). Users can also customize tables for select subgroups of institutions (Summary Tables). Each of these options allows users to generate analyses within the limitations of the tool itself.

For example, there are three report types available under the Data Feedback Report (DFR) tool. Users can

  1. select data from the most recent collection year across frequently used and derived variables to create a Custom DFR;
     
  2. create a Statistical Analysis Report using the variables available for the Custom DFR; and
     
  3. access the NCES-developed DFR for any institution.

Download IPEDS Data

Other data tools provide access to raw data through a direct download (Complete Data Files) or through user selections in the IPEDS Custom Data Files tool. In addition, IPEDS data can be downloaded for an entire collection year for all survey components via the Access Database.
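
For example, once a complete data file has been downloaded, it can be loaded directly into analysis software. Below is a minimal sketch in Python with pandas; the file name is a stand-in of ours, and the available columns vary by survey component and year.

import pandas as pd

# Load a downloaded IPEDS complete data file (stand-in file name; these
# files are distributed as CSVs, often inside ZIP archives).
df = pd.read_csv("hd2023.csv", encoding="latin-1")  # encoding can vary by file

# Complete data files are institution-level records keyed by UNITID,
# IPEDS's unique identifier for each institution.
print(df.shape)
print(df.columns[:10].tolist())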

IPEDS Data Tools Help

The IPEDS Data Tools User Manual is designed to help guide users through the various functions, processes, and abundant capabilities of IPEDS data tools. The manual contains a wealth of information, hints, tips, and insights for using the tools.

 

Data Tool Considerations

Users may consider several factors—related to both data selection and data extraction—when determining the right tool for a particular question or query.

Data Selection

  1. Quick access – Accessing data in a few steps may be helpful for users who want to find data quickly. Several data tools provide data quickly but may be limited in their selection options or customizable output.

  2. Data release – IPEDS data are released to the public in two phases: Provisional and Final. Provisional data have undergone quality control procedures and imputation for missing data but have not been updated based on changes within the Prior Year Revision System. Final data reflect changes made within the Prior Year Revision System and additional quality control procedures and will not change. Some tools allow users to access only final data. Table 1 summarizes how provisional and final data are used by various data tools. The IPEDS resource page “Timing of IPEDS Data Collection, Coverage, and Release Cycle” provides more information on data releases.


    Table 1. How provisional and final data are used in various data tools

  3. Select institutions – Users may want to select specific institutions for their analyses. Several tools allow users to limit the output to a selected list of institutions, while others include all institutions in the output.

  4. Multiple years – While some tools provide a single year of data, many tools provide access to multiple years of data in a single output.

  5. Raw data – Some data tools provide access to the raw data as submitted to IPEDS. For example, Look Up an Institution allows users to access the survey forms submitted by an institution.

  6. Institution-level data – Many data tools provide data at the institution level, since this is the unit of analysis within the IPEDS system.

  7. All data available – Many data tools provide access to frequently used and derived variables, but others provide access to all variables collected within the IPEDS system.

Data Extraction

  1. Save/upload institutions – Several data tools allow a user to create and download a list of institutions, which can be uploaded in a future session.

  2. Save/upload variables – Two data tools allow a user to save the variables selected and upload them in a future session.
     
  3. Export data – Many data tools allow a user to download data into a spreadsheet, while others provide information within a PDF. Note that several tools have limitations on the number of variables that can be downloaded in a session (e.g., Compare Institutions has a limit of 250 variables).
     
  4. Produce visuals – Several data tools produce charts, graphs, or other visualizations. For example, Data Trends provides users with the opportunity to generate a bar or line chart and text table.


Below is a graphic that summarizes these considerations for each IPEDS data tool.

 

By Tara B. Lawley, NCES, and Eric S. Atchison, Arkansas State University and Association for Institutional Research IPEDS Educator

Going Beyond Existing Menus of Statistical Procedures: Bayesian Multilevel Modeling with Stan

For nearly 15 years, NCER has supported the development and improvement of innovative methodological and statistical tools and approaches that will better enable applied education researchers to conduct high-quality, rigorous education research. This blog spotlights the work of Andrew Gelman, a professor of statistics and political science at Columbia University, and Sophia Rabe-Hesketh, a professor of statistics at the School of Education at the University of California, Berkeley. IES has supported their research on hierarchical modeling and Bayesian computation for many years. In this interview blog, Drs. Gelman and Rabe-Hesketh reflect on how Bayesian modeling applies to educational data and describe the general principles and advantages of Bayesian analysis.

What motivates your research on hierarchical modeling and Bayesian computation?

Education data can be messy. We need to adjust for covariates in experiments and observational studies, and we need to be able to generalize from non-random, non-representative samples to populations of interest.

The general motivation for multilevel modeling is that we are interested in local parameters, such as public opinion by states, small-area disease incidence rates, individual performance in sports, school-district-level learning loss, and other quantities that vary among people, across locations, and over time. In non-Bayesian settings, the local parameters are called random effects, varying intercepts/slopes, or latent variables.

Bayesian and non-Bayesian models differ in how completely the researcher using them must specify the probability distributions of the parameters. In non-Bayesian models, typically only the data model (also called the likelihood function) must be specified. The underlying parameters, such as the variances of random intercepts, are treated as unknown constants. On the other hand, the Bayesian approach requires specifying a full probability model for all parameters.  

A researcher using Bayesian inference encodes additional assumptions about all parameters into prior distributions, then combines information about the parameters from the data model with information from the prior distributions. This results in a posterior distribution for each parameter, which, compared to non-Bayesian model results, provides more information about the appropriateness of the model and supports more complex inferences.
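
To make the contrast concrete, here is a minimal two-level model (students i nested in schools j) written as a full Bayesian specification; the notation and distributional choices are our illustration, not taken from the interview:

y_{ij} \mid \alpha_j \sim \mathrm{N}(\alpha_j,\ \sigma_y^2)              % data model (likelihood)
\alpha_j \mid \mu, \sigma_\alpha \sim \mathrm{N}(\mu,\ \sigma_\alpha^2)  % model for the local parameters
p(\mu),\ p(\sigma_\alpha),\ p(\sigma_y)                                  % priors on the remaining parameters

p(\alpha, \mu, \sigma_\alpha, \sigma_y \mid y) \propto p(y \mid \alpha, \sigma_y)\, p(\alpha \mid \mu, \sigma_\alpha)\, p(\mu)\, p(\sigma_\alpha)\, p(\sigma_y)

A likelihood-based fit uses only the first two lines and treats \mu, \sigma_\alpha, and \sigma_y as unknown constants; the Bayesian fit adds the priors and returns the joint posterior in the last line.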

What advantages are there to the Bayesian approach?

Compared to other estimates, Bayesian estimates are based on many more assumptions. One advantage of this is greater stability at small sample sizes. Another advantage is that Bayesian modeling can be used to produce flexible, practice-relevant summaries from a fitted model that other approaches cannot produce. For instance, when modeling school effectiveness, researchers using the Bayesian approach can rely on the full probability model to justifiably obtain the rankings of schools or the probabilities that COVID-related declines in NAEP mean test scores for a district or state have exceeded three points, along with estimates for the variability of these summaries. 
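
To show how such a summary falls out of posterior simulation, here is a minimal Python sketch; the draws and the three-point threshold are simulated stand-ins of ours, not output from any real NAEP model.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior draws of a district's COVID-related decline in mean
# score, as a fitted Bayesian model would produce.
decline_draws = rng.normal(loc=2.5, scale=1.2, size=4000)

# With full posterior draws, a tail probability is just a proportion:
# Pr(decline > 3 points | data).
prob_exceeds_3 = (decline_draws > 3.0).mean()

# Monte Carlo standard error of that proportion (assuming roughly
# independent draws) quantifies its simulation variability.
mc_se = np.sqrt(prob_exceeds_3 * (1 - prob_exceeds_3) / decline_draws.size)

print(f"Pr(decline > 3) = {prob_exceeds_3:.3f} (MCSE {mc_se:.3f})")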

Further, Bayesian inference supports generalizability and replicability by freely allowing uncertainty from multiple sources to be integrated into models. Without allowing for uncertainty, it’s difficult to understand what works for whom and why. A familiar example is predicting student grades in college courses. A regression model can be fit to obtain a forecast with uncertainty based on past data on the students, and then this can be combined with student-specific information. Uncertainties in the forecasts for individual students or groups of students will be dependent and can be captured by a joint probability model, as implemented by posterior simulations. This contrasts with likelihood-based (non-Bayesian) inference where predictions and their uncertainty are typically considered only conditionally on the model parameters, with maximum likelihood estimates plugged in. Ignoring uncertainty leads to standard error estimates that are too small on average (see this introduction to Bayesian multilevel regression for a detailed demonstration and discussion of this phenomenon).
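
As a toy demonstration of that last point, the sketch below compares a posterior predictive forecast, which propagates parameter uncertainty, with a plug-in forecast conditioned on point estimates; all numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior draws for a grade-prediction model's mean and
# residual SD (invented values, for illustration only).
mu_draws = rng.normal(3.0, 0.15, size=4000)
sigma_draws = np.abs(rng.normal(0.5, 0.05, size=4000))

# Posterior predictive draws: parameter uncertainty is propagated.
pred_bayes = rng.normal(mu_draws, sigma_draws)

# Plug-in draws: condition on point estimates, ignoring that uncertainty.
pred_plugin = rng.normal(mu_draws.mean(), sigma_draws.mean(), size=4000)

# The plug-in predictive spread is systematically narrower.
print("predictive SD, uncertainty propagated:", round(float(pred_bayes.std()), 3))
print("predictive SD, plug-in estimates:     ", round(float(pred_plugin.std()), 3))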

What’s an important disadvantage to the Bayesian approach?

Specifying a Bayesian model requires the user to make more decisions than specifying a non-Bayesian model. Until recently, many of these decisions had to be implemented using custom programming, so the Bayesian approach had a steep learning curve. Users who were not up to the programming and debugging task had to work within some restricted class of models that had already been set up with existing software. 

This disadvantage is especially challenging in education research, where we often need to adapt and expand our models beyond a restricted class to deal with statistical challenges such as imperfect treatment assignments, nonlinear relations, spatial correlations, and mixtures, along with data issues such as missingness, students changing schools, guessing on tests, and predictors measured with error.

How did your IES-funded work address this disadvantage?

In 2011, we developed Stan, our open-source Bayesian software, with funding from a Department of Energy grant on large-scale computing. With additional support from the National Science Foundation and IES, we have developed model types, workflows, and case studies for education researchers and also improved Stan’s computational efficiency.

By combining a state-of-the-art inference engine with an expressive modeling language, Stan allows education researchers to build their own models, starting with basic linear and logistic regressions and then adding components of variation and uncertainty and expanding as needed to capture challenges that arise in applied problems at hand.  We recommend the use of Stan as part of a Bayesian workflow of model building, checking, and expansion, making use of graphs of data and fitted models.

Stan can be accessed using R, Python, Stata, Julia, and other software. We recommend getting started by looking at the Stan case studies. We also have a page on Stan for education research and a YouTube channel.
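
As a minimal end-to-end illustration of that workflow from Python (our sketch, not an official example; it assumes the cmdstanpy package and CmdStan are installed), the code below fits a varying-intercept model of the kind described above to simulated data:

import numpy as np
from cmdstanpy import CmdStanModel

# A varying-intercept (random effects) model: students nested in schools.
stan_program = """
data {
  int<lower=1> N;                          // number of students
  int<lower=1> J;                          // number of schools
  array[N] int<lower=1, upper=J> school;   // school index for each student
  vector[N] y;                             // outcome (e.g., a test score)
}
parameters {
  real mu;                                 // grand mean
  real<lower=0> sigma_alpha;               // between-school SD
  real<lower=0> sigma_y;                   // within-school SD
  vector[J] alpha;                         // school-level intercepts
}
model {
  mu ~ normal(0, 10);                      // weakly informative priors
  sigma_alpha ~ normal(0, 5);
  sigma_y ~ normal(0, 5);
  alpha ~ normal(mu, sigma_alpha);
  y ~ normal(alpha[school], sigma_y);
}
"""
with open("varying_intercept.stan", "w") as f:
    f.write(stan_program)

# Simulated stand-in data; in practice, supply your own.
rng = np.random.default_rng(2)
J, n_per_school = 20, 30
alpha_true = rng.normal(0.0, 2.0, size=J)
school = np.repeat(np.arange(1, J + 1), n_per_school)
y = rng.normal(alpha_true[school - 1], 1.0)

model = CmdStanModel(stan_file="varying_intercept.stan")
fit = model.sample(data={"N": y.size, "J": J, "school": school, "y": y})
print(fit.summary().head())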

In terms of dealing with the issues that arise in complex educational data, where do we stand today?

Put all this together, and we are in the business of fitting complex models in an open-ended space that goes beyond any existing menu of statistical procedures. Bayesian inference is a flexible way to fit such models, and Stan is a flexible tool that we have developed, allowing general models to be fit in reasonable time using advanced algorithms for statistical computing. As always with research, there are many loose ends and there is more work to be done, but we can now routinely fit, check, and display models of much greater generality than was previously possible, facilitating the goal of understanding processes in education.


This blog was produced by Charles Laurin (Charles.Laurin@ed.gov), NCER program officer for the Statistical and Research Methodology in Education grant program.

Measuring Student Safety: New Data on Bullying Rates at School

NCES is committed to providing reliable and up-to-date national-level estimates of bullying. As such, a new set of web tables focusing on bullying victimization at school was just released.  

These tables use data from the School Crime Supplement to the National Crime Victimization Survey, which collects data on bullying by asking a nationally representative sample of students ages 12–18 who were enrolled in grades 6–12 in public and private schools if they had been bullied at school. This blog post highlights data from these newly released web tables.

Some 19 percent of students reported being bullied during the 2021–22 school year. More specifically, bullying was reported by 17 percent of males and 22 percent of females and by 26 percent of middle school students and 16 percent of high school students. Moreover, among students who reported being bullied, 14 percent of males and 28 percent of females reported being bullied online or by text.

Students were also asked about the recurrence and perpetrators of bullying and about the effects bullying has on them. During the 2021–22 school year, 12 percent of students reported that they were bullied repeatedly or expected the bullying to be repeated and that the bullying was perpetrated by someone who was physically or socially more powerful than them and who was not a sibling or dating partner. When these students were asked about the effects this bullying had on them,

  • 38 percent reported negative feelings about themselves;
  • 27 percent reported negative effects on their schoolwork;
  • 24 percent reported negative effects on their relationships with family and friends; and
  • 19 percent reported negative effects on their physical health.

Explore the web tables for more data on how bullying victimization varies by student characteristics (e.g., sex, race/ethnicity, grade, household income) and school characteristics (e.g., region, locale, enrollment size, poverty level) and how rates of bullying victimization vary by crime-related variables such as the presence of gangs, guns, drugs, alcohol, and hate-related graffiti at school; selected school security measures; student criminal victimization; personal fear of attack or harm; avoidance behaviors; fighting; and the carrying of weapons.

Find additional information on this topic in the Condition of Education indicator Bullying at School and Electronic Bullying. Plus, explore more School Crime and Safety data and browse the Report on Indicators of School Crime and Safety: 2022.