IES Blog

Institute of Education Sciences

Get to Know NCES in Just Five Minutes!

By Lauren Musu-Gillette

Have you ever read one of our reports and wondered where the data came from? Are you familiar with NAEP, but have never heard of IPEDS? Are you curious about the history of NCES? If so, our new video is perfect for you!

The full scope of NCES activities can be daunting for those not familiar with the Center. Our data collections include samples from early childhood to postsecondary education, and cover such diverse topics as math and reading achievement, the experiences of teachers and principals, and school crime. In addition, the Center has a rich history both within the Department of Education and as a federal statistical agency. To make our data, reports, and tools more accessible to the public, we’ve created a new video to help introduce you to who we are and what we do.

To learn more about the Center’s work, watch the video below and follow us on Twitter and Facebook.

Using the WWC as a Teaching Tool

EDITOR'S NOTE: The What Works Clearinghouse (WWC), a program of the Institute of Education Sciences, is a trusted source of scientific evidence on education programs, products, practices, and policies. The WWC also has many tools and resources for education researchers and students. In this guest blog post, Jessaca Spybrook, Associate Professor of Evaluation, Measurement and Research at Western Michigan University, discusses how she uses WWC procedures and standards as a teaching tool.


By Jessaca Spybrook, Western Michigan University

Training the next generation of researchers so they are prepared to enter the world of education research is a critical part of my role as a faculty member in the Evaluation, Measurement, and Research program. I want to ensure that my students have important technical skills in a host of subject areas including, but not limited to, research design, statistics, and measurement. At the same time, I want to be sure they know how to apply the skills to design and analyze real-world studies. I often struggle to find resources for my classes that help me meet both goals.

One resource that has emerged as an important tool in meeting both goals is the What Works Clearinghouse website. I frequently integrate materials from the WWC into the graduate research design and statistics courses I teach.

For example, in a recent class I taught, Design of Experiments and Quasi-Experiments, I used the WWC Procedures and Standards Handbook Version 3.0 throughout. The Handbook met four important criteria as I was selecting resources for my class:

  1. Inclusion of important technical detail on design and analysis;
  2. Up-to-date and current thinking and “best practice” in design and analysis;
  3. Clear writing that is accessible for graduate students; and
  4. Free access (always a bonus when searching for class materials).

By no means did the Handbook replace classic and well-regarded textbooks in the class. Rather, it helped connect classic texts on design to both recent advances related to design, as well as real-life considerations and standards that designs are judged against.

At the end of my class, students may have been tired of hearing the question, “What is the highest potential rating for this study?” But I feel confident that using the WWC Handbook helped me not only prepare graduates with the technical know-how they need to design a rigorous experiment or quasi-experiment, but also raise their awareness of current best practice and of how to design a study that meets the standards set for the field.


Building Evidence: What Comes After an Efficacy Study?

Over the years, the Institute of Education Sciences (IES) has funded over 300 studies across its research programs that evaluate the efficacy of specific programs, policies, or practices. This work has contributed significantly to our understanding of the interventions that improve outcomes for students under tightly controlled or ideal conditions. But is this information enough to inform policymakers’ and practitioners’ decisions about whether to adopt an intervention? If not, what should come after an efficacy study?

In October 2016, IES convened a group of experts for a Technical Working Group (TWG) meeting to discuss next steps in building the evidence base after an initial efficacy study, and the specific challenges that are associated with this work. TWGs are meant to encourage stakeholders to discuss the state of research on a topic and/or to identify gaps in research.  

Part of this discussion focused on replication studies and the critical role they play in the evidence-building process. Replication studies are essential for verifying the results of a previous efficacy study and for determining whether interventions are effective when certain aspects of the original study design are altered (for example, testing an intervention with a different population of students). IES has supported replication research since its inception, but there was general consensus that more replications are needed.

TWG participants discussed some of the barriers that may be discouraging researchers from doing this work. One major obstacle is the idea that replication research is somehow less valuable than novel research—a bias that could be limiting the number of replication studies that are funded and published. A related concern is that the field of education lacks a clear framework for conceptualizing and conducting replication studies in ways that advance evidence about beneficial programs, policies and practices (see another recent IES blog post on the topic).

IES provides support for studies that examine the effectiveness of interventions that have prior evidence of efficacy and that are implemented as part of routine, everyday practice in schools without special support from researchers. However, IES has funded a relatively small number of these studies (14 across both Research Centers). TWG participants discussed possible reasons for this and pointed out several challenges related to replicating interventions under routine conditions in authentic education settings. For instance, certain school-level decisions can pose challenges for conducting high-quality effectiveness studies, such as restricting the length of time that interventions or professional development can be provided, or choosing to offer the intervention to students in the comparison condition. These challenges can result in findings that are influenced more by contextual factors than by the intervention itself. TWG participants also noted that there is not much demand for this level of evidence, as decision-makers in schools and districts may not recognize the distinction between evidence of effectiveness and evidence of efficacy as important.

In light of these challenges, TWG participants offered suggestions for what IES could do to further support the advancement of evidence beyond an efficacy study. Some of these recommendations were more technical and focused on changes or clarifications to IES requirements and guidance for specific types of research grants. Other suggestions included:

  • Prioritizing and increasing funding for replication research;
  • Making it clear which IES-funded evaluations are replication studies on the IES website;
  • Encouraging communication and partnerships between researchers and education leaders to increase the appreciation and demand for evidence of effectiveness for important programs, practices, and policies; and
  • Supporting researchers in conducting effectiveness studies to better understand what works for whom and under what conditions, by offering incentives to conduct this work and encouraging continuous improvement.

TWG participants also recommended ways IES could leverage its training programs to promote the knowledge, skills, and habits that researchers need to build an evidence base. For example, IES could emphasize the importance of training in designing and implementing studies to develop and test interventions; create opportunities for postdoctoral fellows and early career researchers to conduct replications; and develop consortiums of institutions to train doctoral students to conduct efficacy, replication, and effectiveness research in ways that will build the evidence base on education interventions that improve student outcomes.

To read a full summary of this TWG discussion, visit the Technical Working Group website or go directly to the report (PDF).

Written by Katie Taylor, National Center for Special Education Research, and Emily Doolittle, National Center for Education Research

Provide Input on Proposed Changes to Statistical Standards for Federal Collection of Race and Ethnicity Data

By Jill Carlivati McCarroll and Tom Snyder

Each Federal agency is responsible for collecting and disseminating different types of data on topics of interest and importance to the American public. To look across data sources and get a more complete picture of any one topic, those datasets must be comparable.

Federal agencies that collect and report race and ethnicity data use the Office of Management and Budget (OMB) Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity to promote uniformity and comparability.  The standards guide information collected and presented for the decennial census, household surveys, administrative forms (e.g., school registration and mortgage lending applications), and numerous other statistical collections, as well as for civil rights enforcement and program administrative reporting.

Periodically, these standards are reviewed. The Federal Interagency Working Group for Research on Race and Ethnicity has been tasked with reviewing the standards. A March 1st Federal Register Notice and an associated interim report by the Working Group communicate the current status of this work and request public feedback on the following four areas:


  1. The use of separate questions versus a combined question to measure race and Hispanic origin, and question phrasing as a solution to race/ethnicity question nonresponse;
  2. The classification of a Middle Eastern and North African (MENA) group and distinct reporting category;
  3. The description of the intended use of minimum reporting categories (e.g., requiring or encouraging more detailed reporting within each minimum reporting category); and
  4. The terminology used for race and ethnicity classifications and other language in the standard.


Additional details on each of these four areas are available in the full notice, posted on the regulations.gov website. All members of the public are encouraged to provide feedback on these topics.  OMB will use all the public comments, along with recommendations from the Federal Interagency Working Group, to determine if any proposed revisions to the standards are warranted. According to established practice, OMB plans to notify the public of its final decision, along with its rationale.

Comments on the Federal Register Notice are due by April 30, 2017 and can be submitted electronically to Race-Ethnicity@omb.eop.gov or via the Federal E-Government website. Comments may also be sent by mail to U.S. Chief Statistician, Office of Management and Budget, 1800 G St., 9th Floor, Washington, DC 20503. All public feedback will be considered by the Federal Interagency Working Group as they write their final report, which will be used by OMB as they decide on any possible revisions to the standards.  

Additional information on how federal agencies use race and ethnicity data, as well as a more detailed description of the potential changes to the current standards, is available in this webinar:

How to Use the Improved ERIC Identifiers

ERIC has made recent improvements to help searchers find the education research they are looking for. One major enhancement relates to the ERIC identifiers, which have been improved to increase their usefulness as search tools. It is now easier than ever to refine searches to obtain specific resources in ERIC.

The identifier filters can be found on the search results page in three separate categories: (1) laws, policies, and programs, (2) assessments and surveys, and (3) location. After running a search on an education topic, users can scroll to the category on the left of the results page, select the desired identifier limiter within a category, and limit the results to only those materials tagged with that identifier.

We recently released a video that describes the enhanced identifiers, and walks through how to best use them to find materials in the ERIC collection. (We've embedded the video below.) 

Using the improved identifiers, searchers are now able to find materials related to specific locations, laws, or assessments no matter how the author referred to them in the article.

In other words, identifiers can now be used as an effective controlled vocabulary for ERIC, but this has not always been the case. While they have been part of ERIC since 1966, identifiers were not rigorously standardized, and they were often created "on the fly" by indexers. Also, the previous identifiers field had a character limit, meaning that some terms needed to be truncated to fit into the space allowed by the available technology. Therefore, over time, the identifiers proliferated with different spellings, abbreviations, and other variations, making them less useful as search aids.

To solve these issues, we launched a project in 2016 to review the lists of identifiers, and devise an approach for making them more user-friendly. Our solution was to streamline and standardize them, which eliminated redundancy and reduced their number from more than 7,800 to a more manageable 1,200. We also added the updated identifiers to the website’s search limiters to make them easier to use.
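The spirit of this standardization can be sketched in code. The snippet below is a hypothetical illustration (the variant-to-canonical mapping is invented, not ERIC's actual data or tooling) of how variant spellings, abbreviations, and truncated forms might be collapsed into a single canonical identifier:

```python
# Hypothetical sketch of identifier standardization: collapse variant
# spellings and abbreviations of the same term into one canonical form.
# The mapping below is illustrative only, not ERIC's actual identifier list.

CANONICAL = {
    "esea": "Elementary and Secondary Education Act",
    "elementary secondary education act": "Elementary and Secondary Education Act",
    "elem sec educ act": "Elementary and Secondary Education Act",
    "naep": "National Assessment of Educational Progress",
    "natl assessment educ progress": "National Assessment of Educational Progress",
}

def normalize_identifier(raw: str) -> str:
    """Return the canonical identifier for a raw variant; if the variant
    is unknown, return the input with surrounding whitespace stripped."""
    key = " ".join(raw.lower().split())  # normalize case and whitespace
    return CANONICAL.get(key, raw.strip())
```

With a mapping like this, `normalize_identifier("ESEA")` and `normalize_identifier("Elem Sec Educ Act")` both resolve to the same canonical term, which is what makes a standardized identifier list usable as a search facet.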

In addition to our new video, which demonstrates the best ways to use identifiers in your search, we also have a new infographic that depicts what identifiers are. You can use these companion pieces to learn more about identifiers, and begin putting them to work in your research.