IES Blog

Institute of Education Sciences

Building Evidence: What Comes After an Efficacy Study?

Over the years, the Institute of Education Sciences (IES) has funded more than 300 studies across its research programs that evaluate the efficacy of specific programs, policies, or practices. This work has contributed significantly to our understanding of the interventions that improve outcomes for students under tightly controlled or ideal conditions. But is this information enough to inform policymakers’ and practitioners’ decisions about whether to adopt an intervention? If not, what should come after an efficacy study?

In October 2016, IES convened a group of experts for a Technical Working Group (TWG) meeting to discuss next steps in building the evidence base after an initial efficacy study, and the specific challenges that are associated with this work. TWGs are meant to encourage stakeholders to discuss the state of research on a topic and/or to identify gaps in research.  

Part of this discussion focused on replication studies and the critical role they play in the evidence-building process. Replication studies are essential for verifying the results of a previous efficacy study and for determining whether interventions are effective when certain aspects of the original study design are altered (for example, testing an intervention with a different population of students). IES has supported replication research since its inception, but there was general consensus that more replications are needed.

TWG participants discussed some of the barriers that may be discouraging researchers from doing this work. One major obstacle is the idea that replication research is somehow less valuable than novel research—a bias that could be limiting the number of replication studies that are funded and published. A related concern is that the field of education lacks a clear framework for conceptualizing and conducting replication studies in ways that advance evidence about beneficial programs, policies and practices (see another recent IES blog post on the topic).

IES supports studies that examine the effectiveness of interventions that have prior evidence of efficacy and are implemented as part of routine, everyday practice in schools, without special support from researchers. However, IES has funded a relatively small number of these studies (14 across both Research Centers). TWG participants discussed possible reasons for this and pointed out several challenges related to replicating interventions under routine conditions in authentic education settings. For instance, certain school-level decisions can pose challenges for conducting high-quality effectiveness studies, such as restricting the length of time that interventions or professional development can be provided, or choosing to offer the intervention to students in the comparison condition. These challenges can result in findings that are influenced more by contextual factors than by the intervention itself. TWG participants also noted that there is not much demand for this level of evidence, as decision-makers in schools and districts may not recognize the distinction between evidence of effectiveness and evidence of efficacy as important.

In light of these challenges, TWG participants offered suggestions for what IES could do to further support the advancement of evidence beyond an efficacy study. Some of these recommendations were more technical and focused on changes or clarifications to IES requirements and guidance for specific types of research grants. Other suggestions included:

  • Prioritizing and increasing funding for replication research;
  • Making it clear which IES-funded evaluations are replication studies on the IES website;
  • Encouraging communication and partnerships between researchers and education leaders to increase the appreciation and demand for evidence of effectiveness for important programs, practices, and policies; and
  • Supporting researchers in conducting effectiveness studies to better understand what works for whom and under what conditions, by offering incentives to conduct this work and encouraging continuous improvement.

TWG participants also recommended ways IES could leverage its training programs to promote the knowledge, skills, and habits that researchers need to build an evidence base. For example, IES could emphasize the importance of training in designing and implementing studies to develop and test interventions; create opportunities for postdoctoral fellows and early career researchers to conduct replications; and develop consortiums of institutions to train doctoral students to conduct efficacy, replication, and effectiveness research in ways that will build the evidence base on education interventions that improve student outcomes.

To read a full summary of this TWG discussion, visit the Technical Working Group website or click here to go directly to the report (PDF).

Written by Katie Taylor, National Center for Special Education Research, and Emily Doolittle, National Center for Education Research

Provide Input on Proposed Changes to Statistical Standards for Federal Collection of Race and Ethnicity Data

By Jill Carlivati McCarroll and Tom Snyder

Each Federal agency is responsible for collecting and disseminating different types of data on topics of interest and importance to the American public. In order to look across data sources to get a more complete picture of any one topic, it is important that those datasets are comparable.

Federal agencies that collect and report race and ethnicity data use the Office of Management and Budget (OMB) Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity to promote uniformity and comparability.  The standards guide information collected and presented for the decennial census, household surveys, administrative forms (e.g., school registration and mortgage lending applications), and numerous other statistical collections, as well as for civil rights enforcement and program administrative reporting.

Periodically, these standards are reviewed. The Federal Interagency Working Group for Research on Race and Ethnicity has been tasked with the current review. A March 1st Federal Register Notice and an associated interim report by the Working Group communicate the status of this work and request public feedback on the following four areas:

  1. The use of separate questions versus a combined question to measure race and Hispanic origin, and question phrasing as a solution to race/ethnicity question nonresponse;
  2. The classification of a Middle Eastern and North African (MENA) group as a distinct reporting category;
  3. The description of the intended use of minimum reporting categories (e.g., requiring or encouraging more detailed reporting within each minimum reporting category); and
  4. The terminology used for race and ethnicity classifications and other language in the standard.

Additional details on each of these four areas are available in the full notice, posted on the regulations.gov website. All members of the public are encouraged to provide feedback on these topics.  OMB will use all the public comments, along with recommendations from the Federal Interagency Working Group, to determine if any proposed revisions to the standards are warranted. According to established practice, OMB plans to notify the public of its final decision, along with its rationale.

Comments on the Federal Register Notice are due by April 30, 2017, and can be submitted electronically to Race-Ethnicity@omb.eop.gov or via the Federal E-Government website. Comments may also be sent by mail to U.S. Chief Statistician, Office of Management and Budget, 1800 G St., 9th Floor, Washington, DC 20503. All public feedback will be considered by the Federal Interagency Working Group as it writes its final report, which OMB will use as it decides on any possible revisions to the standards.

Additional information on how federal agencies use race and ethnicity data, as well as a more detailed description of the potential changes to the current standards, is available in this webinar:

How to Use the Improved ERIC Identifiers

ERIC has made recent improvements to help searchers find the education research they are looking for. One major enhancement relates to the ERIC identifiers, which have been improved to increase their usefulness as search tools. It is now easier than ever to refine searches to obtain specific resources in ERIC.

The identifier filters can be found on the search results page in three separate categories: (1) laws, policies, and programs, (2) assessments and surveys, and (3) location. After running a search on an education topic, users can scroll to the category on the left of the results page, select the desired identifier limiter within a category, and limit the results to only those materials tagged with that identifier.
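
For readers who work with exported ERIC records rather than the website, the same filtering logic is easy to approximate in code. The sketch below is purely illustrative; the record layout and identifier category names are our assumptions, not ERIC's actual schema or API.

    # Illustrative sketch only: the record layout and identifier categories
    # below are assumptions for demonstration, not ERIC's actual schema.
    records = [
        {"title": "Reading Gains Under ESSA",
         "identifiers": {"laws_policies_programs": ["Every Student Succeeds Act"],
                         "assessments_surveys": [],
                         "location": ["Ohio"]}},
        {"title": "Trends in Rural Mathematics Scores",
         "identifiers": {"laws_policies_programs": [],
                         "assessments_surveys": ["National Assessment of Educational Progress"],
                         "location": ["Montana"]}},
    ]

    def filter_by_identifier(records, category, value):
        """Keep only records tagged with the given identifier in the given category."""
        return [r for r in records if value in r["identifiers"].get(category, [])]

    # Example: limit a result set to materials tagged with a specific assessment.
    for record in filter_by_identifier(records, "assessments_surveys",
                                       "National Assessment of Educational Progress"):
        print(record["title"])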

We recently released a video that describes the enhanced identifiers, and walks through how to best use them to find materials in the ERIC collection. (We've embedded the video below.) 

Using the improved identifiers, searchers are now able to find materials related to specific locations, laws, or assessments no matter how the author referred to them in the article.

In other words, identifiers can now be used as an effective controlled vocabulary for ERIC, but this has not always been the case. While they have been part of ERIC since 1966, identifiers were not rigorously standardized, and they were often created "on the fly" by indexers. Also, the previous identifiers field had a character limit, meaning that some terms needed to be truncated to fit into the space allowed by the available technology. Therefore, over time, the identifiers proliferated with different spellings, abbreviations, and other variations, making them less useful as search aids.

To solve these issues, we launched a project in 2016 to review the lists of identifiers, and devise an approach for making them more user-friendly. Our solution was to streamline and standardize them, which eliminated redundancy and reduced their number from more than 7,800 to a more manageable 1,200. We also added the updated identifiers to the website’s search limiters to make them easier to use.
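
Conceptually, the standardization step resembles mapping every variant spelling, abbreviation, or truncation of a term to one canonical identifier. Here is a minimal sketch of that idea; the variants and canonical forms are invented examples, not ERIC's actual data.

    # Invented example of identifier standardization: collapse variant
    # spellings and abbreviations to a single canonical term.
    CANONICAL = {
        "nclb": "No Child Left Behind Act",
        "no child left behind": "No Child Left Behind Act",
        "no child left behind act": "No Child Left Behind Act",
    }

    def standardize(identifier):
        """Return the canonical form of an identifier, or the original if unknown."""
        return CANONICAL.get(identifier.strip().lower(), identifier)

    variants = ["NCLB", "No Child Left Behind", "No Child Left Behind Act"]
    print({standardize(v) for v in variants})  # one canonical term instead of three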

In addition to our new video, which demonstrates the best ways to use identifiers in your search, we also have a new infographic (pictured above) that depicts what identifiers are. You can use these companion pieces to learn more about identifiers, and begin putting them to work in your research. 

What Are the Payoffs to College Degrees, Credentials, and Credits?

The Center for Analysis of Postsecondary Education and Employment (CAPSEE) is an IES-funded Research and Development Center that seeks to advance knowledge regarding the link between postsecondary education and the labor market. CAPSEE was funded through a 2011 grant from the National Center for Education Research (NCER) and is in the process of completing its work. CAPSEE will hold a final conference to discuss its findings on April 6 & 7 in Washington, DC.

Recently, Tom Bailey (pictured), Director of the Community College Research Center, Columbia University, Teachers College, and the Principal Investigator for CAPSEE, answered questions from James Benson, the NCER Program Officer for the R & D center.

Can you describe some of the original goals of CAPSEE?

We were especially interested in the economic benefits of a college education for community college students, including those who complete awards (Associate’s degrees or certificates) and those who do not, as well as those who transfer to four-year colleges. We were also interested in differences in earnings by field of study. When we started CAPSEE in 2012 there were a lot of studies that used survey datasets to look, in general, at the returns to completing a Bachelor’s degree. The CAPSEE approach was to use large-scale statewide databases and follow college students over time, to look in detail at their earnings before, during, and after college.

In addition, CAPSEE researchers sought to examine two key policy issues. One was how financial aid and working while enrolled affect students’ performance in college and their labor market outcomes. The other was whether for-profit colleges help students get better jobs.

You have synthesized findings from analyses in six states. What are your main findings?

We found that, in general, Associate’s degrees have good returns in the labor market; they’re a good investment for the individual and for society. However, there is quite a bit of variation in returns by program. For students in Associate’s degree programs designed primarily to prepare them for transfer to a four-year college, the degree is not worth very much if they do not transfer. But when students complete vocational degrees, especially in health-related fields, the earnings gains are usually strong and persistent (and robust to how we estimated them). Also, we did a lot of research on certificates, credentials that many see as the best fit for students on the margin of going to college. We found benefits to students who completed certificates, again especially in fields that directly relate to an occupation or industry. And finally, we examined outcomes for students who enrolled and took courses without attaining a degree or certificate. We found that their after-college earnings increased in proportion to the number of credits they earned.
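
As a rough illustration of how a per-credit earnings return can be estimated when students are followed over time in statewide databases, consider a generic individual fixed-effects earnings equation (a simplified sketch, not CAPSEE's exact specification):

    \ln(y_{it}) = \alpha_i + \beta\,\mathrm{credits}_{it} + \gamma' X_{it} + \delta_t + \varepsilon_{it}

Here y_{it} is the earnings of student i in period t, \alpha_i is a student fixed effect that absorbs pre-college differences in earnings, credits_{it} is accumulated credits, X_{it} holds time-varying controls, and \delta_t is a period effect. A positive \beta corresponds to the per-credit earnings gain described above.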

"The fundamental policy implication is that college is a good investment."

What do you see as the key policy implications of these findings?

The fundamental policy implication is that college is a good investment. This merits emphasis because there are repeated critiques of college in terms of how much it costs and how much debt students accumulate. That said, policymakers do need to think about the value of each postsecondary program. Even within the same institution, programs have very different outcomes. Yet on average, attending college for longer and attaining more credits have beneficial effects. Policymakers should see this evidence as supporting public and private investments in college.

What did you discover about the relationships between financial aid, college outcomes, and labor market outcomes?

In an era of tight public resources, the effectiveness of financial aid policy is a crucial issue. Financial aid does help students persist in college, but one way to promote greater effectiveness is through academic performance standards for students receiving federal financial aid. Such standards have existed in the federal need-based aid programs for nearly 40 years, in the form of Satisfactory Academic Progress (SAP) requirements, yet they have received little attention. Our research on SAP suggests such policies have heterogeneous effects on students in the short term: they increase the likelihood that some students will drop out, but appear to motivate higher grades for students who remain enrolled. After three years, however, the negative effects dominate. Though it has little benefit for students in the long term, SAP policy appears to increase the efficiency of aid expenditures because it discourages students who have lower-than-average course completion rates from persisting. But the policy also appears to exacerbate inequality in higher education by pushing out low-performing, low-income students faster than their equally low-performing, higher-income peers.

Many students work while in college.  Does this seem to help or hurt students in the long run?

Our research found that for Federal Work-Study (FWS) participants who would have worked even in the absence of the program, FWS reduces hours worked and improves academic outcomes but has little effect on post-college employment outcomes. For students who would not have worked, the effects are reversed: the program has little effect on graduation, but a positive effect on post-college employment.  Results are more positive for participants at public institutions, who tend to be lower income than participants at private institutions. Our findings suggest that better targeting to low-income and lower-scoring students could improve FWS outcomes. This is consistent with much of the CAPSEE research—you need more detail and specificity to really understand the relationship between education and employment and earnings.

What did you learn about credentials from for-profit institutions?

Our findings on students at for-profit colleges were quite pessimistic. Although enrollment in for-profit colleges grew significantly after 2000, the sector has been declining over the last two years as evidence of inferior outcomes, particularly with regard to student debt, has emerged. In general, our researchers found that for-profit students have worse labor market outcomes than comparable community college students, although in some cases the difference is not statistically significant. Our evidence suggests that these colleges need to be monitored to ensure they are delivering a high-quality, efficient education.

You are holding the final CAPSEE conference in April. What do you hope people will get out of it?

At the conference, we will focus on several important and controversial policy questions related to higher education: 

  • Have changes in tuition and the labor market created conditions in which college is not worth it for some students, contributing to an unsupportable increase in student debt? 
  • Has higher education contributed to inequality rather than promoting economic mobility? 
  • Is continued public funding of college a worthwhile investment? 
  • Should public funding be used only for some programs of study?  
  • What are the arguments for and against making community college free?
  • Can changes in the operations and functioning of colleges change the return on investment from a college education for both the individual and society?
  • How important should information on earnings outcomes be for accreditation decisions and/or for eligibility of students to receive financial aid? 

At the conference, participants will have the opportunity to discuss and learn about these issues, drawing on five years of CAPSEE research as well as input from other experts.

Measuring the Achievement and Experiences of American Indian and Alaska Native Youth: National Indian Education Study 2015

By Lauren Musu-Gillette and James Deaton

In order to measure the progress of education in the United States, it is important to examine equity and growth for students from many different demographic groups. The educational experiences of American Indian and Alaska Native (AI/AN) youth are of particular interest to educators and policymakers because of the prevalence of academic risk factors for this group. For example, the percentage of students served under the Individuals with Disabilities Education Act (IDEA) in 2013-14 was highest for AI/AN students,[1] and in 2013 a higher percentage of American Indian/Alaska Native 8th-grade students than of Hispanic, White, or Asian 8th-grade students were absent more than 10 days in the last month.[2]  

Although NCES attempts to collect data from AI/AN students in all of our surveys, disaggregated data for this group are sometimes not reportable due to the group's relatively small population size. Therefore, data collections that specifically target these students can be particularly valuable in ensuring that the educational research and policy community has the information it needs. The National Indian Education Study is one of the primary resources for data on AI/AN youth.

The National Indian Education Study (NIES) is administered as part of the National Assessment of Educational Progress (NAEP) to allow more in-depth reporting on the achievement and experiences of AI/AN students in grades 4 and 8. NIES provides data at the national level and for select states with relatively high percentages of American Indians and/or Alaska Natives.[3] It also provides data by the concentration of AI/AN students attending schools in three mutually exclusive categories: low density public schools (less than 25 percent AI/AN);[4] high density public schools (25 percent or more AI/AN);[5] and Bureau of Indian Education (BIE) schools.[6]
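
In data terms, these three reporting categories amount to a simple, mutually exclusive classification rule. The minimal sketch below uses the thresholds given above; the function and field names are our own illustration.

    # Minimal sketch of the three mutually exclusive NIES reporting categories.
    # Function and field names are illustrative; thresholds follow the report.
    def classify_school(is_bie, pct_aian):
        """Assign a school to one of the three NIES reporting categories."""
        if is_bie:
            return "BIE school"
        if pct_aian >= 25.0:
            return "High density public school"
        return "Low density public school"

    print(classify_school(False, 10.0))  # Low density public school
    print(classify_school(False, 40.0))  # High density public school
    print(classify_school(True, 98.0))   # BIE school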

In a recently released report on the results of the 2015 NIES, differences in performance on the reading and mathematics assessments emerged across school type. In 2015, students in low density public schools had higher scores in both subjects than those in high density public or BIE schools, and scores for students in high density public schools were higher than for those in BIE schools. Additionally, there were some score differences over time. For example, at grade 8, average reading scores in 2015 for students in BIE schools were higher than scores in 2009 and 2007, but were not significantly different from scores in 2011 and 2005 (Figure 2). 


[Figure 2. Average NAEP reading scores of AI/AN students, by school density category: 2005–15]
* Significantly different (p < .05) from 2015.
NOTE: AI/AN = American Indian/Alaska Native. BIE = Bureau of Indian Education. School density indicates the proportion of AI/AN students enrolled. Low density public schools have less than 25 percent AI/AN students; high density public schools have 25 percent or more. The "All AI/AN students (public and BIE)" category includes only students in public and BIE schools. Performance results are not available for BIE schools at grade 4 in 2015 because school participation rates did not meet the 70 percent criterion.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 2005–15 National Indian Education Studies.


The characteristics of students attending low density, high density, and BIE schools differed at both grades. For example, BIE schools had a significantly higher percentage of students who were English language learners (ELL) and eligible for the National School Lunch Program (NSLP) than did low or high density public schools. Additionally, high density schools had a significantly higher percentage of ELL students and NSLP-eligible students than low density schools.

The report also explored to what extent AI/AN culture and language are part of the school curricula. AI/AN students in grades 4 and 8 reported that family members taught them the most about Native traditions. Differences by school type and density were observed in responses to other questions about the knowledge AI/AN students had of their family’s Native culture, the role AI/AN languages played in their lives, and their involvement in Native cultural ceremonies and gatherings in the community. For example, 28 percent of 4th-grade students in BIE schools reported they knew “a lot” about the history, traditions, or arts and crafts of their tribe compared to 22 percent of their AI/AN peers in high density schools, and 18 percent of those in low density schools. Similarly, 52 percent of 8th-grade students at BIE schools participated several times a year in ceremonies and gatherings of their AI/AN tribe or group, compared to 28 percent of their peers at high density public schools, and 20 percent of their peers at low density public schools.

If you’re interested in learning more about NIES, including what the study means for American Indian and Alaska Native students and communities, you can view the video below. Access the complete report and find out more about the study here: https://nces.ed.gov/nationsreportcard/nies/


[1] See https://nces.ed.gov/programs/coe/indicator_cgg.asp

[2] See https://nces.ed.gov/programs/raceindicators/indicator_rcc.asp

[3] American Indian and Alaska Native state-specific 2015 NIES results are available for the following 14 states:  Alaska, Arizona, Minnesota, Montana, New Mexico, North Carolina, North Dakota, Oklahoma, Oregon, South Dakota, Utah, Washington, Wisconsin, and Wyoming. 

[4] Less than 25 percent of the student body is American Indian or Alaska Native. In low density schools, AI/AN students represented 1 percent of the students at grades 4 and 8.

[5] 25 percent or more of the student body is American Indian or Alaska Native. In high density schools, 53 percent of 4th-graders and 54 percent of 8th-graders were AI/AN students.

[6] In BIE schools, 97 percent of 4th-graders and 99 percent of 8th-graders were AI/AN students.