IES Blog

Institute of Education Sciences

Do Underrepresented Students Benefit From Gifted Programs?

Recent studies of gifted and talented programs indicate that the extent and quality of services available to gifted students vary from state to state, district to district, and even from school to school within school districts. In a project titled “Are Gifted Programs Beneficial to Underserved Students?” (PI: William Darity, Duke University), IES-funded researchers are examining the variability of Black and Hispanic students’ access to gifted programs in North Carolina and the potential impact of participation in these gifted programs on Black and Hispanic student outcomes. In this interview blog, we asked co-PIs Malik Henfield and Kristen Stephens to discuss the motivation for their study and preliminary findings.

What motivated your team to study the outcomes of Black and Hispanic students in gifted programs?

The disproportionality between the representation of white students and students of color in gifted education programs is both persistent and pervasive. For decades, we’ve both been working with teachers and school counselors seeking to increase the number of students of color in gifted education programs, but what happens once these students are placed in these programs? We know very little about the educational, social, and emotional impact that participation (or non-participation) has on students. Gifted education programs are widely believed to provide the best educational opportunity for students, but given the impacts race and socioeconomic status have on student success factors, this may not be a sound assumption. In fact, there is negligible (and often contradictory) published research that explores whether gifted programs contribute to beneficial academic and social-emotional outcomes for the underserved students who participate in them. Resolving this question will have tremendous implications for future gifted education policies.

Please tell us about your study. What have you learned so far?

With funding from IES, researchers from Duke University and Loyola University Chicago are collaborating to describe how gifted education policies in North Carolina are interpreted, implemented, and monitored at the state, district, and school levels. We are also estimating how these policies are related to Black, Hispanic, and economically disadvantaged students’ academic and social-emotional outcomes. We hope our examination of individual student characteristics, sociocultural contexts, and environmental factors will help improve the ways school systems identify and serve gifted students from traditionally underrepresented groups.

Although preliminary, there are several interesting findings from our study. Our analysis of district-level gifted education plans highlights promising equity practices (for example, using local norms to determine gifted program eligibility) as well as potential equity inhibitors (for example, relying predominantly on teacher referral). Our secondary data analysis reveals that the majority of school districts do not have equitable representation of Black and Hispanic students in gifted programs. Disproportionality was calculated using the Relative Difference in Composition Index (RDCI), which represents the difference between a group's composition in gifted education programs and its composition across the school district, expressed as a discrepancy percentage.
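To make the RDCI idea concrete, here is a minimal sketch of one plausible reading of the description above: the gap between a group's share of gifted-program enrollment and its share of district enrollment, expressed relative to the district share. The exact formula, sign convention, and data used by the research team are not spelled out in this blog, so treat the numbers below as purely hypothetical.

```python
def rdci(gifted_share: float, district_share: float) -> float:
    """Relative difference (as a percentage) between a group's composition in
    gifted programs and its composition district-wide.

    Note: this is an illustrative reading of the blog's description, not the
    study's exact formula.
    """
    if district_share == 0:
        raise ValueError("district_share must be nonzero")
    return (gifted_share - district_share) / district_share * 100

# Hypothetical district: Black students make up 25% of district enrollment
# but only 10% of gifted-program enrollment.
print(round(rdci(gifted_share=0.10, district_share=0.25), 1))
# -60.0 -> in this hypothetical district, Black students are 60% underrepresented
```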

What’s Next?

In North Carolina, districts are allowed to interpret state policy and implement programs and support services in ways they deem appropriate. Our next step is to conduct an in-depth qualitative exploration of variations in policy within and across North Carolina school districts. In these forthcoming analyses, we will be looking only at youth identified as underserved along the racial/ethnic minority dimension. In each district, we plan to interview four distinct groups to better understand their greatest assets, needs, and challenges, as well as the resources they would find most valuable for facilitating successful academic and social-emotional outcomes: (1) high-achieving underserved students identified as gifted, (2) high-achieving underserved students not identified as gifted, (3) teachers, and (4) school counselors.

For example, we are interested in learning—

  • How educators interpret identification processes from policies
  • How educators perceive recruitment and retention processes and their role in them
  • How ethnic minority students identified as gifted perceive recruitment and retention processes
  • How ethnic minority students not selected for participation in gifted education programming perceive the recruitment process
  • How both student groups make sense of their racial identity

We will then combine what we learned from Studies 1–3 (which use secondary data) with Study 4 (research in schools) and share the results with policymakers, educators, and the research community.

What advice would you like to share with other researchers who are studying access to gifted programs?

There are three recommendations we would like to share:

  • Investigate instructional interventions that impact short- and long-term academic and social-emotional outcomes for gifted students. The field of gifted education has spent significant time and resources attempting to determine the best methods for identifying gifted students across all racial/ethnic groups. Nonetheless, disparities in representation still exist, and this hyper-focus on identification has come at the expense of increasing our understanding of what types of interventions work, for whom, and under what conditions.
  • Conduct more localized research studies. Since gifted education programs are largely decentralized, there is considerable variance in how policies are created and implemented across states, districts, and schools. For example, eligibility criteria for participation in gifted programs can differ significantly across school systems. In North Carolina, “cut scores” on achievement and aptitude tests can range from the 85th to the 99th percentile. This makes it difficult to generalize research findings across contexts when participant samples aren’t adequately comparable.
  • Extend beyond the identification question and consider both generalizability and transferability when designing the research methodology. For generalizability, this entails carefully selecting the sample population and the methods for developing causal models. For transferability, this means providing a detailed account of the ecosystem in which the research is taking place so that practitioners can see the utility of the findings and recommendations within their own contexts. Mixed methods studies would certainly help bridge the two.

 


Dr. Malik S. Henfield is a full professor and founding dean of the Institute for Racial Justice at Loyola University Chicago. His scholarship situates Black students' lived experiences in a broader ecological milieu to critically explore how their personal, social, academic, and career success is impeded and enhanced by school, family, and community contexts. His work to date has focused heavily on the experiences of Black students formally identified as gifted/high achieving.

Dr. Kristen R. Stephens is an associate professor of the practice in the Program in Education at Duke University. She studies legal and policy issues related to gifted education at the federal, state, and local levels, particularly how such policies contribute to beneficial academic, social-emotional, and behavioral outcomes for traditionally underserved gifted students.

This interview blog is part of a larger IES blog series on diversity, equity, inclusion and accessibility (DEIA) in the education sciences. It was produced by Katina Stapleton (Katina.Stapleton@ed.gov), co-chair of the IES Diversity and Inclusion Council. For more information about the study, please contact the program officer, Corinne Alfeld (Corinne.Alfeld@ed.gov).

 

Introducing a New Resource Page for the IPEDS Outcome Measures (OM) Survey Component

The National Center for Education Statistics (NCES) has introduced a new resource page for the Integrated Postsecondary Education Data System (IPEDS) Outcome Measures (OM) survey component. This blog post provides an overview of the webpage and is the first in a series of blog posts that will showcase OM data.

Measuring Student Success in IPEDS: Graduation Rates (GR), Graduation Rates 200% (GR200), and Outcome Measures (OM) is a new resource page designed to help data reporters and users better understand the value of OM data and how the OM survey component works, particularly when compared with the Graduation Rates (GR) and Graduation Rates 200% (GR200) survey components.

The OM survey component was added to IPEDS in 2015–16 to capture postsecondary outcomes for more than just so-called “traditional” college students. From 1997–98 to 2015–16, IPEDS graduation rate data were collected only for first-time, full-time (FTFT) degree/certificate-seeking (DGCS) undergraduates through the GR and GR200 survey components. Unlike these survey components, OM collects student outcomes for all entering DGCS undergraduates, including non-first-time students (i.e., transfer-in students) and part-time students.

Outcome measures are useful because student characteristics vary by level of institution. In 2009, some 4.7 million students began at 2-year postsecondary institutions, and 25 percent of them were full-time students attending college for the first time. That same year, some 4.5 million students began at 4-year institutions, and 44 percent were first-time, full-time students.1

The new resource page answers several important questions about OM, GR, and GR200, including the following:

  • Which institutions complete each survey component?
  • Does the survey form vary by institutional type?
  • What student success measures are included?
  • Which students are included in the cohort?
  • What is the timeframe for establishing student cohorts?
  • Which subgroups (disaggregates) are included?
  • What is the timing of data collection and release?

In answering these questions, the resource page highlights that OM provides a more comprehensive view of student success than do GR and GR200. Furthermore, it suggests that OM, GR, and GR200 are not directly comparable, as the survey components differ in terms of which institutions complete them, which students are captured, and how each measures cohorts. Here are some of the key differences:

  • Institutions with FTFT cohorts complete the GR and GR200 components, whereas degree-granting institutions complete the OM component.
  • GR and GR200 include only FTFT DGCS undergraduates, whereas OM includes all DGCS undergraduates.
  • GR and GR200 cohorts are based on a fall term for academic reporters and a full year (September 1–August 31) for program reporters, whereas OM cohorts are based on a full year (July 1–June 30) for all degree-granting institutions (see the sketch after this list).
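The sketch below translates the cohort differences listed above into simple classification logic. It is illustrative only: the field names are hypothetical, the “fall term” window is approximated (actual term dates vary by institution), and it is not the IPEDS reporting specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Student:
    """Hypothetical record for an entering degree/certificate-seeking (DGCS) undergraduate."""
    entry_date: date
    first_time: bool   # no prior postsecondary enrollment
    full_time: bool    # enrolled full time at entry

def in_om_cohort(s: Student, cohort_year: int) -> bool:
    # OM: all entering DGCS undergraduates, full-year window July 1 - June 30.
    return date(cohort_year, 7, 1) <= s.entry_date <= date(cohort_year + 1, 6, 30)

def in_gr_cohort(s: Student, cohort_year: int, program_reporter: bool = False) -> bool:
    # GR/GR200: first-time, full-time (FTFT) students only.
    if not (s.first_time and s.full_time):
        return False
    if program_reporter:
        # Program reporters: full year, September 1 - August 31.
        return date(cohort_year, 9, 1) <= s.entry_date <= date(cohort_year + 1, 8, 31)
    # Academic reporters: fall term (approximated here as Aug. 1 - Dec. 31).
    return date(cohort_year, 8, 1) <= s.entry_date <= date(cohort_year, 12, 31)

# A part-time, transfer-in student entering in October 2013 falls in the OM
# cohort but not the GR/GR200 cohort.
s = Student(entry_date=date(2013, 10, 15), first_time=False, full_time=False)
print(in_om_cohort(s, 2013), in_gr_cohort(s, 2013))  # True False
```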

Finally, the resource page outlines how OM works, including how cohorts and subcohorts are established, which outcomes are collected at various status points, and when submitted data are released to the public. Exhibit 1 presents the current 2021–22 data collection timeline, including the cohort year, outcome status points, data collection period, and public release of OM data.


Exhibit 1. 2021–22 Outcome Measures (OM) data collection timeline (2013–14 entering degree/certificate-seeking cohort)

Infographic showing the 2021–22 OM data collection timeline, including the cohort year, outcome status points, data collection period, and public release of OM data


Data reporters and users are encouraged to utilize the new OM survey component resource page to better understand the scope of OM, how it works, and how it differs from GR and GR200. Stay tuned for a follow-up blog post featuring data from OM that further highlights the survey component’s usefulness in measuring student success for all DGCS undergraduate students.

 

By Tara Lawley, NCES; Roman Ruiz, AIR; Aida Ali Akreyi, AIR; and McCall Pitcher, AIR


[1] U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS), Winter 2017–18, Outcome Measures component; and IPEDS Fall 2009, Institutional Characteristics component. See Digest of Education Statistics 2018, table 326.27.