IES Blog

Institute of Education Sciences

You’ve Been Asked to Participate in a Study

Dear reader,

You’ve been asked to participate in a study.

. . . I know what you’re thinking. Oh, great. Another request for my time. I am already so busy.

Hmm, if I participate, what is my information going to be used for? Well, the letter says that collecting data from me will help researchers study education, and it says something else about how the information I provide would “inform education policy . . .”

But what does that mean?

If you’re a parent, student, teacher, school administrator, or district leader, you may have gotten a request like this from me or a colleague at the National Center for Education Statistics (NCES). NCES is one of 13 federal agencies that conduct survey and assessment research to help federal, state, and local policymakers better understand public needs and challenges. It is the U.S. Department of Education’s (ED’s) statistical agency and fulfills a congressional mandate to collect, collate, analyze, and report statistics on the condition of American education. The law also directs NCES to report on education in other nations.

But how does my participation in a study actually support the role Congress has given NCES?

Good question. When NCES conducts a study, participants are asked to provide information about themselves, their students or child/children, teachers, households, classrooms, schools, colleges, or other education providers. What exactly you will be asked about is based on many considerations, including previous research or policy needs. For example, a current policy might be based on results from an earlier study, and we need to see whether those results are still relevant. Or maybe the topic has not been studied before and data are needed to determine policy options. In some cases, Congress has charged NCES with collecting data to help it better understand education in general.

Data collected from participants like you are combined so that research can be conducted at the group level. Individual information is not the focus of the research. Instead, NCES is interested in the experiences of groups of people or groups of institutions—like schools—based on the collected data. To protect respondents, personally identifiable information like your name (and other information that could identify you personally) is removed before data are analyzed and is never provided to others. This means that people who participate in NCES studies are grouped in different ways, such as by age or type of school attended, and their information is studied to identify patterns of experiences that people in these different groups may have had.
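To make the group-level idea concrete, here is a minimal, hypothetical sketch of de-identification and aggregation. The records, field names, and groupings are invented for illustration; this is not NCES's actual data pipeline.

```python
from collections import defaultdict

# Hypothetical survey responses; all values are invented.
responses = [
    {"name": "A. Smith", "age_group": "14-15", "school_type": "public", "hours_homework": 6},
    {"name": "B. Jones", "age_group": "14-15", "school_type": "public", "hours_homework": 8},
    {"name": "C. Lee", "age_group": "16-17", "school_type": "private", "hours_homework": 5},
]

# Step 1: strip personally identifiable information before any analysis.
deidentified = [{k: v for k, v in r.items() if k != "name"} for r in responses]

# Step 2: aggregate by group; individual records are never reported.
by_group = defaultdict(list)
for r in deidentified:
    by_group[(r["age_group"], r["school_type"])].append(r["hours_homework"])

group_means = {group: sum(hours) / len(hours) for group, hours in by_group.items()}
print(group_means)  # e.g., average homework hours per (age group, school type)
```

Only the group-level summary (the means) would ever be published; the de-identified individual rows stay internal.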

Let’s take a look at specific examples that show how data from NCES studies provide valuable information for policy decisions.

When policymakers are considering how data can inform policy—either in general or for a specific law under consideration—data from NCES studies play an important role. For example, policymakers concerned that students in their state/district/city often struggle to pay for college may be interested in this question:

“What can education data tell me about how to make college more affordable?”

Or policymakers further along in the law development process might have more specific ideas about how to help low-income students access college. They may have come across research linking programs such as dual enrollment—when high school students take college courses—to college access for underrepresented college students. An example of this research is provided in the What Works Clearinghouse (WWC) dual-enrollment report produced by ED’s Institute of Education Sciences (IES), which shows that dual-enrollment programs are effective at increasing students’ access to and enrollment in college and attainment of degrees. This was especially true for students typically underrepresented in higher education.

Then, these policymakers might need more specific questions answered about these programs, such as:

“What is the benefit of high school students from low-income households also taking college courses?”

Thanks to people who participate in NCES studies, we have the data to address such policy questions. Rigorous research using data from large datasets, compiled from many participants, can be used to identify differences in outcomes between groups. In the case of dual-enrollment programs, college outcomes for dual-enrollment participants from low-income households can be compared with those of dual-enrollment participants from higher-income households, and possible causes of those differences can be investigated.

The results of these investigations may then inform enactment of laws or creation of programs to support students. In the case of dual enrollment, grant programs might be set up at the state level for districts and schools to increase students’ local access to dual-enrollment credit earning.

This was very close to what happened in 2012, when I was asked by analysts in ED’s Office of Planning, Evaluation, and Policy Development to produce statistical tables with data on students’ access to career and technical education (CTE) programs. Research, as reviewed in the WWC dual-enrollment report, was already demonstrating the benefits of dual enrollment for high school students. Around 2012, ED was considering a policy that would fund the expansion of dual enrollment specifically for CTE. The reason I was asked to provide tables on the topic was my understanding of two important NCES studies, the Education Longitudinal Study of 2002 (ELS:2002) and the High School Longitudinal Study of 2009 (HSLS:09). Data provided by participants in those studies were ideal for studying the question. The tables were used to evaluate policy options. Based on the results, ED, through the President, made a budget request to Congress to support dual-enrollment policies. Ultimately, dual-enrollment programs were included in the Strengthening Career and Technical Education for the 21st Century Act (Perkins V).  

The infographic below shows that this scenario—in which NCES data provided by participants like you were used to inform policy—has happened on different scales for different policies many times over the past few decades. The examples included are just some of those from the NCES high school longitudinal studies. NCES data have been used countless times over the agency’s 154-year history to improve education for American students. Check out the full infographic (PDF) for other examples.


Excerpt of full infographic showing findings and actions for NCES studies on Equity, Dropout Prevention, and College and Career Readiness


However, it’s not always the case that a direct line can be drawn between data from NCES studies and any one policy. Research often informs policy indirectly by educating policymakers and the public they serve on critical topics. Sometimes, as in the dual-enrollment and CTE programs research question I investigated, it can take time before policy gets enacted or a new program rolls out. This does not lessen the importance of the research, nor the vital importance of the data participants provide that underpin it.

The examples in the infographic represent experiences of actual individuals who took the time to tell NCES about themselves by participating in a study.  

If you are asked to participate in an NCES study, please consider doing so. People like you, schools like yours, and households in your town do matter—and by participating, you are helping to inform decisions and improve education across the country.

 

By Elise Christopher, NCES

Measuring “Traditional” and “Non-Traditional” Student Success in IPEDS: Data Insights from the IPEDS Outcome Measures (OM) Survey Component

This blog post is the second in a series highlighting the Integrated Postsecondary Education Data System (IPEDS) Outcome Measures (OM) survey component. The first post introduced a new resource page that helps data reporters and users understand OM and how it compares to the Graduation Rates (GR) and Graduation Rates 200% (GR200) survey components. Using data from the OM survey component, this post provides key findings about the demographics and college outcomes of undergraduates in the United States and is designed to spark further study of student success using OM data.

What do Outcome Measures cohorts look like?

OM collects student outcomes for all entering degree/certificate-seeking undergraduates, including non-first-time (i.e., transfer-in) and part-time students. Students are separated into eight subcohorts by entering status (i.e., first-time or non-first-time), attendance status (i.e., full-time or part-time), and Pell Grant recipient status.[1] Figure 1 shows the number and percentage distribution of degree/certificate-seeking undergraduates in each OM subcohort from 2009–10 to 2012–13, by institutional level.[2]

Key takeaways:

  • Across all cohort years, the majority of students were not first-time, full-time (FTFT) students, a group typically referred to as “traditional” college students. At 2-year institutions, 36 percent of Pell Grant recipients and 16 percent of non-Pell Grant recipients were FTFT in 2012–13. At 4-year institutions, 43 percent of Pell Grant recipients and 44 percent of non-Pell Grant recipients were FTFT in 2012–13.
  • Pell Grant recipient cohorts have become less “traditional” over time. In 2012–13, some 36 percent of Pell Grant recipients at 2-year institutions were FTFT, down 5 percentage points from 2009–10 (41 percent). At 4-year institutions, 43 percent of Pell Grant recipients were FTFT in 2012–13, down 5 percentage points from 2009–10 (48 percent).
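The eight subcohorts are simply the cross of the three binary statuses described above (2 × 2 × 2 = 8). As an illustrative sketch (the label strings are our own, not official IPEDS terminology), a data user could enumerate them like this:

```python
from itertools import product

# The three binary statuses that define the OM subcohorts.
entering = ("first-time", "non-first-time")
attendance = ("full-time", "part-time")
pell = ("Pell Grant recipient", "non-recipient")

# Cross-product yields the eight subcohorts.
subcohorts = [" / ".join(combo) for combo in product(entering, attendance, pell)]
print(len(subcohorts))  # 8
for s in subcohorts:
    print(s)
```

Enumerating the cells this way makes it easy to verify that every student record falls into exactly one subcohort before computing outcome rates.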

Figure 1. Number and percentage distribution of degree/certificate-seeking undergraduate students in the adjusted cohort, by Pell Grant recipient status, institutional level, and entering and attendance status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing the number and percentage distribution of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


What outcomes does Outcome Measures collect?

The OM survey component collects students’ highest credential earned (i.e., certificate, associate’s, or bachelor’s) at 4,[3] 6, and 8 years after entry. Additionally, for students who did not earn a credential by the 8-year status point, the survey component collects an enrollment status outcome (i.e., still enrolled at the institution, enrolled at another institution, or enrollment status unknown). Figure 2 shows these outcomes for the 2012–13 adjusted cohort.

Key takeaways:

  • The percentage of students earning an award (i.e., certificate, associate’s, or bachelor’s) increased at each successive status point, with the greatest change occurring between the 4- and 6-year status points (a 7-percentage-point increase, from 32 percent to 39 percent).
  • At the 8-year status point, more than a quarter of students were still enrolled in higher education: 26 percent had “transferred-out” to enroll at another institution and 1 percent were still enrolled at their original institution. This enrollment status outcome fills an important gap left by the GR200 survey component, which does not collect information on students who do not earn an award 8 years after entry.
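The "7-percentage-point" figure above can be reproduced directly from the rates, and it is worth noting that a percentage-point change is a simple difference of rates, not a percent change. A quick sketch (rates copied from the takeaway above):

```python
# Percent of the 2012-13 cohort with an award, by status point (from the takeaway above).
award_rate = {4: 32, 6: 39}

# Percentage-point change: a simple difference of the two rates.
pp = award_rate[6] - award_rate[4]
print(pp)  # 7

# The *relative* (percent) change is larger, since the base rate is below 100.
relative = (award_rate[6] - award_rate[4]) / award_rate[4]
print(relative)  # 0.21875, i.e., roughly a 22 percent relative increase
```

Keeping the two measures distinct matters when comparing cohorts: a 7-point rise from a 32 percent base is a much bigger relative gain than the same 7 points from a 70 percent base.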

Figure 2. Number and percentage distribution of degree/certificate-seeking undergraduate students, by award and enrollment status and entry status point: 2012–13 adjusted cohort

Waffle chart showing award status (certificate, associate’s, bachelor’s, and did not receive award) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students, by status point (4-year, 6-year, and 8-year) for 2012–13 adjusted cohort

NOTE: One square represents 1 percent. This figure presents data collected from Title IV degree-granting institutions in the United States.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary across student subgroups?

Every data element collected by the OM survey component (e.g., cohort counts, outcomes by time after entry) can be broken down into eight subcohorts based on entering, attendance, and Pell Grant recipient statuses. In addition to these student characteristics, data users can also segment these data by key institutional characteristics such as sector, Carnegie Classification, special mission (e.g., Historically Black College or University), and region, among others.[4] Figure 3 displays the status of degree/certificate-seeking undergraduates 8 years after entry by each student subcohort within the broader 2012–13 degree/certificate-seeking cohort.

Key takeaways:

  • Of the eight OM subcohorts, FTFT non-Pell Grant recipients had the highest rate of earning an award or still being enrolled 8 years after entry. Among this subcohort, 18 percent had an unknown enrollment status 8 years after entry.
  • Among both Pell Grant recipients and non-Pell Grant recipients, full-time students had a higher rate than did part-time students of earning an award or still being enrolled 8 years after entry.
  • First-time, part-time (FTPT) students had the lowest rate of the subcohorts of earning a bachelor’s degree. One percent of FTPT Pell Grant recipients and 2 percent of FTPT non-Pell Grant recipients had earned a bachelor’s degree by the 8-year status point.

Figure 3. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by Pell Grant recipient status, entering and attendance status, and award and enrollment status: 2012–13 adjusted cohort

Horizontal stacked bar chart showing award (certificate, associate’s, and bachelor’s) and enrollment statuses (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2012–13 adjusted cohort

 

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary over time?

OM data are comparable across four cohort years.[5] Figure 4 shows outcomes of degree/certificate-seeking undergraduates 8 years after entry from the 2009–10 cohort through the 2012–13 cohort for so-called “traditional” (i.e., FTFT) and “non-traditional” (i.e., non-FTFT) students.

Key takeaways:

  • For both traditional and non-traditional students, the percentage of students earning an award was higher for the 2012–13 cohort than for the 2009–10 cohort, climbing from 47 percent to 51 percent for traditional students and from 32 percent to 35 percent for non-traditional students.
  • The growth in award attainment for traditional students was driven by the share of students earning bachelor’s degrees (30 percent for the 2009–10 cohort vs. 35 percent for the 2012–13 cohort).
  • The growth in award attainment for non-traditional students was driven by the share of students earning both associate’s degrees (15 percent for the 2009–10 cohort vs. 16 percent for the 2012–13 cohort) and bachelor’s degrees (13 percent for the 2009–10 cohort vs. 15 percent for the 2012–13 cohort).

Figure 4. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by first-time, full-time (FTFT) status and award and enrollment status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing award status (certificate, associate’s, and bachelor’s) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students 8 years after entry by first-time, full-time status (traditional or first-time, full-time students and non-traditional or non-first-time, full-time students) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. “Non-traditional” (i.e., non-first-time, full-time) students include first-time, part-time, non-first-time, full-time, and non-first-time, part-time subcohorts. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


To learn more about the IPEDS OM survey component, visit the Measuring Student Success in IPEDS: Graduation Rates (GR), Graduation Rates 200% (GR200), and Outcome Measures (OM) resource page and the OM survey component webpage. Go to the IPEDS Use the Data page to explore IPEDS data through easy-to-use web tools, access data files to conduct your own analyses like those presented in this blog post, or view OM web tables.  

By McCall Pitcher, AIR


[1] The Federal Pell Grant Program (Higher Education Act of 1965, Title IV, Part A, Subpart I, as amended) provides grant assistance to eligible undergraduate postsecondary students with demonstrated financial need to help meet education expenses.

[2] Due to the 8-year measurement lag between initial cohort enrollment and student outcome reporting for the Outcome Measures survey component, the most recent cohort for which data are publicly available is 2012–13. Prior to the 2009–10 cohort, OM did not collect cohort subgroups by Pell Grant recipient status. Therefore, this analysis includes data only for the four most recent cohorts.

[3] The 4-year status point was added in the 2017–18 collection.

[4] Data users can explore available institutional variables on the IPEDS Use the Data webpage.

[5] For comparability purposes, this analysis relies on data from the 2017–18 collection (reflecting the 2009–10 adjusted cohort) through the 2020–21 collection (reflecting the 2012–13 adjusted cohort). Prior to the 2017–18 collection, OM cohorts were based on a fall term for academic reporters and a full year for program reporters.

Technical Working Group of Education Policy Leaders and Researchers Advises NCER on Education Policy Research Priorities and Strategies

The National Center for Education Research convened a virtual technical working group (TWG) of education policy leaders and researchers to discuss strategies for improving K-12 education systems and policy research funded by NCER. The official summary is now posted on the IES website. This blog post provides a snapshot of that summary.

 

What are the most pressing education systems and policy issues in need of research?

Prior to the meeting, TWG members were asked to identify what policy topics need evidence to inform their decision making, and they identified over 20 issues of pressing concern. The top-identified education policy issues, with at least 3 nominations across both policy leader and researcher groups, are listed below (with the number of nominations in parentheses).

  • Equitable access to high-quality instruction (13)
  • Education technology and online instruction (11)
  • Equity in school and student funding (10)
  • Diversity of teachers and equity of pay (8)
  • Socio-emotional learning (SEL), relationships, and student engagement (8)
  • Leader professional development (6)
  • Teacher professional development (5)
  • Better capacity for data analysis, management, and communication (5)
  • Early childhood education (5)
  • Career preparation/advising/transitions (5)
  • Culturally relevant pedagogy/anti-racism/anti-bias (3)
  • Special education/English Learners (3)
  • COVID-related learning loss (3)

The discussion during the meeting focused on a broad range of policy, research, and dissemination issues, which did not always align with the above issues. NCER staff organized the themes from the day’s discussion below to highlight where the TWG members expressed general consensus. More detail about each area can be found in the full TWG summary on the IES website.

  • Understanding and Addressing Inequity in Education Systems: The TWG members agreed that the most pressing issue is equity. The COVID-19 pandemic exacerbated and revealed systemic inequity, which affects students who are from racial/ethnic minority backgrounds, low-income households, or other marginalized groups.
  • Improving Use of and Access to Education Technology and High-Quality Online Instruction: With the heavy reliance on online instruction throughout the pandemic, COVID-19 has created new urgency for closing the digital divide. In addition to understanding how systems support or impede access to education technology, TWG members also noted a need for education research focused on questions of teaching and learning in remote and hybrid environments, including professional development and systems-level support for both teachers and students for using and engaging with technology.
  • Recruiting and Retaining a Diverse Teaching Workforce: TWG members noted that teacher shortages, especially in high poverty districts, have been a concern for years, and that there is a need for evidence on how to make access to high quality teachers more equitable. Researchers could examine how recruitment strategies can diversify the teacher workforce through strategies such as incentives specifically for teacher candidates of color and adopting culturally relevant practices.
  • Providing Access to Student and Educator Mental Health Supports: TWG members observed that COVID-19 has been stressful for students, parents, and educators. Remote learning appears to cut many students off from connections that support emotional and behavioral health and to increase the vulnerability of students in high-risk home environments. Research and research syntheses are needed to guide policymakers on what works best in allocating resources to meet student and educator mental health needs and on how to connect with community resources and improve systems of support beyond the school.
  • Engaging and Re-Engaging Students: Chronic absenteeism is a major problem in many schools, exacerbated by the pandemic. TWG members pointed out that a key policy question for many local education agencies (LEAs) and state education agencies (SEAs) is how to re-engage these “missing” students. Researchers can help education leaders understand what evidence-based interventions are available to re-engage students and begin to address disparities in pandemic learning loss.
  • Preparing K-12 Students for Careers: TWG members agreed that researchers, policymakers, and educators should engage and collaborate more frequently with employers to inform what career-aligned experiences should be offered to students in school. Research could contribute to understanding how best to support local communities of practice that include schools, businesses, intermediaries, and community-based organizations with the shared goal of preparing students for careers.
  • Modernizing Assessments: TWG members agreed that education systems should be modernized to assess and address basic skills and learning needs quickly, such as with interim or formative assessments. Research is needed to understand how to use assessment for both accountability purposes as well as to support diagnosis and student progress monitoring. Additionally, research is needed to guide educators on authentic and performance-based assessments.
  • Improving Data-Driven Decision-Making in Schools: TWG members noted that education researchers could provide guidance on identifying a set of core variables, systematically collecting data and metrics, and building data sharing platforms and data agreements. TWG members pointed out that, in many cases, education agencies and educators do not necessarily need more data but more training to build capacity to analyze and use the data they have.
  • Examining Education Finance: TWG members noted that research can generate high-quality information to help policymakers understand how school systems could leverage financial resources to help the most underserved learners and communities. TWG participants noted that a key question for which additional evidence is needed is how much funding is needed to provide the breadth of services, including wraparound services, required to support learning in the poorest schools.
  • Creating Adaptive Education Systems: COVID-19 has shown that education systems must better prepare for emergencies, and TWG members worried that the pandemic will have a lasting effect on student achievement and attainment. The pandemic has given the education and research community an opportunity to learn what went well and what did not, and to propose strategies that ensure rapid responses to future emergencies and moments of crisis.

How can NCER better support research on education systems/policy issues?

NCER staff asked for recommendations on improving its engagement with education systems and policy work. TWG member recommendations are organized according to five themes.

  • Support a Systems Approach to Systems and Policy Research: TWG members recommended that NCER encourage researchers to untangle and understand broader, dynamic education systems and processes, and to develop methods that capture and account for changing contexts. The TWG encouraged an interdisciplinary approach with different stakeholder perspectives, methods, and measures to move the field forward.
  • Encourage Partnerships with Key Stakeholders: TWG members felt that the relevance of research proposals could be increased with more collaboration between researchers and education leaders. Partnership between researchers and practitioners is one strategy for increasing the local relevance of research and its applicability to specific local questions.
  • Support Rapid Research to Practice Efforts: TWG members agreed that education policy research results should be disseminated to the field quickly. Rapid-cycle evaluation methods, such as plan-do-study-act continuous quality improvement approaches, can help inform policy solutions. While NCER funding is not nimble enough to support quick-turnaround studies, it may be appropriate for continuous improvement methods applied within a longer-term research project.
  • Disseminate Information that is Useful to Policymakers: TWG members agreed that NCER research results should be relevant and presented in easy-to-read formats tailored for specific stakeholder audiences. To address this issue, IES can create easy-to-understand research syntheses, as education leaders do not have the time or training to comb through the results of individual studies. In addition, TWG members identified a need for research on what is needed for practitioners to translate research to practice, to support decision making, and to address barriers to implementation.
  • Attend to Equity in Grantmaking and Research Focus: TWG members were concerned about equity in grantmaking and broadening participation in the research process. One way to address this is to provide more structured technical assistance to ensure applicants new to IES funding develop competitive research proposals. Active outreach can also help to encourage experts most likely to address equity research questions to apply. TWG members also pointed out that interdisciplinary research teams could help unearth embedded inequities in data collection, measures, and models.

In addition to the ideas discussed above, TWG members suggested specific ideas for how NCER could support research that leads to policy and systems improvement. These are too numerous to include here, but they are described in full in the TWG summary report.


For questions about this blog or the TWG summary, please contact Corinne.Alfeld@ed.gov, NCER Program Officer for the Improving Education Systems topic.

Introducing NCES’s New Locale-Focused Resource Hub: Education Across America

NCES is excited to announce the release of a resource hub that focuses on data by geographic locale—Education Across America: Cities, Suburbs, Towns, and Rural Areas—using a three-phased approach. Released today, Phase I of this new resource hub consolidates locale-focused data across NCES surveys and programs and updates tables with the latest available data. The result of this work is 140 tables with data disaggregated by all four locales (i.e., cities, suburbs, towns, and rural areas). These tables cover a wide range of topics grouped into broad themes: family characteristics, educational experiences, school resources and staffing, and educational outcomes. Phases II and III will focus on rural areas and involve summarizing findings in text.

To make these data more relevant and useful, NCES adopted a pyramid approach[1] to attend to various user segments with tiered products (exhibit 1). Source tables containing data disaggregated by locale form the base of the pyramid. These tables, which contain the most detailed statistical information about education in each locale, target data-savvy users such as researchers.


Exhibit 1. Tiered Approach to Products in Education Across America Resource Hub

Infographic showing pyramid with five levels of NCES products; from bottom to top: source tables, indicators, thematic summaries, briefs, and digital media


The next level is indicators. These indicators, comprising text and figures, will supply in-depth analyses focused on rural areas. To make these indicators relevant and useful, a literature review and focus groups were conducted to identify the topics that are important to education in rural areas. The target audience for these indicators is those who are looking for comprehensive discussions of specific topics in rural education.

The middle level of the pyramid is thematic summaries. These summaries synthesize findings across multiple indicators grouped together by a theme. In addition to thematic summaries, we will create a spotlight that focuses on distant and remote rural areas because these areas are confronted with unique challenges and are of particular policy interest. These products target education leaders in higher education and at the state and local levels.

The next level of the pyramid is briefs, which include an executive summary of key findings about rural education and an at-a-glance resource that highlights important statistics about schools and students in rural areas. These products are designed as quick reads and target nontechnical audiences—such as state and local education leaders, associations, and policymakers—as well as individuals with an interest in education—such as educators and parents.

The final level of the pyramid is digital media, which includes blogs and social media posts that highlight key findings and resources available in the Education Across America resource hub. These products are designed to connect the media, parents, and educators with information on educational experiences across America.

Phase II involves the development of 5 to 10 indicators focused on the experiences of schools and students in rural areas and is expected to be completed in June 2022. Phase III—which is expected to be completed in October 2022—consists of the development of the remaining indicators as well as the products in the thematic summaries and briefs tiers.

Check out our locale-focused resource hub, Education Across America, today. Be sure to check back over the summer and fall to explore the hub as we release new products focusing on education in rural areas.

 

By Xiaolei Wang, Ph.D., NCES; and Jodi Vallaster, Ed.D., NCES


[1] Schwabish, J. (2019). "Use the 'Pyramid Philosophy' to Better Communicate Your Research." Urban Institute. https://www.urban.org/urban-wire/use-pyramid-philosophy-better-communicate-your-research; Scanlan, C. (2003). "Writing from the Top Down: Pros and Cons of the Inverted Pyramid." Poynter. https://www.poynter.org/reporting-editing/2003/writing-from-the-top-down-pros-and-cons-of-the-inverted-pyramid/

Do Underrepresented Students Benefit From Gifted Programs?

Recent studies of gifted and talented programs indicate that the extent and quality of services available to gifted students vary from state to state, district to district, and even from school to school within school districts. In a project titled “Are Gifted Programs Beneficial to Underserved Students?” (PI: William Darity, Duke University), IES-funded researchers are examining the variability of Black and Hispanic students’ access to gifted programs in North Carolina and the potential impact of participation in these gifted programs on Black and Hispanic student outcomes. In this interview blog, we asked co-PIs Malik Henfield and Kristen Stephens to discuss the motivation for their study and preliminary findings.

What motivated your team to study the outcomes of Black and Hispanic students in gifted programs?

The disproportionality between the representation of white students and students of color in gifted education programs is both persistent and pervasive. For decades, we’ve both been working with teachers and school counselors seeking to increase the number of students of color in gifted education programs, but what happens once these students are placed in these programs? We know very little about the educational, social, and emotional impact that participation (or non-participation) has on students. Gifted education programs are widely believed to provide the best educational opportunity for students, but given the impacts race and socioeconomic status have on student success factors, this may not be a sound assumption. In fact, there is negligible (and often contradictory) published research that explores whether gifted programs contribute to beneficial academic and social-emotional outcomes for the underserved students who participate in them. Resolving this question will have tremendous implications for future gifted education policies.

Please tell us about your study. What have you learned so far?

With funding from IES, researchers from Duke University and Loyola University Chicago are collaborating to describe how gifted education policies in North Carolina are interpreted, implemented, and monitored at the state, district, and school levels. We are also estimating how these policies are related to Black, Hispanic, and economically disadvantaged students’ academic and social-emotional outcomes. We hope our examination of individual student characteristics, sociocultural contexts, and environmental factors will help improve the ways school systems identify and serve gifted students from traditionally underrepresented groups.

Although preliminary, there are several interesting findings from our study. Our analysis of district-level gifted education plans highlights promising equity practices (for example, using local norms to determine gifted program eligibility) as well as potential equity inhibitors (for example, relying predominantly on teacher referral). Our secondary data analysis reveals that the majority of school districts do not have equitable representation of Black and Hispanic students in gifted programs. Disproportionality was calculated using the Relative Difference in Composition Index (RDCI). The RDCI represents the difference between a group's share of gifted program enrollment and its share of overall district enrollment, expressed as a discrepancy percentage.
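To make the RDCI concrete, here is a minimal sketch of one common formulation, in which the gap between a group's share of gifted seats and its share of district enrollment is expressed as a percentage of the district share. The function name and the enrollment figures are hypothetical illustrations, not data from the study.

```python
def rdci(group_gifted, total_gifted, group_district, total_district):
    """Relative Difference in Composition Index (one common formulation).

    Compares a group's composition in gifted programs with its
    composition across the district, expressed as a percentage of the
    district composition. Negative values indicate underrepresentation.
    """
    gifted_share = group_gifted / total_gifted        # group's share of gifted seats
    district_share = group_district / total_district  # group's share of enrollment
    return (gifted_share - district_share) / district_share * 100

# Hypothetical district: a group holds 20% of gifted seats
# but makes up 40% of total enrollment.
print(round(rdci(200, 1_000, 4_000, 10_000), 1))  # -50.0
```

Under this formulation, an RDCI of -50 means the group's representation in gifted programs is half of what proportional representation would predict.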

What’s Next?

In North Carolina, districts are allowed to interpret state policy and implement programs and support services in ways they deem appropriate. Our next step is to conduct an in-depth qualitative exploration of variations in policy within and across North Carolina school districts. In these forthcoming analyses, we will be looking only at youth identified as underserved along the racial/ethnic minority dimension. In each district, we plan to interview four distinct groups to better understand their greatest assets, needs, and challenges, and the resources they would find most valuable for facilitating successful academic and social-emotional outcomes: (1) high-achieving underserved students identified as gifted, (2) high-achieving underserved students not identified as gifted, (3) teachers, and (4) school counselors.

For example, we are interested in learning—

  • How educators interpret identification processes from policies
  • How educators perceive recruitment and retention processes and their role in them
  • How ethnic minority students identified as gifted perceive recruitment and retention processes
  • How ethnic minority students not selected for participation in gifted education programming perceive the recruitment process
  • How both student groups make sense of their racial identity

We will then combine what we learned from Studies 1-3 (using secondary data) with Study 4 (research in schools) and share the results with policymakers, educators, and the research community.

What advice would you like to share with other researchers who are studying access to gifted programs?

There are three recommendations we would like to share:

  • Investigate instructional interventions that impact short- and long-term academic and social-emotional outcomes for gifted students. The field of gifted education has spent significant time and resources attempting to determine the best methods for identifying gifted students across all racial/ethnic groups. Nonetheless, disparities in representation still exist, and this hyper-focus on identification has come at the expense of increasing our understanding of what types of interventions work, for whom, and under what conditions.
  • Conduct more localized research studies. Since gifted education programs are largely decentralized, there is considerable variance in how policies are created and implemented across states, districts, and schools. For example, eligibility criteria for participation in gifted programs can differ significantly across school systems. In North Carolina, "cut score" percentages on achievement and aptitude tests can range from the 85th to the 99th percentile. This makes it difficult to generalize research findings across contexts when participant samples aren't adequately comparable.
  • Extend beyond the identification question and consider both generalizability and transferability when designing the research methodology. For generalizability, this entails carefully selecting the sample population and the methods for developing causal models. For transferability, this means providing a detailed account of the ecosystem in which the research is taking place so that practitioners can see the utility of the findings and recommendations within their own contexts. Mixed methods studies would certainly help bridge the two.

 


Dr. Malik S. Henfield is a full professor and founding dean of the Institute for Racial Justice at Loyola University Chicago. His scholarship situates Black students' lived experiences in a broader ecological milieu to critically explore how their personal, social, academic, and career success is impeded and enhanced by school, family, and community contexts. His work to date has focused heavily on the experiences of Black students formally identified as gifted/high achieving.

Dr. Kristen R. Stephens is an associate professor of the Practice in the Program in Education at Duke University. She studies legal and policy issues related to gifted education at the federal, state, and local levels, particularly around how such policies contribute to beneficial academic, social-emotional, and behavioral outcomes for traditionally underserved gifted students.

This interview blog is part of a larger IES blog series on diversity, equity, inclusion, and accessibility (DEIA) in the education sciences. It was produced by Katina Stapleton (Katina.Stapleton@ed.gov), co-chair of the IES Diversity and Inclusion Council. For more information about the study, please contact the program officer, Corinne Alfeld (Corinne.Alfeld@ed.gov).