NCES Blog

National Center for Education Statistics

Data on the High School Coursetaking of American Indian and Alaska Native Students

Understanding the racial/ethnic equity of educational experiences is a vital objective. The National Assessment of Educational Progress (NAEP) High School Transcript Study (HSTS) collects and analyzes transcripts from a nationally representative sample of America’s public and private high school graduates, including information about the coursetaking of students by race/ethnicity.

In 2019, NCES collected and coded high school transcript data from graduates who participated in the grade 12 NAEP assessments. The participants included American Indian and Alaska Native (AI/AN) students as well as students from other racial/ethnic groups. The main HSTS 2019 results do not include AI/AN findings because the sample sizes for AI/AN students in earlier collection periods were too small to report NAEP performance linked to coursetaking measures. Therefore, this blog post serves to highlight available AI/AN data. Find more information about NAEP's race/ethnicity categories and trends.
 

About HSTS 2019

The 2019 collection is the eighth wave of the study, which was first conducted in 1987; the wave before 2019 was conducted in 2009. Data from 1990, 2000, 2009, and 2019—representing approximately decade-long spans—are discussed here. Data from HSTS cover prepandemic school years.
 

How many credits did AI/AN graduates earn?

Across all racial/ethnic groups, the average number of Carnegie credits earned by graduates in 2019 was higher than in 2009 and earlier decades (figure 1). AI/AN graduates earned 27.4 credits on average in 2019, an increase from 23.0 credits in 1990. However, AI/AN graduates earned fewer overall credits in 2019 than did Asian/Pacific Islander, Black, and White graduates, a pattern consistent with prior decades.


Figure 1. Average total Carnegie credits earned by high school graduates, by student race/ethnicity: Selected years, 1990 through 2019 


Horizontal bar chart showing average total Carnegie credits earned by high school graduates by student race/ethnicity in selected years from 1990 through 2019.

* Significantly different (p < .05) from American Indian/Alaska Native group in the given year.                                                              
+ Significantly different (p < .05) from 2019 within racial/ethnic group.                                                   
NOTE: Race categories exclude Hispanic origin. Black includes African American, Hispanic includes Latino, and Pacific Islander includes Native Hawaiian.                                                               
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP) High School Transcript Study, various years, 1990 to 2019.


In 2019, the smaller number of total credits earned by AI/AN graduates—compared with graduates in other racial/ethnic groups—was driven by the smaller number of academic credits earned. On average, AI/AN graduates earned about 1 to 3 fewer academic credits (19.3 credits) than graduates in other racial/ethnic groups (e.g., 22.2 for Asian/Pacific Islander graduates and 20.6 for Hispanic graduates) (figure 2). In contrast, AI/AN graduates earned more credits, or a similar number of credits, in career and technical education (CTE) (3.6 credits) and other courses (4.5 credits) compared with graduates in other racial/ethnic groups.


Figure 2. Average Carnegie credits earned by high school graduates in academic, career and technical education (CTE), and other courses, by student race/ethnicity: 2019


Horizontal bar chart showing average Carnegie credits earned by high school graduates in academic, career and technical education (CTE), and other courses by student race/ethnicity in 2019

* Significantly different (p < .05) from American Indian/Alaska Native group.                                                                            
NOTE: Race categories exclude Hispanic origin. Black includes African American, Hispanic includes Latino, and Pacific Islander includes Native Hawaiian.                                                                                                                                                            
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP) High School Transcript Study, 2019.         
  



What was the grade point average (GPA) of AI/AN graduates?

As with credits earned, GPA has been generally trending upward since 1990. AI/AN graduates had an average GPA of 2.54 in 1990 and an average GPA of 3.02 in 2019 (figure 3). Unlike with credits earned, however, the average GPA for AI/AN graduates in 2019 fell between the GPAs of graduates in other racial/ethnic groups: it was lower than the GPAs for Asian/Pacific Islander and White graduates and higher than the GPAs for Black and Hispanic graduates.


Figure 3. Average overall grade point average (GPA) earned by high school graduates, by student race/ethnicity: Selected years, 1990 through 2019


Horizontal bar chart showing average overall grade point average (GPA) earned by high school graduates by student race/ethnicity in selected years from 1990 through 2019.

* Significantly different (p < .05) from American Indian/Alaska Native group in the given year.                                            
+ Significantly different (p < .05) from 2019 within racial/ethnic group.                                                                                       
NOTE: Race categories exclude Hispanic origin. Black includes African American, Hispanic includes Latino, and Pacific Islander includes Native Hawaiian.                                                                                                                                                            
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP) High School Transcript Study, various years, 1990 to 2019.



What curriculum level did AI/AN graduates reach?

HSTS uses curriculum levels to measure the rigor of high school graduates’ coursework as a potential indicator of college preparedness. There are three curriculum levels: standard, midlevel, and rigorous. Students who did not meet the requirements for a standard curriculum are considered to have a “below standard” curriculum.

Reflecting the smaller numbers of academic credits earned by AI/AN graduates, as described above, a lower percentage of AI/AN graduates reached the rigorous level (the highest level): only 5 percent of AI/AN graduates had completed a rigorous curriculum in 2019, compared with 10 percent of Hispanic, 13 percent of White, and 28 percent of Asian/Pacific Islander graduates (table 1). Similarly, a lower percentage of AI/AN graduates completed a midlevel curriculum than did White, Black, or Hispanic graduates. At the standard and below-standard levels, therefore, AI/AN graduates were overrepresented relative to most other groups.


Table 1. Percentage distribution of high school graduates across earned curriculum levels, by student race/ethnicity: 2019

Table showing the percentage distribution of high school graduates across earned curriculum levels (below standard, standard, midlevel, and rigorous) by student race/ethnicity in 2019.

* Significantly different (p < .05) from American Indian/Alaska Native group.
NOTE: Details may not sum to totals due to rounding. A graduate who achieves the standard curriculum earned at least four Carnegie credits of English and three Carnegie credits each of social studies, mathematics, and science. A graduate who achieves a midlevel curriculum earned at least four Carnegie credits in English, three Carnegie credits in mathematics (including credits in algebra and geometry), three Carnegie credits in science (including credits in two of the three subjects of biology, chemistry, and physics), three Carnegie credits in social studies, and one Carnegie credit in world languages. A graduate who achieves a rigorous curriculum earned at least four Carnegie credits in English, four Carnegie credits in mathematics (including credits in precalculus or calculus), three Carnegie credits in science (including credits in all three subjects of biology, chemistry, and physics), three Carnegie credits in social studies, and three Carnegie credits in world languages. Graduates with curricula that do not meet the requirements for the standard level are considered “below standard.” Race categories exclude Hispanic origin. Black includes African American, Hispanic includes Latino, and Pacific Islander includes Native Hawaiian.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP) High School Transcript Study, 2019.
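
The requirements in the table note amount to a classification rule. As a rough illustration of how those definitions could be applied to a single transcript, here is a minimal Python sketch; the function name, the input structure, and the simplified subject counters are illustrative assumptions, not HSTS code.

```python
# Minimal sketch (not HSTS code) of the curriculum-level rules in the table
# note. Counts are Carnegie credits; the input structure is an assumption.

def curriculum_level(c):
    """Classify one graduate's transcript into an HSTS-style curriculum level.

    `c` maps subject areas to earned credits, plus two simplifying counters:
    `alg_geom_subjects` (0-2: how many of algebra/geometry have credits) and
    `bcp_subjects` (0-3: how many of biology/chemistry/physics have credits).
    """
    standard = (c["english"] >= 4 and c["math"] >= 3
                and c["science"] >= 3 and c["social_studies"] >= 3)

    midlevel = (standard
                and c["alg_geom_subjects"] == 2    # credits in both algebra and geometry
                and c["bcp_subjects"] >= 2         # two of biology, chemistry, physics
                and c["world_languages"] >= 1)

    rigorous = (c["english"] >= 4
                and c["math"] >= 4 and c["precalc_or_calc"] >= 1
                and c["science"] >= 3 and c["bcp_subjects"] == 3
                and c["social_studies"] >= 3
                and c["world_languages"] >= 3)

    if rigorous:
        return "rigorous"
    if midlevel:
        return "midlevel"
    if standard:
        return "standard"
    return "below standard"

# Example: 4 English, 3 math (algebra + geometry), 3 science (two of bio/chem/
# physics), 3 social studies, and 1 world language credit -> "midlevel".
example = {"english": 4, "math": 3, "alg_geom_subjects": 2, "precalc_or_calc": 0,
           "science": 3, "bcp_subjects": 2, "social_studies": 3, "world_languages": 1}
print(curriculum_level(example))
```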


Explore the HSTS 2019 website to learn more about the study, including how courses are classified, how grade point average is calculated, and how race/ethnicity categories have changed over time. Be sure to follow NCES on X, Facebook, LinkedIn, and YouTube and subscribe to the NCES News Flash to stay informed about future HSTS data and resources.

 

By Ben Dalton, RTI International, and Robert Perkins, Westat

Making Meaning Out of Statistics

By Dr. Peggy G. Carr, NCES Commissioner

The United States does not have a centralized statistical system like Canada or Sweden, but the federal statistical system we do have now speaks largely with one voice thanks to the Office of Management and Budget’s U.S. Chief Statistician, the Evidence Act of 2018, and proposed regulations to clearly integrate extensive and detailed OMB statistical policy directives into applications of the Act. The Evidence Act guides the work of the federal statistical system to help ensure that official federal statistics, like those we report here at NCES, are collected, analyzed, and reported in a way that the public can trust. The statistics we put out, such as the number and types of schools in the United States, are the building blocks upon which policymakers make policy, educators plan the future of schooling, researchers develop hypotheses about how education works, and parents and the public track the progress of the education system. They all need to know they can trust these statistics—that they are accurate and unbiased, and uninfluenced by political interests or the whims of the statistical methodologist producing the numbers. Through the Evidence Act and our work with colleagues in the federal statistical system, we’ve established guidelines and standards for what we can say, what we won’t say, and what we can’t say. And they help ensure that we do not drift into territory that is beyond our mission.

Given how much thought NCES and the federal statistical system more broadly have put into the way we talk about our statistics, a recent IES blog post, “Statistically Significant Doesn’t Mean Meaningful,” naturally piqued my interest. I thought back to a question on this very topic that I had on my Ph.D. qualifying statistical comprehensive essay exam. I still remember nailing the answer to that question all these years later. But it’s a tough one—the difference between “statistically significant” and “meaningful” findings—and it’s one that cuts to the heart of the role of statistical agencies in producing numbers that people can trust.

I want to talk about the blog post—the important issue it raises and the potential solution it proposes—as a way to illustrate key differences in how we, as a federal agency producing statistics for the public, approach statistics and how researchers sometimes approach statistics. Both are properly seeking information but often for very different purposes requiring different techniques. And I want to say I was particularly sympathetic to the issues raised in the blog post given my decades of background managing the National Assessment of Educational Progress (NAEP) and U.S. participation in major international assessments like the Program for International Student Assessment (PISA). In recent years, given NAEP’s large sample size, it is not unheard of for two estimates (e.g., average scores) to round to the same whole number and yet be statistically different. Or, in the case of U.S. PISA results, for scores to be 13 points apart yet not be statistically different. So, the problem the blog post raises is both long-standing and quite familiar to me.


The Problem   

Here’s the knotty problem the blog post raises: Sometimes, when NCES says there’s no statistically significant difference between two numbers, some people think we are saying there’s no difference between those two numbers at all. For example, on the 2022 NAEP, we estimated an average score of 212 for the Denver Public School District in grade 4 reading. That score for Denver in 2019 was 217. When we reported the 2022 results, we said that there was no statistically significant difference between Denver’s grade 4 reading scores in 2019 and 2022, even though the estimated scores in the two years were 5 points apart. This is because the Denver scores in 2019 and 2022 were estimates based on samples of students; we could not rule out that, had we assessed every single Denver fourth-grader in both years, we would have found the scores to be, say, 212 in both years. NAEP assessments are like polls: there is uncertainty (a margin of error) around the results. Saying that there was no statistically significant difference between two estimates is not the same as saying that there definitely was no difference. We’re simply saying we don’t have enough evidence to say for sure (or nearly sure) there was a difference.
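
To make the margin-of-error point concrete, here is a minimal sketch of the kind of calculation behind such a statement; the Denver scores come from the example above, but the standard errors are hypothetical values chosen for illustration, not the actual NAEP standard errors.

```python
# Minimal sketch of the kind of test behind a "no statistically significant
# difference" statement. The 212 and 217 come from the post; the standard
# errors are hypothetical, chosen only to show how a 5-point gap can fail to
# clear the p < .05 bar.
from math import sqrt, erf

score_2022, se_2022 = 212.0, 2.5   # assumed standard error, illustration only
score_2019, se_2019 = 217.0, 2.5   # assumed standard error, illustration only

diff = score_2022 - score_2019
se_diff = sqrt(se_2022**2 + se_2019**2)                 # SE of the difference
z = diff / se_diff
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p

print(f"difference = {diff:.1f}, z = {z:.2f}, p = {p_value:.3f}")
# With these assumed SEs, p is about 0.16, so the 5-point drop would not be
# reported as statistically significant even though the estimates differ.
```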

Making these kinds of uncertain results clear to the public can be very difficult, and I applaud IES for raising the issue and proposing a solution. Unfortunately, the proposed solution—a “Bayesian” approach that “borrows” data from one state to estimate scores for another and that relies more than we are comfortable with, as a government statistical agency, on the judgment of the statistician running the analysis—can hurt more than help.


Two Big Concerns With a Bayesian Approach for Releasing NAEP Results


Big Concern #1: It “borrows” information across jurisdictions, grades, and subjects.

Big Concern #2: The statistical agency decides the threshold for what’s “meaningful.”

Let me say more about the two big concerns I have about the Bayesian approach proposed in the IES blog post for releasing NAEP results. And, before going into these concerns, I want to emphasize that these are concerns specifically with using this approach to release NAEP results. The statistical theory on which Bayesian methods are based is central to our estimation procedures for NAEP. And you’ll see later that we believe there are times when the Bayesian approach is the right statistical approach for releasing results.


Big Concern #1: The Proposed Approach Borrows Information Across Jurisdictions, Grades, and Subjects

The Bayesian approach proposed in the IES blog post uses data on student achievement in one state to estimate performance in another, performance at grade 8 to estimate performance at grade 4, and performance in mathematics to estimate performance in reading. The approach uses the fact that changes in scores across states often correlate highly with each other. Certainly, when COVID disrupted schooling across the nation, we saw declines in student achievement across the states. In other words, we saw apparent correlations. The Bayesian approach starts from an assumption that states’ changes in achievement correlate with each other and uses that to predict the likelihood that the average score for an individual state or district has increased or decreased. It can do the same thing with correlations in changes in achievement across subjects and across grade levels—which also often correlate highly. This is a very clever approach for research purposes.
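
To make the borrowing concrete, here is a minimal sketch, under assumed numbers, of a normal-normal update in which a district’s direct estimate is pulled toward a prediction based on correlated jurisdictions; the function and the values are illustrative and are not the model proposed in the IES blog post.

```python
# Minimal sketch of the "borrowing" idea: a normal-normal Bayesian update in
# which one district's direct estimate is shrunk toward a prediction built
# from correlated jurisdictions. All numbers are made up for illustration;
# this is not the model proposed in the IES blog post.

def posterior(direct_est, direct_se, prior_mean, prior_sd):
    """Combine a direct estimate with a prior predicted from other units."""
    w = prior_sd**2 / (prior_sd**2 + direct_se**2)   # weight on the direct estimate
    mean = w * direct_est + (1 - w) * prior_mean
    sd = (prior_sd**2 * direct_se**2 / (prior_sd**2 + direct_se**2)) ** 0.5
    return mean, sd

# Hypothetical numbers: a district's own data suggest its score changed by
# -5 points (SE 3), while the pattern across other states, grades, and
# subjects predicts a change of -2 points (SD 2).
mean, sd = posterior(direct_est=-5.0, direct_se=3.0, prior_mean=-2.0, prior_sd=2.0)
print(f"posterior change: {mean:.1f} points (sd {sd:.1f})")
# The estimate is pulled toward the -2 predicted from other jurisdictions,
# which is exactly the cross-unit influence discussed above.
```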

However, it is not an approach that official statistics, especially NAEP results, should be built upon. In a country where curricular decisions are made at the local level and reforms are targeted at specific grade levels and in specific subjects, letting grade 8 mathematics achievement in, say, Houston influence what we report for grade 4 reading in, say, Denver, would be very suspect. Also, if we used Houston results to estimate Denver results, or math results to estimate reading results, or grade 8 results to estimate grade 4 results, we might also miss out on chances of detecting interesting differences in results.


Big Concern #2: The Bayesian Approach Puts the Statistical Agency in the Position of Deciding What’s “Meaningful”

A second big concern is the extent to which the proposed Bayesian approach would require the statisticians at NCES to set a threshold for what would be considered a “meaningful” difference. In this method, the statistician sets that threshold and then the statistical model reports out the probability that a reported difference is bigger or smaller than that threshold. As an example, the blog post suggests 3 NAEP scale score points as a “meaningful” change and presents this value as grounded in hard data. But in reality, the definition of a “meaningful” difference is a judgment call. And making the judgment is messy. The IES blog post concedes that this is a major flaw, even as it endorses broad application of these methods: “Here's a challenge: We all know how the p<.05 threshold leads to ‘p-hacking’; how can we spot and avoid Bayesian bouts of ‘threshold hacking,’ where different stakeholders argue for different thresholds that suit their interests?”

That’s exactly the pitfall to avoid! We certainly do our best to tell our audiences, from lay people to fellow statisticians, what the results “mean.” But we do not tell our stakeholders whether changes or differences in scores are large enough to be deemed "meaningful," as this depends on the context and the particular usage of the results.

This is not to say that we statisticians don’t use judgment in our work. In fact, the “p<.05” threshold for statistical significance, which is the main issue the IES blog post takes with the reporting of NAEP results, is itself a judgment. But it’s a judgment that has been widely established across the statistics and research worlds for decades and is built into the statistical standards of NCES and many other federal statistical agencies. And it’s a judgment specific to statistics: It’s meant to help account for margins of error when investigating if there is a difference at all—not a judgment about whether the difference exceeds a threshold to count as “meaningful.” By using this widely established standard, readers don’t have to wonder, “is NAEP setting its own standards?” or, perhaps more important, “is NAEP telling us, the public, what is meaningful?” Should the “p<.05” standard be revisited? Maybe. As I note below, this is a question that is often asked in the statistical community. Should NCES and NAEP go on their own and tell our readers what is a meaningful result? No. That’s for our readers to decide.
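
To see why the choice of threshold carries so much weight, here is a minimal sketch, under assumed posterior values, of how the reported probability of a “meaningful” decline shifts as the threshold moves.

```python
# Minimal sketch of why the threshold choice matters. Given a posterior for a
# score change (hypothetical numbers, not NAEP results), the probability of a
# "meaningful" decline depends directly on where the analyst sets the bar.
from math import sqrt, erf

post_mean, post_sd = -2.9, 1.7    # illustrative posterior for a score change

def prob_decline_beyond(threshold):
    """P(change < -threshold) under a normal posterior."""
    z = (-threshold - post_mean) / post_sd
    return 0.5 * (1 + erf(z / sqrt(2)))

for threshold in (1, 3, 5):
    print(f"P(decline of more than {threshold} points) = {prob_decline_beyond(threshold):.2f}")
# Roughly 0.87, 0.48, and 0.11: a 1-point bar makes a "meaningful" decline
# look nearly certain, a 5-point bar makes it look unlikely. The headline
# depends on a judgment call that the statistical agency would have to make.
```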


What Does the Statistical Community Have to Say?

The largest community of statistical experts in the United States—the American Statistical Association (ASA)—has a lot to say on this topic. In recent years, it grappled with the p-value dilemma and put out a statement in 2016 that described misuses of tests of statistical significance. An editorial that later appeared in The American Statistician (an ASA journal) even recommended eliminating the use of statistical significance and the so-called “p-values” on which it is based. As you might imagine, there was considerable debate in the statistical and research community as a result. So in 2019, the president of the ASA convened a task force, which clarified that the editorial was not an official ASA policy. The task force concluded: “P-values are valid statistical measures that provide convenient conventions for communicating the uncertainty inherent in quantitative results. . . . Much of the controversy surrounding statistical significance can be dispelled through a better appreciation of uncertainty, variability, multiplicity, and replicability.”

In other words: Don't throw the baby out with the bathwater!


So, When Should NCES Use a Bayesian Approach?

Although I have been arguing against the use of a Bayesian approach for the release of official NAEP results, there’s much to say for Bayesian approaches when you need them. As the IES blog post notes, the Census Bureau uses a Bayesian method in estimating statistics for small geographic areas where they do not have enough data to make a more direct estimation. NCES has also used similar Bayesian methods for many years, where appropriate. For example, we have used Bayesian approaches to estimate adult literacy rates for small geographic areas for 20 years, dating back to the National Assessment of Adult Literacy (NAAL) of 2003. We use them today in our “small area estimates” of workplace skill levels in U.S. states and counties from the Program for the International Assessment of Adult Competencies (PIAAC). And when we do, we make it abundantly clear that these are indirect, heavily model-dependent estimates.

In other words, the Bayesian approach is a valuable tool in the toolbox of a statistical agency. However, is it the right tool for producing official statistics, where samples, by design, meet the reporting standards for producing direct estimates? The short answer is “no.”


Conclusion

Clearly and accurately reporting official statistics can be a challenge, and we are always looking for new approaches that can help our stakeholders better understand all the data we collect. I began this blog post by noting the role of the federal statistical system and our adherence to high standards of objectivity and transparency, as well as our efforts to express our sometimes-complicated statistical findings as accurately and clearly as we can. IES has recently published another blog post describing some great use cases for Bayesian approaches, as well as methodological advances funded by our sister center, the National Center for Education Research. But the key point I took away from that post was that the Bayesian approach is great for research purposes, where we expect the researcher to make lots of assumptions (and other researchers to challenge them). That’s research, not official statistics, where we must stress clarity, accuracy, objectivity, and transparency.

I will end with a modest proposal. Let NCES stick to reporting statistics, including NAEP results, and leave questions about what is meaningful to readers . . . to the readers!

NCES Presentation at National HBCU Week Conference

In NCES’s recently released Strategic Plan, Goal 3 identifies our commitment to foster and leverage beneficial partnerships. To fulfill that goal, NCES participates in multiple conferences and meetings throughout the year. Recently, NCES participated in the National Historically Black Colleges and Universities (HBCU) Week Conference. NCES’s presentation at this conference helped us establish a dialogue with HBCUs and develop partnerships to address critical issues in education.

NCES Commissioner Peggy G. Carr kicked off the presentation with an overview of HBCU data—such as student characteristics, enrollment, and financial aid. Then, NCES experts explored how data from various NCES surveys can help researchers, educators, and policymakers better understand the condition and progress of HBCUs. Read on to learn about these surveys.

 

Integrated Postsecondary Education Data System (IPEDS)

The Integrated Postsecondary Education Data System (IPEDS) is an annual administrative data collection that gathers information from more than 6,000 postsecondary institutions, including 99 degree-granting, Title IV–eligible HBCUs (in the 2021–22 academic year).

The data collected in IPEDS include information on institutional characteristics and resources; admissions and completions; student enrollment; student financial aid; and human resources (i.e., staff characteristics). These data are disaggregated, offering insights into student and employee demographics by race/ethnicity and gender, students’ age categories, first-time/non-first-time enrollment statuses, and full-time/part-time attendance intensity.

Data from IPEDS can be explored using various data tools—such as Data Explorer, Trend Generator, and College Navigator—that cater to users with varying levels of data knowledge and varying data needs.

 

National Postsecondary Student Aid Study (NPSAS)

The National Postsecondary Student Aid Study (NPSAS) is a nationally representative study that examines the characteristics of students in postsecondary institutions—including HBCUs—with a special focus on how they finance their education. NPSAS collects data on the percentage of HBCU students receiving financial aid and the average amounts received from various sources (i.e., federal, state, and institution) by gender and race/ethnicity.

Conducted every 3 or 4 years, this study combines data from student surveys, student-level school records, and other administrative sources and is designed to describe the federal government’s investment in financing students’ postsecondary education.

Data from NPSAS can be explored using DataLab and PowerStats.

 

National Teacher and Principal Survey (NTPS)

The National Teacher and Principal Survey (NTPS) is the U.S. Department of Education’s primary source of information on K–12 public and private schools from the perspectives of teachers and administrators. NTPS consists of coordinated surveys of schools, principals, and teachers and includes follow-up surveys to study principal and teacher attrition.

Among many other topics, NTPS collects data on the race/ethnicity of teachers and principals. These data—which show that Black teachers and principals make up a relatively small portion of the K–12 workforce—can be used to explore the demographics and experiences of teachers and principals. NTPS provides postsecondary institutions, such as HBCUs, with a snapshot of the preK–12 experiences of students and staff.

Data from NTPS can be explored using DataLab and PowerStats.

 

National Assessment of Educational Progress (NAEP)

The National Assessment of Educational Progress (NAEP)—also known as the Nation’s Report Card—is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and are able to do in various subjects.

Main NAEP assesses students in grades 4, 8, and 12 in subjects like reading, mathematics, science, and civics, while NAEP Long-Term Trend assesses 9-, 13-, and 17-year-olds in reading and mathematics.

Among many other topics, NAEP collects data on students by race/ethnicity. These data can help to shed light on students’ experiences, academic performance, and level of preparedness before they enroll in HBCUs.

Data from NAEP can be explored using the NAEP Data Explorer.

 

To explore more HBCU data from these and other NCES surveys—including enrollment trends from 1976 to 2021—check out this annually updated Fast Fact. Be sure to follow NCES on X, Facebook, LinkedIn, and YouTube and subscribe to the NCES News Flash to stay up to date on the latest from NCES.

 

By Megan Barnett, AIR

Access NCES-Led Sessions From the 2022 American Educational Research Association (AERA) Annual Meeting

In April, several NCES experts presented at the AERA 2022 Annual Meeting, a 6-day event focused on the theme of “Cultivating Equitable Education Systems for the 21st Century.” Access their presentations below to learn more about their research.

Be sure to follow NCES on Twitter, Facebook, and LinkedIn to stay up to date on NCES presentations at upcoming conferences and events.

 

By Megan Barnett, AIR

NCES Activities Dedicated to Understanding the Condition of Education During the Coronavirus Pandemic

The emergence of the coronavirus pandemic 2 years ago shifted not only how students received educational services around the world but also how the National Center for Education Statistics (NCES) carried out its mission, which is to collect, analyze, and report statistics on the condition of education in the United States.

NCES has conducted several surveys to measure educational enrollment, experiences, and outcomes as part of existing data collections and has created new, innovative, and timely data initiatives. NCES is currently fielding more than 15 projects with information related to the pandemic. Since early 2020, NCES has collected information about the educational experiences of students from elementary school through postsecondary institutions. A few of the data collections will extend beyond 2022, providing rich data resources that will document changes in the educational landscape throughout the lifecycle of the pandemic.


NCES Coronavirus Pandemic Data Collection Coverage


To respond to the call for information about how students learned during widespread school disruptions, NCES modified existing data collections and created new ones to receive and report vital information in unprecedented ways. Below are summaries of some of the data products available.

Looking ahead, NCES will provide NAEP data on how student performance has changed in various subjects since the coronavirus pandemic began. NCES will also collect and report information about learning contexts, which are critical for understanding educational outcomes. NCES will also develop a new system to share pandemic-related data collected across the center.

All of these resources are currently available or will be available on the NCES website.

 

By Ebony Walton and Josh DeLaRosa, NCES