NCES Blog

National Center for Education Statistics

Knock, Knock! Who’s There? Understanding Who’s Counted in IPEDS

The Integrated Postsecondary Education Data System (IPEDS) is a comprehensive federal data source that collects information on key features of higher education in the United States, including characteristics of postsecondary institutions, college student enrollment and academic outcomes, and institutions’ employees and finances, among other topics.

The National Center for Education Statistics (NCES) has created a new resource page, Student Cohorts and Subgroups in IPEDS, that provides data reporters and users an overview of how IPEDS collects information related to postsecondary students and staff. This blog post highlights key takeaways from the resource page.

IPEDS survey components collect counts of key student and staff subgroups of interest to the higher education community.

Data users—including researchers, policy analysts, and prospective college students—may be interested in particular demographic groups within U.S. higher education. IPEDS captures data on a range of student and staff subgroups, including race/ethnicity, gender, age categories, Federal Pell Grant recipient status, transfer-in status, and part-time enrollment status.

The Outcome Measures (OM) survey component stands out as an example of how IPEDS collects student subgroups that are of interest to the higher education community. Within this survey component, all entering degree/certificate-seeking undergraduates are assigned to one of eight subgroups defined by entering status (i.e., first-time or non-first-time), attendance status (i.e., full-time or part-time), and Pell Grant recipient status (i.e., recipient or non-recipient): two options on each of three dimensions yield 2 × 2 × 2 = 8 subgroups.

Although IPEDS is not a student-level data system, many of its survey components collect counts of students and staff by subgroup.

Many IPEDS survey components—such as Admissions, Fall Enrollment, and Human Resources—collect data as counts of individuals (i.e., students or staff) by subgroup (e.g., race/ethnicity, gender) (exhibit 1). Other IPEDS survey components—such as Graduation Rates, Graduation Rates 200%, and Outcome Measures—also include selected student subgroups but monitor cohorts of entering degree/certificate-seeking students over time to document their long-term completion and enrollment outcomes. A cohort is a specific group of students established for tracking purposes. The cohort year is based on the year that a cohort of students begins attending college.


Exhibit 1. IPEDS survey components that collect counts of individuals by subgroup

Table showing IPEDS survey components that collect counts of individuals by subgroup; column one shows the unit of information (student counts vs. staff counts); column two shows the survey component


IPEDS collects student and staff counts by combinations of interacting subgroups.

For survey components that collect student or staff counts, individuals are often reported in disaggregated demographic groups, which allows for more detailed understanding of specific subpopulations. For example, the Fall Enrollment (EF) and 12-month Enrollment (E12) survey components collect total undergraduate enrollment counts disaggregated by all possible combinations of students’ full- or part-time status, gender, degree/certificate-seeking status, and race/ethnicity. Exhibit 2 provides an excerpt of the EF survey component’s primary data collection screen (Part A), in which data reporters provide counts of students who fall within each demographic group indicated by the blank cells.
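
To make that structure concrete, the short Python sketch below enumerates cells like those on the EF screen. The category labels are simplified stand-ins for illustration only, not the official IPEDS reporting categories or variable names.

    # Illustration only: simplified stand-ins for the official IPEDS categories.
    from itertools import product

    attendance = ["full-time", "part-time"]
    gender = ["men", "women"]
    seeking = ["first-time", "transfer-in", "continuing", "non-degree/certificate-seeking"]
    race_ethnicity = ["Hispanic/Latino", "White", "Black or African American"]  # abbreviated list

    cells = list(product(attendance, gender, seeking, race_ethnicity))
    print(len(cells))  # 48 cells in this simplified example; one count is reported per cell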


Exhibit 2. Excerpt of IPEDS Fall Enrollment (EF) survey component data collection screen for full-time undergraduate men: 2022–23


Image of IPEDS Fall Enrollment survey component data collection screen for full-time undergraduate men in 2022–23

NOTE: This exhibit reflects the primary data collection screen (Part A) for the 2022–23 Fall Enrollment (EF) survey component for full-time undergraduate men. This screen is duplicated three more times for undergraduate students, once each for part-time men, full-time women, and part-time women. For survey materials for all 12 IPEDS survey components, including complete data collection forms and detailed reporting instructions, visit the IPEDS Survey Materials website.


As IPEDS does not collect data at the individual student level, these combinations of interacting subgroups are the smallest unit of information available in IPEDS. However, data users may wish to aggregate these smaller subgroups to arrive at larger groups that reflect broader populations of interest.

For example, using the information presented in exhibit 2, a data user could sum all the values highlighted in the green column to arrive at the total enrollment count of full-time, first-time men. As another example, a data user could sum all the values highlighted in the blue row to determine the total enrollment count of full-time Hispanic/Latino men. Note, however, that many IPEDS data products provide precalculated aggregated values (e.g., total undergraduate enrollment), but data are collected at these smaller units of information (i.e., disaggregated subgroup categories).
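
As a rough illustration of these two aggregations, the sketch below mimics the structure of exhibit 2 with made-up counts and simplified column names (not actual IPEDS variables).

    # Hypothetical counts for full-time men, mimicking the layout of exhibit 2.
    import pandas as pd

    ef = pd.DataFrame({
        "race_ethnicity": ["Hispanic/Latino", "Hispanic/Latino", "White", "White"],
        "first_time":     [True, False, True, False],
        "count":          [120, 45, 310, 90],
    })

    # "Green column" aggregation: total enrollment of full-time, first-time men
    print(ef.loc[ef["first_time"], "count"].sum())  # 430

    # "Blue row" aggregation: total enrollment of full-time Hispanic/Latino men
    print(ef.loc[ef["race_ethnicity"] == "Hispanic/Latino", "count"].sum())  # 165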

Student enrollment counts and cohorts align across IPEDS survey components.

There are several instances when student enrollment or cohort counts reported in one survey component should match or very closely mirror those same counts reported in another survey component. For example, the number of first-time degree/certificate-seeking undergraduate students in a particular fall term should be consistently reported in the Admissions (ADM) and Fall Enrollment (EF) survey components within the same data collection year (see letter A in exhibit 3).


Exhibit 3. Alignment of enrollment counts and cohorts across IPEDS survey components

Infographic showing the alignment of enrollment counts and cohorts across IPEDS survey components


For a full explanation of the alignment of student counts and cohorts across IPEDS survey components (letters A to H in exhibit 3), visit the Student Cohorts and Subgroups in IPEDS resource page.

Be sure to follow NCES on Twitter, Facebook, LinkedIn, and YouTube, follow IPEDS on Twitter, and subscribe to the NCES News Flash to stay up-to-date on IPEDS data releases and resources.

 

By Katie Hyland and Roman Ruiz, AIR

Women’s Equality Day: The Gender Wage Gap Continues

Today, on Women’s Equality Day, we honor the many women who fuel the education sector with their dedication to our nation’s students! But, let’s also remember the many ways women are still striving to overcome inequalities in the workplace.

Women made up the majority of public school teachers (77 percent) and public school principals (54 percent) in 2017–18. Yet even while holding the majority of these positions, women were paid significantly less than their male counterparts.

Background Demographics

Women made up higher proportions of public school teachers and principals in 2017–18 than they did 30 years earlier. According to data from the 1987–88 Schools and Staffing Survey (SASS), 71 percent of all public school teachers were women. By 2017–18, data from the National Teacher and Principal Survey (NTPS) showed the rate had increased to 77 percent. The percentage of female public school principals more than doubled during the same period, from 25 percent in 1987–88 to 54 percent in 2017–18.

Historically, U.S. school buildings weren’t heavily populated by women. Nearly all teachers were men before “Common Schools”—the precursor to today’s public school system—were introduced in the late 1820s. As the education landscape shifted, so did the composition of the teaching workforce. By the 1890s, more than two-thirds (68 percent) of all public school teachers were women.1 

New Depression-era laws in the 1930s—which limited the number of adults in a family who were allowed to work in certain occupations—made it more difficult for married women to stay in the workforce, since a husband often earned more than his wife, even in the same position. Because female public school teachers were among the most visible examples of married working women at the local level, married women in education became direct targets of employment discrimination.2 Consequently, the percentage of female teachers dropped from 81 percent in 1930 to 76 percent in 1940.3 The percentage continued to fluctuate as laws were made more equitable and more diverse jobs became available to women, although women have always represented more than 50 percent of the teacher workforce.

The Gender Wage Gap: Teachers and Principals (2017–18 NTPS)

According to Bureau of Labor Statistics data, women are paid less than men in nearly all occupations. While the gap for public elementary and secondary teachers is smaller than the average, it still exists.  

History tells us that the gender wage gap in elementary and secondary education wasn’t accidental. In fact, it was specifically created to expand the reach of the public education system by Common School reformers who argued that the United States could afford to staff the proposed new schools by adding more female teachers, since schools could pay them less than male teachers.4

Patterns in teacher compensation from the 2017–18 school year show that the average base teaching salary of female public school teachers was less than that of their male counterparts ($55,490 vs. $57,453).5 Similarly, female public school principals had a lower average salary in 2017–18 than did male principals ($96,300 vs. $100,600).
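
To put those gaps in perspective, here is some quick arithmetic on the averages cited above (an illustrative back-of-the-envelope calculation, not an official NCES estimate):

    def gap(female_avg, male_avg):
        """Return the dollar gap and the gap as a percentage of the male average."""
        dollars = male_avg - female_avg
        return dollars, 100 * dollars / male_avg

    print(gap(55490, 57453))   # teachers:   (1963, ~3.4 percent less than men)
    print(gap(96300, 100600))  # principals: (4300, ~4.3 percent less than men)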

How does average annual salary vary based on teacher, principal, or school characteristics? (2017–18 NTPS)

Public school teacher and principal salaries are known to vary by several individual- or school-related characteristics (see figures 1 and 2).

For instance, there are fluctuations in teachers’ and principals’ average annual salary by age, years of experience, and highest degree earned. Salary increases often follow a predictable pattern: older, more experienced, or more highly educated teachers and principals generally earn higher salaries than their younger, less experienced, or less educated counterparts.

Educators are also paid differently based on where they work. Certain school characteristics, such as community type, school level, and school size, can influence teachers’ and principals’ average salaries. In 2017–18, the educators with the highest average annual salaries worked in suburban schools, high schools, or large schools with more than 1,000 enrolled students.


Figure 1. Average annual base teaching salary of regular, full-time public school teachers, by selected school or teacher characteristics: 2017–18


Horizontal bar chart showing average annual base teaching salary of regular, full-time public school teachers, by selected school or teacher characteristics (community type, school level, student enrollment, years of experience, and highest degree earned) in 2017–18


Figure 2. Average annual salary of public school principals, by selected school or principal characteristics: 2017–18


Horizontal bar chart showing average annual salary of public school principals, by selected school or principal characteristics (community type, school level, student enrollment, years of experience, and highest degree earned) in 2017–18


Do teacher, principal, or school characteristics close the gender wage gap? (2017–18 NTPS)

We know that gender differences in average salary can be correlated with other related factors. For example, public primary schools have higher percentages of female teachers (89 percent) and female principals (67 percent) than public middle or high schools do. Notably, figures 1 and 2 show that primary school educators earn less on average than their counterparts in middle or high schools. But these other related factors don’t entirely explain the male-female wage gap.

Teachers

Comparing male and female public school teachers who have the same characteristics can, in some situations, narrow the wage gap (see figure 3).


Figure 3. Average base teaching salary of regular, full-time public school teachers, by sex and selected school and teacher characteristics: 2017–18


Line graph showing average base teaching salary of regular, full-time public school teachers, by sex and selected school and teacher characteristics (years of experience, highest degree earned, community type, school level, and student enrollment) in 2017–18


Among teachers who have the same years of experience, salaries among newer teachers are more similar than among more experienced teachers. There is no significant difference in the average base teaching salary between male and female teachers with less than 4 years or 4–9 years of experience. However, the wage gap remains for the most experienced teachers. Female teachers with 10–14 years or 15 or more years of experience had lower average salaries ($56,990 and $66,600, respectively) than their male counterparts with the same amount of experience ($58,300 and $69,100, respectively).

Similarly, female teachers whose highest degree is a bachelor’s degree or less, or whose highest degree is a master’s degree, earn less on average per year ($49,600 and $62,700, respectively) than male teachers with the same amount of education ($52,300 and $64,300, respectively).6 There is no significant difference between the average salaries of male and female teachers who have a degree higher than a master’s degree.

When looking at the data by key school characteristics, the wage gap also shrinks for at least some teachers. As discussed before, average base teaching salaries vary by school level and by school size. When comparing male and female teachers at the same school level, female primary school teachers earn less ($56,800) than male primary school teachers ($59,000). But there is no significant difference in average salaries between male and female middle and high school teachers, nor between male and female teachers who work at the same size schools.  

However, gender differences in average base teaching salary remain when school location is the same. In all four community types, female teachers have lower average salaries than their male colleagues: $62,300 vs. $64,400 in suburbs, $59,000 vs. $60,800 in cities, $50,200 vs. $52,600 in rural areas, and $50,100 vs. $52,000 in towns.

Principals

Female principals consistently have lower average annual salaries than male principals, even when controlling for other related factors (see figure 4).


Figure 4. Average annual salary of public school principals, by sex and selected school and principal characteristics: 2017–18


Line graph showing average annual salary of public school principals, by sex and selected school and teacher characteristics (years of experience, highest degree earned, community type, school level, and student enrollment) in 2017–18


Both the newest and the most experienced female principals are paid significantly less on average than their male peers with the same amount of experience. Similarly, when considering highest degree earned, the data show that female principals are consistently paid less on average than male principals. For example, female principals with a doctorate or first professional degree are paid less on average than male principals with the same education ($102,800 vs. $111,900).

For the most part, principal salaries by gender also remain significantly different when accounting for school characteristics. For example, when considering school location, the data show that female principals have lower average salaries than their male colleagues in all four community types: $105,200 vs. $112,700 in suburbs, $101,400 vs. $106,000 in cities, $85,800 vs. $92,000 in towns, and $82,200 vs. $87,500 in rural areas.

Although there is a lot more to learn about the complex levers that guide educator salaries, the data show that the male-female wage gap is still affecting female educators in many situations.  

Because of the NTPS, researchers, policymakers, and other decisionmakers can continue to analyze relationships that may influence the gender salary gap, including state-by-state differences, turnover rates, self-rated evaluation and job satisfaction scales, and data on the self-reported amount of influence an educator has over various school or classroom decisions. Results from the 2020–21 NTPS will be released in fall 2022 and will include information on the impact of the coronavirus pandemic on public and private schools. Whether the gender wage gap changed over the last two school years remains to be seen.

Be sure to follow NCES on Twitter, Facebook, LinkedIn, and YouTube and subscribe to the NCES News Flash to receive notifications when these new data are released.

 

Facts and figures in this blog come from the National Teacher and Principal Survey (NTPS) and its predecessor, the Schools and Staffing Survey (SASS). The NTPS is a primary source of information about K–12 schools and educators across the United States and a great resource for understanding the characteristics and experiences of public and private school teachers and principals.

 

By Julia Merlin, NCES


[1] The Fifty-Second Congress. (1893). The executive documents of the House of Representatives for the second session of the Fifty-second Congress (Vol. 1). Washington, DC: U.S. Government Printing Office. 

[2] Blackwelder, J.K. (1998). Women of the Depression: Caste and Culture in San Antonio, 1929–1939. Texas A&M University Press.

[3] Adams, K.H., and Kenne, M.L. (2015). Women, Art, and the New Deal. McFarland.

[4] Kaestle, C. F. (1983). Pillars of the Republic: Common Schools and American Society: 1780–1860. Macmillan.

[5] For the purpose of this blog post, only regular, full-time teachers are included in any salary calculations. A regular full-time teacher is any teacher whose primary position in a school is not an itinerant teacher, a long-term substitute, a short-term substitute, a student teacher, a teacher aide, an administrator, a library media specialist or librarian, another type of professional staff (e.g., counselor, curriculum coordinator, social worker) or support staff (e.g., secretary), or a part-time teacher. For average base salary, teachers who reported zero are excluded from analysis. Summer earnings are not included.

[6] Notably, most teachers have earned a bachelor’s (39 percent) or a master’s (49 percent) degree as their highest level of education. The percentage distribution of teachers whose highest degree earned is a bachelor’s or a master’s degree does not meaningfully differ by gender.

What Do NCES Data Tell Us About America’s Kindergartners?

Happy Get Ready for Kindergarten Month! 

For more than 20 years, the National Center for Education Statistics (NCES) has been collecting information about kindergartners’ knowledge and skills as part of the Early Childhood Longitudinal Studies (ECLS) program.

The first ECLS, the Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K), focused on children in kindergarten in the 1998–99 school year. At the time the ECLS-K began, no large national study focused on education had followed a cohort of children from kindergarten entry through the elementary school years. Some of today’s commonly known information about young children, such as the information about kindergartners’ early social and academic skills shown in the infographics below, comes out of the ECLS-K. For example, we all know that children arrive at kindergarten with varied knowledge and skills; the ECLS-K was the first study to show at a national level that this was the case and to provide the statistics to highlight the differences in children’s knowledge and skills by various background factors.



The second ECLS kindergarten cohort study, the Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011), is the ECLS-K’s sister study. This study followed students who were in kindergarten during the 2010–11 school year. The ECLS-K:2011, which began more than a decade after the inception of the ECLS-K, allows for comparisons of children in two nationally representative kindergarten classes experiencing different policy, educational, and demographic environments. For example, significant changes that occurred between the start of the ECLS-K and the start of the ECLS-K:2011 include the passage of the No Child Left Behind legislation, a rise in school choice, and an increase in the number of English language learners.

From the parents of children in the ECLS-K:2011, we learned how much U.S. kindergartners like school, as shown in the following infographic.


Infographic titled How do kindergartners like school?


The ECLS program studies also provide information on children’s home learning environments and experiences outside of school that may contribute to learning. For example, we learned from the ECLS-K:2011 what types of activities kindergartners were doing with their parents at least once a month (see the infographic below).


What’s next for ECLS data collections on kindergartners? NCES is excited to be getting ready to launch our next ECLS kindergarten cohort study, the Early Childhood Longitudinal Study, Kindergarten Class of 2023–24 (ECLS-K:2024).

Before the ECLS-K:2024 national data collections can occur, the ECLS will conduct a field test—a trial run of the study to test the study instruments and procedures—in the fall of 2022.

If you, your child, or your school are selected for the ECLS-K:2024 field test or national study, please participate! While participation is voluntary, it is important so that the study can provide information that can be used at the local, state, and national levels to guide practice and policies that increase every child’s chance of doing well in school. The ECLS-K:2024 will be particularly meaningful, as it will provide important information about the experiences of children whose early lives were shaped by the COVID-19 pandemic.

Watch this video to learn more about participation in the ECLS-K:2024. For more information on the ECLS studies and the data available on our nation’s kindergartners, see the ECLS homepage, review our online training modules, or email the ECLS study team.

 

By Jill Carlivati McCarroll, NCES

Timing is Everything: Understanding the IPEDS Data Collection and Release Cycle

For more than 3 decades, the Integrated Postsecondary Education Data System (IPEDS) has collected data from all postsecondary institutions participating in Title IV federal student aid programs, including universities, community colleges, and vocational and technical schools.

Since 2000, the 12 IPEDS survey components occurring in a given collection year have been organized into three seasonal collection periods: Fall, Winter, and Spring.

When data are collected (the “collection year”) matters most to the professionals who report their data to the National Center for Education Statistics (NCES). However, IPEDS data users are generally more interested in the year that is actually reflected in the data (the “data year”). As an example, a data user may ask, “What was happening with students, staff, and institutions in 2018–19?”


Text box that says: The collection year refers to the time period the IPEDS survey data are collected. The data year refers to the time period reflected in the IPEDS survey data.


For data users, knowing the difference between the collection year and the data year is important for working with and understanding IPEDS data. Often, the collection year comes after the data year, as institutions need time to collect the required data and check to make sure they are reporting the data accurately. This lag between the time period reflected by the data and when the data are reported is typically one academic term or year, depending on the survey component. For example, fall 2021 enrollment data are not reported to NCES until spring 2022 and are not publicly released until fall 2022.
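
The Fall Enrollment example can be summarized as a simple timeline. The sketch below just restates the dates from the example above; the official schedule for each survey component is on the resource page.

    # Fall Enrollment (EF) example: data year vs. collection and release dates.
    ef_timeline = {
        "data year":           "fall 2021",    # term the data describe
        "reported to NCES":    "spring 2022",  # Spring collection period
        "provisional release": "fall 2022",    # public release per the example above
    }
    for step, when in ef_timeline.items():
        print(f"{step}: {when}")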

After the data are collected by NCES, there is an additional time period before they are released publicly in which the data undergo various quality and validity checks. About 9 months after each seasonal collection period ends (i.e., Fall, Winter, Spring), there is a Provisional Data Release, and IPEDS data products (e.g., web tools, data files) are updated with the newly released seasonal data. During this provisional release period, institutions may revise their data if they believe the data were inaccurately reported. A Revised/Final Data Release then happens the following year and includes any revisions that were made to the provisional data.

Sound confusing? The data collection and release cycle can be a technical and complex process, and it varies slightly for each of the 12 IPEDS survey components. Luckily, NCES has created a comprehensive resource page that provides information about the IPEDS data collection and release cycles for each survey component as well as key details for data users and data reporters, such as how to account for summer enrollment in the different IPEDS survey components.

Table 1 provides a summary of the IPEDS 2021–22 data collection and release schedule information that can be found on the resource page. Information on the data year and other details about each survey component can also be found on the resource page.


Table 1. IPEDS 2021–22 Data Collection and Release Schedule

Table showing the IPEDS 2021–22 data collection and release schedule


Here are a few examples of how to distinguish the data year from the collection year in different IPEDS data products.

Example 1: IPEDS Trend Generator

Suppose that a data user is interested in how national graduation rates have changed over time. One tool they might use is the IPEDS Trend Generator. The Trend Generator is a ready-made web tool that allows users to view trends over time in the subject areas most frequently asked about in postsecondary education. The Graduation Rate chart below displays the data year (shown in green) in the headline and on the x-axis. The “Modify Years” option also allows users to filter by data year. Information about the collection year (shown in gold) can be found in the source notes below the chart.


Image of IPEDS Trend Generator webpage


Example 2: IPEDS Complete Data Files

Imagine that a data user was interested enough in 6-year Graduation Rates that they wanted to run more complex analyses in a statistical program. IPEDS Complete Data Files include all variables for all reporting institutions by survey component and can be downloaded by these users to create their own analytic datasets.

Data users should keep in mind that IPEDS Complete Data Files are organized and released by collection year (shown in gold) rather than data year. Because of this, even though files might share the same collection year, the data years reflected within the files will vary across survey components.
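
For instance, a data user assembling a trend dataset might loop over several collection years while keeping track of which data year each file reflects. The sketch below uses hypothetical file names; the actual file names and layouts should be verified on the IPEDS data files page.

    import pandas as pd

    # Hypothetical Graduation Rates files, named by COLLECTION year.
    files = ["gr2019.csv", "gr2020.csv", "gr2021.csv"]
    frames = [pd.read_csv(f).assign(collection_year=int(f[2:6])) for f in files]
    gr = pd.concat(frames, ignore_index=True)
    # The cohorts (data years) reflected in these files are earlier than the
    # collection years and differ across survey components.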


Image of IPEDS Complete Data Files webpage


The examples listed above are just a few of the many scenarios in which the distinction between collection year and data year is important for analysis and understanding. Knowing the IPEDS reporting cycle makes it much easier to work with IPEDS data. For more examples and additional details on the IPEDS data collection and release cycles for each survey component, please visit the Timing of IPEDS Data Collection, Coverage, and Release Cycle resource page.

Be sure to follow NCES on Twitter, Facebook, LinkedIn, and YouTube, follow IPEDS on Twitter, and subscribe to the NCES News Flash to stay up-to-date on all IPEDS data releases.

 

By Katie Hyland and Roman Ruiz, American Institutes for Research

Rescaled Data Files for Analyses of Trends in Adult Skills

In January 2022, NCES released the rescaled data files for three adult literacy assessments conducted several decades earlier: the 1992 National Adult Literacy Survey (NALS), the 1994 International Adult Literacy Survey (IALS), and the 2003 Adult Literacy and Lifeskills Survey (ALL). By connecting the rescaled data from these assessments with data from the current adult literacy assessment, the Program for the International Assessment of Adult Competencies (PIAAC), researchers can examine trends in adult skills in the United States going back to 1992. This blog post traces the history of each of these adult literacy assessments, describes the files and explains what “rescaling” means, and discusses how these files can be used in analyses in conjunction with the PIAAC files. The last section of the post offers several example analyses of the data.

A Brief History of International and National Adult Literacy Assessments Conducted in the United States

The rescaled data files highlighted in this blog post update and combine historical data from national and international adult literacy studies that have been conducted in the United States.

NALS was conducted in 1992 by NCES and assessed U.S. adults in households, as well as adults in prisons. IALS—developed by Statistics Canada and ETS in collaboration with 22 participating countries, including the United States—assessed adults in households and was administered in three waves between 1994 and 1998. ALL was administered in 11 countries, including the United States, and assessed adults in two waves between 2003 and 2008.

PIAAC seeks to ensure continuity with these previous surveys, but it also expands on their quality assurance standards, extends the definitions of literacy and numeracy, and provides more information about adults with low levels of literacy by assessing reading component skills. It also, for the first time, includes a problem-solving domain to emphasize the skills used in digital (originally called “technology-rich”) environments.

How Do the Released Data Files From the Earlier Studies of Adult Skills Relate to PIAAC?

All three of the released restricted-use data files (for NALS, IALS, and ALL) relate to PIAAC, the latest adult skills assessment, in different ways.

The NALS data file contains literacy estimates and background characteristics of U.S. adults in households and in prisons in 1992. It is comparable to the PIAAC data files for 2012/14 and 2017 through rescaling of the assessment scores and matching of the background variables to those of PIAAC.

The IALS and ALL data files contain literacy (IALS and ALL) and numeracy (ALL) estimates and background characteristics of U.S. adults in 1994 (IALS) and 2003 (ALL). Similar to NALS, they are comparable to the PIAAC restricted-use data (2012/14) through rescaling of the literacy and numeracy assessment scores and matching of the background variables to those of PIAAC. These estimates are also comparable to the international estimates of skills of adults in several other countries, including Canada, Hungary, Italy, Norway, the Netherlands, and New Zealand (see the recently released Data Point International Comparisons of Adult Literacy and Numeracy Skills Over Time). While the NCES datasets contain only the U.S. respondents, IALS and ALL are international studies, and the data from other participating countries can be requested from Statistics Canada (see the IALS Data Files/Publications and ALL Data pages for more detail). See the History of International and National Adult Literacy Assessments page for additional background on these studies.

Table 1 provides an overview of the rescaled NALS, IALS, and ALL data files.


Table 1. Overview of the rescaled data files for the National Adult Literacy Survey (NALS), International Adult Literacy Survey (IALS), and Adult Literacy and Lifeskills Survey (ALL) 

Table showing overview of the rescaled data files for the National Adult Literacy Survey (NALS), International Adult Literacy Survey (IALS), and Adult Literacy and Lifeskills Survey


What Does “Rescaled” Mean?

“Rescaling” the literacy (NALS, IALS, ALL) and numeracy (ALL) domains from these three previous studies means that the domains were put on the same scale as the PIAAC domains: updated proficiency estimates were derived using the same statistical models used to create the PIAAC skills proficiencies. Rescaling was possible because PIAAC administered a sufficient number of the same test questions used in NALS, IALS, and ALL.1 These rescaled proficiency estimates allow for trend analyses of adult skills across the time points provided by each study.

What Can These Different Files Be Used For?

While mixing the national and international trend lines isn’t recommended, both sets of files have their own distinct advantages and purposes for analysis.

National files

The rescaled NALS 1992 files can be used for national trend analyses with the PIAAC national trend points in 2012/2014 and 2017. Some potential analytic uses of the NALS trend files (see the analysis sketch after this list) are to

  • Provide a picture of the skills of adults only in the United States;
  • Examine the skills of adults in prison and compare their skills with those of adults in households over time, given that NALS and PIAAC include prison studies conducted in 1992 and 2014, respectively;
  • Conduct analyses of population subgroups (such as adults ages 16–24 or those with less than a high school education), because the larger NALS sample size allows for more detailed breakdowns that can then be compared with the U.S. PIAAC sample;
  • Focus on the subgroup of older adults (ages 66–74), given that NALS sampled adults over the age of 65, similar to PIAAC, which sampled adults ages 16–74; and
  • Analyze U.S.-specific background questions (such as those on race/ethnicity or health-related practices).
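
Because the rescaled proficiencies follow PIAAC conventions, analyses like those above are typically run across a set of plausible values rather than a single score. The sketch below shows that basic pattern; the variable names (PVLIT1–PVLIT10 and a final weight column) are assumptions modeled on the PIAAC files, so check the trend-file documentation for the actual names.

    import numpy as np
    import pandas as pd

    PV_COLS = [f"PVLIT{i}" for i in range(1, 11)]  # assumed names, per PIAAC convention

    def weighted_mean(df: pd.DataFrame, col: str, weight: str = "final_weight") -> float:
        return float(np.average(df[col], weights=df[weight]))

    def pv_mean(df: pd.DataFrame) -> float:
        # Point estimate = the statistic averaged across all plausible values.
        return float(np.mean([weighted_mean(df, pv) for pv in PV_COLS]))

    # Example: mean literacy for adults ages 16-24 in a (restricted-use) trend file
    # nals = pd.read_csv("nals_rescaled.csv")  # hypothetical file name
    # print(pv_mean(nals[nals["age"].between(16, 24)]))

Proper standard errors additionally require the study’s replicate weights and the variance across plausible values, as is standard for these assessments.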

International files

The rescaled IALS 1994 and ALL 2003 files can be used for international trend analyses among six countries with the U.S. PIAAC international trend point in 2012/2014: Canada, Hungary, Italy, Norway, the Netherlands, and New Zealand. Some potential analytic uses of the IALS and ALL trend files are to

  • Compare literacy proficiency results internationally and over time using the results from IALS, ALL, and PIAAC; and
  • Compare numeracy proficiency results internationally and over time using the results from ALL and PIAAC.

Example Analyses Using the U.S. Trend Data on Adult Literacy

Below are examples of a national trend analysis and an international trend analysis conducted using the rescaled NALS, IALS, and ALL data in conjunction with the PIAAC data.

National trend estimates

The literacy scores of U.S. adults increased from 269 in NALS 1992 to 272 in PIAAC 2012/2014. However, the PIAAC 2017 score of 270 was not significantly different from the 1992 or 2012/2014 scores.


Figure 1. Literacy scores of U.S. adults (ages 16–65) along national trend line: Selected years, 1992–2017

Line graph showing literacy scores of U.S. adults (ages 16–65) along national trend line for NALS 1992, PIAAC 2012/2014, and PIAAC 2017

* Significantly different (p < .05) from NALS 1992 estimate.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Adult Literacy Survey (NALS), NALS 1992; and Program for the International Assessment of Adult Competencies (PIAAC), PIAAC 2012–17.


International trend estimates

The literacy scores of U.S. adults decreased from 273 in IALS 1994 to 268 in ALL 2003 before increasing to 272 in PIAAC 2012/2014. However, the PIAAC 2012/2014 score was not significantly different from the IALS 1994 score.


Figure 2. Literacy scores of U.S. adults (ages 16–65) along international trend line: Selected years, 1994–2012/14

Line graph showing literacy scores of U.S. adults (ages 16–65) along international trend line for IALS 1994, ALL 2003, and PIAAC 2012/2014

* Significantly different (p < .05) from IALS 1994 estimate.
SOURCE: U.S. Department of Education, National Center for Education Statistics, Statistics Canada and Organization for Economic Cooperation and Development (OECD), International Adult Literacy Survey (IALS), 1994–98; Adult Literacy and Lifeskills Survey (ALL), 2003–08; and Program for the International Assessment of Adult Competencies (PIAAC), PIAAC 2012/14. See figure 1 in the International Comparisons of Adult Literacy and Numeracy Skills Over Time Data Point.
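
Statements about significance like those above come from comparing two estimates relative to their standard errors. The sketch below shows the general form of such a test with placeholder standard errors; real analyses must use the published standard errors, which reflect the replicate weights and plausible values.

    from scipy import stats

    def diff_test(est1: float, se1: float, est2: float, se2: float):
        """Two-sided z-test for the difference between two independent estimates."""
        z = (est2 - est1) / (se1**2 + se2**2) ** 0.5
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        return z, p

    # IALS 1994 vs. PIAAC 2012/2014 literacy means, with placeholder SEs of 1.0
    z, p = diff_test(273, 1.0, 272, 1.0)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value -> not significantly different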


How to Access the Rescaled Data Files

More complex analyses can be conducted with the NALS, IALS, and ALL rescaled data files. These are restricted-use files and researchers must obtain a restricted-use license to access them. Further information about these files is available on the PIAAC Data Files page (see the “International Trend Data Files and Data Resources” and “National Trend Data Files and Data Resources” sections at the bottom of the page).


By Emily Pawlowski, AIR, and Holly Xie, NCES


[1] In contrast, the 2003 National Assessment of Adult Literacy (NAAL), another assessment of adult literacy conducted in the United States, was not rescaled for trend analyses with PIAAC. For various reasons, including the lack of overlap between the NAAL and PIAAC literacy items, NAAL and PIAAC are thought to be the least comparable of the adult literacy assessments.