IES Blog

Institute of Education Sciences

Knock, Knock! Who’s There? Understanding Who’s Counted in IPEDS

The Integrated Postsecondary Education Data System (IPEDS) is a comprehensive federal data source that collects information on key features of higher education in the United States, including characteristics of postsecondary institutions, college student enrollment and academic outcomes, and institutions’ employees and finances, among other topics.

The National Center for Education Statistics (NCES) has created a new resource page, Student Cohorts and Subgroups in IPEDS, that provides data reporters and users an overview of how IPEDS collects information related to postsecondary students and staff. This blog post highlights key takeaways from the resource page.

IPEDS survey components collect counts of key student and staff subgroups of interest to the higher education community.

Data users—including researchers, policy analysts, and prospective college students—may be interested in particular demographic groups within U.S. higher education. IPEDS captures data on a range of student and staff subgroups, including race/ethnicity, gender, age categories, Federal Pell Grant recipient status, transfer-in status, and part-time enrollment status.

The Outcome Measures (OM) survey component stands out as an example of how IPEDS collects student subgroups that are of interest to the higher education community. Within this survey component, all entering degree/certificate-seeking undergraduates are assigned to one of eight subgroups based on entering status (i.e., first-time or non-first-time), attendance status (i.e., full-time or part-time), and Pell Grant recipient status.

Although IPEDS is not a student-level data system, many of its survey components collect counts of students and staff by subgroup.

Many IPEDS survey components—such as Admissions, Fall Enrollment, and Human Resources—collect data as counts of individuals (i.e., students or staff) by subgroup (e.g., race/ethnicity, gender) (exhibit 1). Other IPEDS survey components—such as Graduation Rates, Graduation Rates 200%, and Outcome Measures—also include selected student subgroups but monitor cohorts of entering degree/certificate-seeking students over time to document their long-term completion and enrollment outcomes. A cohort is a specific group of students established for tracking purposes. The cohort year is based on the year that a cohort of students begins attending college.


Exhibit 1. IPEDS survey components that collect counts of individuals by subgroup

Table showing IPEDS survey components that collect counts of individuals by subgroup; column one shows the unit of information (student counts vs. staff counts); column two shows the survey component


IPEDS collects student and staff counts by combinations of interacting subgroups.

For survey components that collect student or staff counts, individuals are often reported in disaggregated demographic groups, which allows for more detailed understanding of specific subpopulations. For example, the Fall Enrollment (EF) and 12-month Enrollment (E12) survey components collect total undergraduate enrollment counts disaggregated by all possible combinations of students’ full- or part-time status, gender, degree/certificate-seeking status, and race/ethnicity. Exhibit 2 provides an excerpt of the EF survey component’s primary data collection screen (Part A), in which data reporters provide counts of students who fall within each demographic group indicated by the blank cells.


Exhibit 2. Excerpt of IPEDS Fall Enrollment (EF) survey component data collection screen for full-time undergraduate men: 2022–23


Image of IPEDS Fall Enrollment survey component data collection screen for full-time undergraduate men in 2022–23

NOTE: This exhibit reflects the primary data collection screen (Part A) for the 2022–23 Fall Enrollment (EF) survey component for full-time undergraduate men. This screen is duplicated three more times for undergraduate students, once each for part-time men, full-time women, and part-time women. For survey materials for all 12 IPEDS survey components, including complete data collection forms and detailed reporting instructions, visit the IPEDS Survey Materials website.


As IPEDS does not collect data at the individual student level, these combinations of interacting subgroups are the smallest unit of information available in IPEDS. However, data users may wish to aggregate these smaller subgroups to arrive at larger groups that reflect broader populations of interest.

For example, using the information presented in exhibit 2, a data user could sum all the values highlighted in the green column to arrive at the total enrollment count of full-time, first-time men. As another example, a data user could sum all the values highlighted in the blue row to determine the total enrollment count of full-time Hispanic/Latino men. Note, however, that although many IPEDS data products provide precalculated aggregate values (e.g., total undergraduate enrollment), the data are collected at these smaller units of information (i.e., disaggregated subgroup categories).
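For data users who work programmatically, the column-summing described above is a one-line aggregation. The sketch below uses purely illustrative counts as stand-ins for the cells of exhibit 2; the numbers are not actual IPEDS data, and the category labels follow common IPEDS race/ethnicity groupings.

```python
# Hypothetical enrollment counts for one column of exhibit 2:
# full-time, first-time undergraduate men by race/ethnicity.
# These numbers are illustrative stand-ins, not actual IPEDS data.
ftft_men = {
    "American Indian or Alaska Native": 12,
    "Asian": 140,
    "Black or African American": 210,
    "Hispanic/Latino": 330,
    "Native Hawaiian or Other Pacific Islander": 5,
    "White": 890,
    "Two or more races": 60,
    "Race/ethnicity unknown": 40,
    "U.S. Nonresident": 75,
}

# Summing the disaggregated cells recovers the broader total,
# just as a data user would sum the green column in exhibit 2.
total_ftft_men = sum(ftft_men.values())
print(total_ftft_men)  # 1762
```

The same pattern applies in the other direction: summing across a race/ethnicity category's row (rather than a status column) yields that group's total across statuses.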

Student enrollment counts and cohorts align across IPEDS survey components.

There are several instances when student enrollment or cohort counts reported in one survey component should match or very closely mirror those same counts reported in another survey component. For example, the number of first-time degree/certificate-seeking undergraduate students in a particular fall term should be consistently reported in the Admissions (ADM) and Fall Enrollment (EF) survey components within the same data collection year (see letter A in exhibit 3).


Exhibit 3. Alignment of enrollment counts and cohorts across IPEDS survey components

Infographic showing the alignment of enrollment counts and cohorts across IPEDS survey components


For a full explanation of the alignment of student counts and cohorts across IPEDS survey components (letters A to H in exhibit 3), visit the Student Cohorts and Subgroups in IPEDS resource page.

Be sure to follow NCES on Twitter, Facebook, LinkedIn, and YouTube, follow IPEDS on Twitter, and subscribe to the NCES News Flash to stay up-to-date on IPEDS data releases and resources.

 

By Katie Hyland and Roman Ruiz, AIR

Timing is Everything: Understanding the IPEDS Data Collection and Release Cycle

For more than 3 decades, the Integrated Postsecondary Education Data System (IPEDS) has collected data from all postsecondary institutions participating in Title IV federal student aid programs, including universities, community colleges, and vocational and technical schools.

Since 2000, the 12 IPEDS survey components occurring in a given collection year have been organized into three seasonal collection periods: Fall, Winter, and Spring.

The timing of when data are collected (the “collection year”) is most important for the professionals who report their data to the National Center for Education Statistics (NCES). However, IPEDS data users are generally more interested in the year that is actually reflected in the data (the “data year”). As an example, a data user may ask, “What was happening with students, staff, and institutions in 2018–19?”


Text box that says: The collection year refers to the time period the IPEDS survey data are collected. The data year refers to the time period reflected in the IPEDS survey data.


For data users, knowing the difference between the collection year and the data year is important for working with and understanding IPEDS data. Often, the collection year comes after the data year, as institutions need time to collect the required data and check to make sure they are reporting the data accurately. This lag between the time period reflected by the data and when the data are reported is typically one academic term or year, depending on the survey component. For example, fall 2021 enrollment data are not reported to NCES until spring 2022, and the data would not be publicly released until fall 2022.

After the data are collected by NCES, there is an additional period before they are released publicly, during which the data undergo various quality and validity checks. About 9 months after each seasonal collection period (i.e., Fall, Winter, Spring) ends, there is a Provisional Data Release, and IPEDS data products (e.g., web tools, data files) are updated with the newly released seasonal data. During this provisional release, institutions may revise their data if they believe the data were reported inaccurately. A Revised/Final Data Release then happens the following year and includes any revisions that were made to the provisional data.
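The Fall Enrollment lag described above can be sketched as a small helper. This is a simplified illustration based only on the example timings in this post (fall term reported the following spring, provisional release the following fall, revised/final release about a year after that); it is not an official NCES schedule, and timings vary by survey component.

```python
def ef_release_timeline(fall_term: int) -> dict:
    """Simplified sketch of the Fall Enrollment (EF) reporting lag:
    data for a given fall term (the data year) are reported to NCES
    the following spring (the collection year), released provisionally
    the following fall, and finalized about a year later.
    Illustrative only; not an official NCES schedule."""
    return {
        "data_year": f"fall {fall_term}",
        "reported_to_nces": f"spring {fall_term + 1}",
        "provisional_release": f"fall {fall_term + 1}",
        "revised_final_release": f"fall {fall_term + 2}",
    }

# The fall 2021 example from the text:
timeline = ef_release_timeline(2021)
```

For fall 2021 data, this reproduces the sequence in the example above: reported in spring 2022, provisionally released in fall 2022.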

Sound confusing? The data collection and release cycle can be a technical and complex process, and it varies slightly for each of the 12 IPEDS survey components. Luckily, NCES has created a comprehensive resource page that provides information about the IPEDS data collection and release cycles for each survey component as well as key details for data users and data reporters, such as how to account for summer enrollment in the different IPEDS survey components.

Table 1 provides a summary of the IPEDS 2021–22 data collection and release schedule information that can be found on the resource page. Information on the data year and other details about each survey component can also be found on the resource page.


Table 1. IPEDS 2021–22 Data Collection and Release Schedule

Table showing the IPEDS 2021–22 data collection and release schedule


Here are a few examples of how to distinguish the data year from the collection year in different IPEDS data products.

Example 1: IPEDS Trend Generator

Suppose that a data user is interested in how national graduation rates have changed over time. One tool they might use is the IPEDS Trend Generator. The Trend Generator is a ready-made web tool that allows users to view trends over time on the most frequently asked subject areas in postsecondary education. The Graduation Rate chart below displays the data year (shown in green) in the headline and on the x-axis. The “Modify Years” option also allows users to filter by data year. Information about the collection year (shown in gold) can be found in the source notes below the chart.


Image of IPEDS Trend Generator webpage


Example 2: IPEDS Complete Data Files

Imagine that a data user was interested enough in 6-year Graduation Rates that they wanted to run more complex analyses in a statistical program. IPEDS Complete Data Files include all variables for all reporting institutions by survey component and can be downloaded by these users to create their own analytic datasets.

Data users should keep in mind that IPEDS Complete Data Files are organized and released by collection year (shown in gold) rather than data year. Because of this, even though files might share the same collection year, the data years reflected within the files will vary across survey components.
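A common analytic task with the Complete Data Files is joining two components from the same collection year on the institution identifier (UNITID). The sketch below uses tiny synthetic stand-ins rather than downloaded files; the variable names (EFTOTLT, GRTOTLT) and values are assumptions for illustration, so actual names should be checked against the file documentation in the IPEDS data center.

```python
import pandas as pd

# Synthetic stand-ins for two Complete Data Files from one collection
# year. UNITID is the institution identifier used across IPEDS files;
# the other column names and all values are illustrative assumptions.
enrollment = pd.DataFrame({
    "UNITID": [100654, 100663],
    "EFTOTLT": [5200, 21900],   # assumed total-enrollment variable
})
grad_rates = pd.DataFrame({
    "UNITID": [100654, 100663],
    "GRTOTLT": [980, 4100],     # assumed completers variable
})

# Joining on UNITID builds an analytic dataset spanning components.
# Note the caveat from the text: although both files share a collection
# year, EF reflects the fall of that year while GR tracks a cohort that
# entered years earlier, so the data years differ.
merged = enrollment.merge(grad_rates, on="UNITID", how="inner")
```

Keeping a note of each component's data year alongside the merged file helps avoid misreading the combined dataset as describing a single point in time.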


Image of IPEDS Complete Data Files webpage


The examples listed above are just a few of many scenarios in which this distinction between collection year and data year is important for analysis and understanding. Knowing about the IPEDS reporting cycle can be extremely useful when it comes to figuring out how to work with IPEDS data. For more examples and additional details on the IPEDS data collection and release cycles for each survey component, please visit the Timing of IPEDS Data Collection, Coverage, and Release Cycle resource page.

Be sure to follow NCES on Twitter, Facebook, LinkedIn, and YouTube, follow IPEDS on Twitter, and subscribe to the NCES News Flash to stay up-to-date on all IPEDS data releases.

 

By Katie Hyland and Roman Ruiz, American Institutes for Research

Announcing the Condition of Education 2022 Release

NCES is pleased to present the 2022 edition of the Condition of Education. The Condition is part of a 150-year tradition at NCES and provides historical and contextual perspectives on key measures of educational progress to Congress and the American public. This report uses data from across NCES and from other sources and is designed to help policymakers and the public monitor the latest developments and trends in U.S. education.

Cover of Report on the Condition of Education with IES logo and photos of children reading and writing

The foundation of the Condition of Education is a series of online indicators. Fifty-two of these indicators include content that has been updated this year. Each indicator provides detailed information on a unique topic, ranging from prekindergarten through postsecondary education, as well as labor force outcomes and international comparisons. In addition to the online indicator system, a synthesized overview of findings across topics is presented in the Report on the Condition of Education.

This year, we are excited to begin the rollout of interactive figures, which allow users to explore the data in different ways. A selection of these indicators is highlighted here. They show various declines in enrollment that occurred during the coronavirus pandemic, from early childhood through postsecondary education. (Click the links below to explore the new interactive figures!)

  • From 2019 to 2020, enrollment rates of young children fell by 6 percentage points for 5-year-olds (from 91 to 84 percent) and by 13 percentage points for 3- to 4-year-olds (from 54 to 40 percent).
  • Public school enrollment in prekindergarten through grade 12 dropped from 50.8 million in fall 2019 to 49.4 million students in fall 2020. This 3 percent drop brought total enrollment back to 2009 levels (49.4 million), erasing a decade of steady growth.
  • At the postsecondary level, total undergraduate enrollment decreased by 9 percent from fall 2009 to fall 2020 (from 17.5 million to 15.9 million students). For male and female students, enrollment patterns exhibited similar trends between 2009 and 2019 (both decreasing by 5 percent). However, from 2019 to 2020, female enrollment fell 2 percent, while male enrollment fell 7 percent. Additionally, between 2019 and 2020, undergraduate enrollment fell 5 percent at public institutions and 2 percent at private nonprofit institutions. In contrast, undergraduate enrollment at for-profit institutions was 4 percent higher in fall 2020 than in fall 2019, marking the first positive single-year change in enrollment at these institutions since 2010. Meanwhile, at the postbaccalaureate level, enrollment increased by 10 percent between fall 2009 and fall 2020 (from 2.8 million to 3.1 million students).
  • Educational attainment is associated with economic outcomes, such as employment and earnings, as well as with changes in these outcomes during the pandemic. Compared with 2010, employment rates among 25- to 34-year-olds were higher in 2021 only for those with a bachelor’s or higher degree (84 vs. 86 percent). For those who had completed high school and those with some college, employment rates increased from 2010 to 2019, but these gains were reversed during the coronavirus pandemic, with rates falling to 68 and 75 percent, respectively, by 2021. For those who had not completed high school, the employment rate was 53 percent in 2021, which was not measurably different from 2019 or 2010.

This year’s Condition also includes two spotlight indicators. These spotlights use data from the Household Pulse Survey (HPS) to examine education during the coronavirus pandemic.

  • Homeschooled Children and Reasons for Homeschooling: This spotlight opens with an examination of historical trends in homeschooling, using data from the National Household Education Survey (NHES). Then, using HPS, this spotlight examines the percentage of adults with students under 18 in the home who were homeschooled during the 2020–21 school year. Some 6.8 percent of adults with students in the home reported that at least one child was homeschooled in 2020–21. The percentage was higher for White adults (7.4 percent) than for Black adults (5.1 percent) and for Asian adults (3.6 percent). It was also higher for Hispanic adults (6.5 percent) than for Asian adults.
  • Impact of the Coronavirus Pandemic on Fall Plans for Postsecondary Education: This spotlight uses HPS data to examine changes in plans for fall 2021 postsecondary education made in response to the coronavirus pandemic. Among adults 18 years old and over who had household members planning to take classes in fall 2021 from a postsecondary institution, 44 percent reported that there was no change for any household member in their fall plans for postsecondary classes. This is compared with 28 percent who reported no change in plans for at least one household member one year earlier in the pandemic, for fall 2020.

The Condition also includes an At a Glance section, which allows readers to quickly make comparisons within and across indicators, as well as a Reader’s Guide, a Glossary, and a Guide to Sources that provide additional information to help place the indicators in context. In addition, each indicator references the source data tables that were used to produce that indicator. Most of these are in the Digest of Education Statistics.

In addition to publishing the Condition of Education, NCES produces a wide range of other reports and datasets designed to help inform policymakers and the public about significant trends and topics in education. More information about the latest activities and releases at NCES may be found on our website or by following us on Twitter, Facebook, and LinkedIn.

 

By Peggy G. Carr, NCES Commissioner

You’ve Been Asked to Participate in a Study

Dear reader,

You’ve been asked to participate in a study.

. . . I know what you’re thinking. Oh, great. Another request for my time. I am already so busy.

Hmm, if I participate, what is my information going to be used for? Well, the letter says that collecting data from me will help researchers study education, and it says something else about how the information I provide would “inform education policy . . .”

But what does that mean?

If you’re a parent, student, teacher, school administrator, or district leader, you may have gotten a request like this from me or a colleague at the National Center for Education Statistics (NCES). NCES is one of 13 federal agencies that conduct survey and assessment research to help federal, state, and local policymakers better understand public needs and challenges. It is the U.S. Department of Education’s (ED’s) statistical agency and fulfills a congressional mandate to collect, collate, analyze, and report statistics on the condition of American education. The law also directs NCES to do the same for education across the globe.

But how does my participation in a study actually support the role Congress has given NCES?

Good question. When NCES conducts a study, participants are asked to provide information about themselves, their students or child/children, teachers, households, classrooms, schools, colleges, or other education providers. What exactly you will be asked about is based on many considerations, including previous research or policy needs. For example, a current policy might be based on results from an earlier study, and we need to see whether those results are still relevant. Maybe the topic has not been studied before and data are needed to determine policy options. In some cases, Congress has charged NCES with collecting data for them to better understand education in general.

Data collected from participants like you are combined so that research can be conducted at the group level. Individual information is not the focus of the research. Instead, NCES is interested in the experiences of groups of people or groups of institutions—like schools—based on the collected data. To protect respondents, personally identifiable information like your name (and other information that could identify you personally) is removed before data are analyzed and is never provided to others. This means that people who participate in NCES studies are grouped in different ways, such as by age or type of school attended, and their information is studied to identify patterns of experiences that people in these different groups may have had.

Let’s take a look at specific examples that show how data from NCES studies provide valuable information for policy decisions.

When policymakers are considering how data can inform policy—either in general or for a specific law under consideration—data from NCES studies play an important role. For example, policymakers concerned that students in their state/district/city often struggle to pay for college may be interested in this question:

“What can education data tell me about how to make college more affordable?”

Or policymakers further along in the law development process might have more specific ideas about how to help low-income students access college. They may have come across research linking programs such as dual enrollment—when high school students take college courses—to college access for underrepresented college students. An example of this research is provided in the What Works Clearinghouse (WWC) dual-enrollment report produced by ED’s Institute of Education Sciences (IES), which shows that dual-enrollment programs are effective at increasing students’ access to and enrollment in college and attainment of degrees. This was found to be the case especially for students typically underrepresented in higher education.

Then, these policymakers might need more specific questions answered about these programs, such as:

“What is the benefit of high school students from low-income households also taking college courses?”

Thanks to people who participate in NCES studies, we have the data to address such policy questions. Rigorous research using data from large datasets, compiled from many participants, can be used to identify differences in outcomes between groups. In the case of dual-enrollment programs, college outcomes for dual-enrollment participants from low-income households can be compared with those of dual-enrollment participants from higher-income households, and possible causes of those differences can be investigated.

The results of these investigations may then inform enactment of laws or creation of programs to support students. In the case of dual enrollment, grant programs might be set up at the state level for districts and schools to increase students’ local access to dual-enrollment credit earning.

This was very close to what happened in 2012, when I was asked by analysts in ED’s Office of Planning, Evaluation, and Policy Development to produce statistical tables with data on students’ access to career and technical education (CTE) programs. Research, as reviewed in the WWC dual-enrollment report, was already demonstrating the benefits of dual enrollment for high school students. Around 2012, ED was considering a policy that would fund the expansion of dual enrollment specifically for CTE. The reason I was asked to provide tables on the topic was my understanding of two important NCES studies, the Education Longitudinal Study of 2002 (ELS:2002) and the High School Longitudinal Study of 2009 (HSLS:09). Data provided by participants in those studies were ideal for studying the question. The tables were used to evaluate policy options. Based on the results, ED, through the President, made a budget request to Congress to support dual-enrollment policies. Ultimately, dual-enrollment programs were included in the Strengthening Career and Technical Education for the 21st Century Act (Perkins V).  

The infographic below shows that this scenario—in which NCES data provided by participants like you were used to provide information about policy—has happened on different scales for different policies many times over the past few decades. The examples included are just some of those from the NCES high school longitudinal studies. NCES data have been used countless times in its 154-year history to improve education for American students. Check out the full infographic (PDF) with other examples.


Excerpt of full infographic showing findings and actions for NCES studies on Equity, Dropout Prevention, and College and Career Readiness


However, it’s not always the case that a direct line can be drawn between data from NCES studies and any one policy. Research often informs policy indirectly by educating policymakers and the public they serve on critical topics. Sometimes, as in the dual-enrollment and CTE programs research question I investigated, it can take time before policy gets enacted or a new program rolls out. This does not lessen the importance of the research, nor the vital importance of the data participants provide that underpin it.

The examples in the infographic represent experiences of actual individuals who took the time to tell NCES about themselves by participating in a study.  

If you are asked to participate in an NCES study, please consider doing so. People like you, schools like yours, and households in your town do matter—and by participating, you are helping to inform decisions and improve education across the country.

 

By Elise Christopher, NCES

Measuring “Traditional” and “Non-Traditional” Student Success in IPEDS: Data Insights from the IPEDS Outcome Measures (OM) Survey Component

This blog post is the second in a series highlighting the Integrated Postsecondary Education Data System (IPEDS) Outcome Measures (OM) survey component. The first post introduced a new resource page that helps data reporters and users understand OM and how it compares to the Graduation Rates (GR) and Graduation Rates 200% (GR200) survey components. Using data from the OM survey component, this post provides key findings about the demographics and college outcomes of undergraduates in the United States and is designed to spark further study of student success using OM data.

What do Outcome Measures cohorts look like?

OM collects student outcomes for all entering degree/certificate-seeking undergraduates, including non-first-time (i.e., transfer-in) and part-time students. Students are separated into eight subcohorts by entering status (i.e., first-time or non-first-time), attendance status (i.e., full-time or part-time), and Pell Grant recipient status.1 Figure 1 shows the number and percentage distribution of degree/certificate-seeking undergraduates in each OM subcohort from 2009–10 to 2012–13, by institutional level.2

Key takeaways:

  • Across all cohort years, the majority of students were not first-time, full-time (FTFT) students, the group typically referred to as “traditional” college students. At 2-year institutions, 36 percent of Pell Grant recipients and 16 percent of non-Pell Grant recipients were FTFT in 2012–13. At 4-year institutions, 43 percent of Pell Grant recipients and 44 percent of non-Pell Grant recipients were FTFT in 2012–13.
  • Pell Grant recipient cohorts have become less “traditional” over time. In 2012–13, some 36 percent of Pell Grant recipients at 2-year institutions were FTFT, down 5 percentage points from 2009–10 (41 percent). At 4-year institutions, 43 percent of Pell Grant recipients were FTFT in 2012–13, down 5 percentage points from 2009–10 (48 percent).

Figure 1. Number and percentage distribution of degree/certificate-seeking undergraduate students in the adjusted cohort, by Pell Grant recipient status, institutional level, and entering and attendance status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing the number and percentage distribution of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


What outcomes does Outcome Measures collect?

The OM survey component collects students’ highest credential earned (i.e., certificate, associate’s, or bachelor’s) at 4,3 6, and 8 years after entry. Additionally, for students who did not earn a credential by the 8-year status point, the survey component collects an enrollment status outcome (i.e., still enrolled at the institution, enrolled at another institution, or enrollment status unknown). Figure 2 shows these outcomes for the 2012–13 adjusted cohort.

Key takeaways:

  • The percentage of students earning an award (i.e., certificate, associate’s, or bachelor’s) was higher at each successive status point, with the greatest change occurring between the 4- and 6-year status points (a 7-percentage-point increase, from 32 percent to 39 percent).
  • At the 8-year status point, more than a quarter of students were still enrolled in higher education: 26 percent had “transferred-out” to enroll at another institution and 1 percent were still enrolled at their original institution. This enrollment status outcome fills an important gap left by the GR200 survey component, which does not collect information on students who do not earn an award 8 years after entry.

Figure 2. Number and percentage distribution of degree/certificate-seeking undergraduate students, by award and enrollment status and entry status point: 2012–13 adjusted cohort

Waffle chart showing award status (certificate, associate’s, bachelor’s, and did not receive award) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students, by status point (4-year, 6-year, and 8-year) for 2012–13 adjusted cohort

NOTE: One square represents 1 percent. This figure presents data collected from Title IV degree-granting institutions in the United States.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary across student subgroups?

Every data element collected by the OM survey component (e.g., cohort counts, outcomes by time after entry) can be broken down into eight subcohorts based on entering, attendance, and Pell Grant recipient statuses. In addition to these student characteristics, data users can also segment these data by key institutional characteristics such as sector, Carnegie Classification, special mission (e.g., Historically Black College or University), and region, among others.4 Figure 3 displays the status of degree/certificate-seeking undergraduates 8 years after entry by each student subcohort within the broader 2012–13 degree/certificate-seeking cohort.
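The kind of segmentation described above maps directly onto a group-by aggregation once the data are loaded into an analysis tool. The sketch below uses a few synthetic OM-style rows; the column names, category labels, and counts are illustrative assumptions, not actual IPEDS variables or data.

```python
import pandas as pd

# Synthetic OM-style rows: one row per subcohort cell with an award
# count. Column names, labels, and counts are illustrative assumptions.
om = pd.DataFrame({
    "entering":   ["first-time", "first-time",
                   "non-first-time", "non-first-time"],
    "attendance": ["full-time", "part-time",
                   "full-time", "part-time"],
    "pell":       ["recipient"] * 4,
    "awards_8yr": [500, 80, 300, 120],
})

# Segmenting by any characteristic (or combination) is a groupby away;
# here, total 8-year awards by entering status.
by_entering = om.groupby("entering")["awards_8yr"].sum()
```

The same pattern extends to institutional characteristics: adding a sector or region column to the frame and passing a list of columns to `groupby` yields the cross-segmented totals.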

Key takeaways:

  • Of the eight OM subcohorts, FTFT non-Pell Grant recipients had the highest rate of earning an award or still being enrolled 8 years after entry. Among this subcohort, 18 percent had an unknown enrollment status 8 years after entry.
  • Among both Pell Grant recipients and non-Pell Grant recipients, full-time students had a higher rate than did part-time students of earning an award or still being enrolled 8 years after entry.
  • First-time, part-time (FTPT) students had the lowest rate of the subcohorts of earning a bachelor’s degree. One percent of FTPT Pell Grant recipients and 2 percent of FTPT non-Pell Grant recipients had earned a bachelor’s degree by the 8-year status point.

Figure 3. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by Pell Grant recipient status, entering and attendance status, and award and enrollment status: 2012–13 adjusted cohort

Horizontal stacked bar chart showing award (certificate, associate’s, and bachelor’s) and enrollment statuses (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2012–13 adjusted cohort


NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary over time?

OM data are comparable across 4 cohort years.5 Figure 4 shows outcomes of degree/certificate-seeking undergraduates 8 years after entry from the 2009–10 cohort through the 2012–13 cohort for so-called “traditional” (i.e., FTFT) and “non-traditional” (i.e., non-FTFT) students.

Key takeaways:

  • For both traditional and non-traditional students, the percentage of students earning an award was higher for the 2012–13 cohort than for the 2009–10 cohort, climbing from 47 percent to 51 percent for traditional students and from 32 percent to 35 percent for non-traditional students.
  • The growth in award attainment for traditional students was driven by an increase in the share of students earning bachelor’s degrees (30 percent for the 2009–10 cohort vs. 35 percent for the 2012–13 cohort).
  • The growth in award attainment for non-traditional students was driven by increases in the shares of students earning both associate’s degrees (15 percent for the 2009–10 cohort vs. 16 percent for the 2012–13 cohort) and bachelor’s degrees (13 percent for the 2009–10 cohort vs. 15 percent for the 2012–13 cohort).

Figure 4. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by first-time, full-time (FTFT) status and award and enrollment status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing award status (certificate, associate’s, and bachelor’s) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students 8 years after entry by first-time, full-time status (traditional or first-time, full-time students and non-traditional or non-first-time, full-time students) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. “Non-traditional” (i.e., non-first-time, full-time) students include first-time, part-time, non-first-time, full-time, and non-first-time, part-time subcohorts. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–2019) and provisional data (2020).


To learn more about the IPEDS OM survey component, visit the Measuring Student Success in IPEDS: Graduation Rates (GR), Graduation Rates 200% (GR200), and Outcome Measures (OM) resource page and the OM survey component webpage. Go to the IPEDS Use the Data page to explore IPEDS data through easy-to-use web tools, access data files to conduct your own analyses like those presented in this blog post, or view OM web tables.  

By McCall Pitcher, AIR


[1] The Federal Pell Grant Program (Higher Education Act of 1965, Title IV, Part A, Subpart I, as amended) provides grant assistance to eligible undergraduate postsecondary students with demonstrated financial need to help meet education expenses.

[2] Due to the 8-year measurement lag between initial cohort enrollment and student outcome reporting for the Outcome Measures survey component, the most recent cohort for which data are publicly available is 2012–13. Prior to the 2009–10 cohort, OM did not collect cohort subgroups by Pell Grant recipient status. Therefore, this analysis includes data only for the four most recent cohorts.

[3] The 4-year status point was added in the 2017–18 collection.

[4] Data users can explore available institutional variables on the IPEDS Use the Data webpage.

[5] For comparability purposes, this analysis relies on data from the 2017–18 collection (reflecting the 2009–10 adjusted cohort) through the 2020–21 collection (reflecting the 2012–13 adjusted cohort). Prior to the 2017–18 collection, OM cohorts were based on a fall term for academic reporters and a full year for program reporters.