NCES Blog

National Center for Education Statistics

Measuring “Traditional” and “Non-Traditional” Student Success in IPEDS: Data Insights from the IPEDS Outcome Measures (OM) Survey Component

This blog post is the second in a series highlighting the Integrated Postsecondary Education Data System (IPEDS) Outcome Measures (OM) survey component. The first post introduced a new resource page that helps data reporters and users understand OM and how it compares to the Graduation Rates (GR) and Graduation Rates 200% (GR200) survey components. Using data from the OM survey component, this post provides key findings about the demographics and college outcomes of undergraduates in the United States and is designed to spark further study of student success using OM data.

What do Outcome Measures cohorts look like?

OM collects student outcomes for all entering degree/certificate-seeking undergraduates, including non-first-time (i.e., transfer-in) and part-time students. Students are separated into eight subcohorts by entering status (i.e., first-time or non-first-time), attendance status (i.e., full-time or part-time), and Pell Grant recipient status.1 Figure 1 shows the number and percentage distribution of degree/certificate-seeking undergraduates in each OM subcohort from 2009–10 to 2012–13, by institutional level.2

Key takeaways:

  • Across all cohort years, the majority of students were not first-time, full-time (FTFT) students, a group typically referred to as “traditional” college students. At 2-year institutions, 36 percent of Pell Grant recipients and 16 percent of non-Pell Grant recipients were FTFT in 2012–13. At 4-year institutions, 43 percent of Pell Grant recipients and 44 percent of non-Pell Grant recipients were FTFT in 2012–13.
  • Pell Grant recipient cohorts have become less “traditional” over time. In 2012–13, some 36 percent of Pell Grant recipients at 2-year institutions were FTFT, down 5 percentage points from 2009–10 (41 percent). At 4-year institutions, 43 percent of Pell Grant recipients were FTFT in 2012–13, down 5 percentage points from 2009–10 (48 percent).

Figure 1. Number and percentage distribution of degree/certificate-seeking undergraduate students in the adjusted cohort, by Pell Grant recipient status, institutional level, and entering and attendance status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing the number and percentage distribution of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


What outcomes does Outcome Measures collect?

The OM survey component collects students’ highest credential earned (i.e., certificate, associate’s, or bachelor’s) at 4,3 6, and 8 years after entry. Additionally, for students who did not earn a credential by the 8-year status point, the survey component collects an enrollment status outcome (i.e., still enrolled at the institution, enrolled at another institution, or enrollment status unknown). Figure 2 shows these outcomes for the 2012–13 adjusted cohort.

Key takeaways:

  • The percentage of students earning an award (i.e., certificate, associate’s, or bachelor’s) increased at each successive status point, with the greatest gain occurring between the 4- and 6-year status points (7 percentage points, from 32 percent to 39 percent).
  • At the 8-year status point, more than a quarter of students were still enrolled in higher education: 26 percent had “transferred-out” to enroll at another institution and 1 percent were still enrolled at their original institution. This enrollment status outcome fills an important gap left by the GR200 survey component, which does not collect information on students who do not earn an award 8 years after entry.

Figure 2. Number and percentage distribution of degree/certificate-seeking undergraduate students, by award and enrollment status and status point after entry: 2012–13 adjusted cohort

Waffle chart showing award status (certificate, associate’s, bachelor’s, and did not receive award) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students, by status point (4-year, 6-year, and 8-year) for 2012–13 adjusted cohort

NOTE: One square represents 1 percent. This figure presents data collected from Title IV degree-granting institutions in the United States.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary across student subgroups?

Every data element collected by the OM survey component (e.g., cohort counts, outcomes by time after entry) can be broken down into eight subcohorts based on entering, attendance, and Pell Grant recipient statuses. In addition to these student characteristics, data users can also segment these data by key institutional characteristics such as sector, Carnegie Classification, special mission (e.g., Historically Black College or University), and region, among others.4 Figure 3 displays the status of degree/certificate-seeking undergraduates 8 years after entry by each student subcohort within the broader 2012–13 degree/certificate-seeking cohort.
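The eight-subcohort structure described above is simply the cross of the three student statuses. The sketch below, using entirely hypothetical counts for illustration, shows how a data user might tabulate a percentage distribution across the eight OM subcohorts after downloading the data files:

```python
# Sketch: a percentage distribution across the eight OM subcohorts.
# The counts below are hypothetical, for illustration only; actual
# subcohort counts come from the IPEDS OM data files.
from itertools import product

# The eight subcohorts: entering status x attendance status x Pell status
subcohorts = list(product(
    ["first-time", "non-first-time"],
    ["full-time", "part-time"],
    ["Pell", "non-Pell"],
))

# Hypothetical adjusted-cohort counts keyed by subcohort
counts = {sc: n for sc, n in zip(subcohorts, [300, 120, 180, 200, 150, 90, 160, 110])}

total = sum(counts.values())
for sc, n in counts.items():
    print(f"{'/'.join(sc)}: {100 * n / total:.0f}%")
```

The same tabulation can be repeated within any institutional grouping (sector, Carnegie Classification, region) to produce breakdowns like those in figure 3.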

Key takeaways:

  • Of the eight OM subcohorts, FTFT non-Pell Grant recipients had the highest rate of earning an award or still being enrolled 8 years after entry. Among this subcohort, 18 percent had an unknown enrollment status 8 years after entry.
  • Among both Pell Grant recipients and non-Pell Grant recipients, full-time students had a higher rate than did part-time students of earning an award or still being enrolled 8 years after entry.
  • First-time, part-time (FTPT) students had the lowest rate of the subcohorts of earning a bachelor’s degree. One percent of FTPT Pell Grant recipients and 2 percent of FTPT non-Pell Grant recipients had earned a bachelor’s degree by the 8-year status point.

Figure 3. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by Pell Grant recipient status, entering and attendance status, and award and enrollment status: 2012–13 adjusted cohort

Horizontal stacked bar chart showing award (certificate, associate’s, and bachelor’s) and enrollment statuses (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2012–13 adjusted cohort


NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary over time?

OM data are comparable across 4 cohort years.5 Figure 4 shows outcomes of degree/certificate-seeking undergraduates 8 years after entry from the 2009–10 cohort through the 2012–13 cohort for so-called “traditional” (i.e., FTFT) and “non-traditional” (i.e., non-FTFT) students.

Key takeaways:

  • For both traditional and non-traditional students, the percentage of students earning an award was higher for the 2012–13 cohort than for the 2009–10 cohort, climbing from 47 percent to 51 percent for traditional students and from 32 percent to 35 percent for non-traditional students.
  • The growth in award attainment for traditional students was driven by the share of students earning bachelor’s degrees (30 percent for the 2009–10 cohort vs. 35 percent for the 2012–13 cohort).
  • The growth in award attainment for non-traditional students was driven by the share of students earning both associate’s degrees (15 percent for the 2009–10 cohort vs. 16 percent for the 2012–13 cohort) and bachelor’s degrees (13 percent for the 2009–10 cohort vs. 15 percent for the 2012–13 cohort).

Figure 4. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by first-time, full-time (FTFT) status and award and enrollment status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing award status (certificate, associate’s, and bachelor’s) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students 8 years after entry by first-time, full-time status (traditional or first-time, full-time students and non-traditional or non-first-time, full-time students) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. “Non-traditional” (i.e., non-first-time, full-time) students include first-time, part-time, non-first-time, full-time, and non-first-time, part-time subcohorts. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


To learn more about the IPEDS OM survey component, visit the Measuring Student Success in IPEDS: Graduation Rates (GR), Graduation Rates 200% (GR200), and Outcome Measures (OM) resource page and the OM survey component webpage. Go to the IPEDS Use the Data page to explore IPEDS data through easy-to-use web tools, access data files to conduct your own analyses like those presented in this blog post, or view OM web tables.  

By McCall Pitcher, AIR


[1] The Federal Pell Grant Program (Higher Education Act of 1965, Title IV, Part A, Subpart I, as amended) provides grant assistance to eligible undergraduate postsecondary students with demonstrated financial need to help meet education expenses.

[2] Due to the 8-year measurement lag between initial cohort enrollment and student outcome reporting for the Outcome Measures survey component, the most recent cohort for which data are publicly available is 2012–13. Prior to the 2009–10 cohort, OM did not collect cohort subgroups by Pell Grant recipient status. Therefore, this analysis includes data only for the four most recent cohorts.

[3] The 4-year status point was added in the 2017–18 collection.

[4] Data users can explore available institutional variables on the IPEDS Use the Data webpage.

[5] For comparability purposes, this analysis relies on data from the 2017–18 collection (reflecting the 2009–10 adjusted cohort) through the 2020–21 collection (reflecting the 2012–13 adjusted cohort). Prior to the 2017–18 collection, OM cohorts were based on a fall term for academic reporters and a full year for program reporters.

Introducing a New Resource Page for the IPEDS Outcome Measures (OM) Survey Component

The National Center for Education Statistics (NCES) has introduced a new resource page for the Integrated Postsecondary Education Data System (IPEDS) Outcome Measures (OM) survey component. This blog post provides an overview of the webpage and is the first in a series of blog posts that will showcase OM data.

Measuring Student Success in IPEDS: Graduation Rates (GR), Graduation Rates 200% (GR200), and Outcome Measures (OM) is a new resource page designed to help data reporters and users better understand the value of OM data and how the OM survey component works, particularly when compared with the Graduation Rates (GR) and Graduation Rates 200% (GR200) survey components.

The OM survey component was added to IPEDS in 2015–16 in an effort to capture postsecondary outcomes for more than so-called “traditional” college students. From 1997–98 to 2015–16, IPEDS graduation rate data were collected only for first-time, full-time (FTFT) degree/certificate-seeking (DGCS) undergraduates through the GR and GR200 survey components. Unlike these survey components, OM collects student outcomes for all entering DGCS undergraduates, including non-first-time students (i.e., transfer-in students) and part-time students.

Outcome measures are useful because student characteristics vary by level of institution. In 2009, some 4.7 million students began at 2-year postsecondary institutions, and 25 percent were full-time students who were attending college for the first time. That same year, some 4.5 million students began at 4-year institutions, and 44 percent were first-time, full-time students.1

The new resource page answers several important questions about OM, GR, and GR200, including the following:

  • Which institutions complete each survey component?
  • Does the survey form vary by institutional type?
  • What student success measures are included?
  • Which students are included in the cohort?
  • What is the timeframe for establishing student cohorts?
  • Which subgroups (disaggregates) are included?
  • What is the timing of data collection and release?

In answering these questions, the resource page highlights that OM provides a more comprehensive view of student success than do GR and GR200. Furthermore, it suggests that OM, GR, and GR200 are not directly comparable, as the survey components differ in terms of which institutions complete them, which students are captured, and how each measures cohorts. Here are some of the key differences:

  • Institutions with FTFT cohorts complete the GR and GR200 components, whereas degree-granting institutions complete the OM component.
  • GR and GR200 include only FTFT DGCS undergraduates, whereas OM includes all DGCS undergraduates.
  • GR and GR200 cohorts are based on a fall term for academic reporters and a full year (September 1–August 31) for program reporters, whereas OM cohorts are based on a full year (July 1–June 30) for all degree-granting institutions.

Finally, the resource page outlines how OM works, including how cohorts and subcohorts are established, which outcomes are collected at various status points, and when the public gains access to submitted data. Exhibit 1 presents the current 2021–22 data collection timeline, including the cohort year, outcome status points, data collection period, and public release of OM data.


Exhibit 1. 2021–22 Outcome Measures (OM) data collection timeline (2013–14 entering degree/certificate-seeking cohort)

Infographic showing the 2021–22 OM data collection timeline, including the cohort year, outcome status points, data collection period, and public release of OM data


Data reporters and users are encouraged to utilize the new OM survey component resource page to better understand the scope of OM, how it works, and how it differs from GR and GR200. Stay tuned for a follow-up blog post featuring data from OM that further highlights the survey component’s usefulness in measuring student success for all DGCS undergraduate students.

 

By Tara Lawley, NCES; Roman Ruiz, AIR; Aida Ali Akreyi, AIR; and McCall Pitcher, AIR


[1] U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS), Winter 2017–18, Outcome Measures component; and IPEDS Fall 2009, Institutional Characteristics component. See Digest of Education Statistics 2018, table 326.27.

NCES's Top Hits of 2021

As 2021—another unprecedented year—comes to a close and you reflect on your year, be sure to check out NCES’s annual list of top web hits. From reports and Condition of Education indicators to Fast Facts, APIs, blog posts, and tweets, NCES releases an array of content to help you stay informed about the latest findings and trends in education. Don’t forget to follow us on Twitter, Facebook, and LinkedIn to stay up-to-date in 2022!
 

Top five reports, by number of PDF downloads

1. Condition of Education 2020 (8,376)

2. Digest of Education Statistics 2019 (4,427)

3. Status and Trends in the Education of Racial and Ethnic Groups 2018 (3,282)

4. Indicators of School Crime and Safety: 2019 (2,906)

5. Trends in High School Dropout and Completion Rates in the United States: 2019 (2,590)

 

Top five indicators from the Condition of Education, by number of web sessions

1. Students With Disabilities (100,074)

2. Racial/Ethnic Enrollment in Public Schools (64,556)

3. Characteristics of Public School Teachers (57,188)

4. Public High School Graduation Rates (54,504)

5. Education Expenditures by Country (50,20)

 

Top five Fast Facts, by number of web sessions

1. Back-to-School Statistics (162,126)

2. Tuition Costs of Colleges and Universities (128,236)

3. Dropout Rates (74,399)

4. Graduation Rates (73,855)

5. Degrees Conferred by Race and Sex (63,178)

 

Top five NCES/EDGE API requested categories of social and spatial context GIS data, by number of requests

1. K–12 Schools (including district offices) (4,822,590)

2. School Districts (1,616,374)

3. Social/Economic (882,984)

4. Locales (442,715)

5. Postsecondary (263,047)

 

Top five blog posts, by number of web sessions

1. Understanding School Lunch Eligibility in the Common Core of Data (8,242)

2. New Report Shows Increased Diversity in U.S. Schools, Disparities in Outcomes (3,463)

3. Free or Reduced Price Lunch: A Proxy for Poverty? (3,457)

4. Back to School by the Numbers: 2019–20 School Year (2,694)

5. Educational Attainment Differences by Students’ Socioeconomic Status (2,587)

 

Top five tweets, by number of impressions

1. CCD blog (22,557)

2. NAEP dashboard (21,551)

3. IPEDS data tools (21,323)

4. ACGR web table (19,638)

5. Kids’ Zone (19,390)

 

By Megan Barnett, AIR

The “Where” of Going to College: Residence, Migration, and Fall Enrollment

Newly released provisional data from the Integrated Postsecondary Education Data System’s (IPEDS) Fall Enrollment (EF) survey provide an updated look at whether beginning college students are attending school in their home state or heading elsewhere.

In fall 2018, the number of first-time degree/certificate-seeking students enrolled at Title IV postsecondary institutions (beginning college students) varied widely across states, ranging from 3,700 in Alaska to 400,300 in California (figure 1). College enrollment is strongly correlated with the number of postsecondary institutions within each state, as more populous and geographically large states have more institutional capacity to enroll more students. Most states (32 out of 50) and the District of Columbia enrolled fewer than 50,000 beginning college students in fall 2018 and only six states (California, Texas, New York, Florida, Pennsylvania, and Ohio) enrolled more than 100,000 beginning college students.


Figure 1. Number of first-time degree/certificate-seeking undergraduate students enrolled at Title IV institutions, by state or jurisdiction: Fall 2018

SOURCE: U.S. Department of Education, National Center for Education Statistics, IPEDS, Spring 2019, Fall Enrollment component (provisional data).


As a result of students migrating outside their home states to attend college, some postsecondary institutions enroll students who are not residents of the state or jurisdiction in which the institution is located. Among beginning college students in fall 2018, the share of students who were residents of the same state varied widely, from 31 percent in New Hampshire to 93 percent in Texas and Alaska (figure 2). For a majority of states (27 out of 50), residents comprised at least 75 percent of total beginning college student enrollment. Only three states (Rhode Island, Vermont, and New Hampshire) and the District of Columbia enrolled more nonresidents than residents among their fall 2018 beginning college students.


Figure 2. Percent of first-time degree/certificate-seeking undergraduate students enrolled at Title IV institutions in the state or jurisdiction who are residents of the same state or jurisdiction: Fall 2018

SOURCE: U.S. Department of Education, National Center for Education Statistics, IPEDS, Spring 2019, Fall Enrollment component (provisional data).


States experience varying levels of out-migration (i.e., residents leaving the state to attend college) and in-migration (i.e., nonresidents coming into the state to attend college). For example, in fall 2018, California experienced the largest number of residents out-migrating to attend college in a different state (44,800) but gained 37,800 nonresidents in-migrating to attend college in the state, for an overall negative net migration of beginning college students (figure 3). In contrast, New York also experienced a large number of residents out-migrating for college (33,800) but gained 43,300 nonresidents, for an overall positive net migration of beginning college students.
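Net migration is simple arithmetic: in-migration minus out-migration. A minimal sketch, using the fall 2018 California and New York figures cited above:

```python
# Sketch: net migration = in-migration - out-migration, using the fall 2018
# figures cited above for California and New York.
def net_migration(in_migrants: int, out_migrants: int) -> int:
    """Positive result: the state gained beginning college students on net."""
    return in_migrants - out_migrants

california = net_migration(in_migrants=37_800, out_migrants=44_800)  # -7,000
new_york = net_migration(in_migrants=43_300, out_migrants=33_800)    # +9,500
```

So California lost 7,000 beginning college students on net despite its large in-flow, while New York gained 9,500 despite its large out-flow.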


Figure 3. Number of first-time degree/certificate-seeking undergraduate students at Title IV institutions who migrate into and out of the state or jurisdiction: Fall 2018

NOTE: The migration of students refers to students whose permanent address at the time of application to the institution is located in a different state or jurisdiction than the institution. Migration does not indicate a permanent change of address has occurred. Migration into the state or jurisdiction may include students who are nonresident aliens, who are from the other U.S. jurisdictions, or who reside outside the state or jurisdiction and are enrolled exclusively in online or distance education programs. Migration into the state or jurisdiction does not include individuals whose state or jurisdiction of residence is unknown.

SOURCE: U.S. Department of Education, National Center for Education Statistics, IPEDS, Spring 2019, Fall Enrollment component (provisional data).


Approximately three-quarters of states (37 out of 50) and the District of Columbia had a positive net migration of beginning college students in fall 2018 (figure 4). The remaining one-quarter of states (13 out of 50) had more residents out-migrate for college than nonresidents in-migrate for college, resulting in a negative net migration of beginning college students. Net migration varied widely by state, with New Jersey experiencing the largest negative net migration (28,500 students) and Utah experiencing the largest positive net migration (14,400 students).


Figure 4. Net migration of first-time degree/certificate-seeking undergraduate students at Title IV institutions, by state or jurisdiction: Fall 2018

NOTE: Net migration is the difference between the number of students entering the state or jurisdiction to attend school (into) and the number of students (residents) who leave the state or jurisdiction to attend school elsewhere (out of). A positive net migration indicates more students coming into the state or jurisdiction than leaving to attend school elsewhere.

SOURCE: U.S. Department of Education, National Center for Education Statistics, IPEDS, Spring 2019, Fall Enrollment component (provisional data).


The newly released IPEDS Fall Enrollment data provide tremendous insights into the geographic mobility of beginning college students. Additional analyses on residence and migration can be conducted using the full IPEDS data files. For example, the data can identify to which states and types of institutions beginning college students out-migrate and, conversely, from which states postsecondary institutions recruit their incoming classes.

 

By Roman Ruiz, AIR

From Data Collection to Data Release: What Happens?

In today’s world, much scientific data is collected automatically from sensors and processed by computers in real time to produce instant analytic results. People have grown accustomed to instant data and expect to get things quickly.

At the National Center for Education Statistics (NCES), we are frequently asked why, in a world of instant data, it takes so long to produce and publish data from surveys. Although improvements in the timeliness of federal data releases have been made, there are fundamental differences in the nature of data compiled by automated systems and specific data requested from federal survey respondents. Federal statistical surveys are designed to capture policy-related and research data from a range of targeted respondents across the country, who may not always be willing participants.

This blog is designed to provide a brief overview of the survey data processing framework, but it’s important to understand that the survey design phase is, in itself, a highly complex and technical process. In contrast to a management information system, in which an organization has complete control over data production processes, federal education surveys are designed to represent the entire country and require coordination with other federal, state, and local agencies. After the necessary coordination activities have been concluded, and the response periods for surveys have ended, much work remains to be done before the survey data can be released.

Survey Response

One of the first sources of potential delay is that some jurisdictions or individuals are unable to complete their surveys on time. Unlike opinion polls and online quizzes, which accept anyone who feels like responding (convenience samples), NCES surveys use rigorously constructed samples meant to properly represent specific populations, such as states or the nation as a whole. To ensure proper representation within the sample, NCES follows up with nonresponding sampled individuals, education institutions, school districts, and states to maximize survey participation. Some large jurisdictions also have their own extensive survey operations to conclude before they can provide information to NCES. The New York City school district, for example, is larger than about two-thirds of all state education systems and must first gather information from all its schools before it can respond. Receipt of data from New York City and other large districts is essential to compiling nationally representative data.

Editing and Quality Reviews

Waiting for final survey responses does not mean that survey processing comes to a halt. One of the most important roles NCES plays in survey operations is editing and conducting quality reviews of incoming data, which take place on an ongoing basis. In these quality reviews, a variety of strategies are used to make cost-effective and time-sensitive edits to the incoming data. For example, in the Integrated Postsecondary Education Data System (IPEDS), individual higher education institutions upload their survey responses and receive real-time feedback on responses that are out of range compared to prior submissions or instances where survey responses do not align in a logical way. All NCES surveys use similar logic checks in addition to a range of other editing checks that are appropriate to the specific survey. These checks typically look for responses that are out of range for a certain type of respondent.
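The range and logic checks described above can be sketched in a few lines. The thresholds and field names below are hypothetical illustrations, not actual IPEDS edit rules:

```python
# Sketch of automated range and logic checks of the kind described above.
# Thresholds and field names are hypothetical, not actual IPEDS edit rules.
def check_submission(current: dict, prior: dict, tolerance: float = 0.5) -> list[str]:
    """Flag values far out of range versus the prior submission, plus
    internal inconsistencies, for human review."""
    flags = []
    # Range check: large swings versus the prior year get flagged
    for field, value in current.items():
        prev = prior.get(field)
        if prev and abs(value - prev) / prev > tolerance:
            flags.append(f"{field}: {value} differs >{tolerance:.0%} from prior ({prev})")
    # Logic check: subcohorts should sum to the reported total cohort
    parts = current["full_time"] + current["part_time"]
    if parts != current["total_cohort"]:
        flags.append(f"full_time + part_time = {parts} != total_cohort {current['total_cohort']}")
    return flags

flags = check_submission(
    {"total_cohort": 1200, "full_time": 900, "part_time": 290},
    {"total_cohort": 1150, "full_time": 880, "part_time": 270},
)
```

Here the year-over-year changes are within tolerance, but the subcohorts fail to sum to the reported total, so the submission would be returned to the institution for review.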

Although most checks are automated, some particularly complicated or large responses may require individual review. For IPEDS, the real-time feedback described above is followed by quality review checks conducted after the full dataset has been collected. This can result in individualized follow-up and review with institutions whose data still raise substantive questions.

Sample Weighting

In order to lessen the burden on the public and reduce costs, NCES collects data from selected samples of the population rather than taking a full census of the entire population for every study. In all sample surveys, a range of additional analytic tasks must be completed before data can be released. One of the more complicated tasks is constructing weights based on the original sample design and survey responses so that the collected data can properly represent the nation and/or states, depending on the survey. These sample weights are designed so that analyses can be conducted across a range of demographic or geographic characteristics and properly reflect the experiences of individuals with those characteristics in the population.
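Intuitively, a respondent's weight is roughly the number of people in the population that respondent stands in for. A toy illustration, with invented values, of why a weighted estimate differs from a raw average:

```python
# Sketch: how sample weights let a survey sample represent a population.
# A respondent's weight is roughly the number of population members they
# stand in for; all values below are invented for illustration.
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Two respondents from an oversampled group (weight 100 each) and one from
# an undersampled group (weight 800): the weighted estimate leans toward
# the undersampled respondent, matching the population mix.
scores = [60, 70, 90]
weights = [100, 100, 800]

unweighted = sum(scores) / len(scores)     # about 73.3
weighted = weighted_mean(scores, weights)  # 85.0
```

Real survey weights also incorporate nonresponse adjustments and calibration to known population totals, which is part of why their construction takes time.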

If the survey response rate is too low, a “survey bias analysis” must be completed to ensure that the results will be sufficiently reliable for public use. For longitudinal surveys, such as the Early Childhood Longitudinal Study, multiple sets of weights must be constructed so that researchers using the data will be able to appropriately account for respondents who answered some but not all of the survey waves.

NCES surveys also include “constructed variables” to facilitate more convenient and systematic use of the survey data. Examples of constructed variables include socioeconomic status or family type. Other types of survey data also require special analytic considerations before they can be released. Student assessment data, such as the National Assessment of Educational Progress (NAEP), require that a number of highly complex processes be completed to ensure proper estimations for the various populations being represented in the results. For example, just the standardized scoring of multiple choice and open-ended items can take thousands of hours of design and analysis work.

Privacy Protection

Release of data by NCES carries a legal requirement to protect the privacy of our nation’s children. Each NCES public-use dataset undergoes a thorough evaluation to ensure that it cannot be used to identify responses of individuals, whether they are students, parents, teachers, or principals. The datasets must be protected through item suppression, statistical swapping, or other techniques to ensure that multiple datasets cannot be combined in such a way as to identify any individual. This is a time-consuming process, but it is incredibly important to protect the privacy of respondents.
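One of the simpler techniques mentioned above, item suppression, can be illustrated with a toy example. The threshold and table below are hypothetical and do not reflect NCES's actual disclosure rules:

```python
# Toy illustration of cell suppression: counts below a threshold are
# suppressed before release so that small groups cannot be identified.
# The threshold and table are hypothetical, not NCES's actual rules.
SUPPRESSION_THRESHOLD = 3

def suppress_small_cells(table: dict) -> dict:
    return {
        cell: (count if count >= SUPPRESSION_THRESHOLD else "suppressed")
        for cell, count in table.items()
    }

released = suppress_small_cells({"group_a": 150, "group_b": 2, "group_c": 47})
# group_b's count of 2 is withheld from the released table
```

In practice, suppressing one cell is not enough: complementary cells must also be suppressed so that the hidden value cannot be recovered from row or column totals, which is one reason disclosure review is time consuming.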

Data and Report Release

When the final data have been received and edited, the necessary variables have been constructed, and the privacy protections have been implemented, there is still more that must be done to release the data. The data must be put in appropriate formats with the necessary documentation for data users. NCES reports with basic analyses or tabulations of the data must be prepared. These products are independently reviewed within the NCES Chief Statistician’s office.

Depending on the nature of the report, the Institute of Education Sciences Standards and Review Office may conduct an additional review. After all internal reviews have been conducted, revisions have been made, and the final survey products have been approved, the U.S. Secretary of Education’s office is notified 2 weeks in advance of the pending release. During this notification period, appropriate press release materials and social media announcements are finalized.

Although NCES can expedite some product releases, the work of preparing survey data for release often takes a year or more. NCES strives to maintain a balance between timeliness and providing the reliable high-quality information that is expected of a federal statistical agency while also protecting the privacy of our respondents.  

 

By Thomas Snyder