NCES Blog

National Center for Education Statistics

NCES Fast Facts Deliver Data to Your Door

By Molly Fenster, American Institutes for Research

Have you ever wondered how many public high school students graduate on time? Or wanted to know the types of safety and security measures schools use, or the latest trends in the cost of a college education? If so, the NCES Fast Facts website has the answers for you!

Launched on March 1, 1999, the Fast Facts site originally included 45 responses to the questions most frequently asked by callers to the NCES Help Line. Today, more than 70 Fast Facts answer questions of interest to education stakeholders, such as teachers, school administrators, and researchers, as well as college students, parents, and community members with a specific interest or data need. The facts feature text, tables, figures, and links from various published sources, primarily the Digest of Education Statistics and The Condition of Education, and they are updated periodically with new data from recently released publications and products.

For example, one of the most accessed Fast Facts covers high school dropout rates.

Access the site for the full Fast Fact, as well as links to “Related Tables and Figures” and “Other Resources” on high school dropout rates.

The other facts on the site feature a diverse range of topics from child care, homeschooling, students with disabilities, teachers, and enrollment, to graduation rates, educational attainment, international education, finances, and more. The site is organized to provide concise, current information in the following areas:

  • Assessments;
  • Early Childhood;
  • Elementary and Secondary;
  • Library;
  • Postsecondary and Beyond; and
  • Resources.

Five recently released Fast Facts on ACT scores; science, technology, engineering, and mathematics (STEM) education; public school students eligible for free or reduced-price lunch; postsecondary student debt; and Historically Black Colleges and Universities offer the latest data on these policy-relevant and interesting education topics.

Join our growing base of users and visit the Fast Facts site today!

What is the difference between the ACGR and the AFGR?

By Joel McFarland

NCES and the Department of Education have released national and state-level Adjusted Cohort Graduation Rates (ACGR) for the 2015-16 school year. You can see the data on the NCES website (as well as data from 2010-11 through 2014-15).

In recent years, NCES has released two widely used annual measures of high school completion: the Adjusted Cohort Graduation Rate (ACGR) and the Averaged Freshman Graduation Rate (AFGR). Both measure the percentage of public school students who attain a regular high school diploma within 4 years of starting 9th grade. However, they also differ in important ways. This post provides an overview of how each measure is calculated and why they may produce different rates.

What is the Adjusted Cohort Graduation Rate (ACGR)?

The ACGR, first collected for 2010-11, is the newer of the two measures. To calculate the ACGR, states identify the “cohort” of first-time 9th graders in a particular school year and adjust this number by adding any students who transfer into the cohort after 9th grade and subtracting any students who transfer out, emigrate to another country, or pass away. The ACGR is the percentage of the students in this cohort who graduate within four years. States calculate the ACGR for individual schools and districts and for the state as a whole using detailed data that track each student over time. In many states, these student-level records have become available at a state level only in recent years. As an example, the ACGR formula for 2012-13 was calculated like this:

ACGR = [cohort members who earned a regular high school diploma by the end of the 2012-13 school year] ÷ [first-time 9th-graders in fall 2009, plus students who transferred in, minus students who transferred out, emigrated, or passed away during school years 2009-10 through 2012-13] × 100
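
The cohort adjustment described above amounts to simple bookkeeping, sketched below in Python. The student counts are invented for illustration, not actual state data:

```python
# Illustrative sketch of the ACGR calculation; all counts are hypothetical.

def adjusted_cohort_graduation_rate(first_time_9th_graders,
                                    transfers_in,
                                    transfers_out,  # includes emigrants and deceased students
                                    graduates_within_4_years):
    """Percentage of the adjusted cohort earning a regular diploma within 4 years."""
    adjusted_cohort = first_time_9th_graders + transfers_in - transfers_out
    return 100 * graduates_within_4_years / adjusted_cohort

# Example: a hypothetical state cohort entering 9th grade in fall 2009.
acgr = adjusted_cohort_graduation_rate(
    first_time_9th_graders=50_000,
    transfers_in=2_000,
    transfers_out=1_500,
    graduates_within_4_years=41_000,
)
print(round(acgr, 1))  # 81.2
```

Note that the denominator changes as students move in and out of the cohort, which is why states need student-level records to compute it.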

What is the Averaged Freshman Graduation Rate (AFGR)?

The AFGR uses aggregate student enrollment data to estimate the size of an incoming freshman class, which is compared to the number of high school diplomas awarded 4 years later. The incoming freshman class size is estimated by summing 8th grade enrollment in year one, 9th grade enrollment for the next year, and 10th grade enrollment for the year after, and then dividing by three. Averaging the enrollment counts helps to smooth out the enrollment bump typically seen in 9th grade. The AFGR estimate is less accurate than the ACGR, but it can be estimated as far back as the 1960s since it requires only aggregate annual counts of enrollments and graduates. As an example, the AFGR formula for 2012-13 was:

AFGR = [regular high school diplomas awarded in 2012-13] ÷ [(8th-grade enrollment in 2008-09 + 9th-grade enrollment in 2009-10 + 10th-grade enrollment in 2010-11) ÷ 3] × 100
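
The averaged-enrollment estimate can be sketched the same way, again with invented counts; note how the inflated 9th-grade count is smoothed by the three-year average:

```python
# Illustrative sketch of the AFGR estimate; all counts are hypothetical.

def averaged_freshman_graduation_rate(grade8_enrollment_y1,
                                      grade9_enrollment_y2,
                                      grade10_enrollment_y3,
                                      diplomas_awarded_4_years_later):
    """Diplomas awarded divided by the averaged estimate of the freshman class."""
    estimated_freshman_class = (grade8_enrollment_y1
                                + grade9_enrollment_y2
                                + grade10_enrollment_y3) / 3
    return 100 * diplomas_awarded_4_years_later / estimated_freshman_class

# Example: the 9th-grade enrollment "bump" (retained students) inflates the
# middle count, which the averaging helps smooth out.
afgr = averaged_freshman_graduation_rate(
    grade8_enrollment_y1=49_000,   # 2008-09
    grade9_enrollment_y2=53_000,   # 2009-10
    grade10_enrollment_y3=48_000,  # 2010-11
    diplomas_awarded_4_years_later=41_000,  # 2012-13
)
print(round(afgr, 1))  # 82.0
```

Because only these aggregate counts are needed, the AFGR can be computed for decades when no student-level records exist.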

Why do they produce different rates?

There are several reasons the AFGR and ACGR do not match exactly.

  • The AFGR’s estimate of the incoming freshman class is fixed and is not adjusted to account for students entering or exiting the cohort during high school. As a result, it is very sensitive to migration trends. If there is net out-migration after the initial cohort size is estimated, the AFGR will understate the graduation rate relative to the ACGR. If there is net in-migration, the AFGR will overstate the graduation rate;
  • The diploma count used in the AFGR includes any students who graduate with a regular high school diploma in a given school year, which may include students who took more or less than four years to graduate. The ACGR includes only those students who graduate within four years of starting ninth grade. This can cause the AFGR to be inflated relative to the ACGR; and
  • The AFGR’s averaged enrollment base is sensitive to the presence of 8th and 9th grade dropouts. Students who drop out in 8th grade in one year are not eligible to be first-time freshmen the next year, but they are included in the calculation of the AFGR enrollment base. At the same time, 9th grade dropouts should be counted as first-time 9th graders but are excluded from the 10th grade enrollment counts used in the AFGR enrollment base. Because more students typically drop out in 9th grade than in 8th grade, the net effect is likely an underestimate of the AFGR enrollment base relative to the true ACGR cohort, which inflates the AFGR.

At the national level, these factors largely balance out, and the AFGR closely tracks the ACGR. For instance, in 2012-13, there was less than one percentage point difference between the AFGR (81.9%) and the ACGR (81.4%). At the state level, especially for small population subgroups, there is often more variation between the two measures.

On the NCES website you can access the most recently available data for each measure, including 2015-16 adjusted cohort graduation rates and 2012-13 averaged freshman graduation rates. You can find more data on high school graduation and dropout rates in the annual report Trends in High School Dropout and Completion Rates in the United States.

This blog was originally posted on July 15, 2015 and was updated on February 2, 2016 and December 4, 2017.

Measuring the Homeschool Population

By Sarah Grady

How many children are educated at home instead of school? Although many of our data collections focus on what happens in public or private schools, the National Center for Education Statistics (NCES) tries to capture as many facets of education as possible, including the number of homeschooled youth and the characteristics of this population of learners. NCES was one of the first organizations to attempt to estimate the number of homeschoolers in the United States using a rigorous sample survey of households. The Current Population Survey included homeschooling questions in 1994, which helped NCES refine its approach toward measuring homeschooling.[i] As part of the National Household Education Surveys Program (NHES), NCES published homeschooling estimates starting in 1999. The homeschooling rate has grown from 1.7 percent of the school-aged student population in 1999 to 3.4 percent in 2012.[ii]

NCES recently released a Statistical Analysis Report called Homeschooling in the United States: 2012. Findings from the report, detailed in a recent blog, show that there is a diverse group of students who are homeschooled. Although NCES makes every attempt to report data on homeschooled students, this diversity can make it difficult to accurately measure all facets of the homeschool population.

One of the primary challenges in collecting relevant data on homeschool students is that no complete list of homeschoolers exists, so it can be difficult to locate these individuals. When lists of homeschoolers can be located, problems exist with the level of coverage that they provide. For example, lists of members of local and national homeschooling organizations do not include homeschooling families unaffiliated with the organizations. Customer lists from homeschool curriculum vendors exclude families who access curricula from other sources such as the Internet, public libraries, and general purpose bookstores. For these reasons, collecting data about homeschooling requires a nationally representative household survey, which begins by finding households in which at least one student is homeschooled.

Even once families are located, they can vary in their interpretation of what homeschooling is. NCES asks households if anyone in the household is “currently in homeschool instead of attending a public or private school for some or all classes.” About 18 percent of homeschoolers are in a brick-and-mortar school part-time, and families may vary in the extent to which they consider children in school part-time to be homeschoolers. Additionally, with the growth of virtual education and cyber schools, some parents are choosing to have their children schooled at home without personally providing instruction. Whether parents of students in cyber schools define their child as homeschooled likely varies from family to family.

NHES data collection begins with a random sample of addresses distributed across the entire U.S. However, most addresses will not contain any homeschooled students. Because of the low incidence of homeschooling relative to the U.S. population, a large number of households must be screened to find homeschooling students.  This leaves us with a small number of completed surveys from homeschooling families relative to studies of students in brick-and-mortar schools. For example, in 2012, the NHES program contacted 159,994 addresses and ended with 397 completed homeschooling surveys.

Smaller analytic samples often result in less precise estimates. As a result, NCES can estimate the size of the total homeschool population and some key characteristics of homeschoolers with confidence, but we are not able to accurately report data for very small subgroups. For example, NCES can report the distribution of homeschoolers by race and ethnicity,[iii] but more specific breakouts of the characteristics of homeschooled students within these racial/ethnic groups often cannot be reported due to small sample sizes and large standard errors. For a more comprehensive explanation of this issue, please see our blog post on standard errors. This matters because local-level research on homeschooling suggests that homeschooling communities across the country may be very diverse.[iv] For example, Black, urban homeschooling families in these studies are often very different from White, rural homeschooling families. Low incidence and high heterogeneity lead to estimates with lower precision.
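
To see why small samples yield imprecise estimates, consider the textbook formula for the standard error of a proportion under simple random sampling. This is a simplification: NHES uses a complex sample design, so its published standard errors are computed differently, but the relationship between sample size and precision is the same:

```python
import math

def srs_standard_error(p, n):
    """Standard error of a proportion p from a simple random sample of size n.
    (NHES actually uses a complex design, so real standard errors differ.)"""
    return math.sqrt(p * (1 - p) / n)

# The same 3.4 percent homeschooling rate, measured with different sample sizes:
for n in (400, 4_000, 40_000):
    se = srs_standard_error(0.034, n)
    print(f"n = {n:>6}: SE = {100 * se:.2f} percentage points")
```

With roughly 400 completed surveys, the standard error is several times larger than it would be with a sample in the tens of thousands, which is why fine-grained subgroup breakouts cannot be reported reliably.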

Despite these constraints, the data from NHES continue to be the most comprehensive that we have on homeschoolers. NCES continues to collect data on this important population. The 2016 NHES recently completed collection on homeschooling students, and those data will be released in fall 2017.

[i] Henke, R., and Kaufman, P. (2000). Issues Related to Estimating the Home-school Population in the United States with National Household Survey Data (NCES 2000-311). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC.

[ii] Redford, J., Battle, D., and Bielick, S. (2016). Homeschooling in the United States: 2012 (NCES 2016-096). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC.

[iv] Hanna, L.G. (2012). Homeschooling Education: Longitudinal Study of Methods, Materials, and Curricula. Education and Urban Society 44(5): 609–631.

Learning to Use the Data: Online Dataset Training Modules

UPDATED JANUARY 16, 2018: New and Updated Modules Added

By Andy White

NCES provides a wealth of data online for users to access. However, the breadth and depth of the data can be overwhelming to first-time users, and sometimes even to more experienced users. To help our users learn how to access, navigate, and use NCES datasets, we’ve developed a series of online training modules.

The Distance Learning Dataset Training (DLDT) resource is an online, interactive tool that allows users to learn about NCES data across the education spectrum and evaluate their suitability for specific research purposes. The DLDT program at NCES has developed a growing number of online training modules for several NCES complex sample survey and administrative datasets. The modules teach users about the intricacies of various datasets, including what the data represent, how the data are collected, the sample design, and considerations for analysis, to help users conduct successful analyses.

The DLDT is also a teaching tool that can be used by individuals both in and out of the classroom to learn about NCES complex sample survey and administrative data collections and appropriate analysis methods.

There are two types of NCES DLDT modules available: common modules and dataset-specific modules. The common modules help users broadly understand NCES data across the education spectrum, introduce complex survey methods, and explain how to acquire NCES micro-data. The dataset-specific modules introduce and educate users about particular datasets. The available modules are listed below, and more information can be found on the DLDT website.


AVAILABLE DLDT MODULES

Common Modules

  • Introduction to the NCES Distance Learning Dataset Training System
  • Introduction to the NCES Datasets
  • Introduction to NCES Web Gateways: Accessing and Exploring NCES Data
  • Analyzing NCES Complex Survey Data
  • Statistical Analysis of NCES Datasets Employing a Complex Sample Design
  • Acquiring Micro-level NCES Data
  • DataLab Tools: QuickStats, PowerStats, and TrendStats

Dataset-Specific Modules

  • Common Core of Data (CCD)
  • Introduction to MapED
  • Fast Response Survey System (FRSS)
  • Early Childhood Longitudinal Study Birth Cohort (ECLS-B)
  • Early Childhood Longitudinal Study Kindergarten Class of 1998-1999 (ECLS-K)
  • Early Secondary Longitudinal Studies (1972–2000)
    • National Longitudinal Study of 1972 (NLS-72)
    • High School and Beyond (HS&B)
    • National Education Longitudinal Study of 1988 (NELS:88)
  • Educational Longitudinal Study of 2002 (ELS:2002)
  • High School Longitudinal Study of 2009 (HSLS:09)
  • Introduction to High School Transcript Studies
  • Integrated Postsecondary Education Data System (IPEDS)
  • National Assessment of Educational Progress (NAEP)
    • Main, State, and Long-Term Trend NAEP
    • NAEP High School Transcript Study (HSTS)
    • National Indian Education Study (NIES)
  • National Household Education Survey Program (NHES)
  • Postsecondary Education Sample Survey Datasets
    • National Postsecondary Student Aid Study (NPSAS)
    • Beginning Postsecondary Student Longitudinal Study (BPS)
    • Baccalaureate and Beyond Longitudinal Study (B&B)
  • Postsecondary Education Quick Information System (PEQIS)
  • Private School Universe Survey (PSS)
  • Schools and Staffing Survey (SASS)
    • Teacher Follow-up Survey (TFS)
    • Principal Follow-up Survey (PFS)
    • Beginning Teacher Longitudinal Study (BTLS)
  • School Survey On Crime and Safety (SSOCS)
  • International Activities Program Studies Datasets
    • Progress in International Reading Literacy Study (PIRLS)
    • Trends in International Mathematics and Science Study (TIMSS)
    • Program for International Student Assessment (PISA)
    • Program for the International Assessment of Adult Competencies (PIAAC)

Statistical Concepts in Brief: Embracing the Errors

By Lauren Musu-Gillette

EDITOR’S NOTE: This is part of a series of blog posts about statistical concepts that NCES uses as a part of its work.

Many of the important findings in NCES reports are based on data gathered from samples of the U.S. population. These sample surveys provide an estimate of what the data would look like if the full population had participated in the survey, at a great savings in both time and cost. However, because the entire population is not included, there is always some degree of uncertainty associated with an estimate from a sample survey. For those using the data, knowing the size of this uncertainty is important both for evaluating the reliability of an estimate and for statistical testing to determine whether two estimates are significantly different from one another.

NCES reports standard errors for all data from sample surveys. In addition to providing these values to the public, NCES uses them for statistical testing purposes. Within annual reports such as the Condition of Education, Indicators of School Crime and Safety, and Trends in High School Dropout and Completion Rates in the United States, NCES uses statistical testing to determine whether estimates for certain groups are statistically significantly different from one another. Specific language is tied to the results of these tests. For example, in comparing male and female employment rates in the Condition of Education, the indicator states that the overall employment rate for young males 20 to 24 years old was higher than the rate for young females 20 to 24 years old (72 vs. 66 percent) in 2014. Use of the term “higher” indicates that statistical testing was performed to compare these two groups and the results were statistically significant.

If differences between groups are not statistically significant, NCES uses the phrases “no measurable differences” or “no statistically significant differences at the .05 level”. This is because we do not know for certain that differences do not exist at the population level, just that our statistical tests of the available data were unable to detect differences. This could be because there is in fact no difference, but it could also be due to other reasons, such as a small sample size or large standard errors for a particular group. Heterogeneity, or large amounts of variability, within a sample can also contribute to larger standard errors.

Some of the populations of interest to education stakeholders are quite small, for example, Pacific Islander or American Indian/Alaska Native students. As a consequence, these groups are typically represented by relatively small samples, and their estimates are often less precise than those of larger groups. This lower precision is reflected in larger standard errors. For example, in one table of survey estimates, the standard error for White students who reported having been in 0 physical fights anywhere is 0.70, whereas the standard error is 4.95 for Pacific Islander students and 7.39 for American Indian/Alaska Native students. This means that the uncertainty around the estimates for Pacific Islander and American Indian/Alaska Native students is much larger than it is for White students. Because of these larger standard errors, differences between these groups that may seem large may not be statistically significant. When this occurs, NCES analysts may state that large apparent differences are not statistically significant. NCES data users can use standard errors to help make valid comparisons using the data that we release to the public.

Another example of how standard errors can affect whether sample differences are statistically significant can be seen when comparing changes in NAEP scores by state. Between 2013 and 2015, mathematics scores changed by 3 points for fourth-grade public school students in both Mississippi and Louisiana. However, the change was significant only for Mississippi. This is because the standard error for the change in scale scores for Mississippi was 1.2, whereas the standard error for Louisiana was 1.6. The larger standard error, and therefore the larger degree of uncertainty around the estimate, factors into the statistical tests that determine whether a difference is statistically significant. This difference in standard errors could reflect the size of the samples in Mississippi and Louisiana, or other factors such as the degree to which the assessed students are representative of the population of their respective states.
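
The logic of that comparison can be sketched with a simplified large-sample test: divide each score change by its standard error and compare the result with the .05-level critical value of about 1.96. NCES's actual procedures account for the full survey design, so this is only an illustration:

```python
# Simplified significance check for the NAEP score changes described above.
# A large-sample two-sided test at the .05 level; NCES's actual tests
# account for the complex sample design.

CRITICAL_VALUE = 1.96  # two-sided .05 level for large samples

def is_significant(change, standard_error):
    """True if the change is statistically distinguishable from zero."""
    return abs(change / standard_error) > CRITICAL_VALUE

print(is_significant(3, 1.2))  # Mississippi: 3 / 1.2 = 2.5   -> True
print(is_significant(3, 1.6))  # Louisiana:   3 / 1.6 = 1.875 -> False
```

The same 3-point change clears the threshold in one state but not the other, purely because of the difference in standard errors.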

Researchers may also be interested in using standard errors to compute confidence intervals for an estimate. Stay tuned for a future blog where we’ll outline why researchers may want to do this and how it can be accomplished.