# NCES Blog

### National Center for Education Statistics

By Lauren Musu-Gillette

EDITOR’S NOTE: This is part of a series of blog posts about statistical concepts that NCES uses as a part of its work.

Many of the important findings in NCES reports are based on data gathered from samples of the U.S. population. These sample surveys provide an estimate of what the data would look like if the full population had participated in the survey, but at great savings in both time and cost. However, because the entire population is not included, there is always some degree of uncertainty associated with an estimate from a sample survey. For those using the data, knowing the size of this uncertainty is important both for evaluating the reliability of an estimate and for statistical testing to determine whether two estimates are significantly different from one another.

NCES reports standard errors for all data from sample surveys. In addition to providing these values to the public, NCES uses them for statistical testing purposes. Within annual reports such as the Condition of Education, Indicators of School Crime and Safety, and Trends in High School Dropout and Completion Rates in the United States, NCES uses statistical testing to determine whether estimates for certain groups are statistically significantly different from one another. Specific language is tied to the results of these tests. For example, in comparing male and female employment rates in the Condition of Education, the indicator states that the overall employment rate for young males 20 to 24 years old was higher than the rate for young females 20 to 24 years old (72 vs. 66 percent) in 2014. Use of the term “higher” indicates that statistical testing was performed to compare these two groups and the results were statistically significant.
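To make the idea concrete, here is a simplified sketch of the kind of test behind a word like “higher”: a two-sided z-test on the difference between two independent estimates, using their standard errors. The employment rates (72 and 66 percent) come from the text above, but the standard errors used below are hypothetical placeholders, since the actual values depend on the survey design; NCES’s production tests may also apply adjustments not shown here.

```python
def difference_is_significant(est1, se1, est2, se2, critical=1.96):
    """Two-sided z-test at the .05 level for two independent estimates."""
    # Standard error of the difference of two independent estimates
    se_diff = (se1**2 + se2**2) ** 0.5
    z = (est1 - est2) / se_diff
    return abs(z) > critical

# Hypothetical standard errors of 1.0 percentage point for each group:
print(difference_is_significant(72, 1.0, 66, 1.0))
```

With these placeholder standard errors, the 6-point gap is well beyond the critical value, so the difference would be reported as statistically significant; with much larger standard errors, the same 6-point gap might not be.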

If differences between groups are not statistically significant, NCES uses the phrases “no measurable differences” or “no statistically significant differences at the .05 level”. This is because we do not know for certain that differences do not exist at the population level, just that our statistical tests of the available data were unable to detect differences. This could be because there is in fact no difference, but it could also be due to other reasons, such as a small sample size or large standard errors for a particular group. Heterogeneity, or large amounts of variability, within a sample can also contribute to larger standard errors.

Some of the populations of interest to education stakeholders are quite small, for example, Pacific Islander or American Indian/Alaska Native students. As a consequence, these groups are typically represented by relatively small samples, and their estimates are often less precise than those of larger groups. This lower precision is reflected in larger standard errors for these groups. For example, in the table above the standard error for White students who reported having been in 0 physical fights anywhere is 0.70, whereas the standard error is 4.95 for Pacific Islander students and 7.39 for American Indian/Alaska Native students. This means that the uncertainty around the estimates for Pacific Islander and American Indian/Alaska Native students is much larger than it is for White students. Because of these larger standard errors, differences between these groups that may seem large may not be statistically significant. When this occurs, NCES analysts may state that large apparent differences are not statistically significant. NCES data users can use standard errors to help make valid comparisons using the data that we release to the public.
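One simple way to see what those standard errors imply is to convert each into a 95 percent margin of error (±1.96 × SE). The sketch below uses the standard errors quoted above for the “0 physical fights” estimates; the 1.96 critical value assumes a standard normal approximation.

```python
def margin_of_error(se, critical=1.96):
    """95 percent margin of error for an estimate with standard error `se`."""
    return critical * se

for group, se in [("White", 0.70),
                  ("Pacific Islander", 4.95),
                  ("American Indian/Alaska Native", 7.39)]:
    print(f"{group}: \u00b1{margin_of_error(se):.1f} percentage points")
```

The margin of error for American Indian/Alaska Native students is more than ten times that for White students, which is why apparently large gaps between small groups can fail to reach statistical significance.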

Another example of how standard errors can affect whether sample differences are statistically significant can be seen when comparing changes in NAEP scores by state. Between 2013 and 2015, mathematics scores for fourth-grade public school students changed by 3 points in both Mississippi and Louisiana. However, the change was significant only for Mississippi. This is because the standard error for the change in scale scores was 1.2 for Mississippi, whereas it was 1.6 for Louisiana. The larger standard error, and therefore the larger degree of uncertainty around the estimate, factors into the statistical tests that determine whether a difference is statistically significant. This difference in standard errors could reflect the size of the samples in Mississippi and Louisiana, or other factors such as the degree to which the assessed students are representative of the population of their respective states.
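The Mississippi/Louisiana comparison can be sketched directly: divide the change by its standard error and compare the resulting z statistic with the 1.96 critical value for a two-sided test at the .05 level. The 3-point change and the standard errors (1.2 and 1.6) come from the text; the simple z-test is a simplified stand-in for NAEP’s actual testing procedures.

```python
def is_significant(change, se, critical=1.96):
    """Return (z, significant) for a change estimate and its standard error."""
    z = change / se
    return z, abs(z) > critical

for state, se in [("Mississippi", 1.2), ("Louisiana", 1.6)]:
    z, sig = is_significant(3.0, se)
    print(f"{state}: z = {z:.2f}, significant: {sig}")
```

Mississippi’s z statistic (2.50) exceeds 1.96, while Louisiana’s (1.88) does not, which is why the same 3-point change is reported as significant for one state but not the other.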

Researchers may also be interested in using standard errors to compute confidence intervals for an estimate. Stay tuned for a future blog where we’ll outline why researchers may want to do this and how it can be accomplished.

By Sarah Grady

NCES collects a lot of data from students, teachers, principals, school districts, and state education agencies, but a few of our data collections directly survey members of the American public using residence as a first point of contact. Why? Some information about education in the U.S. cannot be collected efficiently by starting with schools or other institutions. Instead, contacting people directly at home is the best way to understand certain education-related topics.

The 2012 National Household Education Survey (NHES) included two survey components:

• The Early Childhood Program Participation (ECPP) survey, mailed to parents of children from birth through age 6 who were not yet enrolled in kindergarten
• The Parent and Family Involvement in Education (PFI) survey, mailed to parents of students in kindergarten through grade 12

The ECPP survey provides information about children from the perspective of their parents and includes questions about:

• Factors that influence choices of childcare arrangements
• Characteristics of childcare providers and cost of care
• Participation in home activities such as reading, telling stories, and singing songs

The items on this survey provide a wealth of information about how America’s children are learning and growing at home as well as the characteristics of the children who are in different types of care arrangements, including having multiple care arrangements.

NCES’s administrative data collections like EDFacts tell us a great deal about the sizes and types of schools in the U.S., while surveys like the National Teacher and Principal Survey (NTPS) and the School Survey on Crime and Safety (SSOCS) tell us about school policies, school climate, and teacher attitudes and experiences. But NHES is the source for information about students’ and families’ experiences with schooling, irrespective of school affiliation. Parents of students attending all types of schools in the U.S.—public, private, charter schools, schools that were chosen rather than assigned by the school district, even parents who educate their children at home rather than send them to a school—respond to the survey and answer questions about a range of education-related topics.

In 2016, NHES will field the Adult Training and Education Survey (ATES), which will provide data about adults’ educational and work credentials, including professional certifications and licenses. This survey meets an important need for more information about where and how adults acquire the skills they need for work. The ATES will start with a random sample of U.S. adults rather than a sample of postsecondary institutions, which enables NCES to collect information about a broader array of credentials than could be collected by reaching students through postsecondary institutions. In short, NHES data allow us to understand how the American public is experiencing education so that we can better respond to the changing education needs of our people—be they young children, K-12 students, or adults.