NCES Blog

National Center for Education Statistics

High Job Satisfaction Among Teachers, but Leadership Matters

By Lauren Musu-Gillette

Are teachers satisfied with their jobs? Overall, the answer appears to be yes. However, a recent NCES report highlights that teacher job satisfaction differs by school characteristics.

Newly released data show that at least 9 out of 10 teachers reported that they were satisfied with their jobs in 2003–04, 2007–08, and 2011–12. A higher percentage of private school teachers than public school teachers reported that they were satisfied with their jobs in all of these years.


Percent of teachers reporting they were satisfied in their jobs: School years 2003–04, 2007–08, and 2011–12

NOTE: “Satisfied” teachers are those who responded “strongly agree” or “somewhat agree” to the statement: “I am generally satisfied with being a teacher at this school.”
SOURCE: U.S. Department of Education, National Center for Education Statistics, Schools and Staffing Survey (SASS).


Differences in teacher job satisfaction also emerged based on perceptions of administrative support.[i] In 2011–12, a higher percentage of teachers who believed that the administration in their schools was supportive were satisfied with their jobs. Among teachers who felt that the administration in their schools was supportive, 95 percent were satisfied with their jobs. This was 30 percentage points higher than the percentage among teachers who did not feel the administration was supportive. This pattern was seen in private schools as well and is consistent with previous research that demonstrates the importance of school administrators to teachers’ working conditions.[ii]


Percent of satisfied teachers, by their perceptions of administrative support: School year 2011–12

NOTE: “Satisfied” teachers are those who responded “strongly agree” or “somewhat agree” to the statement: “I am generally satisfied with being a teacher at this school.”
SOURCE: U.S. Department of Education, National Center for Education Statistics, Schools and Staffing Survey (SASS).


[i] Support was measured by teachers’ agreement or disagreement with the statement “The school administration’s behavior toward the staff is supportive and encouraging.”
[ii] Ladd, H. F. (2011). Teachers’ Perceptions of Their Working Conditions: How Predictive of Planned and Actual Teacher Movement? Educational Evaluation and Policy Analysis, 33(2): 235-261.

Statistical Concepts in Brief: Embracing the Errors

By Lauren Musu-Gillette

EDITOR’S NOTE: This is part of a series of blog posts about statistical concepts that NCES uses as a part of its work.

Many of the important findings in NCES reports are based on data gathered from samples of the U.S. population. These sample surveys provide an estimate of what the data would look like if the full population had participated in the survey, but at a great savings in both time and cost. However, because the entire population is not included, there is always some degree of uncertainty associated with an estimate from a sample survey. For those using the data, knowing the size of this uncertainty is important both for evaluating the reliability of an estimate and for statistical testing to determine whether two estimates are significantly different from one another.

NCES reports standard errors for all data from sample surveys. In addition to providing these values to the public, NCES uses them for statistical testing purposes. Within annual reports such as the Condition of Education, Indicators of School Crime and Safety, and Trends in High School Dropout and Completion Rates in the United States, NCES uses statistical testing to determine whether estimates for certain groups are statistically significantly different from one another. Specific language is tied to the results of these tests. For example, in comparing male and female employment rates in the Condition of Education, the indicator states that the overall employment rate for young males 20 to 24 years old was higher than the rate for young females 20 to 24 years old (72 vs. 66 percent) in 2014. Use of the term “higher” indicates that statistical testing was performed to compare these two groups and the results were statistically significant.

If differences between groups are not statistically significant, NCES uses the phrases “no measurable differences” or “no statistically significant differences at the .05 level”. This is because we do not know for certain that differences do not exist at the population level, just that our statistical tests of the available data were unable to detect differences. This could be because there is in fact no difference, but it could also be due to other reasons, such as a small sample size or large standard errors for a particular group. Heterogeneity, or large amounts of variability, within a sample can also contribute to larger standard errors.
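
As a rough illustration of the kind of test described above, the sketch below compares two independent estimates using their standard errors. It reuses the employment rates from the earlier example (72 and 66 percent), but the standard errors (1.1 and 1.3) are hypothetical placeholders, and the simple t statistic shown here is only an approximation of the procedures NCES actually uses.

```python
# Minimal sketch of a two-group comparison based on estimates and standard errors.
# The standard errors below are hypothetical; published NCES tables contain the
# actual values, and official testing procedures may differ in their details.

def difference_test(est1, se1, est2, se2, critical=1.96):
    """Return the t statistic for the difference between two independent
    estimates and whether it exceeds the .05-level critical value."""
    se_difference = (se1 ** 2 + se2 ** 2) ** 0.5   # standard error of the difference
    t = (est1 - est2) / se_difference
    return t, abs(t) > critical

t, significant = difference_test(72.0, 1.1, 66.0, 1.3)
print(f"t = {t:.2f}; statistically significant at the .05 level: {significant}")
```

When the statistic does not exceed the critical value, the difference would be described using the “no measurable differences” language discussed above.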

Some of the populations of interest to education stakeholders are quite small, for example, Pacific Islander or American Indian/Alaska Native students. As a consequence, these groups are typically represented by relatively small samples, and their estimates are often less precise than those of larger groups. This lower precision is reflected in larger standard errors for these groups. For example, in the table above, the standard error for White students who reported having been in 0 physical fights anywhere is 0.70, whereas the standard error is 4.95 for Pacific Islander students and 7.39 for American Indian/Alaska Native students. This means that the uncertainty around the estimates for Pacific Islander and American Indian/Alaska Native students is much larger than it is for White students. Because of these larger standard errors, differences between these groups that appear large may not be statistically significant. When this occurs, NCES analysts may state that large apparent differences are not statistically significant. NCES data users can use standard errors to help make valid comparisons using the data that we release to the public.

Another example of how standard errors can affect whether sample differences are statistically significant can be seen when comparing changes in NAEP scores by state. Between 2013 and 2015, mathematics scores changed by 3 points for fourth-grade public school students in both Mississippi and Louisiana. However, this change was only significant for Mississippi. This is because the standard error for the change in scale scores for Mississippi was 1.2, whereas the standard error for Louisiana was 1.6. The larger standard error, and therefore the larger degree of uncertainty around the estimate, factors into the statistical tests that determine whether a difference is statistically significant. This difference in standard errors could reflect the size of the samples in Mississippi and Louisiana, or other factors such as the degree to which the assessed students are representative of the population of their respective states.
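
To see how the arithmetic plays out in the NAEP example, the sketch below applies the same simplified change-divided-by-standard-error test to both states. The 1.96 critical value and the unrounded statistics are for illustration only; the official NAEP significance tests involve additional details.

```python
# Illustrative arithmetic for the NAEP example: the same 3-point change is
# significant for Mississippi but not for Louisiana because Louisiana's
# standard error is larger. (Simplified; actual NAEP procedures differ.)
for state, change, se in [("Mississippi", 3.0, 1.2), ("Louisiana", 3.0, 1.6)]:
    t = change / se
    print(f"{state}: t = {t:.2f}; significant at .05: {abs(t) > 1.96}")
# Mississippi: t = 2.50 -> significant; Louisiana: t = 1.88 -> not significant
```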

Researchers may also be interested in using standard errors to compute confidence intervals for an estimate. Stay tuned for a future blog where we’ll outline why researchers may want to do this and how it can be accomplished.

Where can I find information about the condition of education in the United States?

By Grace Kena

The National Center for Education Statistics submits a report to Congress on the condition of education every year by June 1. The Condition of Education provides a comprehensive look at the state and progress of education in the United States. Although The Condition of Education was first produced in 1975, the origins of the report date back to the creation of the first federal department of education in 1867. Its first major publication, the Annual Report of the Commissioner of Education, covered data for 1869–70. Today’s Condition of Education report is presented to Congress and the White House annually. In addition, the indicators are updated regularly online, with a convenient site designed for mobile devices. By visiting The Condition of Education website, you can access the latest indicators, download the full Congressional report for the current and prior years, and watch short videos about recent findings and highlights.

The Condition of Education covers early childhood through postbaccalaureate education, and addresses topics relevant to a broad spectrum of education stakeholders. The report contains text and graphics on dozens of educational indicators that describe student characteristics, participation in special programs, achievement, and completion rates, as well as characteristics of teachers, schools, and colleges. Economic indicators show the success that students have in finding employment after their education and present information on their earnings. In addition to core indicators of perennial interest and supplemental indicators on other special topics, the Condition features spotlight indicators with an in-depth focus on emerging issues and new data. Taken together, these indicators provide valuable information about the progress of our education system in addressing such key policy concerns as improving graduation rates, closing gaps in student achievement, and promoting educational equity.

For more information and access to the indicators, see The Condition of Education 2016. You can also learn more about The Condition of Education in the video below, or see other videos on specific topics of interest on the NCES YouTube Channel.

This blog was originally posted on June 1, 2015, and was updated on May 26, 2016.

What Are the Characteristics of Students Who Have Ever Been Suspended or Expelled From School?

By Lauren Musu-Gillette

Suspensions and expulsions from school are often associated with negative academic outcomes, such as lower levels of achievement and higher dropout rates.[i] Using data from the High School Longitudinal Study of 2009 (HSLS:2009), NCES recently published a new spotlight feature in Indicators of School Crime and Safety showing that students who are suspended or expelled are more likely to have low engagement in school and to be less academically successful than their peers.

While there is a large body of research on this topic, this is the first time that the nationally representative HSLS study has been used to examine outcomes for and characteristics of suspended and expelled youth. The comparisons presented here cannot be used to establish a cause-and-effect relationship, but the longitudinal nature of the dataset could give researchers an analytical path to understanding how these relationships unfold over time.

Research shows that students’ attitudes toward school are associated with their academic outcomes, and that schools with a supportive climate have lower rates of delinquency, including suspensions and expulsions.[ii] As part of the HSLS:2009 data collection, students reported on their school engagement[iii] and sense of school belonging[iv] in the fall of their ninth-grade year (2009). A greater percentage of students who were suspended or expelled between 2009 and 2012 had reported low school engagement entering high school. A similar pattern was seen with regard to a sense of belonging in school.


 Percentage of fall 2009 ninth-graders who were ever suspended or expelled through spring 2012, by school engagement and sense of school belonging: 2012

1. A school engagement scale was constructed based on students' responses to questions about how frequently they went to class without homework done, without pencil or paper, without books, or late.

2. A school belonging scale was constructed based on the extent to which students agreed or disagreed that they felt safe at school, that they felt proud of being part of the school, that there were always teachers or other adults at school they could talk to if they had a problem, that school was often a waste of time, and that getting good grades was important to them.

Source: U.S. Department of Education, National Center for Education Statistics, High School Longitudinal Study of 2009 (HSLS:2009).


The percentages of students who had ever been suspended or expelled were higher for students with lower grade point averages (GPAs). Nearly half of students with a cumulative high school GPA below 2.0 had ever been suspended or expelled, compared with just 11 percent of students with a GPA of 3.0 or higher. Additionally, as of 2013, a higher percentage of students who had not completed high school than of students who had completed high school had ever been suspended or expelled (54 vs. 17 percent).


Percentage of fall 2009 ninth-graders who were ever suspended or expelled through spring 2012, by cumulative high school grade point average and high school completion status: 2013

Source: U.S. Department of Education, National Center for Education Statistics, High School Longitudinal Study of 2009 (HSLS:2009).


Differences in the demographic characteristics of students who had ever been suspended or expelled were similar to those found in other datasets, such as the Civil Rights Data Collection (CRDC). Characteristics of youth in the HSLS study who were ever suspended or expelled include:

  • A higher percentage of males (26 percent) than of females (13 percent) were ever suspended or expelled.
  • A higher percentage of Black students (36 percent) than of Hispanic (21 percent), White (14 percent), and Asian students (6 percent) had ever been suspended or expelled.
  • A higher percentage of students of Two or more races (26 percent) and of Hispanic students had ever been suspended or expelled than of White students.
  • A lower percentage of Asian students than of students of any other race/ethnicity with available data had ever been suspended or expelled.

For more information on the characteristics of students who have ever been suspended or expelled, please see the full spotlight in Indicators of School Crime and Safety 2015.


[i] Christle, C.A., Nelson, C.M., and Jolivette, K. (2004). School Characteristics Related to the Use of Suspension. Education and Treatment of Children, 27(4): 509-526; Skiba, R.J., Michael, R.S., Nardo, A.C., and Peterson, R.L. (2002). The Color of Discipline: Sources of Gender and Racial Disproportionality in School Punishment. Urban Review, 34(4): 317-342.

[ii] Morrison, G.M., Robertson, L., Laurie, B., and Kelly, J. (2002). Protective Factors Related to Antisocial Behavior Trajectories. Journal of Clinical Psychology, 58(3): 277-290; Christle, C.A., Jolivette, K., and Nelson, C.M. (2005). Breaking the School to Prison Pipeline: Identifying School Risk and Protective Factors for Youth Delinquency. Exceptionality, 13(2): 69-88.

[iii] School engagement measured how frequently students went to class without homework done, without pencil or paper, without books, or late.

[iv] Sense of school belonging was measured based on the extent to which students agreed or disagreed that they felt safe at school, that they felt proud of being part of the school, that there were always teachers or other adults at school they could talk to if they had a problem, that school was often a waste of time, and that getting good grades was important to them.

Number of Juvenile Offenders in Residential Placement Falls; Racial/Ethnic Gaps Persist

By Lauren Musu-Gillette and Joel McFarland

Juvenile offenders held in residential placement facilities often experience disruptions to their education as they pass in and out of traditional schooling. While most facilities provide middle- and high-school-level educational services, these services are generally not comparable to those available in their community schools.[i] Understanding the characteristics of juveniles in these facilities can help educators and policymakers find the best ways to support education for these youth.

Between 1997 and 2013, the number of youth in residential placement facilities fell by nearly 50 percent, from approximately 105,000 to just over 54,000.[ii] While the overall decline is informative, the residential placement rate (the number of juvenile offenders in residential facilities per 100,000 youth in the general population) provides a more comparable measurement across time because it accounts for population growth and demographic changes. The overall residential placement rate fell from 356 per 100,000 youth in 1997 to 173 per 100,000 in 2013. Following this trend, the residential placement rate for youth in various racial and ethnic subgroups also fell significantly, as seen in the chart below.
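
For readers who want to see how such a rate is calculated, the sketch below divides the count of juveniles in residential placement by the size of the juvenile population and scales the result to 100,000. The population figures used here are rough approximations back-calculated from the counts and rates cited above; they are not official CJRP values.

```python
# Minimal sketch of a residential placement rate per 100,000 juveniles.
# The juvenile population figures below are approximations derived from the
# counts and rates cited in this post, not official CJRP population estimates.

def placement_rate(juveniles_in_placement, juvenile_population):
    return juveniles_in_placement / juvenile_population * 100_000

print(round(placement_rate(105_000, 29_500_000)))  # roughly the 1997 rate of 356
print(round(placement_rate(54_000, 31_200_000)))   # roughly the 2013 rate of 173
```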


Residential placement rate (number of juvenile offenders in residential placement facilities) per 100,000 juveniles, by race/ethnicity: Selected years, 1997 through 2013

Source: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention, Census of Juveniles in Residential Placement (CJRP).


Although residential placement rates declined for all racial/ethnic groups, disparities between racial/ethnic groups persist. In 2013, the residential placement rate for Black youth was 4.6 times the rate for White youth, and the rate for Hispanic youth was 1.7 times the rate for White youth. The American Indian/Alaska Native rate was 3.3 times the White rate, and the residential placement rate for Asian/Pacific Islander youth was approximately one-quarter (0.28 times) the rate for White youth.

The residential placement rate per 100,000 youth was also higher for Black males than for males or females of any other racial/ethnic group. Overall, Black males made up over one-third (35 percent) of all youth in residential placement in 2013. The rate of residential placement for Black males in 2013 was 804 per 100,000, which was 1.6 times the rate for American Indian/Alaska Native males, 2.7 times the rate for Hispanic males, 5 times the rate for White males, and more than 16 times the rate for Asian/Pacific Islander males.

While residential placement rates were lower for females than for males in every racial/ethnic group, there were also differences between racial/ethnic groups for females. The residential placement rate was highest for American Indian/Alaska Native females. This rate was 3.7 times the rate for Hispanic females, 4.8 times the rate for White females, and over 20 times the rate for Asian/Pacific Islander females. The rate for Black females was also more than twice the rate for Hispanic, White, and Asian/Pacific Islander females.


Residential placement rate (number of juvenile offenders in residential placement facilities) per 100,000 juveniles, by race/ethnicity and sex: 2013

Source: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention, Census of Juveniles in Residential Placement (CJRP).


Older youth made up a greater share of juveniles in residential placement than younger youth in 2013. A majority (69 percent) of juveniles in residential facilities were between the ages of 16 and 20; about 30 percent were between the ages of 13 and 15; and just 1 percent were age 12 or younger.

For more information on juvenile offenders in residential placement facilities, including data on the characteristics of those facilities, please see the full spotlight in Indicators of School Crime and Safety 2015.


[i] Hockenberry, S., Sickmund, M., and Sladky, A. (2013). Juvenile Residential Facility Census, 2010: Selected Findings. Washington, DC: Office of Juvenile Justice and Delinquency Prevention, U.S. Department of Justice. Retrieved November 2015 from http://www.ojjdp.gov/pubs/241134.pdf; The Council of State Governments Justice Center. (2015). Locked Out: Improving Educational and Vocational Outcomes for Incarcerated Youth. New York: Author. Retrieved November 2015 from https://csgjusticecenter.org/youth/publications/locked-out-improving-educational-and-vocational-outcomes-for-incarcerated-youth/.

[ii] Data presented here come from the Census of Juveniles in Residential Placement (CJRP). The CJRP is a biennial survey of all secure and nonsecure residential placement facilities that house juvenile offenders, defined as persons younger than 21 who are held in a residential setting as a result of some contact with the justice system (i.e., being charged with or adjudicated for an offense). The CJRP provides a 1-day count of the number of youth in residential placement, as well as data on the characteristics of youth in these facilities and information about the facilities themselves.