IES Blog

Institute of Education Sciences

Trends in Graduate Student Loan Debt

Sixty percent of students who completed a master’s degree in 2015–16 had student loan debt, either from undergraduate or graduate school. Among those with student loan debt, the average balance was $66,000.[i] But there are many types of master’s degrees. How did debt levels vary among specific degree programs? And how have debt levels changed over time? You can find the answers, for both master’s and doctorate degree programs, in the Condition of Education 2018.

Between 1999–2000 and 2015–16, average student loan debt for master’s degree completers increased by:

  • 71 percent for master of education degrees (from $32,200 to $55,200),
  • 65 percent for master of arts degrees (from $44,000 to $72,800),
  • 39 percent for master of science degrees (from $44,900 to $62,300), and
  • 59 percent for “other” master’s degrees[ii] (from $47,200 to $75,100).

Average loan balances for those who completed master of business administration degrees were higher in 2015–16 than in 1999–2000 ($66,300 vs. $47,400), but did not show a clear trend during this period.

Between 1999–2000 and 2015–16, average student loan debt for doctorate degree completers increased by:

  • 97 percent for medical doctorates (from $124,700 to $246,000),
  • 75 percent for other health science doctorates[iii] (from $115,500 to $202,400),
  • 77 percent for law degrees (from $82,400 to $145,500),
  • 104 percent for Ph.D.’s outside the field of education (from $48,400 to $98,800), and
  • 105 percent for “other” (non-Ph.D.) doctorates[iv] (from $64,500 to $132,200).

While 1999–2000 data were unavailable for education doctorate completers, the average balance in 2015–16 ($111,900) was 66 percent higher than the average loan balance for education doctorate completers in 2003–04 ($67,300).
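
Each percentage above is simply the change in the average balance divided by the earlier balance. The short Python sketch below is purely illustrative; it uses the rounded figures quoted in this post, so the computed percentages may differ slightly from the published estimates, which are based on unrounded data.

```python
# Illustrative check of the percentage increases reported above, using the
# rounded average loan balances from this post (1999-2000 vs. 2015-16).
# Because the inputs are rounded, results may differ slightly from the
# published estimates.
balances = {
    "Master of education": (32_200, 55_200),
    "Master of arts": (44_000, 72_800),
    "Master of science": (44_900, 62_300),
    "Other master's degrees": (47_200, 75_100),
    "Medical doctorate": (124_700, 246_000),
    "Other health science doctorate": (115_500, 202_400),
    "Law degree": (82_400, 145_500),
    "Ph.D. outside education": (48_400, 98_800),
    "Other (non-Ph.D.) doctorate": (64_500, 132_200),
}

for degree, (earlier, later) in balances.items():
    pct_increase = (later - earlier) / earlier * 100
    print(f"{degree}: {pct_increase:.0f}% increase (${earlier:,} to ${later:,})")
```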

For more information, check out the full analysis in the Condition of Education 2018.

 

By Joel McFarland

 

[i] The average balances in this analysis exclude students with no student loans.

[ii] Includes public administration or policy, social work, fine arts, public health, and other.

[iii] Includes chiropractic, dentistry, optometry, pharmacy, podiatry, and veterinary medicine.

[iv] Includes science or engineering, psychology, business or public administration, fine arts, theology, and other.

Developing an Evidence Base for Researcher-Practitioner Partnerships

I recently attended the annual meeting of the National Network of Education Research-Practice Partnerships. I was joined by well over 100 others who represented a wide swath of research-practice partnerships (RPPs), most of them supported by IES funds. When it comes to research, academic researchers and practitioners often have different needs and different time frames. On paper, RPPs look like a way to bridge that divide.

Over the last few years, IES has made some large investments in RPPs. The Institute’s National Center for Education Research runs an RPP grant competition that has funded over 50 RPPs, with an investment of around $20 million over the last several years. In addition, the evaluation of state and local programs and policies competition has supported partnerships between researchers and state and local education agencies since 2009.

But the biggest investment in RPPs, by far, has been through the Regional Educational Laboratories (RELs). In the 2012–2017 REL funding cycle, 85 percent of the RELs’ work had to go through “alliances,” which often coordinated several RPPs and themselves emphasized research-to-practice partnerships. In the current funding cycle, the RELs have created over 100 RPPs, and the bulk of the RELs’ work—upwards of 80 percent—is done through them.

Back-of-the-envelope calculations show that IES is currently spending over $40 million per year on REL RPPs. Add to that the hundreds of millions of dollars invested in alliances under the previous REL contract, plus the RPP and state policy grant competitions, and this constitutes a very big bet.

Despite the fact that we have invested so much in RPPs for over half a decade, we have only limited evidence about what they are accomplishing.

Consider the report that was just released from the National Center for Research in Policy and Practice. Entitled A Descriptive Study of the IES Researcher-Practitioner Partnership Program, it is exactly what it says it is: a descriptive study.  Its first research goal focused on the perceived benefits of partnerships and the second focused on partnership contexts.

But neither of these research goals answers the most important question: what did the partnerships change, not just in terms of research use or service delivery, but in what matters most, improved outcomes for students?

Despite IES’ emphasis on evidence-based policy, right now RPPs are mostly hope-based. As noted, some research has documented a few of the processes that seem to be associated with better functioning RPPs, such as building trust among partners and having consultative meetings. Research has not, however, helped identify the functions, structures, or processes that work best for increasing the impact of RPPs.

The Institute is planning an evaluation of REL-based RPPs. We know that it will be difficult and imperfect. With over $200 million invested in the last REL cycle, with over 100 REL-based RPPs currently operating, and with $40+ million a year supporting RPPs, we assume that there’s lots of variation in how they are structured, what they are doing, and ultimately how successful they are in improving student outcomes. With so many RPPs and so much variation, our evaluation will focus on the “what works for whom and under what circumstances” type questions: Are certain types of RPPs better at addressing particular types of problems? Are there certain conditions under which RPPs are more likely to be successful?  Are there specific strategies that make some RPPs more successful than others?  Are any successful RPP results replicable?

Defining success will not be simple. A recent study by Henrick et al. identifies five dimensions by which to evaluate RPPs—all of which have multiple indicators. Since it’s not likely that we can adequately assess all five of these dimensions, plus any others that our own background research uncovers, we need to make tough choices. Even by focusing on student outcomes, which we will, we are still left with many problems. For example, different RPPs are focused on different topics—how can we map reasonable outcome measures across those different areas, many of which could have different time horizons for improvement?

Related to the question of time horizons for improvement is the question of how long it takes for RPPs to gain traction. Consider three of what are arguably the most successful RPPs in the nation: the Chicago Consortium was launched in 1990; the Baltimore consortium, BERC, in fall 2006; and the Research Alliance for New York City Schools in 2008. In contrast, IES’ big investment in RPPs began in 2012. How much time do RPPs need to change facts on the ground? Since much of the work of the earliest alliances was focused on high school graduation rates and college access, six years seems to be a reasonable window for assessing those outcomes, but other alliances were engaged in work that may have longer time frames.

The challenges go on and on. But one thing is clear: we can’t continue to bet tens of millions of dollars each year on RPPs without a better sense of what they are doing, what they are accomplishing, and what factors are associated with their success.

The Institute will soon be issuing a request for comments to solicit ideas from the community on the issues and indicators of success that could help inform our evaluation of the RPPs. We look forward to working with you to build a stronger evidence base identifying what works for whom in RPPs.

Mark Schneider
Director, IES

Connect with NCES Researchers at Upcoming Summer Conferences: STATS-DC, JSM and ASA

NCES staff will share their knowledge and expertise through research presentations, training sessions, and booth demonstrations at three notable conferences this summer (listed below). The NCES booth will also be featured in the exhibit halls, where conference attendees can “ask an NCES expert,” learn how NCES data can support their research, or pick up publications and products.

NCES STATS-DC Data Conference

July 25 – 27
Washington, DC
The Mayflower Hotel

STATS-DC is NCES’ annual conference designed to provide the latest information, resources, and training on accessing and using federal education data. Researchers, policymakers, and data system managers from all levels are invited to discover innovations in the design and implementation of data collections and information systems. There is no registration fee to attend STATS-DC; participants must complete registration paperwork onsite at the conference.

Key conference items:

  • Learn about updates to federal and state activities affecting data collection and reporting, with a focus on the best new approaches in education statistics
  • Attend general information sessions on topics such as the Common Core of Data (CCD), data management, data use, and data privacy
  • Partake in trainings for CCD and EDFacts data coordinators
  • Attend data tools and resource demonstrations from NCES staff during designated times at the NCES exhibit booth.

NCES Staff Presentations:

Explore the full conference agenda. Some highlighted sessions are shown below.

WEDNESDAY, JULY 25

9:00 a.m. – 12:00 noon
Common Core of Data (CCD) Fiscal Coordinators' Training
District Ballroom

1:15 p.m. – 2:15 p.m.
Opening Plenary Session by Dr. Lynn Woodworth
Grand Ballroom

4:30 p.m. – 5:20 p.m.
Introduction to the Common Core of Data: America's Public Schools by Mark Glander
Palm Court Ballroom

9:00 a.m. – 12:30 p.m.
EDFacts and Common Core of Data (CCD) Nonfiscal Coordinators’ Training
Grand Ballroom

THURSDAY, JULY 26

9:00 a.m. – 10:00 a.m.
Title I Allocations by Bill Sonnenberg
Palm Court Ballroom

 

American Statistical Association – Joint Statistical Meetings

July 28 – August 2
Vancouver, British Columbia, Canada
Vancouver Convention Centre

JSM is the largest gathering of statisticians and data scientists in North America. Exchange ideas and explore opportunities for collaboration across industries with NCES staff and other statisticians in academia, business, and government.
 

Key conference items:

  • Review applications and methodologies of statistics, such as analytics and data science
  • Attend technical sessions, poster presentations, roundtable discussions, professional development courses and workshops
  • Visit the NCES booth (#227) in the exhibit hall and meet the NCES Chief Statistician, Marilyn Seastrom.

NCES Staff Presentation:

THURSDAY, AUGUST 2
8:35 a.m. – 10:30 a.m.
Educating the Government Workforce to Lead with Statistics by Andrew White
CC-East 10

 

American Sociological Association – Annual Meeting

August 11 – August 14
Philadelphia, Pennsylvania
Pennsylvania Convention Center and the Philadelphia Marriott Downtown

Professionals involved in the scientific study of society will share knowledge and discuss new directions in research and practice during this annual meeting.

Key conference items:

  • Choose from 600 program sessions throughout the 4-day conference
  • Browse and discuss topics from 3,000+ research papers submitted
  • Swing by the NCES booth (#211) in the exhibit hall

 

Follow us on Twitter (@EdNCES) throughout these upcoming conferences to stay up to date and learn the latest in education statistics. We hope you’ll join us, whether in person or online!

What Are Threat Assessment Teams and How Prevalent Are They in Public Schools?

As part of the Safe School Initiative, the U.S. Department of Education and U.S. Secret Service authored a report in 2004 that described how schools could establish a threat assessment process “for identifying, assessing, and managing students who may pose a threat of targeted violence in schools.” School-based threat assessment teams are intended to prevent and reduce school violence and are adapted from the U.S. Secret Service’s threat assessment model.

The School Survey on Crime and Safety (SSOCS) collected data on the prevalence of threat assessment teams in schools for the first time in 2015–16 from a nationally representative sample of 3,500 K–12 public schools. The questionnaire defined a threat assessment team as “a formalized group of persons who meet on a regular basis with the common purpose of identifying, assessing, and managing students who may pose a threat of targeted violence in schools.” School-based threat assessment teams are usually composed of some combination of school administrators, teachers, counselors, sworn law enforcement officers, and mental health professionals.

While 42 percent of all public schools reported having a threat assessment team during the 2015–16 school year, the prevalence of threat assessment teams varied by school characteristics.


Percentage of public schools that reported having a threat assessment team, by school level and enrollment size: School year 2015–16

1Primary schools are defined as schools in which the lowest grade is not higher than grade 3 and the highest grade is not higher than grade 8. Middle schools are defined as schools in which the lowest grade is not lower than grade 4 and the highest grade is not higher than grade 9. High schools are defined as schools in which the lowest grade is not lower than grade 9 and the highest grade is not higher than grade 12. Combined schools include all other combinations of grades, including K–12 schools.
NOTE: A threat assessment team was defined for respondents as a formalized group of persons who meet on a regular basis with the common purpose of identifying, assessing, and managing students who may pose a threat of targeted violence in schools. Responses were provided by the principal or the person most knowledgeable about school crime and policies to provide a safe environment. Although rounded numbers are displayed, the figures are based on unrounded estimates.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2015–16 School Survey on Crime and Safety (SSOCS), 2016. See table 35.


For example, a higher percentage of high schools (52 percent) than of middle (45 percent), primary (39 percent), and combined schools (28 percent) reported having a threat assessment team during the 2015–16 school year. Further, 57 percent of schools with an enrollment size of 1,000 or more students reported having a threat assessment team, compared with 31 percent of schools with an enrollment size of less than 300 students; 40 percent of schools with an enrollment size of 300–499 students; and 45 percent of schools with an enrollment size of 500–999 students.

Threat assessment teams were also more prevalent in schools that had at least one security staff[i] member present at school at least once a week during the 2015–16 school year (48 percent of schools with security staff present vs. 33 percent of schools without security staff present). The percentage of schools reporting a threat assessment team was also higher in schools that reported at least one violent incident[ii] had occurred at school during the 2015–16 school year (44 percent) compared with schools that had no violent incidents (35 percent).

How often a threat assessment team meets can be an indication of how active the team is in the school.  The majority of schools with a threat assessment team in 2015–16 reported that their teams met “on occasion” (62 percent), followed by “at least once a month” (27 percent), “at least once a week” (9 percent), and “never” (2 percent).


Among public schools that reported having a threat assessment team, percentage distribution by frequency of threat assessment team meetings: School year 2015–16

! Interpret data with caution. The coefficient of variation (CV, the standard error expressed as a percentage of the estimate) for this estimate is between 30 and 50 percent.
NOTE: A threat assessment team was defined for respondents as a formalized group of persons who meet on a regular basis with the common purpose of identifying, assessing, and managing students who may pose a threat of targeted violence in schools. Responses were provided by the principal or the person most knowledgeable about school crime and policies to provide a safe environment.
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2015–16 School Survey on Crime and Safety (SSOCS), 2016. See table 35.


You can find more information on school crime and safety in NCES publications, including Crime, Violence, Discipline, and Safety in U.S. Public Schools: Findings From the School Survey on Crime and Safety: 2015–16 and the 2017 Indicators of School Crime and Safety.

 

By Rachel Hansen, NCES, and Melissa Diliberti, AIR

 

[i] Security staff includes full- or part-time school resource officers, sworn law enforcement officers, or security guards or security personnel present at school at least once a week.

[ii] Violent incidents include rape or attempted rape, sexual assault other than rape (including threatened rape), physical attack or fight with or without a weapon, threat of physical attack with or without a weapon, and robbery (taking things by force) with or without a weapon.

 

Building Evidence: Changes to the IES Goal Structure for FY 2019

The IES Goal Structure was created to support a continuum of education research that divides the research process into stages for both theoretical and practical purposes. Individually, the five goals – Exploration (Goal 1), Development and Innovation (Goal 2), Efficacy and Replication (Goal 3), Effectiveness (Goal 4), and Measurement (Goal 5) – were intended to help focus the work of researchers, while collectively they were intended to cover the range of activities needed to build evidence-based solutions to the most pressing education problems in our nation. Implicit in the goal structure is the idea that over time, researchers will identify possible strategies to improve student outcomes (Goal 1), develop and pilot-test interventions (Goal 2), and evaluate the effects of interventions with increasing rigor (Goals 3 and 4).

Over the years, IES has received many applications and funded a large number of projects under Goals 1-3.  In contrast, IES has received relatively few applications and awarded only a small number of grants under Goal 4. To find out why – and to see if there were steps IES could take to move more intervention studies through the evaluation pipeline – IES hosted a Technical Working Group (TWG) meeting in 2016 to hear views from experts on what should come after an efficacy study (see the relevant summary and blog post). IES also issued a request for public comment on this question in July 2017 (see summary).

The feedback we received was wide-ranging, but there was general agreement that IES could do more to encourage high-quality replications of interventions that show prior evidence of efficacy. One recommendation was to place more emphasis on understanding “what works for whom” under various conditions.  Another comment was that IES could provide support for a continuum of replication studies.  In particular, some commenters felt that the requirements in Goal 4 to use an independent evaluator and to carry out an evaluation under routine conditions may not be practical or feasible in all cases, and may discourage some researchers from going beyond Goal 3.   

In response to this feedback, IES revised its FY 2019 RFAs for Education Research Grants (84.305A) and Special Education Research Grants (84.324A) to make clear its interest in building more and better evidence on the efficacy and effectiveness of interventions. Among the major changes are the following:

  • Starting in FY 2019, Goal 3 will continue to support initial efficacy evaluations of interventions that have not been rigorously tested before, in addition to follow-up and retrospective studies.
  • Goal 4 will now support all replication studies of interventions that show prior evidence of efficacy, including but not limited to effectiveness studies.
  • The maximum amount of funding that may be requested under Goal 4 is higher to support more in-depth work on implementation and analysis of factors that moderate or mediate program effects.

The table below summarizes the major changes. We strongly encourage potential applicants to carefully read the RFAs (Education Research, 84.305A and Special Education Research, 84.324A) for more details and guidance, and to contact the relevant program officers with questions (contact information is in the RFA).

Applications are due August 23, 2018, by 4:30:00 p.m. Washington, DC time.

 

Goal 3

  • Name change: Formerly “Efficacy and Replication”; in FY 2019, “Efficacy and Follow-Up.”
  • Focus change: Will continue to support initial efficacy evaluations of interventions in addition to follow-up and retrospective studies.
  • Requirements change: No new requirements.
  • Award amount change: No change.

Goal 4

  • Name change: Formerly “Effectiveness”; in FY 2019, “Replication: Efficacy and Effectiveness.”
  • Focus change: Will now support all replications evaluating the impact of an intervention. Will also support Efficacy Replication studies and Re-analysis studies.
  • Requirements change: Now contains a requirement to describe plans to conduct analyses related to implementation and analysis of key moderators and/or mediators. (These were previously recommended.)
  • Award amount change: Efficacy Replication studies, $3,600,000 maximum; Effectiveness studies, $4,000,000 maximum; Re-analysis studies, $700,000 maximum.

 

 

By Thomas Brock (NCER Commissioner) and Joan McLaughlin (NCSER Commissioner)