IES Blog

Institute of Education Sciences

IES Funds First Large-Scale Evaluation Study of Public Preschool Montessori

The Montessori method of education was developed over 100 years ago by Dr. Maria Montessori. This “whole child” approach centers on the theory that children are capable of initiating learning in a thoughtfully prepared environment that supports their physical, social, emotional, and cognitive growth. Core components of Montessori education are mixed-age classrooms in three-year groupings (e.g., ages 3-6, 6-9, and 9-12), a carefully prepared environment filled with appropriate materials and lessons, student freedom to select lessons and activities each day, and daily uninterrupted 3-hour work blocks.

   

According to the National Center for Montessori in the Public Sector (NCMPS), there are currently over 5,000 Montessori schools in the U.S., 500 of which are public schools and over 150 of which serve public preschool and kindergarten students. Despite the model's growing popularity in public preschools and Head Start programs, no large-scale evaluation of the efficacy of the Montessori model on children's academic, social, and emotional skills has been conducted.

This year, IES funded the first such study. A project team led by Dr. Ann-Marie Faria and Ms. Karen Manship (American Institutes for Research) and Dr. Angeline Lillard (University of Virginia) will study more than 650 children for three years, beginning with their entry at age 3 into preschool. Importantly, this study relies on individual random student assignment via lottery entry to compare preschool students who enroll in Montessori at age 3 to those who are assigned to a waitlist control group (and thus are in other settings such as public PreK, daycare, or a home setting). Data will be collected in diverse urban and suburban school districts across the country, including Houston (TX), Hartford and New Haven (CT), and Washington, DC.
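The lottery-based random assignment described above can be sketched as a short simulation. The function name, applicant names, and seat count below are hypothetical illustrations, not the study's actual procedure:

```python
import random

def lottery_assignment(applicants, seats, seed=None):
    """Randomly assign applicants via lottery: winners enroll
    (treatment group); the rest form the waitlist control group."""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)  # random draw order determines who gets a seat
    return pool[:seats], pool[seats:]

# Hypothetical example: 10 applicants competing for 4 Montessori seats
applicants = [f"child_{i}" for i in range(10)]
treatment, control = lottery_assignment(applicants, seats=4, seed=42)
print(len(treatment), len(control))  # prints: 4 6
```

Because chance alone determines who enrolls, the enrolled and waitlisted groups are comparable on average, which is what allows the study to attribute outcome differences to the Montessori program itself.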

Researchers will examine the impact of preschool Montessori education on children's academic, social, and emotional skills, as well as kindergarten readiness skills. The research team will also conduct a cost-effectiveness study of the public Montessori preschool model and will examine the effect of implementation fidelity on student outcomes. Collectively, the findings from this study will provide valuable evidence of the efficacy of Montessori preschool education. Ultimately, the researchers plan to disseminate their findings to educators, parents, and policymakers through research briefs, infographics, blog posts, and webinars.

 

By Amanda M. Dettmer, PhD, AAAS Science & Technology Policy Fellow sponsored by the American Psychological Association Executive Branch Science Fellowship

Photo credit: Marilyn Horan, Carroll Creek Montessori Public Charter School

    

IES Expands Research in Social Emotional Learning

Social and emotional learning (SEL) is a key ingredient of high-quality education and care, is important for both educators and children, and has been associated with children's concurrent and later academic and social success.

Over a decade ago, Yale University’s Center for Emotional Intelligence developed and began testing RULER, an SEL program geared toward children and educators (i.e., school leaders, teachers, and staff). RULER stands for five key social and emotional skills: Recognizing emotions in self and others, Understanding the causes of emotions in self and others, Labeling and talking about emotions, Expressing emotions across situations, and Regulating emotions effectively. For children and the key adults in their lives, RULER combines a whole-school professional development approach with a skill-building curriculum targeting educator and student social and emotional skills, school and classroom climate, and educator and student well-being. RULER is currently offered for pre-k–12 and out-of-school-time settings.

IES has supported the development and testing of RULER programs since 2012. The first IES award supported the modification of existing components of the RULER K-8 intervention and the creation of new developmentally appropriate content for preschool settings. RULER is currently implemented in over 200 early childhood school- and home-based programs across the country and nearly 2,000 K-12 schools nationwide. Although RULER's evidence base has been growing over the years, RULER has not been systematically studied in large-scale randomized controlled trials in preschool settings, nor has it undergone an external evaluation in the later grades.

That is about to change: this year, IES awarded two grants to study the effects of the RULER programs. One will study the efficacy of whole-school RULER implementation for preschool students (under the Early Learning Programs and Policies program), and the other will do so for grades K-6 (under the Social and Behavioral Context for Academic Learning program).

The Preschool RULER grant (PI: Craig Bailey, PhD) will assess school readiness in children aged 3-5, as well as outcomes at the teacher/classroom and school leader/school levels. The researchers will study 72 early childhood centers, including public, private, and Head Start programs from urban areas in Connecticut, using a multisite, cluster-randomized controlled trial design. Altogether, approximately 216 classrooms, 1,800 staff, and 2,160 children will participate. Children, educators, and school leaders will be assessed for social and emotional skills, and educators/leaders will be assessed for emotionally intelligent pedagogy and leadership. Children will also be assessed for their approaches to learning, pre-literacy, and pre-math skills. This study will provide evidence about the efficacy of RULER in preschool settings and contribute to our understanding of high-quality early childhood interventions that promote social and emotional learning.
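In a cluster-randomized design, randomization happens at the center level rather than the child level, so every classroom and child within a center shares one condition. A minimal sketch, with hypothetical function and center names (not the study's actual protocol):

```python
import random

def cluster_randomize(centers, seed=None):
    """Assign whole centers (clusters) to the intervention or control
    condition, rather than randomizing individual children."""
    rng = random.Random(seed)
    pool = list(centers)
    rng.shuffle(pool)
    half = len(pool) // 2
    assignment = {c: "RULER" for c in pool[:half]}
    assignment.update({c: "control" for c in pool[half:]})
    return assignment

# Hypothetical example with 6 centers (the actual study enrolls 72)
assignment = cluster_randomize([f"center_{i}" for i in range(6)], seed=7)
print(sorted(assignment.values()))  # prints: 3 "RULER" and 3 "control" labels
```

Randomizing whole centers avoids contamination between conditions within a building, at the cost of requiring analyses that account for the clustering of children within centers.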

 

The other grant, for RULER in grades K-6 (PI: Jason Downer, PhD), will be the first large-scale external evaluation of RULER. The study will take place in 60 urban and suburban public elementary schools in Virginia, including 420 teachers and 2,520 K-6 students. Key outcomes for this study will include school climate (assessed by teacher and principal reports), teacher well-being (assessed by self-report), and four student outcomes: social-emotional skills, behavior, academic engagement, and academic achievement (assessed by standardized assessments, tests, and attendance records). Ultimately, this study will describe RULER's effects on school climate, teacher well-being, classroom climate, and student outcomes.

By Amanda M. Dettmer, AAAS Science & Technology Policy Fellow Sponsored by the American Psychological Association Executive Branch Science Fellowship

Photo credits: Yale Center for Emotional Intelligence

Trends in Graduate Student Loan Debt

Sixty percent of students who completed a master’s degree in 2015–16 had student loan debt, either from undergraduate or graduate school. Among those with student loan debt, the average balance was $66,000.[i] But there are many types of master’s degrees. How did debt levels vary among specific degree programs? And how have debt levels changed over time? You can find the answers, for both master’s and doctorate degree programs, in the Condition of Education 2018.

Between 1999–2000 and 2015–16, average student loan debt for master’s degree completers increased by:

  • 71 percent for master of education degrees (from $32,200 to $55,200),
  • 65 percent for master of arts degrees (from $44,000 to $72,800),
  • 39 percent for master of science degrees (from $44,900 to $62,300), and
  • 59 percent for “other” master’s degrees[ii] (from $47,200 to $75,100).

Average loan balances for those who completed master of business administration degrees were higher in 2015–16 than in 1999–2000 ($66,300 vs. $47,400), but did not show a clear trend during this period.

Between 1999–2000 and 2015–16, average student loan debt for doctorate degree completers increased by:

  • 97 percent for medical doctorates (from $124,700 to $246,000),
  • 75 percent for other health science doctorates[iii] (from $115,500 to $202,400),
  • 77 percent for law degrees (from $82,400 to $145,500),
  • 104 percent for Ph.D.s outside the field of education (from $48,400 to $98,800), and
  • 105 percent for “other” (non-Ph.D.) doctorates[iv] (from $64,500 to $132,200).

While 1999–2000 data were unavailable for education doctorate completers, the average balance in 2015–16 ($111,900) was 66 percent higher than the average loan balance for education doctorate completers in 2003–04 ($67,300).
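The percentage increases reported above are simple percent-change calculations. As a quick check, using the beginning and ending balances from the text:

```python
def pct_increase(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# Master's degree debt, 1999-2000 vs. 2015-16 (figures from the text)
assert pct_increase(32_200, 55_200) == 71   # master of education
assert pct_increase(44_900, 62_300) == 39   # master of science

# Doctorate debt over the same period
assert pct_increase(124_700, 246_000) == 97   # medical doctorates
assert pct_increase(48_400, 98_800) == 104    # Ph.D.s outside education
```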

For more information, check out the full analysis in the Condition of Education 2018.

 

By Joel McFarland

 

[i] The average balances in this analysis exclude students with no student loans.

[ii] Includes public administration or policy, social work, fine arts, public health, and other.

[iii] Includes chiropractic, dentistry, optometry, pharmacy, podiatry, and veterinary medicine.

[iv] Includes science or engineering, psychology, business or public administration, fine arts, theology, and other.

Developing an Evidence Base for Researcher-Practitioner Partnerships

I recently attended the annual meeting of the National Network of Education Research-Practice Partnerships. I was joined by well over 100 others representing a wide swath of research-practice partnerships (RPPs), most supported by IES funds. When it comes to research, academic researchers and practitioners often have different needs and different time frames. On paper, RPPs look like a way to bridge that divide.

Over the last few years, IES has made some large investments in RPPs. The Institute's National Center for Education Research runs an RPP grant competition that has funded over 50 RPPs, with an investment of around $20 million over the last several years. In addition, the Evaluation of State and Local Education Programs and Policies competition has supported partnerships between researchers and state and local education agencies since 2009.

But the biggest investment in RPPs, by far, has been through the Regional Educational Laboratories (RELs). In the 2012-2017 REL funding cycle, 85 percent of the RELs' work had to go through “alliances,” which often coordinated several RPPs and themselves emphasized research-to-practice partnerships. In the current funding cycle, RELs have created over 100 RPPs, and the bulk of the RELs' work (upwards of 80 percent) is done through them.

Back-of-the-envelope calculations show that IES is currently spending over $40 million per year on REL RPPs. Add to that the hundreds of millions of dollars invested in alliances under the previous REL contract, plus the RPP and state policy grant competitions, and this constitutes a very big bet.

Although we have invested heavily in RPPs for over half a decade, we have only limited evidence about what they are accomplishing.

Consider the report just released by the National Center for Research in Policy and Practice. Entitled A Descriptive Study of the IES Researcher-Practitioner Partnership Program, it is exactly what it says it is: a descriptive study. Its first research goal focused on the perceived benefits of partnerships; the second focused on partnership contexts.

But neither of these research goals answers the most important question: what did the partnerships change, not just in terms of research use or service delivery, but in what matters most, improved outcomes for students?

Despite IES' emphasis on evidence-based policy, right now RPPs are mostly hope-based. Some research has documented a few of the processes that seem to be associated with better-functioning RPPs, such as building trust among partners and holding consultative meetings. Research has not, however, identified the functions, structures, or processes that work best for increasing the impact of RPPs.

The Institute is planning an evaluation of REL-based RPPs. We know that it will be difficult and imperfect. With over $200 million invested in the last REL cycle, with over 100 REL-based RPPs currently operating, and with $40+ million a year supporting RPPs, we assume that there’s lots of variation in how they are structured, what they are doing, and ultimately how successful they are in improving student outcomes. With so many RPPs and so much variation, our evaluation will focus on the “what works for whom and under what circumstances” type questions: Are certain types of RPPs better at addressing particular types of problems? Are there certain conditions under which RPPs are more likely to be successful?  Are there specific strategies that make some RPPs more successful than others?  Are any successful RPP results replicable?

Defining success will not be simple. A recent study by Henrick et al. identifies five dimensions by which to evaluate RPPs—all of which have multiple indicators. Since it’s not likely that we can adequately assess all five of these dimensions, plus any others that our own background research uncovers, we need to make tough choices. Even by focusing on student outcomes, which we will, we are still left with many problems. For example, different RPPs are focused on different topics—how can we map reasonable outcome measures across those different areas, many of which could have different time horizons for improvement?

Related to the question of time horizons for improvement is the question of how long it takes for RPPs to gain traction. Consider three of arguably the most successful RPPs in the nation: The Chicago Consortium was launched in 1990; the Baltimore consortium, BERC, in fall 2006; and the Research Alliance for New York City Schools in 2008. In contrast, IES’ big investment in RPPs began in 2012. How much time do RPPs need to change facts on the ground? Since much of the work of the earliest alliances was focused on high school graduation rates and college access, 6 years seems to be a reasonable window for assessing those outcomes, but other alliances were engaged in work that may have longer time frames.

The challenges go on and on. But one thing is clear: we can’t continue to bet tens of millions of dollars each year on RPPs without a better sense of what they are doing, what they are accomplishing, and what factors are associated with their success.

The Institute will soon be issuing a request for comments to solicit ideas from the community on the issues and indicators of success that could help us inform our evaluation of the RPPs. We look forward to working with you to provide a stronger evidence base identifying what works for whom in RPPs.

Mark Schneider
Director, IES

Informing Future Research in Career and Technical Education (CTE)

Career and Technical Education (CTE) has been evolving and expanding at a rapid pace in recent years as industry and education leaders focus on students' readiness for college and careers. While some studies have shown positive effects of CTE on students, the evidence base is thin. To learn more about the research needs of the CTE field, the National Center for Special Education Research (NCSER) and the National Center for Education Research (NCER) at the Institute of Education Sciences (IES) convened a group of experts in policy, practice, and research related to CTE. The discussion held by the Technical Working Group (TWG) led both NCER and NCSER to increase their investments in CTE for fiscal year 2019: NCSER included a CTE special topic, and NCER changed its CTE topic from a special topic to a standing topic. Applications to both are due August 23, 2018. Both research centers hope to fund more studies that will help us better understand this growing aspect of education.

The TWG focused on the following four questions:

  1. Who is served by CTE and who is left behind? From national CTE statistics, we know that 82 percent of all public high schools offer CTE, and 85 percent of students earn at least one credit in CTE, with the average high school student earning 2.5 CTE credits. However, TWG members noted that research is lacking on specific subpopulations in CTE, such as students from various demographic backgrounds and students with disabilities. Disaggregated data on these dimensions are needed to better understand the CTE experiences of the range of students being served. Such data may help educators improve equity of access to high-quality programs for all students.
  2. What do we know―and need to know―about CTE policies, programs, and practices at the secondary and postsecondary levels?  TWG experts discussed the need to know more about industry-recognized credentials and about business and industry engagement in CTE at the secondary level. They argued that we do not know if credentials align with industry requirements, nor do we understand the impact of different types of credentials on student outcomes and wage trajectories. TWG members also noted that the higher the perceived quality or prestige of the CTE program, the more exclusive it becomes, and the more difficult it is for disadvantaged students to obtain access. TWG members also expressed concerns about CTE teacher training, particularly for experts who are recruited from industry without prior teacher preparation. As the experts discussed postsecondary CTE, they suggested that the field would be best served by framing the conversation about secondary to postsecondary pathways as a continuum that enables transparent and sequential transitions from secondary to 2-year and then to 4-year programs or to training or employment, with guidance for students to understand possible sequences.
  3. What are the critical methodological issues in CTE? TWG members noted that, with a few notable exceptions (e.g., a 2008 MDRC study on career academies in New York and a recent study of CTE high schools in Massachusetts), few causal studies on CTE have been conducted. There is an urgent need for more high-quality causal research on CTE policies and programs. In addition, the experts noted that there is almost no research on students with disabilities in CTE. TWG members concluded that the field needs to re-conceptualize CTE research – including better defining CTE students, instructors, programs, and measures – and identify the critical research questions in order to encourage more research in this field.
  4. What is needed to advance CTE research?  State CTE administrators want to know how to identify quality CTE programs so they know how to spend their dollars most effectively on programs that best meet the needs of students. Policymakers also want to know what “works” and what the benefits are of such investments. The TWG members encouraged studies that examine the educational benefits of particular instructional approaches. They also highlighted the importance of collaborative cross-institutional and cross-agency efforts to advance CTE research.

Readers are invited to read the summary of the TWG discussion.

By Corinne Alfeld (NCER program officer) and Kimberley Sprague (former NCSER program officer)