Inside IES Research

Notes from NCER & NCSER

Developing the Vanderbilt Assessment of Leadership in Education (VALED)

As education accountability policies continue to hold school leaders responsible for the success of their schools, it is crucial to assess and develop leadership throughout the school year. In honor of the IES 20th Anniversary, we are highlighting NCER’s investment in leadership measures. This guest blog discusses the Vanderbilt Assessment of Leadership in Education (VALED). The VALED team was led by Andy Porter and included Ellen Goldring, Joseph Murphy, and Steve Elliott, all at Vanderbilt University at the time. Other important contributors to the work were Xiu Cravens, Morgan Polikoff, Beth Minor Covay, and Henry May. The VALED was initially developed with funding from the Wallace Foundation and then further developed and validated with funding from IES.

What motivated your team to develop VALED?

There is currently widespread agreement that school principals have a major impact on schools and student achievement. However, at the time we developed VALED, there were few research-based instruments for measuring principal leadership effectiveness that were both aligned to licensure standards and rooted in the evidence base. Prior to the VALED, principal leadership evaluation focused primarily on managerial tasks. However, we believed that principal leadership centered on improving teaching and learning, school culture, and community and parent engagement (often called learning-centered leadership) is at the core of leadership effectiveness.

What does VALED measure?

The VALED is a multi-rater assessment of learning-centered leadership behaviors. The principal, his/her supervisor, and teachers in the school complete it, which is why VALED is sometimes referred to as a 360 assessment or multi-source feedback.

VALED measures six core components and six key processes that define learning-centered leadership. The core components are high standards for student learning, rigorous curriculum, quality instruction, culture of learning and professional behavior, connections to external communities, and performance accountability. The key processes are planning, implementing, supporting, communicating, monitoring, and advocating.
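
To make that structure concrete, the sketch below shows one way multi-rater item responses organized by core component and key process could be pooled into a component-by-process profile. The item, rating scale, and aggregation rule are illustrative assumptions for this sketch, not the actual VALED items or scoring procedure.

```python
# Hypothetical sketch: pooling multi-rater ratings into a 6 x 6
# component-by-process profile. Illustrative only; not the actual
# VALED items, scale, or scoring rules.
from collections import defaultdict
from statistics import mean

CORE_COMPONENTS = [
    "high standards for student learning", "rigorous curriculum",
    "quality instruction", "culture of learning and professional behavior",
    "connections to external communities", "performance accountability",
]
KEY_PROCESSES = ["planning", "implementing", "supporting",
                 "communicating", "monitoring", "advocating"]

def profile(responses):
    """responses: list of (rater_group, component, process, rating) tuples.
    Returns the mean rating for each component-by-process cell, pooled
    across rater groups (principal, supervisor, teachers)."""
    cells = defaultdict(list)
    for _rater, component, process, rating in responses:
        cells[(component, process)].append(rating)
    return {cell: round(mean(vals), 2) for cell, vals in cells.items()}

# Example: three rater groups score one illustrative item.
example = [
    ("principal",  "quality instruction", "monitoring", 4),
    ("teacher",    "quality instruction", "monitoring", 3),
    ("supervisor", "quality instruction", "monitoring", 3),
]
print(profile(example))  # {('quality instruction', 'monitoring'): 3.33}
```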

How is the VALED different from other school leadership assessments?

The VALED is unique because it focuses on school leadership behaviors aligned to school improvement and school effectiveness, incorporates feedback and input from those who collaborate closely with the principal, includes a self-assessment, acknowledges the distributed work of leadership in a school, and has strong psychometric properties. We think several elements contribute to the uniqueness of the instrument.

First, VALED is based on what we have learned from scholarship and academic research rather than on less robust foundations such as personal opinions or unrepresentative samples. The VALED was crafted from concepts identified as important in that research base. The VALED model is grounded in knowledge about the connections between leadership and learning, which provides a good deal of the support required for the accuracy, viability, and stability of the instrument.

Second, principals rarely receive data-based feedback, even though feedback is essential for growth and improvement. The rationale behind multi-source or 360-degree feedback is that information about leadership efficacy resides in the shared experiences of the teachers and supervisors who collaborate with the principal, rather than in any one source alone. Data that pinpoint gaps between a principal’s own self-assessment and their teachers’ and supervisors’ ratings of their leadership effectiveness can serve as powerful motivators for change.

Finally, in contrast to some other leadership measures, VALED has undergone extensive psychometric development and testing. We conducted a sorting study to investigate content validity, a pilot study to address ceiling effects, and cognitive interviews to refine item wording. We also conducted a known-group study demonstrating the tool’s ability to reliably distinguish among principals, along with analyses of test-retest reliability, convergent-divergent validity, and principal value-added to student achievement. As part of this testing, we identified several key properties of VALED. The measure—

  • Works well in a variety of settings and circumstances
  • Is construct valid
  • Is reliable
  • Is feasible for widespread use
  • Provides accurate and useful reporting of results
  • Is unbiased
  • Yields a diagnostic profile for summative and formative purposes
  • Can be used to measure progress over time in the development of leadership
  • Predicts important outcomes
  • Is part of a comprehensive assessment of the effectiveness of a leader's behaviors

What is the influence of VALED on education leadership research and practice?

VALED is used in schools and districts across the US and internationally for both formative and evaluative purposes to support school leadership development. For example, Baltimore City Public Schools uses VALED as a component of its School Leader Evaluations. VALED has also spurred studies on principal evaluation, including the association between evaluation, feedback, and important school outcomes; the implementation of principal evaluation; and its uses to support principal growth and development. In addition, it provides scholars with a reliable and valid measure of leadership effectiveness to use in their studies.


Andy Porter is professor emeritus of education at the University of Pennsylvania. He has published widely on psychometrics, student assessment, education indicators, and research on teaching.

Ellen Goldring is Patricia and Rodes Hart Chair, professor of education and leadership at Vanderbilt University. Her research interests focus on the intersection of education policy and school improvement with emphases on education leadership.

Joseph Murphy is an emeritus professor of education and the former Frank W. Mayborn Chair of Education at Peabody College, Vanderbilt University. He has published widely on school improvement, with special emphasis on leadership and policy, and has led national efforts to develop leadership standards.

Produced by Katina Stapleton (Katina.Stapleton@ed.gov), program officer for NCER’s education leadership portfolio.


The Comprehensive Assessment of Leadership for Learning: How We Can Support School Leaders to Improve Learning for All Students

As educational accountability policies continue to hold school leaders responsible for the success of their schools, it is crucial to assess and develop leadership throughout the school year. In honor of School Principals’ Day and the IES 20th Anniversary, we are highlighting NCER’s investment in formative leadership measures. In this guest blog, researchers Rich Halverson and Carolyn Kelley from the University of Wisconsin-Madison and Mark Blitz from the Wisconsin Center for Education Products and Services discuss the development and evolution of their IES-funded Comprehensive Assessment of Leadership for Learning (CALL).

What is CALL?

CALL is a survey tool based on a distributed leadership model that emphasizes the work of leaders rather than their positions or identities. In 2008, we led a team of researchers at the University of Wisconsin-Madison to identify the key leadership tasks necessary for school improvement, regardless of who made the tasks happen. The CALL survey invites each educator in a school to assess the degree to which these core tasks are conducted, then aggregates these responses to provide a school-level portrait of the state of leadership practice in their school.

How was CALL developed?

Our CALL team relied on over 30 years of research on leadership for school improvement to name about 100 key tasks in five domains of practice. The team then worked over a year with expert educators and leaders to articulate these tasks into survey items phrased in language that teachers would readily understand as describing the work that happens every day in their schools. We designed each item to assess the presence and quality of leadership practices, policies, and programs known to improve school quality and student learning. We validated the survey with qualitative and quantitative analyses of survey content, structure, and reliability.

What inspired you to develop CALL?

We believed a measure like CALL was necessary in the era of data-driven decision-making. Educators are inundated with accountability and contextual data about their schools, but they are left on their own to find data that help them understand how to develop and implement the strategies, policies, and programs that support student success. Traditional school data systems leave a hole where feedback matters most for educators: at the practice level, where the work of leaders and educators unfolds. That is the hole CALL is designed to fill.

How is the CALL different from other leadership surveys?

Traditional surveys include items that invite educators to rate their leaders on important tasks using Likert-scale measures. The results of these surveys produce scores that allow leaders to be rated and compared. But it is hard for a school leader to know what to do with a score of 3.5 on an item like “My principal is an effective instructional leader.” CALL items are designed differently. Each CALL item response represents a distinct level of practice, so respondents can learn about optimal practices simply by taking the survey. If the collected responses from educators in your school average a “2” on one of the items, the description of the next level of practice (“3”) clearly articulates an improvement goal.
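
As a rough illustration of that design, the sketch below shows how behaviorally anchored response levels can translate a school-wide average into a concrete "next level of practice" goal. The item wording, levels, and aggregation here are hypothetical, not actual CALL content.

```python
# Hypothetical sketch of behaviorally anchored item levels and the
# "next level of practice" feedback idea. Item wording is illustrative,
# not actual CALL survey text.
import math
from statistics import mean

ITEM_LEVELS = {  # each response option maps to a distinct level of practice
    1: "Teachers rarely receive feedback on instruction.",
    2: "Feedback is given informally by a few leaders.",
    3: "A schoolwide routine provides regular feedback on instruction.",
    4: "Feedback routines are led collaboratively and tied to school goals.",
}

def improvement_goal(responses):
    """Average the school's responses and describe the next level up."""
    current_level = math.floor(mean(responses))
    next_level = min(current_level + 1, max(ITEM_LEVELS))
    return mean(responses), ITEM_LEVELS[next_level]

avg, goal = improvement_goal([2, 2, 3, 2, 1])
print(f"school average = {avg:.1f}; next level of practice: {goal}")
```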

In addition, our online CALL reporting tools provide formative feedback by allowing users to compare item and domain scores between academic departments and grade levels, as well as across schools. The reports name specific areas of strength and improvement, and also suggest research-driven strategies and resources leaders can use to improve specific aspects of leadership.

How did CALL transition into a commercial measure?

The CALL project provides a model of how IES-funded research can have broad impact in schools around the country. We are thrilled that CALL developed into the rare educational survey that was embraced by the people who tested it as well as by the research community. Many of our development partners asked whether they could continue with CALL as the survey took on new life as a commercial product after our grant ended.

The Wisconsin Center for Education Products and Services (WCEPS) provided us with the business services and the support to bring CALL to market. CALL became a WCEPS partner in 2014 and has since developed into a successful leadership and school improvement resource. Under the leadership of WCEPS’s Mark Blitz, the CALL model became a framework to build successful collaborations with learning and research organizations across the country.

Leading professional learning groups such as WestEd, WIDA, the Southern Regional Education Board, and the Georgia Leadership Institute for School Improvement worked with Mark and the WCEPS team to build customized CALL-based formative feedback systems for their clients. Research partners at East Carolina University, Teachers College, and the University of Illinois at Chicago used CALL to collect baseline data on leadership practices for school improvement and principal preparation projects. CALL has also developed customized versions of the survey to support leadership for personalized learning (CALL PL) and virtual learning (Long Distance CALL). These partnerships have provided opportunities for hundreds of schools and thousands of educators to experience the CALL model of formative feedback to improve teaching and learning in schools.

What’s the next step for CALL?

In 2021, the CALL project entered a new era of leadership for equity. With the support of the Wallace Foundation, we created CALL for Equity Centered Leadership (CALL-ECL) to provide school districts with feedback on the leadership practices that create more equitable schools. CALL-ECL is part of a $100 million+ Wallace Foundation initiative to transform how districts across the country develop partnerships to prepare and support a new generation of equity-centered leaders. According to Wallace Research Director Bronwyn Bevan, “The foundation is excited about CALL-ECL because it will help leaders identify the organizational routines that sustain inequality and replace them with routines that help all students thrive.”

Our $8 million, six-year CALL-ECL project will document the development of these new preparation and support programs and will create a new CALL survey as an information tool to describe and assess equity-centered leadership practices. We believe that by 2027, CALL-ECL will be able to share the practices of equity-centered leadership developed through the Wallace initiatives with districts and schools around the world. Our hope is that CALL-ECL will give school leaders and leadership teams the data they need to continually evolve toward better opportunities and outcomes for all young people.


Richard Halverson is the Kellner Family Chair of Urban Education and Professor of Educational Leadership and Policy Analysis in the UW-Madison School of Education. He is also a co-director of the Comprehensive Assessment of Leadership for Learning and leads the Wallace Foundation Equity-Centered Leadership Pipeline research project.

 

Carolyn Kelley is a distinguished professor in the Department of Educational Leadership and Policy Analysis. Dr. Kelley’s research focuses on strategic human resources management in schools, including teacher compensation, principal and teacher evaluation, and leadership development.

 

Mark Blitz is the project director of the Comprehensive Assessment of Leadership for Learning (CALL) at the Wisconsin Center for Education Products & Services.

 

 

This blog was produced by Katina Stapleton (Katina.Stapleton@ed.gov), program officer for NCER’s education leadership portfolio.


Smooth Sailing Using the Neurodiversity Paradigm: Developing Positive Classroom Experiences for Autistic Students

In honor of Autism Awareness Month, we’d like to highlight an IES-funded research project on autism spectrum disorder and discuss how the current framework of neurodiversity informs this research. In recent years, the neurodiversity paradigm has been an increasingly popular way of viewing autism and other neurodevelopmental conditions. Neurodiversity is a term coined in the late 1990s by Judy Singer to refer to natural human variation in neurotypes. Neurodivergent individuals diverge from the norm, usually with conditions such as autism, attention-deficit hyperactivity disorder, or dyslexia. Rather than focusing on deficits, this paradigm supports a strength-based view of these conditions while still acknowledging individual challenges. For this blog, we interviewed Dr. Jan Blacher and Dr. Abbey Eisenhower, principal investigators who created a professional development (PD) program supporting general education teachers of students on the autism spectrum. In the interview below, the researchers describe how their PD program works and how it uses the neurodiversity paradigm to strengthen relationships between autistic students and their teachers.

What is Smooth Sailing and what led you to develop it?


Smooth Sailing is the nickname for our PD for general education teachers in kindergarten through second grade who have at least one student on the autism spectrum in their classrooms. The catalyst for the program was the findings from our previous project on student-teacher relationships, indicating that teachers are central to facilitating positive school experiences, especially for autistic students. Warm, positive student-teacher relationships are predictive of academic engagement and social adjustment.

The program provides coaching-based support for teachers, equips them with strategies for building strong relationships with autistic students, and enables them to expand on their students' strengths and interests in the classroom. Developed by educators, clinicians, and researchers in partnership with teachers and autistic individuals, Smooth Sailing uses an autism-affirming, neurodiversity perspective throughout the program.

What makes this program unique?                                                                                                                

Smooth Sailing recognizes the importance of relationships—especially student-teacher relationships—in making school a positive and welcoming place for students.

Our program prioritizes a neurodiversity perspective on autism: We recognize autism as a set of differences that are part of the diversity of human experience. In order to best support autistic students, we must provide an affirming context that embraces their strengths and differences. This approach contrasts with a deficit-based model, which focuses on changing children and their behaviors. The deficit model could impair relationships between students and their teachers, making academic engagement and social adjustment worse.

Finally, Smooth Sailing is unique for centering on autistic people as key contributors to shaping program content so that the program reflects the lived realities of autistic students.

What have you learned while developing and testing the Smooth Sailing intervention?

We have learned several important lessons:

(1) During the initial research for our intervention, findings indicated that only 8% of general education teachers in the study had received any professional training in autism. This provides a clear-cut mandate for more autism-focused training for these educators.

(2) After the intervention, general education teachers endorsed three key Smooth Sailing strategies for reaching out to their autistic students: (a) identifying interests, (b) celebrating talents, and (c) having one-on-one time to form stronger relationships. We learned that these simple strategies are ones every teacher can adopt to create more inclusive classrooms and cultivate stronger relationships with students, especially autistic students.

(3) Overall, teachers who received the Smooth Sailing PD experienced significant improvements in the quality of their relationships with autistic students, including higher student-teacher closeness and lower student-teacher conflict, compared to teachers who had not received the program. Thus, in addition to other positive outcomes for teachers and children, we learned that our brief program (12 hours over 4 weeks) was sufficient for moving the needle on the critical construct of student-teacher relationship quality.

How does respect for neurodiversity inform the Smooth Sailing intervention and your philosophies as researchers?

One key factor that has been transformative to the resulting Smooth Sailing program has been our close consultation with current and former autistic students. As part of developing the Smooth Sailing program for teachers, our research team interviewed many autistic adolescents and adults about their school experiences, their advice for teachers, and their opinions on making schools more affirming and inclusive. In addition, we closely engaged autistic adults as expert consultants during our program development process. These consultants advised on teacher-focused content, reviewed materials, and weighed in on program changes.

The rich information we learned from the interviews and intensive consultation substantially impacted the content of the resulting program. To offer one example, these interviews showed us the outsized power of a positive student-teacher relationship, even with just one teacher, in making school a bearable place for autistic students.

Because many autistic students describe their school experiences as ableist and marginalizing, our team's programming aims to disrupt these school problems by building strong student-teacher relationships and fostering teachers' understanding of autism through an affirming, neurodiversity-informed lens. By incorporating first-person perspectives of autistic students and adults in its creation and content, our programs affirm the lived realities of autistic students. 

What needs are still unmet for general education teachers working with autistic students?

We have heard from teachers and administrators at all K-12 levels—high school, middle school, and later elementary school—that they would like access to similar autism-focused PD programs targeted to the student age ranges they teach. We think that creating a school culture that affirms neurodiversity starts by fostering understanding between students and all school staff, not just primary classroom teachers.  

What's next for the Smooth Sailing project?

We hope to expand the Smooth Sailing PD program to the early childhood education context. Unfortunately, our research has shown that, by the time they enter elementary school, one out of every six autistic children has been expelled from a preschool or childcare program. Viewed through a social justice lens, this preschool expulsion is an educational equity issue.

Early childhood educators are key to improving these early school experiences. We believe that preschool and childcare educators can be catalysts in providing an inclusive environment by forming strong relationships with autistic and neurodivergent children. That said, most early childhood educators report having no professional training in autism, feeling underprepared to meet the needs of autistic children, and wanting more support for inclusion. We hope that programs like Smooth Sailing can be applied to support educators working with preschool-age children who are autistic or neurodivergent, many of whom are not yet diagnosed, so that their first school experiences can be enriching and inclusive.

Jan Blacher is a distinguished research professor in the School of Education and the director of the SEARCH Family Autism Research Center at the University of California, Riverside. Abbey Eisenhower is an associate professor in the Psychology Department at the University of Massachusetts, Boston.   

This blog was authored by Juliette Gudknecht, an intern at IES, along with Emily Weaver (Emily.Weaver@ed.gov), program officer at NCSER with oversight of the portfolio of autism grants.

Investing in Math Learning and Achievement for All Learners

International and national assessment data show that many U.S. students struggle with mathematics, and there continues to be a gap between students with and without disabilities. The recent 2022 NAEP mathematics results continue to showcase these disparities, which the COVID-19 pandemic has further exacerbated, particularly for lower-performing students and students of color.

In honor of Mathematics and Statistics Awareness Month, we want to highlight the research IES is supporting to improve mathematics achievement and access to educational opportunities for all learners, especially learners who have been historically underserved and underrepresented in STEM education.

IES is supporting research through its discretionary grant competitions to measure, explore, develop, and evaluate effective mathematics programs, practices, and policies for all students, including those with or at risk for disabilities. Here are a few highlights of some new research supported by IES:

  • Interleaved Mathematics Practice – Bryan Matlen (WestEd) and colleagues are conducting a systematic replication of a highly promising mathematics learning intervention, interleaved practice, in 7th grade classrooms. With the interleaved practice intervention, some of the assigned math practice problems are rearranged so that problems of different kinds are mixed together, which improves learning, and problems of the same kind are distributed across multiple assignments, which improves retention (a simple sketch of this interleaving idea appears after this list). Numerous studies in the laboratory and classroom have demonstrated that merely rearranging practice problems so that students receive a higher dose of interleaved practice can dramatically boost scores on measures of learning. This replication study will determine whether this promising intervention can improve math learning and achievement and whether the intervention can scale to a widely used online intervention that currently reaches tens of thousands of students in diverse settings.
  • Educational Technology Approaches to K-12 Mathematics – Jennifer Morrison (Johns Hopkins University) and colleagues are conducting a meta-analysis of rigorous evaluations of approaches that use technology to improve student mathematics achievement in grades K to 12. Using meta-analytic techniques, the team will identify the conditions under which various types of technology applications are most effective in teaching mathematics. The results will provide researchers and education leaders with up-to-date information on effective uses of technology, including computer-assisted instruction, cooperative learning, intelligent tutoring systems, games, simulations, virtual reality, inquiry/discovery, project-based learning, and media-infused instruction.
  • Specialized Intervention to Reach All Learners – Sarah Powell (University of Texas at Austin) and colleagues are conducting an initial efficacy evaluation of Math SPIRAL, an educator-provided mathematics intervention for students identified as needing intervention services through state achievement testing in grades four and five. Educators are provided with an evidence-based word problem intervention (Pirate Math Equation Quest), associated professional development, and coaching to support implementation and address the needs of their learners who are struggling in math. The research team will evaluate the impact of Math SPIRAL on mathematics outcomes for upper elementary students identified as having or being at risk for a disability. The results will provide information on the efficacy of Math SPIRAL as a tool to accelerate the learning of students in need of math intervention.
  • Math and Reading Acquisition Co-Adaptive System – Jess Gropen (Center for Applied Special Technology), Steve Ritter (Carnegie Learning), and their research team are iteratively developing and studying a set of individualized reading supports for students embedded within an adaptive mathematics learning system (MATHia) and an associated teacher application (LiveLab). Heuristics will determine when reading supports or scaffolds should be provided or recommended to students. In addition, adaptive supports for teachers will alert them when students are likely exhibiting reading challenges and provide recommendations for intervention. The findings will determine whether these reading supports, which can be embedded into a variety of digital and/or adaptive math tools, decrease reading challenges and increase students' ability to engage effectively with math. The findings and generated technical resources (such as design assets and heuristics) will be Creative Commons licensed and made available through GitHub for use by other developers.
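
Below is a small, hypothetical sketch of the interleaving idea referenced in the first bullet: problems of different types are mixed within each assignment, and each type's problems are spread across assignments. The problem types and scheduling rule are illustrative, not the study's actual materials or assignment algorithm.

```python
# Hypothetical sketch of interleaved practice: mix problem types within
# each assignment and spread each type across assignments. Illustrates
# the general idea only, not the study's actual materials.
import random

def interleave(problems_by_type, n_assignments):
    """problems_by_type: dict mapping a problem type to a list of problems.
    Returns n_assignments lists, each mixing types, with each type's
    problems distributed across assignments."""
    assignments = [[] for _ in range(n_assignments)]
    for ptype, problems in problems_by_type.items():
        for i, problem in enumerate(problems):
            assignments[i % n_assignments].append((ptype, problem))
    for assignment in assignments:
        random.shuffle(assignment)  # mix types within each assignment
    return assignments

problems = {
    "proportions": ["p1", "p2", "p3", "p4"],
    "equations":   ["e1", "e2", "e3", "e4"],
    "graphing":    ["g1", "g2", "g3", "g4"],
}
for i, assignment in enumerate(interleave(problems, 4), start=1):
    print(f"Assignment {i}: {assignment}")
```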

In August 2022, IES also launched the Learning Acceleration Challenge (LAC) Math Prize to identify and award school-based digital interventions that significantly improve math outcomes for upper elementary school students with or at risk for a disability that affects math performance. Interventions for the Math Prize needed to focus specifically on fractions and could also include prerequisite skills such as whole numbers and operations. Two interventions are currently competing for the math prize, and the winner will be announced in fall 2023.

In addition, IES has developed Practice Guides with evidence-based recommendations for educators to address challenges in their classrooms and schools. A list of the mathematics-focused Practice Guides can be found here.


This blog was written by Christina Chhin (christina.chhin@ed.gov), NCER; Sarah Brasiel (sarah.brasiel@ed.gov), NCSER; and Britta Bresina (britta.bresina@ed.gov), NCSER.

Pennsylvania Student Proficiency Rates Rebound Partially from COVID-19-Related Declines

Given the magnitude of the disruption the COVID-19 pandemic caused to education practices, there has been considerable interest in understanding how the pandemic may have affected student proficiency. In this guest blog, Stephen Lipscomb, Duncan Chaplin, Alma Vigil, and Hena Matthias of Mathematica discuss their IES-funded grant project, in partnership with the Pennsylvania Department of Education (PDE), that is looking at the pandemic’s impacts in Pennsylvania.  

The onset of the COVID-19 pandemic in spring 2020 brought a host of changes to K–12 education and instruction in Pennsylvania. Many local education agencies (LEAs) instituted remote learning and hybrid schedules as their primary mode of educating students, while others maintained in-person learning. Statewide assessments, which were suspended in spring 2020, resumed in 2021 with low participation rates, particularly among students with lower performance before the pandemic. Furthermore, test administration dates varied from spring 2021 to fall 2021. Pennsylvania statewide assessment data reveal that student proficiency rates may have rebounded in 2022, although they remained below pre-pandemic levels. In grades 5–8, there was a marked increase in proficiency in English language arts (ELA) and a slightly smaller increase in proficiency in math compared to the 2021 proficiency rates predicted in recent research. Despite these gains, returning student proficiency rates to pre-pandemic levels will require additional efforts.

The Pennsylvania Department of Education (PDE) has been committed to providing LEAs with the resources and support necessary to help students achieve pre-pandemic academic proficiency rates. To learn more about how changes in those rates may have been associated with the pandemic, PDE and Mathematica partnered to explore trends in proficiency data for students in grades 5–8. Given the lower and nonrepresentative participation in the 2021 statewide assessments, as well as the differences in when LEAs administered the assessments, we developed a predictive model of statewide proficiency rates for spring 2021 to produce predicted proficiency rates that would be more comparable to previous and future years. The results revealed that steep declines in proficiency likely occurred between 2019 and 2021 (see Figure 1 below). By spring 2022, proficiency rates in grades 5–8 had regained 6 percentage points of their 10 percentage point drop in ELA and nearly 5 percentage points of their 13 percentage point drop in math. Taken together, these results suggest that although the pandemic may have originally been associated with declines in students’ academic proficiency, over time, student proficiency might move back toward pre-pandemic levels.
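
As a back-of-the-envelope check on the "partial rebound" framing, the rounded figures above imply that ELA recovered about 60 percent of its drop and math a bit under 40 percent. A tiny sketch of that arithmetic, using the rounded values reported here rather than the underlying model estimates:

```python
# Back-of-the-envelope arithmetic using the rounded figures in the text
# (not the underlying model estimates from Lipscomb et al.).
def share_regained(drop_points: float, regained_points: float) -> float:
    """Fraction of the 2019-to-2021 proficiency drop regained by 2022."""
    return regained_points / drop_points

ela = share_regained(drop_points=10, regained_points=6)
mathematics = share_regained(drop_points=13, regained_points=5)
print(f"ELA: about {ela:.0%} of the drop regained")           # ~60%
print(f"Math: about {mathematics:.0%} of the drop regained")  # ~38%
```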

 

Figure 1. Actual and predicted proficiency rates in grades 5–8 in Pennsylvania, 2015–2022

Notes: The figure shows actual proficiency rates from the Pennsylvania System of School Assessment, averaged across grades 5–8, unless marked by an open or closed circle. An open circle indicates that the statewide assessment was cancelled; a closed circle indicates a predicted proficiency rate.

Source: Data from 2015–2019 and 2022 are from the Pennsylvania Department of Education. The 2021 data are predicted proficiency rates from Lipscomb et al. (2022a). The figure originally appeared in Lipscomb et al. (2022b).  

 

The next steps for this project will include a strong focus on dissemination of our findings. For example, we will develop a research brief that describes the role of remote learning in shaping academic outcomes beyond proficiency rates and community health outcomes during the pandemic. The findings will help PDE and LEAs refine strategies for supporting vulnerable students and help state policymakers and educators learn from the COVID-19 pandemic—specifically how it might have affected student outcomes and educational inequities.


This blog was produced by Allen Ruby (Allen.Ruby@ed.gov), Associate Commissioner for Policy and Systems Division, NCER.