
IES Grant

Title: RESET: Recognizing Effective Special Education Teachers
Center: NCSER Year: 2015
Principal Investigator: Johnson, Evelyn Awardee: Boise State University
Program: Educators and School-Based Service Providers      [Program Details]
Award Period: 4 years (7/1/2015 – 6/30/2019) Award Amount: $1,588,173
Type: Measurement Award Number: R324A150152  

Purpose: This project developed and validated a special education teacher observation measure, Recognizing Effective Special Education Teachers (RESET), to evaluate and improve instructional practice delivered to students with disabilities. The challenges of evaluating special education teachers are significant: special educators work under a variety of conditions, serve a heterogeneous group of students with disabilities, enter the profession with varying skill levels, and may need to provide additional instruction to meet the needs of struggling learners. These factors establish the need for an evaluation system that promotes high-quality, evidence-based instructional techniques focused on improving outcomes of students with disabilities across a variety of teaching contexts. This study aimed to meet that need through RESET, an evaluation tool intended to build a greater understanding of the characteristics, processes, and outcomes associated with high-quality instructional practices for students with disabilities.

Project Activities: Using evaluation criteria the research team previously developed, the RESET observation tool and accompanying user manual were prepared for testing. The team conducted video observations of teachers' instruction to determine the reliability and validity of RESET. Teachers who were not included in the videos were recruited, trained, and asked to rate the video data according to RESET criteria. The data obtained from the teacher ratings of the video were used to finalize the RESET tool.

Key Outcomes: The main findings of this project, as reported by the principal investigator, are as follows:

  • The research team developed a set of psychometrically sound observation protocols that are aligned with evidence-based practices for students with high-incidence disabilities.
  • The RESET Explicit Instruction protocol taps instructional constructs not measured by Danielson's Framework for Teaching (FFT), and special education teachers demonstrated higher levels of proficiency when evaluated with RESET than with the FFT.
  • With minimal support, teachers who conducted self-evaluations using the RESET Explicit Instruction protocol and who received feedback aligned with the observation protocol made significant improvements in their teaching.
  • Teacher performance on the overall RESET Explicit Instruction protocol was not related to student growth; however, teachers' scores based on an abbreviated version of the protocol (including only items that had average or higher item difficulties) were positively associated with student growth.

Structured Abstract

Setting: The research took place in school districts across Idaho, Florida, Wisconsin, and Iowa.

Sample: The study sample included 118 special education teachers of students with high-incidence disabilities in kindergarten through 8th grade. The research team worked with special education directors to identify teachers who served the identified population of students, group teachers by grade level and instructional context (e.g., self-contained classroom), and draw a random sample of teachers from within each group to participate. An additional 41 teachers, administrators, and special education researchers with at least 5 years of teaching experience and a graduate degree in special education were recruited to rate the videos.

Assessment: The special education teacher observation tool, RESET, is based on the theory of action that the quality of instruction a teacher provides is a key determinant of a student's individual growth. The foundation of RESET is derived from the FFT, with a focus on the domain of instruction. RESET evaluates common features of sound instructional practice and delineates criteria for evaluating evidence-based instructional practices appropriate for students with disabilities. Through a computerized evaluation system, RESET relies on video capture of instruction, which trained observers evaluate against criteria based on prior research. Teachers evaluated with RESET receive targeted feedback from the tool to promote effective instructional practices in their classrooms.

Research Design and Methods: The research team collected approximately 800 hours of recorded teacher instruction from 118 special education teachers. Building on prior research, the team iteratively developed criteria for the evidence-based instructional practices on which teachers were evaluated. Forty-one teachers, administrators, and researchers were recruited for data coding to determine the reliability of RESET using the following steps: 1) rater performance on training videos was used to determine inter-rater reliability; 2) ratings were used in a generalizability study to help determine the ideal number of observations required per video and whether performance improves over time with feedback from trainers; and 3) data were analyzed to determine the internal consistency estimates of component subscales. Researchers used Michael Kane's argument-based validation model with four related sets of inferences – scoring, generalization, extrapolation, and implication. These inferences were empirically tested to determine whether RESET can reliably identify the special education teachers with the most effective instructional practices; measure and provide targeted, specific, corrective feedback on teacher instructional practice; and link student growth rates to effective teaching practices.
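The decision-study logic behind step 2 – choosing how many observations are needed per teacher – can be sketched as follows. The variance components and the reliability target below are illustrative placeholders, not values estimated by the project:

```python
def g_coefficient(var_teacher: float, var_residual: float, n_obs: int) -> float:
    """Projected generalizability (reliability) coefficient when a teacher's
    score is the mean of n_obs observations: true-score (teacher) variance
    over teacher variance plus error variance shrunk by averaging."""
    return var_teacher / (var_teacher + var_residual / n_obs)


def min_observations(var_teacher: float, var_residual: float,
                     target: float = 0.80, max_n: int = 50):
    """Smallest number of observations whose averaged score reaches the
    target reliability, or None if max_n observations are not enough."""
    for n in range(1, max_n + 1):
        if g_coefficient(var_teacher, var_residual, n) >= target:
            return n
    return None


# Hypothetical variance components: teacher and residual variance equal.
print(min_observations(1.0, 1.0))  # 4 observations reach G >= 0.80
```

With equal teacher and residual variance, a single observation yields G = 0.5, and averaging four observations is the first point at which the projected coefficient reaches 0.80.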

Control Condition: Due to the nature of the research design, there was no control condition.

Key Measures: RESET was used to observe and evaluate special education teachers. Ratings from trained raters and project staff were used to assess the reliability and validity of the evaluation tool. The FFT was also used to evaluate teachers and as a comparison to RESET. Student progress monitoring data (from iStation, easyCBM, AIMSweb, and Star Reading) were also collected at pretest and posttest to examine student growth rates over the year.

Data Analytic Strategy: Many-faceted Rasch measurement (MFRM) analyses were used to assess inter-rater reliability for RESET. Generalizability theory analyses (i.e., multilevel analyses, reliability coefficients) were used to determine the optimum number of observations per teacher. An examination of the distribution of scores, inter-rater agreement analysis, and confirmatory factor analysis of a random sample of observations were used to assess how well the RESET criteria discriminated between more and less effective instructional practices. Correlations were conducted to assess how teacher evaluation scores and student growth scores relate. Qualitative data from a technical advisory group informed consistency in how teachers interpreted the feedback they received.
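As one concrete illustration of the inter-rater reliability side of these analyses (not the project's actual MFRM code, which additionally models rater severity on a Rasch scale), a two-way intraclass correlation, ICC(2,1), can be computed from ANOVA mean squares over a teachers-by-raters score matrix:

```python
import numpy as np


def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_targets x k_raters) matrix of observation scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-teacher means
    col_means = ratings.mean(axis=0)   # per-rater means

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                 # between-teacher mean square
    msc = ss_cols / (k - 1)                 # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# Hypothetical scores: 4 teachers rated by 3 raters.
scores = np.array([[1.0, 2.0, 1.0],
                   [3.0, 3.0, 4.0],
                   [5.0, 4.0, 5.0],
                   [7.0, 8.0, 7.0]])
print(round(icc2_1(scores), 3))
```

When raters agree closely relative to the spread between teachers, as in the example matrix, the coefficient is near 1; in practice, libraries such as pingouin implement this and related ICC forms directly.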

Products and Publications

ERIC Citations: Citations for this award are available in ERIC.

Project website:

Select Publications:

Book chapter

Johnson, E. S., & Beymer, L. L. (2016). Special education teacher candidate evaluation: Creating a preservice to master teacher observation system. In J. Goeke & K. Kossars (Eds.), Special Education Teacher Preparation. 325T OSEP Program.

Journal articles

Crawford, A. R., Johnson, E. S., Moylan, L. A., & Zheng, Y. (2019). Variance and Reliability in Special Educator Observation Rubrics. Assessment for Effective Intervention, 45(1), 27–37.

Crawford, A. R., Johnson, E. S., Zheng, Y. Z., & Moylan, L. A. (2020). Developing an Understanding Procedures Observation Rubric for Mathematics Intervention Teachers. School Science and Mathematics, 120(3), 153–164.

Johnson, E. S., Crawford, A., Moylan, L. A., & Zheng, Y. (2018). Using Evidence-Centered Design to Create a Special Educator Observation System. Educational Measurement: Issues and Practice, 37(2), 35–44.

Johnson, E. S., Crawford, A., Moylan, L. A., & Zheng, Y. (2020). Validity of a Special Education Teacher Observation System. Educational Assessment, 25(1), 31–46.

Johnson, E. S., Crawford, A. R., Zheng, Y., & Moylan, L. A. (2021). Does Special Educator Effectiveness Vary Depending on the Observation Instrument Used? Educational Measurement: Issues and Practice, 40(1), 36–43.

Johnson, E. S., Ford, J. W., Crawford, A., & Moylan, L. A. (2016). Issues in Evaluating Special Education Teachers: Challenges and Current Perspectives. Texas Education Review, 4(1), 71–83.

Johnson, E. S., Moylan, L. A., Crawford, A., & Zheng, Y. (2019). Developing a Comprehension Instruction Observation Rubric for Special Education Teachers. Reading & Writing Quarterly, 35(2), 118–136.

Johnson, E. S., Zheng, Y., Crawford, A. R., & Moylan, L. A. (2019). Developing an Explicit Instruction Special Education Teacher Observation Rubric. The Journal of Special Education, 53(1), 28–40.

Johnson, E. S., Zheng, Y., Crawford, A. R., & Moylan, L. A. (2020). Evaluating an Explicit Instruction Teacher Observation Protocol through a Validity Argument Approach. The Journal of Experimental Education, 1–16.

Johnson, E. S., Zheng, Y., Crawford, A. R., & Moylan, L. A. (2021). The Relationship of Special Education Teacher Performance on Observation Instruments with Student Outcomes. Journal of Learning Disabilities, 54(1), 54–65.