Examination of the Validity and Reliability of the Kansas Clinical Assessment Tool
Although national assessments for evaluating teacher candidates are available, some state education agencies and educator preparation programs have developed their own assessments. These locally developed assessments are based on observations of teaching and on other artifacts, such as lesson plans and student assignments. However, local assessment developers often lack information about the validity and reliability of the data collected with their assessments. The Council for the Accreditation of Educator Preparation (CAEP) has provided guidance for demonstrating the validity and reliability of locally developed teacher candidate assessments, yet few educator preparation programs have the capacity to generate this evidence.
The Regional Educational Laboratory Central partnered with educator preparation programs in Kansas to examine the validity and reliability of the Kansas Clinical Assessment Tool (K-CAT), a newly developed tool for assessing the performance of teacher candidates. The study was designed to align with CAEP guidance. The study found that cooperating teachers reported that the K-CAT accurately represented existing teaching performance standards (face validity). Two skilled raters found that the content of the K-CAT was mostly aligned to existing teaching performance standards (content validity). In addition, K-CAT scores for the same teacher candidate, provided by cooperating teachers and supervising faculty, were positively related (convergent validity). K-CAT indicator scores showed internal consistency, or correlations among related indicators, for standards and for the tool overall (reliability). K-CAT scores showed small relationships with teacher candidate scores on other measures of teaching performance (criterion-related validity).
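The reliability and convergent-validity findings above rest on standard statistics: internal consistency is commonly summarized with Cronbach's alpha, and agreement between two raters' scores with a correlation coefficient. The sketch below is purely illustrative (it is not the study's code, and the score values are invented), but it shows how these two quantities are computed:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix of scores.

    `scores` has one row per teacher candidate and one column per
    K-CAT-style indicator. Alpha near 1 means the indicators are
    internally consistent (they covary strongly).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of indicators
    item_vars = scores.var(axis=0, ddof=1)   # variance of each indicator
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical indicator scores for four candidates on three indicators.
ratings = [[3, 3, 4],
           [4, 4, 4],
           [2, 2, 3],
           [5, 4, 5]]
alpha = cronbach_alpha(ratings)

# Convergent validity: correlate the same candidates' total scores as
# assigned by a cooperating teacher and a supervising faculty member.
cooperating_teacher = [10, 12, 7, 14]
supervising_faculty = [9, 12, 8, 14]
r = np.corrcoef(cooperating_teacher, supervising_faculty)[0, 1]
```

A positive `r` across candidates is the kind of evidence the study reports as convergent validity; alpha is computed per standard and for the tool overall.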
ERIC Descriptors: Clinical Supervision (of Teachers), Cooperating Teachers, Evaluation Methods, Observation, Portfolios (Background Materials), Preservice Teacher Education, Preservice Teachers, State Standards, Student Evaluation, Student Teacher Supervisors, Teacher Effectiveness, Teacher Evaluation, Test Reliability, Test Validity
REL Central | Publication Type: Descriptive Study | Publication Date: July 2021