Title: The Diagnostic Geometry Assessment Project
Principal Investigator: Russell, Michael
Awardee: Measured Progress
Program: Cognition and Student Learning
Award Period: 4 years
Award Amount: $1,727,059
Previous Award Number: R305A080231
Purpose: Student assessment is a central component of instruction. Learning is affected by students' current knowledge and is facilitated when new knowledge and skills are consistent with and build upon current knowledge. The purpose of this project was to develop and validate a computer-delivered diagnostic formative assessment of geometric conceptions in the middle grades and to develop instructional resources to assist teachers in addressing flawed or underdeveloped conceptions identified by the assessment.
Project Activities: The research team developed and validated an assessment tool to target sources of difficulties and misconceptions in middle school geometry, specifically properties of shapes, transformations, and measurement. Unlike current achievement and diagnostic tests, which provide information about a student's ability within a given domain, the Diagnostic Geometry Assessment diagnosed why students struggle with a given geometric concept and provided teachers with instructional strategies and resources designed to address the targeted difficulties and misconceptions.
THE FOLLOWING CONTENT DESCRIBES THE PROJECT AT THE TIME OF FUNDING
Purpose: The purpose of this project is to develop and validate a computer-delivered diagnostic formative assessment of geometric conceptions in the middle grades and to develop instructional resources to assist teachers in addressing flawed or underdeveloped conceptions identified by the assessment.
Setting: The research will be conducted with schools of varying socioeconomic profiles from across the United States.
Population: The research participants will be 7th- and 8th-grade mathematics teachers and their students.
Assessment: The assessment will target sources of difficulties and misconceptions in middle school geometry, specifically properties of shapes, transformations, and measurement. Unlike current achievement and diagnostic tests, which provide information about a student's ability within a given domain, the Diagnostic Geometry Assessment will diagnose why students struggle with a given geometric concept, and it will provide teachers with instructional strategies and resources designed to address the targeted conceptions.
Research Design and Methods: The research team will conduct studies to examine the psychometric properties of the assessment, as well as the validity of inferences generated by the assessment of students' geometric conceptions. Research will be conducted both face-to-face and online. In the final pilot study of the assessment, 80 teachers will be randomly assigned to use the new assessment tool under one of four conditions: traditional feedback; traditional feedback and diagnostic feedback; traditional feedback, diagnostic feedback, and access to instructional resources; and traditional feedback and access to instructional resources.
Control Condition: Teachers in the final pilot study who are randomly assigned to the control condition will receive only traditional feedback from the Diagnostic Geometry Assessment (i.e., the percent of correct responses and item-level information that focuses on whether the student responded correctly) and will not receive diagnostic feedback or access to the instructional resources.
Key Measures: Key outcomes from the development and validation of the assessment include data indicating the reliability, unidimensionality, and the content-related, construct-related, criterion-related, and consequential validity of the assessment. Validity studies will examine the consistency of information provided by the Diagnostic Geometry Assessment by comparing that information to similar measures of geometric conceptions, and also through interviews using cognitive probes.
Data Analytic Strategy: Psychometric analyses will include test reliability (using Cronbach's alpha), factor analysis, and one- and three-parameter item response theory analyses. Data from the validity studies will be coded using an emergent coding method, with Cohen's kappa used to define acceptable inter-rater agreement rates.
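The two agreement and reliability statistics named above have standard closed forms. As a minimal sketch (not the project's actual analysis code), Cronbach's alpha can be computed from a students-by-items score matrix, and Cohen's kappa from two raters' categorical codes; the function names and data shapes here are illustrative assumptions:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' codes: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)  # observed agreement rate
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

For example, two coders agreeing on three of four codes where chance agreement is 0.5 yields kappa = 0.5, which falls below commonly used "acceptable agreement" thresholds (often 0.6 or higher).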
Related IES Projects: Diagnostic Inventories of Cognition in Education (DICE) (R305A170441), Bridging the Gap: Applying Algebra Cognition Research to Develop and Validate Diagnostic Classroom Algebra Testlets (R305H040099), The Universal Assessment System (UAS) (ED08CO0056)
Publications:
Masters, J. (2010). Automated Scoring of an Interactive Geometry Item: A Proof-of-Concept. The Journal of Technology, Learning and Assessment, 8(7): 1–39.
Shear, B. R., & Roussos, L. A. (2017). Validating a distractor-driven geometry test using a generalized diagnostic classification model. In Understanding and Investigating Response Processes in Validation Research (pp. 277–304).