Project Activities
Researchers developed and validated WordChomp, an assessment that produces reliable diagnostic feedback to teachers about their students' morphological knowledge. In addition, the research team investigated measurement challenges and worked toward advancing methods for the development of measurement tools.
Structured Abstract
Setting
The study took place in suburban elementary schools in Florida and Arizona.
Sample
The study included two large samples of students in grades 3 to 5 from Florida and Arizona that, taken together, were nationally representative in terms of ethnicity, students with disabilities, English language learners, and free or reduced-price lunch status. The study also included multiple smaller samples of Florida teachers and students for procedures such as beta testing and expert review of materials.
Assessment
Researchers developed an assessment called WordChomp for use with upper elementary students. WordChomp produces reliable diagnostic feedback to teachers about their students' morphological knowledge. Morphological knowledge consists of various teachable problem-solving skills, such as recognizing morphemes in words, comprehending the meaning of morphemes in words, and changing the meaning of sentences through morphemes. The project focused on three kinds of morphemes: prefixes, suffixes, and roots (sometimes referred to as bases or stems). The tool assesses students' strengths and weaknesses in these areas so that teachers can identify underlying challenges to reading success and design instruction accordingly. The assessment is delivered adaptively in a technology-based application, and scores are calibrated under an explanatory diagnostic classification model.
Research design and methods
The team developed the diagnostic assessment using selected evidence-centered design approaches, along with multiple waves of data collection, to gather validity evidence related to content, response processes, internal structure, external criteria, fairness, and test use.
Researchers completed a pilot test, a field test, and a series of validity studies. First, the researchers worked with teachers to finalize the domain of measurement, created a large item bank, conducted expert reviews of items, conducted a response process validity study, and conducted a fairness study. Then, the researchers piloted WordChomp with N = 493 students. Lastly, the researchers developed the psychometric model for scoring and adapting the assessment, conducted a field test with N = 190 students, and conducted an external validity study with N = 582 students.
Control condition
There was no control condition for this study.
Key measures
Key measures used to establish external validity evidence included the Florida Assessment of Student Thinking, the Gates-MacGinitie assessments of vocabulary and reading comprehension, the Being a Reader classroom assessments of morphology and spelling, the CORE Phonics assessments of letter knowledge and word recognition, and the CORE Reading Maze Comprehension Test.
Data analytic strategy
The research team used an explanatory diagnostic classification model for establishing item parameters for the item bank, scoring student data, and developing adaptive algorithms.
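As a rough illustration of how a diagnostic classification model turns right/wrong item responses into skill-level feedback, the sketch below computes posterior mastery probabilities for three binary morphology skills under a simple DINA-style model with a flat prior. The Q-matrix, slip and guess values, and response pattern are hypothetical placeholders for illustration only; they are not the WordChomp item bank or the project's explanatory model.

```python
import numpy as np
from itertools import product

# Hypothetical example: three binary morphology skills and a five-item Q-matrix.
# All numbers are placeholders for illustration, not WordChomp parameters.
SKILLS = ["prefix", "suffix", "root"]
Q = np.array([
    [1, 0, 0],   # item 1 requires the prefix skill
    [0, 1, 0],   # item 2 requires the suffix skill
    [0, 0, 1],   # item 3 requires the root skill
    [1, 1, 0],   # item 4 requires prefix and suffix
    [1, 0, 1],   # item 5 requires prefix and root
])
slip = np.full(Q.shape[0], 0.10)    # P(incorrect | all required skills mastered)
guess = np.full(Q.shape[0], 0.20)   # P(correct | some required skill missing)

def posterior_mastery(responses, q=Q, s=slip, g=guess):
    """Posterior over skill-mastery profiles under a DINA-style model, flat prior."""
    profiles = np.array(list(product([0, 1], repeat=q.shape[1])))  # all 2^K profiles
    post = np.full(len(profiles), 1.0 / len(profiles))
    for j, x in enumerate(responses):
        eta = np.all(profiles >= q[j], axis=1)          # has every required skill?
        p_correct = np.where(eta, 1.0 - s[j], g[j])
        post *= p_correct if x == 1 else 1.0 - p_correct
    post /= post.sum()
    return profiles, post

# One student's right/wrong pattern on the five items (made up for the sketch).
profiles, post = posterior_mastery([1, 0, 1, 0, 1])
marginal = profiles.T @ post                             # P(mastery) for each skill
print(dict(zip(SKILLS, marginal.round(2))))
# The same posterior could drive adaptive item selection, e.g., by choosing the
# next item expected to reduce uncertainty about the skill profile the most.
```

In an explanatory diagnostic classification model, item and person covariates additionally help explain the slip, guess, or mastery parameters; here they are fixed constants purely to keep the sketch short.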
Key outcomes
The main outcomes of this project are as follows:
- The researchers created a framework for developing fair assessments in which fairness is treated as an argument for which evidence can be collected by working with assessment stakeholders (Huggins-Manley et al., 2022).
- The researchers developed two statistical approaches to scoring classroom assessment data. First, they developed a method to check the validity of model assumptions about which items on an assessment measure which student traits, such as which items in Project DIMES measure which of students' morphology skills (da Silva et al., 2024); a toy illustration of this Q-matrix idea appears after this list. Second, they developed a method for estimating student trait scores from assessment data in which students are given a second chance to answer an item correctly (Kwon et al., 2024).
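As a toy illustration of the Q-matrix idea behind the first method, the sketch below simulates responses in which one item is listed under the wrong skill and then checks, for each item, which skill's rest-score it correlates with most strongly. The Q-matrix, simulation settings, and correlation-based check are hypothetical and far simpler than the empirical validation method of da Silva et al. (2024); they only show what a Q-matrix misspecification can look like in data.

```python
import numpy as np

rng = np.random.default_rng(0)
SKILLS = ["prefix", "suffix", "root"]

# Hypothetical Q-matrix (rows = items, columns = skills).
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 0],   # item 3 is listed as a suffix item in this toy Q-matrix...
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 1],
    [1, 0, 0],
])
true_Q = Q.copy()
true_Q[2] = [0, 0, 1]   # ...but the simulation makes it depend on the root skill

# Simulate skill mastery and conjunctive (DINA-style) responses for 2,000 students.
mastery = rng.integers(0, 2, size=(2000, 3))
eta = np.all(mastery[:, None, :] >= true_Q[None, :, :], axis=2)
responses = (rng.random(eta.shape) < np.where(eta, 0.9, 0.2)).astype(int)

# For each item, find the skill whose rest-score (sum of the *other* items
# assigned to that skill) correlates most strongly with the item's responses.
for j in range(Q.shape[0]):
    corrs = []
    for k in range(len(SKILLS)):
        others = (Q[:, k] == 1) & (np.arange(Q.shape[0]) != j)
        rest = responses[:, others].sum(axis=1)
        corrs.append(np.corrcoef(responses[:, j], rest)[0, 1])
    best = int(np.argmax(corrs))
    listed = "listed" if Q[j, best] == 1 else "NOT listed"
    print(f"item {j + 1}: most related to the {SKILLS[best]} skill ({listed} in the Q-matrix)")
```

With these settings the misspecified item typically surfaces as most related to a skill its Q-matrix row omits, which is the kind of mismatch a formal validation method would test statistically.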
People and institutions involved
IES program contact(s)
Project contributors
Products and publications
Project website:
Publications:
Huggins-Manley, A.C., Booth, B.M., and D’Mello, S. (2022). Toward argument-based fairness with an application to AI-enhanced educational assessments. Journal of Educational Measurement, 59, 362-388.
da Silva, M., Huggins-Manley, A.C., and Benedict, A. E. (2024). A method of empirical Q-matrix validation for multidimensional item response theory. Applied Measurement in Education, 37, 177-190.
Kwon, T., Huggins-Manley, A.C., Templin, J., and Zheng, M. (2024). Modeling hierarchical attribute structures in diagnostic classification models with multiple attempts. Journal of Educational Measurement, 61, 198-218.
Related projects
Questions about this project?
For answers to additional questions about this project or to provide feedback, please contact the program officer.