Grant Closed

Project DIMES: Diagnostic Instrument for Morphology of Elementary Students

NCER
Program: Education Research Grants
Program topic(s): Literacy
Award amount: $1,290,952
Principal investigator: Anne Corinne Huggins-Manley
Awardee: University of Florida
Year: 2019
Award period: 5 years (07/01/2019 - 06/30/2024)
Project type: Measurement
Award number: R305A190079

Purpose

The purpose of the project was to develop a computer-adaptive, diagnostic assessment of teachable morphological skills for students in grades 3 to 5. The development of morphological skills is essential to students' literacy growth because knowledge of morphemes (for example, root words like nation, prefixes like inter-, and suffixes like -al) supports students' reading achievement: it strengthens their ability to read words and access the meaning of multisyllabic words, which in turn supports their reading comprehension.

Project Activities

Researchers developed and validated WordChomp, an assessment that produces reliable diagnostic feedback to teachers about their students' morphological knowledge. In addition, the research team investigated measurement challenges and worked to advance methods for developing measurement tools.


Structured Abstract

Setting

The study took place in suburban elementary schools in Florida and Arizona.

Sample

The study included two large samples of students in grades 3 to 5 from Florida and Arizona that, taken together, were nationally representative in terms of ethnicity, students with disabilities, English language learners, and free/reduced-price lunch status. The study also included multiple smaller samples of Florida teachers and students for procedures such as beta testing and expert review of materials.

Assessment

Researchers developed an assessment called WordChomp for use with upper elementary students. WordChomp produces reliable diagnostic feedback to teachers about their students' morphological knowledge. Morphological knowledge consists of various teachable problem-solving skills, such as recognizing morphemes in words, comprehending the meaning of morphemes in words, and changing the meaning of sentences through morphemes. The project focused on three kinds of morphemes: prefixes, suffixes, and roots (sometimes referred to as bases or stems). The tool assesses students' strengths and weaknesses in these areas so that teachers can identify underlying challenges to reading success and design instruction accordingly. The assessment is delivered adaptively in a technology-based application, and scores are calibrated under an explanatory diagnostic classification model.
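
For context on that modeling choice, the display below sketches the item response function of a diagnostic classification model in log-linear (LCDM) form. This is a representative member of the DCM family, not necessarily the project's exact explanatory parameterization. For item j with Q-matrix entries q_{jk} and student i with skill-mastery indicators alpha_{ik} in {0, 1}:

```latex
\operatorname{logit} P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = \lambda_{j,0}
  + \sum_{k} \lambda_{j,k}\, q_{jk}\, \alpha_{ik}
  + \sum_{k < k'} \lambda_{j,kk'}\, q_{jk}\, q_{jk'}\, \alpha_{ik}\, \alpha_{ik'}
```

The intercept gives the response probability for a student who has mastered none of the skills the item requires, the main effects capture the gain from each mastered skill, and the interaction terms capture skills that pay off jointly. In general, an explanatory DCM additionally relates these item parameters, or the mastery profiles themselves, to covariates such as item features.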


Research design and methods

The team developed the diagnostic assessment using select evidence-centered design approaches, along with multiple waves of data collection, to address content validity evidence, response process validity evidence, internal structure validity evidence, external criterion validity evidence, fairness evidence, and validity evidence related to test use.

Researchers completed a pilot test, a field test, and a series of validity studies. First, the researchers worked with teachers to finalize the domain of measurement, created a large item bank, conducted expert reviews of items, conducted a response process validity study, and conducted a fairness study. Then, the researchers piloted WordChomp with N = 493 students. Lastly, the researchers developed the psychometric model for scoring and adapting the assessment, conducted a field test with N = 190 students, and conducted an external validity study with N = 582 students.

Control condition

There was no control condition for this study.

Key measures

To establish external validity evidence, key measures included the Florida Assessment of Student Thinking, the Gates-MacGinitie assessments of vocabulary and reading comprehension, the Being a Reader classroom assessments of morphology and spelling, the CORE Phonics assessments of letter knowledge and word recognition, and the CORE Reading Maze Comprehension Test.

Data analytic strategy

The research team used an explanatory diagnostic classification model to establish item parameters for the item bank, score student data, and develop the adaptive algorithms.
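
To make this concrete, the sketch below scores one response pattern under the DINA model, the simplest member of the DCM family, and reports the per-skill mastery probabilities a diagnostic report would surface. The Q-matrix, item parameters, and skill labels are hypothetical stand-ins; the project's calibrated model is an explanatory DCM rather than this simplified form.

```python
import itertools

import numpy as np

# Hypothetical Q-matrix: rows are items, columns are the three skill
# areas named in the abstract (prefix, suffix, root). A 1 means the
# item requires that skill; the entries here are illustrative only.
Q = np.array([
    [1, 0, 0],  # item 1: prefix only
    [0, 1, 0],  # item 2: suffix only
    [0, 0, 1],  # item 3: root only
    [1, 1, 0],  # item 4: prefix and suffix together
])

# Illustrative DINA parameters (not the project's calibrated values):
# guess[j] = P(correct | a required skill is missing)
# slip[j]  = P(incorrect | all required skills are mastered)
guess = np.array([0.20, 0.20, 0.25, 0.15])
slip = np.array([0.10, 0.10, 0.15, 0.10])

# All 2^3 = 8 possible mastery profiles over the three skills.
profiles = np.array(list(itertools.product([0, 1], repeat=Q.shape[1])))


def likelihood(responses, profile):
    """P(response pattern | mastery profile) under the DINA model."""
    # eta[j] = True when the profile masters every skill item j requires.
    eta = np.all(profile >= Q, axis=1)
    p_correct = np.where(eta, 1 - slip, guess)
    return np.prod(np.where(responses == 1, p_correct, 1 - p_correct))


# Example response pattern: items 1, 2, and 4 correct; item 3 incorrect.
responses = np.array([1, 1, 0, 1])
prior = np.full(len(profiles), 1.0 / len(profiles))  # uniform prior
posterior = prior * np.array([likelihood(responses, p) for p in profiles])
posterior /= posterior.sum()

# Per-skill mastery probabilities, marginalized over the posterior,
# are the kind of diagnostic feedback a teacher would see.
for k, skill in enumerate(["prefix", "suffix", "root"]):
    prob = posterior[profiles[:, k] == 1].sum()
    print(f"P({skill} skill mastered) = {prob:.2f}")
```

In an adaptive setting, a posterior of this kind would be updated after each response and used to select the next item that is most informative about the still-uncertain skills.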

Key outcomes

The main outcomes of this project are as follows:

  • A framework for developing fair assessments in which fairness is treated as an argument for which evidence can be collected in collaboration with assessment stakeholders (Huggins-Manley et al., 2022).
  • Two statistical approaches to scoring classroom assessment data. First, the researchers developed a method for checking model assumptions about which items on an assessment measure which student traits, such as which items in Project DIMES measure which morphology skills (da Silva et al., 2024). Second, the researchers developed a method for estimating student trait scores from assessments that give students a second chance to answer an item correctly (Kwon et al., 2024); a sketch of this sequential structure follows the list.
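
As a concrete reading of the second-chance scoring problem, the display below treats an item that allows two attempts as a sequential outcome for student i on item j. This is a hedged sketch of one natural formulation, not necessarily the exact parameterization in Kwon et al. (2024):

```latex
P(\text{correct on attempt 1}) = \pi_{j1}(\boldsymbol{\alpha}_i), \qquad
P(\text{correct on attempt 2}) = \bigl(1 - \pi_{j1}(\boldsymbol{\alpha}_i)\bigr)\,\pi_{j2}(\boldsymbol{\alpha}_i), \qquad
P(\text{never correct}) = \bigl(1 - \pi_{j1}(\boldsymbol{\alpha}_i)\bigr)\bigl(1 - \pi_{j2}(\boldsymbol{\alpha}_i)\bigr)
```

Here each pi_{jt} is a diagnostic classification model item response function for attempt t, so both attempts inform the same mastery profile while the second attempt carries its own item parameters.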

People and institutions involved

IES program contact(s)

Christina Chhin

Education Research Analyst
NCER

Project contributors

Amanda Goodwin

Co-principal investigator

Jonathan Templin

Co-principal investigator

Amber Benedict

Co-principal investigator

Products and publications

WordChomp: https://wordchomp.org/

Project website: https://education.ufl.edu/dimes/

Publications:

Huggins-Manley, A.C., Booth, B.M., and D’Mello, S. (2022). Toward argument-based fairness with an application to AI-enhanced educational assessments. Journal of Educational Measurement, 59, 362-388.

da Silva, M., Huggins-Manley, A.C., and Benedict, A. E. (2024). A method of empirical Q-matrix validation for multidimensional item response theory. Applied Measurement in Education, 37, 177-190.

Kwon, T., Huggins-Manley, A.C., Templin, J., and Zheng, M. (2024). Modeling hierarchical attribute structures in diagnostic classification models with multiple attempts. Journal of Educational Measurement, 61, 198-218.   

Related projects

Project Coordinate: Increasing Coordinated Use of Evidence-Based Practices for Improving Word Study in an RTI Framework for Teams of 4th Grade Teachers

R324A170135

Morphological Awareness Computer Adaptive Testing Project

R305A150199

Questions about this project?

For additional questions about this project, or to provide feedback, please contact the program officer.


Tags

Literacy, Reading

