Grant Closed

Improving Software and Methods for Estimating Diagnostic Classification Models and Evaluating Model Fit

NCER
Program: Statistical and Research Methodology in Education
Program topic(s): Early Career
Award amount: $224,996
Principal investigator: William Thompson
Awardee: University of Kansas Center for Research, Inc.
Year: 2021
Award period: 3 years (08/01/2021 - 07/31/2024)
Project type: Methodological Innovation
Award number: R305D210045

Purpose

The goals of this project were to develop new software for estimating and evaluating the log-linear cognitive diagnosis model (the R package measr) and to use that software in a simulation study evaluating the efficacy and efficiency of different model fit measures under a variety of conditions. Diagnostic classification models (DCMs) can provide fine-grained, actionable scores to guide instruction, detect intervention effects, and improve student learning. Despite these benefits, DCMs have yet to gain widespread use in applied research or operational settings, in part because most existing methods are either limited in their ability to fully assess absolute and relative model fit or are under-researched in the context of DCMs.

Project Activities

In the first stage of the project, the research team developed measr. The software estimates the log-linear cognitive diagnosis model by interfacing with the Stan probabilistic programming language, placing a powerful estimator behind a user-friendly interface that is free and open source. The software also includes functions for evaluating the absolute and relative fit of estimated models. In the second stage of the project, the team used the software to conduct Monte Carlo simulations assessing the efficiency and efficacy of various measures of absolute and relative model fit that have been proposed for evaluating DCMs. They refined the software per the simulation results and based on user testing that occurred at multiple points during software development.

Research plan

Throughout the process of developing the measr package, the research team sought feedback and input from a Project Advisory Committee (PAC) made up of experts in diagnostic modeling, software development, and applied research. The team also sought and received feedback from users of the software, both through GitHub issues and the training workshops. Feedback from the PAC and users was used to refine and update measr to ensure a positive user experience. The team also conducted a simulation study to investigate the effectiveness of different model fit metrics. The results of the simulation study were used to inform which model fit metrics are provided by default in the software, while still retaining the flexibility to calculate other metrics if requested by the user.

User Testing: The PAC, which consisted of active education researchers, provided feedback on functionality and the user interface throughout the development process. Their feedback was used to refine the package before measr was publicly released. Additionally, education researchers from outside the project team got hands-on experience with measr at two training workshops hosted at professional conferences. Following each workshop, participants were invited to complete a survey about the package's utility and their ability to use the software in their future research. All feedback received indicated a positive experience and high confidence in using the package independently.

Structured Abstract

The research team created an R package, measr, that can estimate and evaluate diagnostic classification models (DCMs) using Stan. The package provides a user-friendly interface and reasonable defaults, enabling practitioners and applied researchers to use these models in their work without requiring deep technical expertise. In addition to the R package, the team created a website (https://measr.info) to document measr's functionality and provide narrative vignettes and case studies to support users of the software. They also hosted training workshops (e.g., https://ncme2024.measr.info) to introduce researchers to DCMs and provide hands-on experience with the measr software. Finally, the research team conducted a simulation study with measr to investigate different methods for evaluating model fit and published articles (e.g., Thompson, 2023; https://doi.org/10.21105/joss.05742) to disseminate findings.
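A minimal sketch of the workflow described above, based on the function names in the measr documentation; the data objects (`responses`, `qmat`) are placeholders for a user's response data and Q-matrix, and running this sketch requires an installed Stan toolchain:

```r
library(measr)

# Estimate a log-linear cognitive diagnosis model (LCDM) with Stan.
# `responses` is a respondent-by-item data frame; `qmat` is the
# item-by-attribute Q-matrix (both are illustrative objects).
fit <- measr_dcm(
  data    = responses,
  qmatrix = qmat,
  resp_id = "resp_id",   # column identifying respondents
  item_id = "item_id",   # column identifying items
  type    = "lcdm",
  method  = "mcmc"
)

# Add absolute fit indices (M2, posterior predictive model checks)
# and relative fit criteria (LOO, WAIC) to the fitted model object.
fit <- add_fit(fit, method = c("m2", "ppmc"))
fit <- add_criterion(fit, criterion = c("loo", "waic"))

# Pull results out of the model object.
measr_extract(fit, "m2")
```

By storing fit indices on the model object with `add_fit()` and `add_criterion()`, the package lets users compute expensive checks once and retrieve them later with `measr_extract()`.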

Key outcomes

  • Products produced
    • measr software
    • measr documentation (e.g., function reference, vignettes)
  • Key findings
    • measr provides a user-friendly interface for estimating and evaluating DCMs, with features not available in other software
    • Bayesian methods for assessing model fit in DCMs provide a marked improvement over traditional methods based on maximum-likelihood estimation

Products and publications

The grant team published their findings and software developments in peer-reviewed journals and provided training workshops at a national conference and at Stats Camp to facilitate dissemination and use of measr. 

Project website:

measr software (R package)

Study registration:

measr R package

Publications:

Available citations for this award can be found in ERIC.

Thompson, W. J. (2023). measr: Bayesian psychometric measurement using Stan. Journal of Open Source Software, 8(91), 5742.

Thompson, W. J. (2024). measr: Bayesian psychometric measurement using ‘Stan’ (R package version 1.0.0) [Computer software]. The Comprehensive R Archive Network. https://doi.org/10.32614/CRAN.package.measr 

Additional project information

Additional Online Resources and Information:

  • measr software: https://doi.org/10.32614/CRAN.package.measr
  • Model estimation vignette: https://measr.info/articles/model-estimation
  • Model evaluation vignette: https://measr.info/articles/model-evaluation
  • Applied DCM case study: https://measr.info/articles/ecpe
  • NCME 2023 conference presentation: https://speakerdeck.com/wjakethompson/applieddiagnostic-classification-modeling-with-the-r-package-measr 
  • StanCon 2023 training workshop: https://stancon2023.measr.info
  • NCME 2024 training workshop: https://ncme2024.measr.info
  • useR! 2024 conference presentation: https://user2024.measr.info
  • IMPS 2024 conference presentation
    • Slides: https://imps2024.measr.info 
    • Conference paper: https://doi.org/10.35542/osf.io/ytjq9

Related projects

Expanding the Functionality and Accessibility of Software for Diagnostic Measurement

R305D240032

Questions about this project?

To answer additional questions about this project or provide feedback, please contact the program officer.

 

Tags

Data and Assessments
