Grant Closed

Measuring Oral Reading Fluency: Computerized Oral Reading Evaluation (CORE)

NCER
Program: Education Research Grants
Program topic(s): Literacy
Award amount: $1,599,289
Principal investigator: Joseph Nese
Awardee:
University of Oregon
Year: 2014
Project type:
Measurement
Award number: R305A140203

Purpose

The University of Oregon sought to develop and validate a new computerized assessment system of oral reading fluency (ORF), called Computerized Oral Reading Evaluation (CORE), for use with students in Grades 2 through 4. CORE combines an automated scoring algorithm based on a speech recognition engine, shorter passages, and a latent variable psychometric model, collecting word-level data that are used in a model-based approach to scale ORF scores with greater reliability than traditional ORF scores. CORE has the potential to reduce (1) human administration errors, by standardizing administration setting, delivery, and scoring; (2) the time cost of ORF administration, by allowing small-group or whole-classroom testing; and (3) the resource cost of training staff to administer and score the assessment. CORE is designed to address the practical and psychometric inadequacies of traditional ORF assessments for screening and progress monitoring, in order to better support instructional decision-making and improve student reading outcomes.

Project Activities

The project had three phases. During Phase 1, the project team developed 330 passages (long, medium, and short) for oral reading fluency assessment. During Phase 2, the project team developed and validated a model for oral reading fluency that incorporates response time and response accuracy and estimates a model-based words correct per minute (WCPM) parameter on the same scale as traditional ORF WCPM scores. In Phase 3, the team compared the consequential validity properties of CORE with those of a traditional oral reading fluency assessment (easyCBM).

Structured Abstract

Setting

The project was conducted with public elementary school students in Grades 2 through 4 in 17 schools across five school districts (two towns, two suburbs, and one city) in Oregon and Washington.

Sample

  Phase I participants included approximately 59 teachers and 978 students. Phase II participants included approximately 121 teachers and 2,897 students. Phase III participants included approximately 108 teachers and 2,618 students.
Assessment
The Computerized Oral Reading Evaluation (CORE) is an oral reading fluency assessment for students in Grades 2 through 4 that incorporates automatic speech recognition (ASR) and a latent variable psychometric model. CORE's speech recognition software minimizes or eliminates administration errors by standardizing delivery and setting and by automating scoring. CORE includes shorter passages (50 to 85 words) that are equated, horizontally scaled, and vertically linked, with a scale metric designed to reduce the standard error of measurement, meet psychometric standards for reliability and validity, and yield scores sensitive to instructional change.

Research design and methods

The project team developed 330 passages for oral reading fluency assessment: 110 at each of Grades 2 through 4, with 20 long passages (85 ± 5 words), 30 medium passages (50 ± 5 words), and 60 short passages (25 ± 5 words) per grade. Comparisons were made across scoring methods (human scores, traditional human ORF scores, and automatic speech recognition scoring) and passage lengths (CORE passages vs. traditional oral reading fluency passages). The project team then developed and validated a binomial-lognormal joint factor model for oral reading fluency that incorporates response time and response accuracy. From this model the team derived a model-based words correct per minute (WCPM) parameter, which is on the same scale as traditional ORF WCPM scores, and developed computation algorithms using maximum likelihood estimation and Bayesian MCMC estimation methods, including standard errors for the estimates. The model was used to estimate and equate passage-level parameters for the 150 medium and long CORE passages, and the equated passage parameters were applied to estimate the model-based WCPM scores and their standard errors. The team then conducted a repeated measures study comparing the consequential validity properties of CORE with those of a traditional oral reading fluency assessment (easyCBM) for students in Grades 2 through 4, including student growth trajectories, standard errors and reliability, and predictive and concurrent validity using state reading test scores and easyCBM comprehension scores.
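The word-level scoring described above ultimately targets the familiar words-correct-per-minute metric, which is the scale the model-based estimate is linked to. As a minimal illustration (not the project's actual code), the traditional WCPM computation looks like:

```python
def wcpm(words_correct: int, reading_time_seconds: float) -> float:
    """Traditional WCPM: correctly read words scaled to a per-minute rate."""
    if reading_time_seconds <= 0:
        raise ValueError("reading time must be positive")
    return words_correct * 60.0 / reading_time_seconds

# Example: a student reads 48 of 50 words correctly in 40 seconds.
print(wcpm(48, 40.0))  # 72.0 WCPM
```

The model-based WCPM parameter in CORE is estimated from latent accuracy and speed factors rather than computed directly this way, but it is scaled so that its values are interpretable on this same metric.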

Control condition

There is no control condition for this study.

Key measures

The easyCBM ORF assessments were used as traditional ORF assessments to establish passage content and criterion validity during Phase I, and as a comparison for consequential validity during Phase III. State reading test scores (for students in Grades 3 and 4) and easyCBM reading comprehension scores (for all students in Grades 2 through 4) were used for predictive and concurrent validity analyses in Phase III. The team developed teacher questionnaires to measure teachers' perceptions of (a) feasibility, desirability, and passage length (Phase I); (b) the traditional ORF assessment and the CORE system and assessment procedures (Phase III); and (c) the utility and interpretability of CORE score reporting (Phase III).

Data analytic strategy

Linear mixed-effects models were used to validate automatic speech recognition (ASR) scoring and CORE passage length. The team developed a two-part model with components for reading accuracy and reading speed. The accuracy component is a binomial-count factor model, in which accuracy is measured by the number of correctly read words in the passage. The speed component is a lognormal factor model, in which speed is measured by passage reading time. Parameters in the accuracy and speed components are jointly modeled and estimated. Predictive modeling was used to analyze concurrent and predictive validity, and latent growth curve modeling was used to model student growth.
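The two-part structure can be sketched as a simulation, assuming an illustrative logistic link for accuracy and made-up parameter values; the project's actual model specification and estimation methods are given in the publications listed below.

```python
# Illustrative sketch of the two-part ORF model: a binomial count of
# correctly read words (accuracy) plus a lognormal passage reading time
# (speed). Parameter names and links here are assumptions for
# demonstration, not the project's specification.
import math
import random

random.seed(1)

def simulate_passage(n_words, theta_acc, theta_speed, difficulty, intensity):
    """Simulate one passage attempt for one student.

    theta_acc   : latent accuracy ability
    theta_speed : latent reading speed
    difficulty  : passage difficulty (higher -> more errors)
    intensity   : passage time intensity (higher -> longer reading time)
    """
    # Accuracy component: binomial count of correctly read words,
    # with a logistic link from ability minus difficulty.
    p_correct = 1.0 / (1.0 + math.exp(-(theta_acc - difficulty)))
    words_correct = sum(random.random() < p_correct for _ in range(n_words))

    # Speed component: lognormal reading time in seconds.
    log_time = intensity - theta_speed + random.gauss(0.0, 0.3)
    seconds = math.exp(log_time)

    # Observed score on the traditional WCPM scale.
    return words_correct, seconds, words_correct * 60.0 / seconds

wc, secs, score = simulate_passage(50, 2.0, 0.5, -1.0, 3.5)
print(wc, round(secs, 1), round(score, 1))
```

In the actual project, the latent accuracy and speed parameters are estimated jointly from observed word-level data (the reverse of this simulation), which is what yields the model-based WCPM estimate and its smaller standard error.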

Key outcomes

The main findings of this measurement study are summarized under Supplemental information below.

People and institutions involved

IES program contact(s)

Allen Ruby

Associate Commissioner for Policy and Systems
NCER

Products and publications

Nese, J. F. T. & Kamata, A. (2020). Evidence for automated scoring and shorter passages of CBM-R in early elementary school. School Psychology. Advance online publication. https://psycnet.apa.org/doi/10.1037/spq0000415

Kara, Y., Kamata, A., Potgieter, C., & Nese, J. F. T. (2020). Estimating model-based oral reading fluency: A Bayesian approach. Educational and Psychological Measurement, 80(5), 847-869.

Nese, J. F. T. & Kamata, A. (2020). Addressing the large standard error of traditional CBM-R: Estimating the conditional standard error of a model-based estimate of CBM-R. Assessment for Effective Intervention. https://doi.org/10.1177/1534508420937801

Potgieter, C. J., Kamata, A., & Kara, Y. (2017). An EM algorithm for estimating an oral reading speed and accuracy model. arXiv preprint arXiv:1705.10446. (available at https://arxiv.org/abs/1705.10446).

Project website:

https://jnese.github.io/core-blog/

Related projects

A Comprehensive Measure of Reading Fluency: Uniting and Scaling Accuracy, Rate, and Prosody

R305A200018

Developing Computational Tools for Model-Based Oral Reading Fluency Assessments

R305D200038

Supplemental information

Co-Principal Investigator: Kamata, Akihito

  • Automatic speech recognition (ASR) can provide WCPM scores as reliable as those of expert human scorers; shorter passages perform comparably to traditional passages read for one minute; and both can be used in schools as part of an ORF assessment system (Nese & Kamata, 2020).
  • Researchers generated a two-part binomial-lognormal joint factor model for ORF that includes components for reading accuracy and reading speed, and computation algorithms by maximum likelihood estimation (Potgieter, Kamata, & Kara, 2017) and by Bayesian MCMC estimation methods, including their standard errors (Kara, Kamata, Potgieter, & Nese, 2020).
  • Researchers generated a model-based words correct per minute (WCPM) parameter from the two-part binomial-lognormal joint factor model, which is on the same scale as traditional ORF WCPM scores. The standard error for the CORE model-based WCPM estimates was substantially smaller than the reported standard errors of traditional ORF systems, especially for scores at/below the 25th percentile. A large proportion of sample scores, and an even larger proportion of scores at/below the 25th percentile (about 99%) had a smaller standard error than the reported standard errors of traditional systems (Nese & Kamata, 2020).

Questions about this project?

To answer additional questions about this project or provide feedback, please contact the program officer.

 

Tags

Reading, Education Technology, Data and Assessments

