Identifying Optimal Scoring Metrics and Prompt Type for Written Expression Curriculum-Based Measurement

NCER
Program: Education Research Grants
Program topic(s): Literacy
Award amount: $1,387,725
Principal investigator: Milena Keller-Margulis
Awardee: University of Houston
Year: 2019
Project type: Measurement
Award number: R305A190100

Purpose

The purpose of this project is to develop a revised assessment tool, Automated Written Expression Curriculum-Based Measurement (aWE-CBM), by combining advances in automated text evaluation with revised written expression curriculum-based measurement (WE-CBM) procedures for students in 3rd through 5th grade. Developing aWE-CBM will better align scoring with theories of writing development and improve the technical adequacy and feasibility of writing screening in the upper elementary grades.

Project Activities

In Year 1 of the grant, the researchers will use extant writing samples to compare approaches to assessing writing quality, allowing the team to gauge how much aWE-CBM improves on traditional WE-CBM. They will also recruit schools to participate in the Year 2 collection of additional writing samples. In Year 2, alongside that data collection, the team will score all writing samples and again compare the scoring approaches. In Year 3, the researchers will finalize the data analysis and plan future work. The team plans to engage in dissemination activities during each year of the grant, with most dissemination concentrated in Year 3.

Structured Abstract

Setting

The researchers will collect data from elementary school students in Houston, Texas, and Vancouver, British Columbia. Some of the data from Houston are already available, and original data collection will take place in both Houston and Vancouver.

Sample

Extant data from 7-minute narrative screening samples will be available to the project team from 145 students in 2nd-5th grades. The team will also have 3-minute narrative samples and state writing test data from 161 4th-graders. New data collection will include 15-minute narrative/story, informational, and argumentative/persuasive screening samples and standardized writing and math measures from 450 students in 3rd-5th grades. In addition, 40 4th-grade students will complete both handwritten and keyboarded responses at the winter time point.

Assessment

The team will develop aWE-CBM by combining advances in automated text evaluation with revised WE-CBM procedures for students in 3rd through 5th grade.

Research design and methods

The researchers will compare the convergent validity of aWE-CBM composites based on automated text evaluation against traditional WE-CBM scoring and Project Essay Grade (PEG), using writing quality ratings on other screening samples, a statewide writing test, and two standardized assessments as criterion measures. They will also analyze the number of cross-genre samples needed for reliable scores and the validity of handwritten versus keyboarded responses.
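
The abstract does not name the specific procedure for these comparisons, but a standard choice for comparing two dependent validity coefficients that share a criterion (for example, whether aWE-CBM and PEG scores correlate differently with the same state writing test) is Williams' (1959) t test for dependent correlations. The sketch below is a minimal illustration under that assumption; the function name and input values are hypothetical.

```python
import numpy as np
from scipy import stats

def williams_t(r12, r13, r23, n):
    """Williams' t test for two dependent correlations sharing a variable.

    r12: correlation of scoring method A with the criterion
    r13: correlation of scoring method B with the criterion
    r23: correlation between the two scoring methods
    n:   number of students
    Returns the t statistic and two-tailed p value on n - 3 df.
    """
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # |R|
    rbar = (r12 + r13) / 2
    t = (r12 - r13) * np.sqrt(
        ((n - 1) * (1 + r23))
        / (2 * ((n - 1) / (n - 3)) * det + rbar**2 * (1 - r23) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    return t, p

# Hypothetical values: two scoring methods that correlate .70 with
# each other, with validity coefficients of .62 and .48 (n = 161).
t, p = williams_t(r12=0.62, r13=0.48, r23=0.70, n=161)
print(f"t(158) = {t:.2f}, p = {p:.4f}")
```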

Control condition

There is no control condition.

Key measures

Scoring methods include free or open-source automated text evaluation programs (Coh-Metrix, ReaderBench), a commercial system (PEG), and traditional hand scoring. Criterion measures include the Test of Written Language (4th edition), the Wechsler Individual Achievement Test (3rd edition), and the State of Texas Assessments of Academic Readiness (STAAR).

Data analytic strategy

Applied predictive modeling will be used to form aWE-CBM composites that predict quality ratings on screening samples. The predicted quality scores will be correlated with convergent and discriminant validity measures, and tests of differences between correlations will determine whether validity estimates vary by scoring method, genre, and sample modality (handwritten versus keyboarded). Reliability with varying numbers of screening samples will be analyzed using generalizability theory.
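
To make the generalizability-theory piece concrete, the sketch below estimates variance components for a one-facet, fully crossed persons-by-samples design and projects the relative G coefficient for different numbers of screening samples (a D study). The design, function names, and simulated data are illustrative assumptions; the project may fit a more elaborate model.

```python
import numpy as np

def g_study(scores):
    """Variance components for a fully crossed persons x samples design.

    scores: 2-D array, rows = students, columns = screening samples.
    Returns (var_person, var_sample, var_residual) from ANOVA mean squares.
    """
    n_p, n_s = scores.shape
    grand = scores.mean()
    ms_p = n_s * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    ms_s = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_s - 1)
    ss_res = (((scores - grand) ** 2).sum()
              - (n_p - 1) * ms_p - (n_s - 1) * ms_s)
    ms_res = ss_res / ((n_p - 1) * (n_s - 1))
    return (ms_p - ms_res) / n_s, (ms_s - ms_res) / n_p, ms_res

def d_study(var_p, var_res, n_samples):
    """Projected relative G coefficient when averaging over n_samples."""
    return var_p / (var_p + var_res / n_samples)

# Simulated stand-in for the real data: 450 students x 3 genre samples.
rng = np.random.default_rng(0)
true_skill = rng.normal(0, 1, size=(450, 1))
scores = true_skill + rng.normal(0, 1, size=(450, 3))
var_p, var_s, var_res = g_study(scores)
for k in (1, 2, 3, 4):
    print(f"{k} screening sample(s): G = {d_study(var_p, var_res, k):.2f}")
```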

People and institutions involved

IES program contact(s)

Allen Ruby

Associate Commissioner for Policy and Systems
NCER

Products and publications

Products: The main outcome of this grant will be an assessment approach that can function as a general outcome measure, serving as an index of overall writing performance for screening students at risk for poor writing outcomes. Dissemination activities center on journal manuscripts reporting the results. Some manuscripts will be published in peer-reviewed research journals, and others will appear in journals typically read by education professionals, including teachers, administrators, and assessment practitioners. The research team plans to present its findings at conferences and to develop a website for posting research briefs and technical reports. The team also intends to extend this project in future funded work.

Supplemental information

Co-Principal Investigators: Gonzalez, Jorge; Mercer, Sterett; Zumbo, Bruno

Questions about this project?

For additional questions about this project or to provide feedback, please contact the program officer.

 

Tags

Data and Assessments, Education Technology, Writing
