
IES Grant

Title: Identifying Optimal Scoring Metrics and Prompt Type for Written Expression Curriculum-Based Measurement
Center: NCER
Year: 2019
Principal Investigator: Keller-Margulis, Milena
Awardee: University of Houston
Program: Literacy
Award Period: 3 Years (09/01/19–08/31/22)
Award Amount: $1,387,725
Type: Measurement
Award Number: R305A190100
Description:

Co-Principal Investigators: Gonzalez, Jorge; Mercer, Sterett; Zumbo, Bruno

Purpose: The purpose of this project is to develop a revised assessment tool, Automated Written Expression Curriculum-Based Measurement (aWE-CBM), by combining advances in automated text evaluation with revised WE-CBM procedures for students in 3rd through 5th grade. Developing aWE-CBM will better align scoring with writing development theory and improve the technical adequacy and feasibility of screening for writing in the upper elementary grades.

Project Activities: In Year 1, the researchers will use available data from writing samples to compare approaches to assessing writing quality, which will allow the team to gauge how much aWE-CBM improves on WE-CBM. They will also recruit schools to participate in the primary data collection of additional writing samples in Year 2. In Year 2, in addition to collecting data, the team will score all writing samples and again compare approaches to assessing their quality. In Year 3, the researchers will finalize the data analysis and plan for future work. The team plans to engage in dissemination activities during each year of the grant, with the most dissemination activity planned for Year 3.

Products: The main outcome of this grant will be an assessment approach that can function as a general outcome measure, serving as an index of overall writing performance that can be used to screen students at risk for poor writing outcomes. Dissemination activities center on journal manuscripts reporting the results. Some manuscripts will be published in peer-reviewed research journals, and others will appear in journals typically read by education professionals, including teachers, administrators, and assessment practitioners. The research team plans to present its findings at conferences and to develop a website for posting research briefs and technical reports. The team also intends to extend this project in future funded work.

Structured Abstract

Setting: The researchers will collect data from elementary school students in Houston, Texas, and Vancouver, British Columbia. Some of the data from Houston are already available, and original data collection will take place in both Houston and Vancouver.

Sample: Extant data from 7-minute narrative screening samples will be available to the project team from 145 students in 2nd through 5th grade. The team will also have 3-minute narrative samples and state writing test data from 161 4th graders. New data collection will include 15-minute narrative/story, informational, and argumentative/persuasive screening samples and standardized writing and math measures from 450 students in 3rd through 5th grade. In addition, 40 4th-grade students will complete handwritten and keyboarded responses at the winter time point.

Assessment: The team will develop a revised assessment tool, aWE-CBM, by combining advances in automated text evaluation with revised WE-CBM procedures for students in 3rd through 5th grade.

Research Design and Methods: The researchers will compare the convergent validity of composites based on automated text evaluation in aWE-CBM with that of traditional WE-CBM scoring and Project Essay Grade (PEG), using writing quality ratings on other screening samples, a statewide writing test, and two standardized assessments as criterion measures. They will also examine how many cross-genre samples are needed for reliable scores and whether validity differs for handwritten versus keyboarded responses.
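
To make this comparison concrete, the sketch below is a minimal illustration with made-up numbers and hypothetical column names (not the project's actual data or scoring output); it estimates convergent validity as the Pearson correlation between each scoring approach and a criterion writing measure for the same students.

```python
# Minimal sketch (hypothetical data and column names): convergent validity of
# each scoring approach, estimated as the Pearson correlation between that
# approach's score and a criterion writing measure for the same students.
import pandas as pd
from scipy.stats import pearsonr

# Each row is one student; columns hold scores from the scoring approaches
# compared in this project and one criterion measure (e.g., a state writing test).
scores = pd.DataFrame({
    "awe_cbm_composite":  [3.1, 4.0, 2.5, 3.8, 4.4, 2.9],
    "we_cbm_cws":         [22, 31, 18, 27, 35, 20],   # correct word sequences
    "peg_score":          [2.8, 3.9, 2.2, 3.5, 4.1, 2.7],
    "state_writing_test": [310, 402, 275, 388, 421, 298],
})

for method in ["awe_cbm_composite", "we_cbm_cws", "peg_score"]:
    r, p = pearsonr(scores[method], scores["state_writing_test"])
    print(f"{method}: r = {r:.2f} (p = {p:.3f})")
```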

Control Condition: There is no control condition.

Key Measures: Scoring methods include free or open-source programs (Coh-Metrix, ReaderBench), a commercial program (PEG), and traditional hand scoring. Criterion measures include the Test of Written Language (4th edition), the Wechsler Individual Achievement Test (3rd edition), and the State of Texas Assessments of Academic Readiness (STAAR).

Data Analytic Strategy: Applied predictive modeling will be used to form aWE-CBM composites to predict quality ratings on screening samples. The predicted quality scores will be correlated with convergent and discriminant validity measures, and tests of differences between correlations will determine whether validity estimates vary by scoring method, genre, and sample modality. Reliability with varying numbers of screening samples will be analyzed based on generalizability theory.
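
One way to read "applied predictive modeling" here is sketched below, under the assumption of a cross-validated ridge regression and simulated data (the project's actual model, features, and data are not described in this abstract): automated text-evaluation metrics are mapped onto human quality ratings, the out-of-fold predictions serve as the aWE-CBM composite, and that composite is then correlated with a criterion measure.

```python
# Hedged sketch (assumptions: simulated data, hypothetical feature meanings).
# Automated text metrics -> cross-validated ridge regression predicting human
# quality ratings; out-of-fold predictions form the aWE-CBM composite, which is
# then correlated with an external criterion measure.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200

# Simulated automated text-evaluation metrics (e.g., cohesion, syntactic
# complexity, lexical diversity) for n student writing samples.
X = rng.normal(size=(n, 3))
# Simulated human holistic quality ratings loosely related to the metrics.
quality = X @ np.array([0.6, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)
# Simulated criterion measure (e.g., a standardized writing test score).
criterion = 0.7 * quality + rng.normal(scale=0.7, size=n)

# Out-of-fold predicted quality scores serve as the aWE-CBM composite.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
composite = cross_val_predict(model, X, quality, cv=5)

r_quality, _ = pearsonr(composite, quality)      # accuracy of the composite
r_criterion, _ = pearsonr(composite, criterion)  # convergent validity estimate
print(f"composite vs. human quality ratings: r = {r_quality:.2f}")
print(f"composite vs. criterion measure:     r = {r_criterion:.2f}")
```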

