
IES Grant

Title: Validating Automated Measures of Student Writing and the Student Writing Process to Help Classroom Teachers Implement Formative Assessment Practices
Center: NCER
Year: 2021
Principal Investigator: Mitros, Piotr
Awardee: Educational Testing Service (ETS)
Program: Literacy
Award Period: 4 years (07/01/2021 – 06/30/2025)
Award Amount: $1,999,208
Type: Measurement
Award Number: R305A210297

Co-Principal Investigators: Deane, Paul; Wylie, Caroline; Lynch, Collin

Purpose: The research team will develop and validate an open-source, digital formative writing assessment dashboard for middle school language arts classrooms, exploiting advances in automated writing evaluation and writing-process analysis to surface actionable metrics that teachers can use to determine next steps in instruction. Current literature indicates that writing instruction is most effective when teachers use formative assessment techniques to track student performance while students are acquiring new writing strategies. However, formative assessment can be difficult to implement in writing classrooms because much of the student writing happens outside of class. The goal of this project is to increase both the reliability and the validity of the evidence teachers use to determine next steps in instruction, resulting in better identification of students who need specific instructional supports.

Project Activities: In the first phase of the project, culminating in Study 1, the research team will identify and implement metrics that function as indicators of writing quality, effective use of time, and constructive participation in classroom writing communities. They also will conduct user research to identify meaningful and actionable ways to report each metric. In Study 2, the research team will conduct classroom research to validate these metrics both as measures of the underlying writing construct and as sources of actionable information for teachers. In Study 3, the research team will validate the dashboard developed and refined during Studies 1 and 2 as a formative assessment instrument. The goal is to establish its reliability as a measurement instrument and to evaluate its component metrics against multiple standards for validity (test content, response process, internal structure, relations to other variables, and related consequences). The team also will establish initial protocols for effective report use by teachers. In Study 4, the research team will validate use of the dashboard as a formative instrument in English language arts classes in a variety of middle school settings. The goal is to provide evidence of generalizability for each measure as a measure of student performance or engagement, while also exploring differences in performance by teacher, by task type, and between subgroups. The team plans to engage in dissemination activities throughout the grant's term; these will increase systematically over time and culminate in an effort to build a larger open-source research, development, and user community by the fourth year of the grant.

Products: The main product for this project will be an open-source formative assessment dashboard with a strong validity argument supporting its classroom use. The dashboard will provide a series of automated reports that supply teachers with meaningful and actionable data about student writing. The study results also will provide evidence that establishes the reliability and validity of the metrics employed by the dashboard. The research team will build a public dissemination strategy that includes an open-source researcher/developer community, strategic partnerships with teacher professional development organizations, a social media presence, and a website. The team also will disseminate research results through presentations at practitioner- and research-oriented conferences and by publishing research findings in peer-reviewed journals.

Structured Abstract

Setting: The project will take place across a diverse set of middle schools.

Sample: The initial classroom study in Year 1 will involve four teacher-collaborators and between 450 and 600 students. The research team also plans to conduct user-based research with teachers in Year 1 to identify meaningful and actionable ways to report targeted metrics; this will involve approximately 100 survey participants and 20 focus group participants. The Year 2 classroom study will involve an additional four teacher-collaborators and 900 to 1,200 students. The Year 3 pilot involves approximately 12 ELA teachers and 2,000 students at four middle schools. The Year 4 field test will involve at least 24 ELA teachers and 4,000 students in eight middle schools. The school and teacher samples encompass a wide range of demographic profiles, including significant numbers of students from minority, low-SES, or low-English-proficiency households.

Assessment: The research team will develop and validate an open-source data capture utility for formative assessment (installed as a Chrome extension) and an online teacher formative assessment reporting dashboard for writing in language arts classrooms in Grades 6-8. The data capture utility will record detailed information about the actions that students take while they are planning, writing, revising, and editing their work. The team will use the results of prior research to define metrics that assess a broad definition of the writing construct spanning four major domains (engagement and effort, domain knowledge/ideation, control over writing strategies/the writing process, and foundational writing skills). The dashboard will provide automated reports featuring individual metrics (or combinations of metrics) translated to be actionable and to support specific formative assessment strategies.

Research Design and Methods: The research team will use argument-based approaches for validation across all the studies. More specifically, in Studies 1 and 2, researchers will select metrics validated as writing measures in prior research and use a design-based approach to identify which metrics teachers consider meaningful and actionable. As specific metrics are implemented, the research team will validate each metric using machine learning methods and iteratively refine its understanding of and theories about how teachers engage with each metric. In Studies 3 and 4, they will use a synchronous mixed-methods design combining qualitative observation and interview data with quantitative analysis.

Control Condition: Due to the nature of this study, there is no control condition.

Key Measures: In Studies 1 and 2, the research team will validate metrics against previously collected data and human scores and annotations. In Studies 3 and 4, they will collect (i) class grades; (ii) teacher judgments of student writing achievement; (iii) the RISE component reading battery (now termed ReadReady), administered early in the school year, along with a short writing motivation survey; and (iv) on-demand essay prompts, administered early and late in the school year, together with scores on end-of-year state ELA assessments.
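
A common way to quantify agreement between an automated metric and human scores on an ordinal rubric is quadratic weighted kappa, which penalizes large disagreements more than near-misses. The abstract does not specify the team's agreement statistic, so the following is an illustrative sketch only:

```python
from collections import Counter

def quadratic_weighted_kappa(human, machine, min_rating=None, max_rating=None):
    """Agreement between two lists of integer ratings, with quadratic
    penalties for larger disagreements. 1.0 = perfect agreement,
    0.0 = chance-level agreement."""
    assert len(human) == len(machine)
    if min_rating is None:
        min_rating = min(min(human), min(machine))
    if max_rating is None:
        max_rating = max(max(human), max(machine))
    n_ratings = max_rating - min_rating + 1
    n = len(human)

    # Observed joint distribution of (human, machine) ratings.
    observed = [[0.0] * n_ratings for _ in range(n_ratings)]
    for h, m in zip(human, machine):
        observed[h - min_rating][m - min_rating] += 1

    # Expected counts under independence, from the marginal histograms.
    hist_h = Counter(h - min_rating for h in human)
    hist_m = Counter(m - min_rating for m in machine)

    num = den = 0.0
    for i in range(n_ratings):
        for j in range(n_ratings):
            w = ((i - j) ** 2) / ((n_ratings - 1) ** 2)
            expected = hist_h[i] * hist_m[j] / n
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den

# Hypothetical rubric scores: human raters vs. an automated metric.
print(quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

In practice such a statistic would be computed per metric over the previously collected, human-annotated essays, with a pre-registered agreement threshold for accepting a metric.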

Data Analytic Strategy: In Studies 1 and 2, the research team will take an audience-centric, iterative, multistep approach to engage teachers in identifying the information they need to make informed instructional decisions. In Study 2, the research team also will examine descriptive statistics of classroom measures to provide initial validity evidence, using qualitative analysis of teacher-collaborator feedback to guide development. In later studies, researchers will calculate descriptive statistics and regressions against external measures for all classroom metrics. The research team will use quantitative methods (including linear mixed models, DIF analysis, and factor analysis) to analyze the reliability and validity of specific metrics.

Cost Analysis: The research team will conduct a detailed analysis of total economic costs using the "ingredients method," estimating both the total cost of the investment in the formative writing assessment tool and its net cost (i.e., costs above or below those associated with alternative approaches to formative assessment of student writing).
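
The ingredients method prices every resource a program consumes and subtracts the cost of the comparison practice to obtain net cost. A toy sketch with entirely hypothetical ingredient categories and figures:

```python
# Hypothetical annual per-school ingredient costs, for illustration only.
tool_ingredients = {
    "teacher time (training and report use)": 6000,
    "technology coordinator time": 1500,
    "devices and hosting": 800,
    "materials": 200,
}
# Hypothetical cost of the alternative: existing formative assessment practice.
alternative_ingredients = {
    "teacher time (collecting and scoring drafts)": 7200,
    "materials (copies, rubrics)": 400,
}

total_cost = sum(tool_ingredients.values())
net_cost = total_cost - sum(alternative_ingredients.values())

print(f"total cost: ${total_cost}")  # total investment in the tool
print(f"net cost:   ${net_cost}")    # negative would mean cheaper than the alternative
```

The real analysis would enumerate ingredients far more finely (personnel time at market rates, facilities, training, equipment amortization) and report both totals and net costs.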