
IES Grant

Title: Response-to-Text Tasks to Assess Students' Use of Evidence and Organization in Writing: Using Natural Language Processing for Scoring Writing and Providing Feedback At-Scale
Center: NCER
Year: 2016
Principal Investigator: Litman, Diane
Awardee: University of Pittsburgh
Program: Education Technology
Award Period: 3 years (9/1/2016-6/30/2019)
Award Amount: $1,398,590
Type: Measurement
Award Number: R305A160245
Description:

Co-Principal Investigators: Richard Correnti and Lindsay Clare Matsumura

Purpose: The researchers will develop and validate an automated assessment of students' analytic writing skills in response to text they have read. In prior work, the researchers developed an assessment of students' analytic writing to gauge progress toward outcomes in the English Language Arts Common Core State Standards and to understand effective writing instruction by teachers. The researchers focus on response-to-text assessment because it is an essential skill for secondary and postsecondary success, current assessments typically examine writing in isolation from responding to text, and increased attention to analytic writing in schools will support improved interventions. Recent advances in artificial intelligence offer a potential way forward: automated essay scoring of students' analytic writing at scale, with feedback to improve both student writing and teachers' instruction.

Project Activities: First, the researchers will refine and extend an existing response-to-text assessment involving two texts, one of which already has a prototype for automatically scoring writing. Next, they will develop an automated assessment for the second text and evaluate whether it can provide formative feedback that improves the quality of student writing and informs teacher instruction. The research team will then conduct a series of studies to validate the tool by measuring inter-rater agreement between human and automatically generated assessments of students' analytic writing.

Products: The researchers will fully develop an online automated scoring tool that assesses students' use of evidence and organization of ideas in writing in response to two texts. The tool will automatically generate formative feedback based on students' written responses and is designed for students in grades 5 and 6. The research team will also develop a reporting dashboard that provides formative assessment feedback and reports to teachers to inform instruction.

Structured Abstract

Setting: Development and validation of the assessment will occur in two large suburban districts in New York State. The schools serve a socioeconomically and ethnically diverse population, with half of the students coming from minority backgrounds.

Sample: The sample will consist of 82 English Language Arts teachers in grades 5 and 6 and the students in these classrooms. During Year 1, the study will include 40 teachers and their students. During Year 2, the study will include 12 teachers and their students. During Year 3, the sample will include 30 teachers and their students distributed across the grade levels.

Assessment: The assessment will measure students' ability to reason about texts while writing and to use evidence effectively to support claims. The assessment is intended to be administered by teachers in 60 minutes. The teacher reads a pre-determined text aloud for 15 minutes while students follow along with their own copy of the text. Students then have 45 minutes to write their responses on an online form. The tool will automatically assess students' writing by extracting features such as the number of pieces of evidence, the concentration of evidence provided, the specificity of the evidence, and the word count.
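As a concrete illustration, surface features of this kind could be computed with a short script like the sketch below. The evidence phrases, function name, and feature definitions here are hypothetical illustrations for exposition, not the project's actual scoring algorithm.

# Hypothetical sketch of rubric-oriented feature extraction for a response-to-text essay.
# The evidence phrases and feature definitions are illustrative only.
from typing import Dict, List

# Assumed topical phrases from the source text that count as "evidence" (hypothetical).
EVIDENCE_PHRASES: List[str] = ["malaria", "bed nets", "school fees", "fertilizer"]

def extract_features(response: str) -> Dict[str, float]:
    """Return word count, number of distinct evidence phrases used, and
    evidence concentration (mentions per 100 words)."""
    words = response.lower().split()
    word_count = len(words)
    text = " ".join(words)
    distinct_pieces = [p for p in EVIDENCE_PHRASES if p in text]
    mentions = sum(text.count(p) for p in EVIDENCE_PHRASES)
    concentration = 100.0 * mentions / word_count if word_count else 0.0
    return {
        "word_count": word_count,
        "num_evidence_pieces": len(distinct_pieces),
        "evidence_concentration": concentration,
    }

print(extract_features("The village changed because bed nets reduced malaria and school fees were paid."))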

Research Design and Methods: During Year 1, the researchers will develop and test scoring algorithms and compare the computer-generated scores with human-generated scores. To expand and replicate the validity investigation, in Years 2 and 3 the researchers will collect evidence on whether feedback from the automated essay scoring feature helps students improve their use of evidence to support their writing and whether the feedback influences teachers' beliefs and practices.
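One standard statistic for comparing human and computer-generated scores on an ordinal rubric is quadratic weighted kappa; a minimal sketch using scikit-learn follows. The score lists are made-up placeholders, not data from this project, and the abstract does not specify which agreement statistic the team will use.

# Hypothetical sketch: agreement between human and automated scores on a 1-4 rubric.
# The score lists below are placeholders, not project data.
from sklearn.metrics import cohen_kappa_score

human_scores = [1, 2, 2, 3, 4, 3, 2, 1]
auto_scores = [1, 2, 3, 3, 4, 2, 2, 1]

# Quadratic weighting penalizes large disagreements more heavily than near-misses.
qwk = cohen_kappa_score(human_scores, auto_scores, weights="quadratic")
print(f"Quadratic weighted kappa: {qwk:.3f}")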

Control Condition: There is no control condition for this project.

Key Measures: Measures will target two constructs within analytic text-based writing: students' effective use of evidence and their organization of ideas and evidence in support of a claim. Key student outcome measures include the New York State Common Core tests, composed of multiple-choice items, short constructed responses, and items requiring extended writing, and the Smarter Balanced summative assessments in literacy.

Data Analytic Strategy: The researchers will analyze the data using multilevel models, with classrooms as the unit of analysis and achievement measured at the individual student level, with students nested within teachers. Analyses will control for prior achievement and background characteristics. The focus of the research is on the effect of the assessment on achievement in analytic writing.
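A minimal sketch of such a two-level model, with students nested within classrooms and a control for prior achievement, is shown below using the mixed-effects routines in statsmodels. The file name and variable names are hypothetical, not the project's actual data or model specification.

# Hypothetical sketch: students (level 1) nested within classrooms (level 2),
# predicting an analytic-writing outcome while controlling for prior achievement.
# The file name and column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # one row per student (hypothetical file)

# Random intercept for classroom; fixed effects for prior achievement and a background covariate.
model = smf.mixedlm(
    "writing_score ~ prior_achievement + background_covariate",
    data=df,
    groups=df["classroom_id"],
)
result = model.fit()
print(result.summary())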

Products and Publications

Conference Proceeding

Zhang, H., Magooda, A., Litman, D., Correnti, R., Wang, E., Matsumura, L.C., ... and Quintana, R. (2019, July). eRevise: Using Natural Language Processing to Provide Formative Feedback on Text Evidence Usage in Student Writing. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, pp. 9619-9625).

Preprint

Zhang, H., and Litman, D. (2019). Word Embedding for Response-To-Text Assessment of Evidence. arXiv preprint arXiv:1908.01969.

Zhang, H., and Litman, D. (2019). Co-attention Based Neural Network for Source-Dependent Essay Scoring. arXiv preprint arXiv:1908.01993.

