IES Grant

Title: The Development of the Writing Assessment Tool (WAT): An On-line Platform for the Automated Assessment of Writing
Center: NCER Year: 2018
Principal Investigator: McNamara, Danielle Awardee: Arizona State University
Program: Education Technology      [Program Details]
Award Period: 4 years (07/01/2018 – 06/30/2022) Award Amount: $1,399,327
Type: Development and Innovation Award Number: R305A180261
Description:

Co-Principal Investigators: Allen, Laura; Crossley, Scott; Roscoe, Rod; Grimm, Kevin.

Purpose: In this project, researchers will develop and test an online tool that produces analytics of high school students' writing assignments for use by students, their teachers, and researchers. The ability to write high-quality texts is a strong predictor of success in the classroom and the workplace. However, many individuals struggle to develop this skill adequately. According to the 2011 National Assessment of Educational Progress, 21% of U.S. seniors did not achieve basic proficiency in academic writing, and only 3% performed well enough to be considered advanced writers.

Project Activities: Years 1 to 3 will include a series of iterative studies in which the tool is developed and refined based on feedback from students, teachers, and researchers. The studies will focus on creating and testing a user interface with an underlying natural language processing algorithm that identifies the patterns and structure of student writing. In Year 4, the researchers will conduct an underpowered efficacy trial to measure the promise of the tool to produce results that students and teachers can use to improve writing, and that researchers can use to understand writing.

Products: Researchers will develop an online tool (the Writing Assessment Tool) that generates analytics on student writing. The tool will provide students with automated summative and formative feedback on persuasive or independent essays, summaries, and source-based or integrative essays. An interface will support teachers in administering essay assignments, which can then be either automatically scored or hand-graded using scaffolded rubrics. In addition, the tool will provide a component for researchers to conduct computational analyses of writing.
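
To make the notion of automated writing analytics concrete, the sketch below illustrates the kind of surface-level feature extraction such a pipeline might perform. The function name, the specific features, and the thresholds are hypothetical illustrations, not the WAT's actual algorithms.

```python
# A minimal, illustrative sketch of surface- and cohesion-level feature
# extraction for an essay. All feature names are illustrative assumptions.
import re

def extract_features(essay: str) -> dict:
    """Compute simple text features of the kind writing analytics use."""
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = len(words)
    n_sents = len(sentences)
    # Lexical diversity: proportion of unique words (type-token ratio).
    ttr = len(set(words)) / n_words if n_words else 0.0
    # Crude local cohesion: average word overlap between adjacent sentences.
    overlaps = []
    prev = None
    for s in sentences:
        toks = set(re.findall(r"[A-Za-z']+", s.lower()))
        if prev:
            overlaps.append(len(prev & toks) / min(len(prev), len(toks)))
        prev = toks
    cohesion = sum(overlaps) / len(overlaps) if overlaps else 0.0
    return {
        "word_count": n_words,
        "avg_sentence_length": n_words / n_sents if n_sents else 0.0,
        "type_token_ratio": ttr,
        "adjacent_sentence_overlap": cohesion,
    }

if __name__ == "__main__":
    sample = ("Writing is a skill. Like any skill, writing improves with "
              "feedback. Feedback works best when it is specific.")
    print(extract_features(sample))
```

In a real scoring pipeline, features like these would feed a trained model rather than being reported raw; the sketch only shows the feature-extraction step.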

Structured Abstract

Setting: The iterative studies will take place in a laboratory at Arizona State University. Pilot research will take place in socioeconomically and ethnically diverse high schools in Georgia and Mississippi.

Sample: Research in Years 1 to 3 will involve focus groups and usability studies and will include small groups of high school students and teachers, as well as writing researchers. Participants in the Year 4 feasibility study will include approximately 1,000 high school students, more than two-thirds of them from low-SES backgrounds, across a minimum of 30 classrooms.

Intervention: Researchers will develop an online tool (the Writing Assessment Tool) that employs natural language processing algorithms to give students, teachers, and researchers access to automated writing analytics. The tool will provide students with automated summative and formative feedback on persuasive or independent essays, summaries, and source-based or integrative essays. An interface will support teachers in administering essay assignments, which can either be automatically scored or graded using scaffolded rubrics. In addition, the platform will provide a component for researchers to analyze writing.

Research Design and Methods: To develop the tool, the researchers will use a design-based implementation procedure in which iterations of research and development will occur until feasibility, usability, and learning aims are met. After development is complete, in Year 4 the team will conduct an underpowered efficacy study to explore the implementation of the tool in authentic classrooms. The study will use a pretest-posttest quasi-experimental design with delayed-treatment control classrooms as the comparison. The study will take place over 14 weeks to allow students multiple interactions with the tool. Researchers will collect multiple sources of data (e.g., surveys, log data) to assess the impact of the tool on student writing within the real-world constraints and needs of the classroom.
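
As a worked illustration of how a pretest-posttest comparison against a delayed-treatment control might be summarized, the sketch below computes gain scores for two groups and a standardized mean difference (Cohen's d). The data and function are invented for illustration and are not the project's analysis plan.

```python
# Illustrative sketch: standardized effect size on gain scores.
from statistics import mean, stdev

def cohens_d(treatment_gains, control_gains):
    """Standardized mean difference between two groups' gain scores."""
    n1, n2 = len(treatment_gains), len(control_gains)
    s1, s2 = stdev(treatment_gains), stdev(control_gains)
    # Pooled standard deviation across the two groups.
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment_gains) - mean(control_gains)) / pooled

# Hypothetical essay-quality gains (posttest minus pretest) per student.
treatment = [0.8, 1.1, 0.5, 0.9, 1.3]
delayed_control = [0.2, 0.4, 0.1, 0.6, 0.3]
print(round(cohens_d(treatment, delayed_control), 2))
```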

Control Condition: The delayed-treatment condition will receive business-as-usual instruction during the pilot while students in the immediate-treatment condition use the writing tool.

Key Measures: The studies will employ researcher-developed and standardized assessments, as well as student-user logs. The pilot study will investigate the degree to which the quality of students' persuasive essays, summaries, and source-based essays improves as a function of interacting with the tool. Pre- and post-surveys will measure the usability of the tool and the extent to which it improves metacognitive knowledge of writing across multiple dimensions. In the pilot study, the researchers will also administer standardized instruments, including the Writing Attitudes and Strategies Self-Report Inventory, the Daly-Miller Writing Apprehension Test, and the Gates-MacGinitie Reading Test.

Data Analytic Strategies: Inferential statistical analyses will document student, teacher, and researcher usage and will assess students' learning gains in writing. Data mining analyses will focus on log data (e.g., keystroke logs) to identify fine-grained behavioral patterns of usage. The purpose of these analyses is to identify associations between components of the tool and student outcomes. Results will inform the transition into classroom practice.
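
As one hedged example of what mining keystroke logs for fine-grained behavioral patterns can look like, the sketch below segments a keystroke stream into bursts of fluent typing separated by long pauses. The log format, the Keystroke class, and the 2-second pause threshold are assumptions for illustration, not project specifications.

```python
# Illustrative sketch: segmenting a keystroke log into typing bursts.
from dataclasses import dataclass

@dataclass
class Keystroke:
    timestamp: float  # seconds since the writing session began
    char: str

def burst_lengths(log: list[Keystroke], pause_threshold: float = 2.0) -> list[int]:
    """Return the length (in keystrokes) of each fluent typing burst."""
    bursts, current = [], 0
    prev_time = None
    for k in log:
        # A gap longer than the threshold ends the current burst.
        if prev_time is not None and k.timestamp - prev_time > pause_threshold:
            bursts.append(current)
            current = 0
        current += 1
        prev_time = k.timestamp
    if current:
        bursts.append(current)
    return bursts

# Example: three quick keystrokes, a long pause, then two more.
log = [Keystroke(t, c) for t, c in
       [(0.0, "T"), (0.2, "h"), (0.4, "e"), (3.5, " "), (3.7, "d")]]
print(burst_lengths(log))  # [3, 2]
```

Burst and pause statistics like these are one common way keystroke data are reduced to behavioral indicators that can then be related to writing outcomes.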

Products and Publications

Allen, L. K., Mills, C., Perret, C., & McNamara, D. S. (2019, March). Are You Talking to Me?: Multi-Dimensional Language Analysis of Explanations during Reading. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 116-120). ACM.

Balyan, R., Crossley, S. A., Brown, W., III, Karter, A. J., McNamara, D. S., Liu, J. Y., ... & Schillinger, D. (2019). Using natural language processing and machine learning to classify health literacy from secure messages: The ECLIPPSE study. PLoS ONE, 14(2), e0212488.

McCarthy, K. S., Roscoe, R. D., Likens, A. D., & McNamara, D. S. (2019, June). Checking It Twice: Does Adding Spelling and Grammar Checkers Improve Essay Quality in an Automated Writing Tutor? In International Conference on Artificial Intelligence in Education (pp. 270-282). Springer, Cham.

Nicula, B., Perret, C. A., Dascalu, M., & McNamara, D. S. (2019, June). Predicting Multi-document Comprehension: Cohesion Network Analysis. In International Conference on Artificial Intelligence in Education (pp. 358-369). Springer, Cham.

Crossley, S. A., Kim, M., Allen, L., & McNamara, D. (2019, June). Automated Summarization Evaluation (ASE) Using Natural Language Processing Tools. In International Conference on Artificial Intelligence in Education (pp. 84-95). Springer, Cham.

McNamara, D. S., Roscoe, R., Allen, L., Balyan, R., & McCarthy, K. S. (2019). Literacy: From the Perspective of Text and Discourse Theory. Journal of Language and Education, 5(3), 56-69.

Allen, L. K., Likens, A. D., & McNamara, D. S. (2019). Writing flexibility in argumentative essays: a multidimensional analysis. Reading and Writing, 32(6), 1607-1634.

