Product: Moby.Read® is a fully automated Oral Reading Fluency (ORF) assessment for students in Kindergarten through Grade 5. Moby.Read is fully student self-administered, replacing face-to-face assessments conducted by teachers and saving teachers the time of assessing each student individually. The Moby.Read app includes 72 narrative and informational passages covering 18 grade-leveled benchmark assessments; students read stories aloud, retell them, and answer spoken questions about the content. The app records and scores everything the student says and immediately reports ORF scores: reading level, word accuracy, accuracy rate (words correct per minute), comprehension, and expression. In addition, scores and audio recordings can be shared with reading specialists, parents, and students. In the teacher dashboard, Moby.Read shows assessment results at the individual and class levels and gives each teacher detailed reports on each student's reading strengths and difficulties, which teachers can use to individualize instruction. Moby.Read can be implemented in or out of classrooms to support oral reading instruction and encourage improved fluency. To support use and implementation, Moby.Read includes a Technical Manual and an Installation, User Management, Test Administration, and Scoring Guide.
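The accuracy-rate metric mentioned above, words correct per minute (WCPM), is standard in ORF assessment. As an illustrative sketch only (not Moby.Read's actual scoring code), it is commonly computed as the number of correctly read words divided by the reading time in minutes:

```python
def wcpm(words_attempted: int, errors: int, seconds: float) -> float:
    """Words correct per minute for one passage reading.

    Illustrative formula only: correct words divided by elapsed minutes.
    """
    correct = words_attempted - errors
    return correct / (seconds / 60.0)

# Hypothetical example: a student reads 120 words with 6 errors in 90 seconds,
# i.e. 114 correct words in 1.5 minutes.
print(wcpm(120, 6, 90.0))  # 76.0
```

Automated scorers like Moby.Read derive the inputs (word count, errors, timing) from speech recognition rather than from a teacher's stopwatch and tally marks, but the reported rate is the same quantity.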
Research & Development: Prior to the ED/IES SBIR-supported projects, the developers who founded Analytics Measures, Inc. (AMI) designed several early prototypes of ORF assessments under awards from the National Center for Education Statistics (NCES). For the 2002 NAEP ORF study, AMI built a laptop-administered ORF assessment with a scoring platform. For the 2004 Fluency Addition to NAAL at NCES, AMI designed a system to automatically score responses. For a 2014 contract, AMI prototyped an assessment app to score oral readings in classroom settings. And for the 2018 eNAEP, AMI implemented another ORF assessment to score students' read-aloud performances.
The purpose of the ED/IES SBIR awards to AMI was to fully develop a student self-administered ORF assessment, called Moby.Read, for use in school settings or at home. Moby.Read was designed with standards-aligned reading passages for ORF benchmarking, combining automatic speech recognition and refined natural language processing technology to automatically score reading level, words correct per minute, comprehension, and expression. To make classroom administration feasible, Moby.Read incorporates a student interface for fully automated self-administered assessments. For teachers, the dashboard immediately presents texts, recordings, and results. During the project, AMI iteratively refined Moby.Read to ensure that all new features are clear and useful for teachers, and that the assessment flow is intuitive, even for early elementary students.
Once development was complete, several studies were conducted to test the usability, feasibility, reliability, and validity of Moby.Read. To test usability and classroom feasibility, seven teachers administered the Moby.Read assessment to their students. The teachers agreed that the visual appearance and automated administration were useful, that the app was intuitive for students, and that it provided appropriate content. In addition, 19 of 20 students reported that they preferred self-administering the assessment to being tested by a teacher. In another study, 304 grade school students completed grade-level-appropriate forms of both Moby.Read (self-administered and automatically scored) and DIBELS (teacher-administered and teacher-scored). Results showed high correlations between the overall ORF scores and the automatic words-correct-per-minute values, demonstrating that the technology-delivered assessment produced consistent results that align closely with a gold-standard paper assessment and that Moby.Read can be a viable replacement for traditional teacher-administered ORF.
Since the SBIR project ended in 2019, AMI has conducted research to extract more detailed and diagnostic information from passage read-alouds, to discover how well automated methods can isolate and measure foundational reading skills from passage readings. In 2019, AMI conducted a feasibility study to understand how this information can be used to guide instruction. Five reading specialists produced 1,813 free-response observations and suggestions from listening to 60 students in Grades 1 to 4 reading 27 passages. Results showed that six new machine-derived measures outperformed all 18 of the specialist-assigned observations in predicting the specialists' recommended suggestions.
Path to Commercialization: Since AMI commercially launched Moby.Read in 2019, students across 30 states have taken over 30,000 ORF assessments. AMI's commercialization strategy includes both direct sales and partnerships with curriculum and assessment publishers. In 2019, K12 Inc. began using Moby.Read as an embedded benchmark assessment in its digital library, Big Universe, and adapted AMI's ORF technology to score shorter, formative assessments. BBB, a distributor in the Southeast, is reselling Moby.Read in Georgia, Florida, and Alabama, alongside other literacy products from larger publishers. In June 2020, AMI announced a partnership with the University of Oregon's Behavioral Research and Teaching Unit (BRT) to license reading passage content and machine learning annotations for Moby.Read. AMI is also collaborating with Savvas Learning Company to integrate Moby.Read with Savvas's digital library and core literacy program. AMI uses social media channels such as LinkedIn, Twitter, and Facebook to build the Moby.Read brand, attract district customers, and support distribution partners.
Awards for Related Projects:
2020: Phase I award from ED/IES SBIR: Early Reading Diagnostic Profiler
Awards & Recognition:
2020: Finalist for an SIIA Codie Award
Peer Reviewed Publications:
Bernstein, J., Cheng, J., Balogh, J., & Downey, R. (2020). Artificial intelligence for scoring oral reading fluency. In H. Jiao & R. Lissitz (Eds.), Applications of artificial intelligence to assessment. Charlotte, NC: Information Age Publishing.
Cheng, J. (2018). Real-time scoring of an oral reading assessment on mobile devices. Proceedings of Interspeech 2018, 1621–1625.