Project Activities
The researchers applied natural language processing techniques to identify sources of text and task difficulty and used evidence-centered design to model multiple dimensions of student proficiency. Together, these techniques formed the foundation for developing and testing automated or assisted tools for text and task generation and scoring, as well as statistical profile and growth models. The researchers then applied and investigated technology-enhanced tools for administering, collecting, and scoring evidence.
Structured Abstract
Sample
Study participants were readers spanning the developmental levels of high school and adult literacy students, along with a range of skilled readers included for comparison. The participating schools had diverse, urban student populations with high percentages of struggling readers. The researchers regularly convened teachers to review, examine, and provide guidance on all phases of the assessment delivery system.
Research design and methods
The research plan proceeded in three phases. In Phase I, the researchers applied natural language processing (NLP) techniques to identify sources of difficulty in a wide range of texts from existing corpora. They modeled the interactions of struggling reader profiles and developed task models using evidence-centered design (ECD) analysis techniques. Finally, they developed assessment tools and piloted them to assess their viability. These activities continued iteratively throughout the project. In Phase II, the researchers combined the tested task models and administered them to a larger sample of learners in a cross-sectional design to examine relationships between the components and profiles of learners. They used these findings to test and adjust the ECD model. In Phase III, they conducted a longitudinal, repeated measures design to examine how well the NLP tools predict text and task difficulty and how well the assessments accumulate individual profile information over time.
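The abstract does not specify which NLP features the team used to index text difficulty. As a purely illustrative sketch under that assumption, the snippet below computes two generic surface features (mean sentence length and type-token ratio) of the kind such a pipeline might extract from corpus texts; the function and variable names are hypothetical.

```python
import re

def difficulty_features(passage: str) -> dict:
    """Return simple surface features often used as rough proxies for text difficulty."""
    sentences = [s for s in re.split(r"[.!?]+", passage) if s.strip()]
    words = re.findall(r"[A-Za-z']+", passage.lower())
    if not words:
        return {"mean_sentence_length": 0.0, "type_token_ratio": 0.0}
    return {
        # Longer sentences tend to signal more complex syntax.
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        # Higher lexical diversity often tracks more demanding vocabulary.
        "type_token_ratio": len(set(words)) / len(words),
    }

if __name__ == "__main__":
    sample = "The cat sat. It saw a bird. The bird flew away quickly."
    print(difficulty_features(sample))
```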
People and institutions involved
IES program contact(s)
Products and publications
ERIC Citations: Available citations for this award can be found in ERIC.
Select Publications:
Book chapters
Mislevy, R.J., and Sabatini, J. (2012). How Research on Reading and Research on Assessment Are Transforming Reading Assessment (or If They Aren't, How They Ought To). In J.P. Sabatini, E.R. Albro, and T. O'Reilly (Eds.), Measuring Up: Advances in How We Assess Reading Ability (pp. 119-134). Lanham, MD: Rowman & Littlefield.
Sabatini, J.P. (2009). From Health/Medical Analogies to Helping Struggling Middle School Readers: Issues in Applying Research to Practice. In S. Rosenfield, and V. Berninger (Eds.), Translating Science-Supported Instruction into Evidence-Based Practices: Understanding and Applying the Implementation Process (pp. 285-316). New York: Oxford University Press.
Journal articles
Deane, P., Sheehan, K.M., Sabatini, J., Futagi, Y., and Kostin, I. (2006). Differences in Text Structure and Its Implications for Assessment of Struggling Readers. Scientific Studies of Reading, 10(3): 257-275.
O'Reilly, T., Sabatini, J., Bruce, K., Pillarisetti, S., and McCormick, C. (2012). Middle School Reading Assessment: Measuring What Matters Under an RTI Framework. Reading Psychology Special Issue: Response to Intervention, 33(1): 162-189.
Nongovernment peer-reviewed report
Sabatini, J.P., Bruce, K., and Steinberg, J. (2013). SARA Reading Components Tests, RISE Form: Test Design and Technical Adequacy. Princeton, NJ: Educational Testing Service.
Related projects
Questions about this project?
For additional questions about this project or to provide feedback, please contact the program officer.