Title: Measuring Reading Progress in Struggling Adolescents
Principal Investigator: Foorman, Barbara
Awardee: Florida State University
Program: Reading and Writing
Award Period: 4 years (3/01/2010–2/28/2014)
Award Amount: $1,499,743
Purpose: Similar to national results on the National Assessment of Educational Progress (NAEP), forty percent of Florida's students in grades 3–12 are not proficient readers on the Florida Comprehensive Assessment Test (FCAT). By Florida law, these non-proficient readers must be provided with intensive reading intervention. In an effort to provide schools with formative assessment to guide instruction, the Florida Department of Education contracted with the Florida Center for Reading Research (FCRR) to develop screening, diagnostic, and progress monitoring measures for students in grades K–12. The resulting system is called the Florida Assessment for Instruction in Reading (FAIR) and consists of a teacher-administered assessment for grades K–2 and a computer adaptive assessment for students in grades 3–12. The purpose of this application is to modify and validate the 3–12 FAIR system to enhance its 1) reading comprehension screen; 2) diagnostic tasks of maze and word analysis; and 3) ongoing progress monitoring tasks of reading comprehension and maze.
Project Activities: In this project, researchers will use various methodological approaches to analyze, modify, and validate the grades 3–12 FAIR. In Year 1, the research team will examine which procedures are most effective and efficient for making the items and the maze tasks adaptive. In Year 2, researchers will validate changes made to the system based upon the findings from the Year 1 analyses. In Year 3, the research team will use Classification and Regression Trees (CART) to analyze reading comprehension, maze, and word analysis scores in order to identify the cut-points that best separate students into risk categories. In the final year, researchers will complete validation analyses, continue to analyze incoming data from schools, and revise and add items as needed.
Products: At the conclusion of this project, the team will produce a revised version of the grades 3–12 Florida Assessment for Instruction in Reading (FAIR), with increased utility, precision, and efficiency in measuring reading comprehension. Peer-reviewed publications will also be prepared.
Setting: This research project will take place in the state of Florida.
Assessment: The focus of this measurement project is on Florida's computer adaptive assessment for students in grades 3–12. This system provides screening, diagnostic, and progress monitoring measures for foundational word reading skills, the components that contribute to reading comprehension, and reading comprehension.
Research Design and Methods: The team will examine and revise the FAIR for grades 3–12 using a variety of methodological approaches. To assess the dimensionality of FAIR's reading comprehension screen, the team will use confirmatory factor analysis of archival FAIR data. Building on that information, the team will examine the gains in efficiency of FCAT prediction that come from making items, as well as passages, adaptive. The team will also examine how weighting recent prior information can reduce fluctuations and increase reliability and efficiency in measuring growth in reading comprehension for various demographic classifications of students. The team will also test whether a theoretical distribution of difficulty can be used to incorporate new items into measuring reading comprehension. To increase the efficiency of the FAIR, the team will examine the value added when the probability of passing the FCAT is assessed via monthly progress monitoring rather than three times a year, and whether there are more efficient ways to handle item dependency. The team will also create new maze passages and examine whether equipercentile equating can be used to make the maze task adaptive. Using archival FAIR data, the team will examine whether the new maze tasks, whose items are written to be more cognitively and linguistically demanding, exhibit more growth across the year and more strongly predict reading comprehension outcomes as measured by the FCAT. The team will use Classification and Regression Trees (CART) to analyze reading comprehension, maze, and word analysis scores in order to identify the cut-points that best separate students into risk categories.
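As a rough illustration of how weighting recent prior information can damp fluctuations in repeated ability estimates, one common approach is exponential smoothing, sketched below. The function name, weight, and scores are hypothetical; the weighting scheme the project actually validates may differ.

```python
def smoothed_ability(estimates, weight=0.6):
    """Weight recent progress-monitoring estimates more heavily,
    damping month-to-month fluctuation (a generic exponential-smoothing
    sketch, not the project's actual algorithm)."""
    smoothed = estimates[0]
    for est in estimates[1:]:
        # Each new estimate moves the running value by `weight`,
        # so older observations decay geometrically in influence.
        smoothed = weight * est + (1 - weight) * smoothed
    return smoothed

# Hypothetical monthly ability estimates for one student
monthly_estimates = [450, 470, 440, 480, 475]
stable_estimate = smoothed_ability(monthly_estimates)
```

A larger `weight` tracks recent performance more closely; a smaller one yields a more stable, slower-moving estimate.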
To answer these questions, during Year 1 the research team will conduct simulations, confirmatory factor analysis, and a comparison of item-adaptive procedures on 2009–2010 archival FAIR data in Florida's Progress Monitoring and Reporting Network (PMRN) of 959,544 students in grades 3–12. Also in Year 1, the team will examine the efficiency of using equipercentile equating to make mazes adaptive with an archival sample of 3,200 students in grades 3–12. In Year 2, researchers will analyze 2010–2011 archival FAIR data from 959,544 students in the PMRN to validate (a) scoring adaptations based on testlets, (b) the application of the theoretical distribution of difficulty to new items, and (c) the value added of monthly progress monitoring compared to assessing three times a year. The team will conduct studies in schools with 760 students in grades 3–12 to (a) validate the algorithms based on weighting of recent priors and (b) test an alternate maze format. In Year 3, researchers will conduct CART analyses using a 2011–2012 archival PMRN sample of 834,803 students in grades 3–12. In Year 4, researchers will complete analyses of the validation of the unrestricted bi-factor model.
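Equipercentile equating, as used here to put alternate maze forms on a common scale, maps each score on one form to the score on another form that holds the same percentile rank. A minimal sketch follows; the raw-score distributions are invented, and operational equating would add smoothing of the score distributions.

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, new_score):
    """Map a score on maze form X to the equivalent score on form Y
    by matching percentile ranks (unsmoothed illustrative sketch)."""
    # Percentile rank of the new score within form X's distribution
    rank = np.mean(np.asarray(scores_x) <= new_score)
    # Score on form Y holding the same percentile rank
    return float(np.quantile(scores_y, rank))

# Hypothetical maze raw scores from two passage forms
form_x = [12, 15, 18, 20, 22, 25, 27, 30, 33, 35]
form_y = [10, 13, 16, 19, 21, 24, 26, 29, 31, 34]

equated = equipercentile_equate(form_x, form_y, 25)
```

Because the mapping depends only on ranks, not raw-score units, it tolerates forms of differing difficulty, which is what makes an adaptive maze task feasible.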
Key Measures: Measures used in this study include current and historical versions of the FAIR subtests, as well as Florida's state-level assessment (FCAT).
Data Analytic Strategy: Analytic methods include the use of simulations, confirmatory factor analysis, and equipercentile equating to modify and validate FAIR through various iterations.
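In essence, a CART analysis searches over candidate cut-points on a score and chooses the one that best separates students into outcome groups. The single-variable sketch below uses the Gini impurity criterion CART employs; the scores and risk labels are invented, and the project's analyses involve multiple measures and full regression trees.

```python
import numpy as np

def best_cut_point(scores, at_risk):
    """Find the single cut-point minimizing weighted Gini impurity,
    i.e., the split CART would make at a tree's root (illustrative)."""
    scores = np.asarray(scores, dtype=float)
    at_risk = np.asarray(at_risk)
    order = np.argsort(scores)
    s, y = scores[order], at_risk[order]

    def gini(labels):
        p = labels.mean()
        return 2 * p * (1 - p)

    best_split, best_impurity = None, np.inf
    # Candidate splits at midpoints between adjacent distinct scores
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            continue
        left, right = y[:i], y[i:]
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if impurity < best_impurity:
            best_split, best_impurity = (s[i - 1] + s[i]) / 2, impurity
    return best_split

# Hypothetical screening scores; students scoring below ~20 flagged at risk
scores = [10, 12, 14, 16, 18, 22, 24, 26, 28, 30]
risk = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
cut = best_cut_point(scores, risk)  # midpoint between 18 and 22
```

Growing the tree deeper by recursively splitting each side yields the nested cut-points that define multiple risk categories.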
Book chapter, edition specified
Foorman, B., and Wanzek, J. (in press). Classroom Reading Instruction for all Students. In M.K. Burns, and A.M. Vander Heyden (Eds.), The Handbook of Response to Intervention: The Science and Practice of Assessment and Intervention (2nd ed.). New York: Springer.
Journal article, monograph, or newsletter
Foorman, B., Petscher, Y., and Bishop, M.D. (2012). The Incremental Variance of Morphological Knowledge to Reading Comprehension in Grades 3–10 Beyond Prior Reading Comprehension, Spelling, and Text Reading Efficiency. Learning and Individual Differences, 22(6): 792–798.
Foorman, B.R., and Petscher, Y. (2010). Development of Spelling and Differential Relations to Text Reading in Grades 3–12. Assessment for Effective Intervention, 36(1): 7–20.
Petscher, Y. (2017). The Impact of Item Dependency on the Efficiency of Testing and Reliability of Student Scores From a Computer Adaptive Assessment of Reading Comprehension. Journal of Research on Educational Effectiveness, 10(2): 408–423.
Petscher, Y., and Kim, Y.S. (2011). Efficiency of Predicting Risk in Word Reading Using Fewer, Easier Letters. Assessment for Effective Intervention, 37(1): 17–25.
Petscher, Y., Kim, Y.S., and Foorman, B.R. (2011). The Importance of Predictive Power in Early Screening Assessments: Implications for Placement in the Response to Intervention Framework. Assessment for Effective Intervention, 36(3): 158–166.
Petscher, Y., Mitchell, A.M., and Foorman, B.R. (2015). Improving the Reliability of Student Scores From Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary. Reading and Writing, 28(1): 31–56.
Reed, D.K., Petscher, Y., and Foorman, B.R. (2016). The Contribution of Vocabulary Knowledge and Spelling to the Reading Comprehension of Adolescents who are and are not English Language Learners. Reading and Writing, 29(4): 633–657.
** This project was submitted to and funded under Interventions for Struggling Adolescent and Adult Readers and Writers in FY 2010.