Project Activities
The work will occur in three phases. First, the research team will create and validate additional assessment items spanning a greater range of difficulty and an expanded set of item formats. Concurrently, the researchers will develop a computer-adaptive testing (CAT) model for MOCCA. Finally, they will establish the technical adequacy of the CAT version of MOCCA for both screening and progress monitoring. These phases will leverage iterative rounds of simulation and pilot studies, consultations with research experts and teachers, and large-scale field tests with nationally representative samples.
Structured Abstract
Setting
The primary research sites for the development work are Oregon and Georgia. A nationwide sample will participate in the final validation.
Sample
The sample includes over 3,000 public school students in grades 3 through 5, with oversampling of key minority populations based on race/ethnicity and special-services status.
Assessment
The CAT version of MOCCA builds on the computer-based MOCCA, a reading assessment for third- through fifth-grade students developed and validated under a previous IES-funded project (R305A140185). The original, computer-based MOCCA uses a seven-sentence narrative in which the sixth sentence is deleted. Students must select the missing sixth sentence from among three choices: one option is correct and creates a causally coherent connection, one is a paraphrase, and one is a lateral connection, that is, an inference or association that is based on background knowledge, may be tangential, and is not causally coherent. The CAT version of MOCCA will retain this general design with a few changes. To increase the number of high-difficulty items, the next version of MOCCA will include items with more sentences overall or, at the upper grade levels, with five response options (two of each incorrect type). It will be adaptive, thereby reducing overall testing time. And it will have a larger item pool, thereby allowing teachers to use it throughout the school year and across years to track student progress and assess student achievement.
Research design and methods
The project has three aims: to develop more items and increase the difficulty range of the item pool, to build the CAT version, and to validate the CAT version of MOCCA for formative and summative use. During years 1 and 2, the researchers will develop new items by collecting input and feedback from experts, revising the previous MOCCA's item-writing specifications, and writing new items that are longer and have more incorrect response options. At the same time, they will program the CAT software and develop simulation software that allows them to simulate students' progress through the CAT version. Near the end of year 2, they will run a series of mini-pilots of the CAT MOCCA with new items to test the administration software with real students using the full set of potential items. They will also conduct a larger field test to refine the CAT administration and to complete the scaling of the items. After finalizing the items and the CAT administration, they will conduct large-scale random-assignment studies of the CAT version during schools' typical benchmarking periods (namely August-October, December-February, and April-June) to validate the CAT MOCCA against the current MOCCA and other reading measures. They will also conduct two additional administrations using only the CAT MOCCA to explore its sensitivity to change under the more frequent administration schedules typically employed in progress monitoring.
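To make the simulation step concrete, the sketch below shows one way a simulated student's progress through an adaptive test can be modeled. It is a minimal illustration, not the project's actual software: the Rasch (1PL) response model, the closest-difficulty item-selection rule, and the step-halving ability update are all assumptions chosen for simplicity.

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_cat(true_theta, item_bank, test_length=15, seed=0):
    """Simulate one student taking an adaptive test.

    item_bank is a list of item difficulties (b parameters). At each step
    the unused item whose difficulty is closest to the current ability
    estimate is administered, a response is sampled from the Rasch model,
    and the estimate is updated with a simple stepwise rule that halves
    the step size after each reversal (correct-to-incorrect or back).
    """
    rng = random.Random(seed)
    theta_hat, step = 0.0, 1.0
    used, responses = set(), []
    last = None
    for _ in range(test_length):
        # Select the most informative remaining item: under the Rasch
        # model, that is the one with difficulty closest to theta_hat.
        i = min((j for j in range(len(item_bank)) if j not in used),
                key=lambda j: abs(item_bank[j] - theta_hat))
        used.add(i)
        correct = rng.random() < rasch_p(true_theta, item_bank[i])
        responses.append(correct)
        if last is not None and correct != last:
            step /= 2.0  # reversal: narrow the search
        theta_hat += step if correct else -step
        last = correct
    return theta_hat, responses
```

Running many such simulees against a candidate item bank is one standard way to estimate how quickly an adaptive test converges and how much testing time it saves relative to a fixed form.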
Control condition
Within each grade and classroom, the research team will randomly assign students to either the fixed-form MOCCA or the CAT MOCCA.
Key measures
Key measures include curriculum-based measures (CBMs) of reading and math, such as DIBELS and easyCBM, as well as state test data. The researchers will also collect demographic data on gender, race/ethnicity, language status, special education status, and meal status from schools to determine whether there are systematic differences among groups.
Data analytic strategy
The researchers will use item response theory (IRT) to estimate item difficulties, and they will use differential item functioning (DIF) analyses and teacher content-review panels to establish item fairness.
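As a minimal sketch of the IRT scaling step, the function below estimates a single item's Rasch difficulty by Newton-Raphson, treating examinee abilities as known. This is an illustrative simplification (operational calibration would estimate abilities and difficulties jointly, typically with specialized software), and all names and parameters here are assumptions.

```python
import math

def estimate_difficulty(thetas, responses, iters=50):
    """Estimate one Rasch item difficulty b by Newton-Raphson.

    thetas: known examinee abilities; responses[i] is 1 or 0 for
    examinee i on this item. The Rasch model gives
    P(correct) = 1 / (1 + exp(-(theta - b))).
    """
    b = 0.0
    for _ in range(iters):
        grad = 0.0  # d(logL)/db = sum of (p - x) over examinees
        hess = 0.0  # d2(logL)/db2 = -sum of p * (1 - p), always negative
        for th, x in zip(thetas, responses):
            p = 1.0 / (1.0 + math.exp(-(th - b)))
            grad += p - x
            hess -= p * (1.0 - p)
        step = grad / hess
        b -= max(-1.0, min(1.0, step))  # clamp the step for stability
        if abs(step) < 1e-8:
            break
    return b
```

A DIF analysis would then compare difficulties estimated separately for focal and reference groups (matched on ability) and flag items whose estimates diverge beyond a chosen threshold.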
Products and publications
Products: The final products will include a normed CAT version of MOCCA suitable for use in grades 3 through 5 that will take 15 to 20 minutes to administer and MOCCA user materials. The research team will also produce peer-reviewed publications.
Supplemental information
Co-Principal Investigators: Davison, Mark L.; Kennedy, Patrick C.; Weiss, David J.