IES Grant

Title: Multiple-choice Online Cloze Comprehension Assessment (MOCCA): Refining and Validating a Measure of Individual Differences in Reading Comprehension Processes During Reading
Center: NCER Year: 2014
Principal Investigator: Biancarosa, Gina Awardee: University of Oregon
Program: Literacy      [Program Details]
Award Period: 3 years (7/1/2014–6/30/2017) Award Amount: $1,599,776
Type: Measurement Award Number: R305A140185
Description:

Co-Principal Investigators: Carlson, Sarah E.; Seipel, Ben; Davison, Mark L.

Purpose: Numerous children struggle with reading comprehension, and previous research shows that a subset of these children struggle with higher-level reading skills, such as maintaining causal coherence while reading. Causal coherence is created through inferences that require the reader to synthesize why an event occurs based on relevant goals previously identified in the text and to generate missing information from background knowledge. Students who struggle with causal coherence typically fall into one of two types and tend to respond differently to intervention depending on their type. The first type tends to paraphrase what they have already read. The second type tends to make lateral connections, which are inferences or personal associations that are not causally related to the text. However, without an assessment to distinguish between these types of poor comprehenders, teachers may be unable to provide the appropriate intervention for each student. The Multiple-choice Online Cloze Comprehension Assessment (MOCCA) began as a paper-and-pencil assessment designed to diagnose specific types of poor comprehension in third- through fifth-grade students. Each multiple-choice question has a correct answer, an answer that indicates the student has paraphrased, and an answer that indicates a lateral connection. In this project, the researchers refined, expanded, and validated MOCCA, creating multiple forms for each grade level and converting the test to an online platform.

Project Activities: The project team developed an initial pool of 150 items for each grade. Teachers provided content validity feedback on the items. Items were pilot tested in the first year and field tested in two large-scale tests in Years 2 and 3. Additionally, the team began work on moving the administration of MOCCA from paper-and-pencil to a computer-based assessment. Third- through fifth-grade students completed think alouds in each year to provide the researchers with information on how the items were functioning.

Key Outcomes: The main features of the assessment and outcomes of the validation study are as follows:

Structured Abstract

Setting: This study took place in school districts in Oregon and California. Additionally, a nationally representative sample of students from around the U.S. participated in the field trials.

Sample: Participants included 9,259 third- through fifth-grade students across 4 years of the project.

Assessment: The researchers refined and validated the Multiple-choice Online Cloze Comprehension Assessment (MOCCA), an assessment designed to diagnose specific types of poor comprehension in third- through fifth-grade students. MOCCA is computer based and includes three forms per grade. Each item on the assessment is a seven-sentence narrative with the sixth sentence deleted from the narrative. The missing sixth sentence resolves the goal of the story. Students are asked to choose the best sentence from among the choices provided to insert as the sixth sentence. One of the answers will be correct (namely, it is causally coherent with the narrative), one of the answers will be a paraphrase, and one of the answers will be a lateral connection (namely, it is an inference or personal association which is not causally coherent with the text). Each form includes 40 items.
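The item structure described above (a cloze narrative whose answer choices each map to a comprehension process) can be sketched as a simple data structure. This is an illustrative model only; the class name, field names, and process labels are assumptions, not the project's actual schema or scoring implementation.

```python
from dataclasses import dataclass

@dataclass
class MoccaItem:
    """One cloze item: a seven-sentence narrative with the sixth
    sentence removed, plus answer choices keyed by the comprehension
    process each choice reflects (labels are illustrative)."""
    sentences: list   # the narrative with the sixth sentence missing
    choices: dict     # process label -> candidate sixth sentence

def classify_response(item, chosen_text):
    """Return the process label for a student's chosen answer:
    'causal_coherence' (correct), 'paraphrase', or
    'lateral_connection'; None if the text matches no choice."""
    for label, text in item.choices.items():
        if text == chosen_text:
            return label
    return None

# Hypothetical example item (not drawn from MOCCA's actual item bank).
item = MoccaItem(
    sentences=["s1", "s2", "s3", "s4", "s5", "s7"],
    choices={
        "causal_coherence": "He finally found his lost dog.",
        "paraphrase": "He kept looking for his dog.",
        "lateral_connection": "I have a dog at home too.",
    },
)
print(classify_response(item, "He kept looking for his dog."))  # paraphrase
```

Tallying these labels across a form's items is what allows the assessment to separate paraphrasers from lateral connectors rather than reporting only a total correct score.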

Research Design and Methods: In order to create additional forms of the assessment for each grade level, the research team developed an initial pool of 150 items for each grade level. Teachers provided content validity feedback. In Year 1, the MOCCA items were piloted with students in third through fifth grade. A subsample of students completed think alouds to provide information regarding the functioning of the items. In Year 2, following revisions to the items, students participated in the first MOCCA field test, with additional collection of think aloud data from a subsample of students. Following more revisions, a second field test with accompanying think alouds was conducted in Year 3. The researchers also assessed MOCCA's content, diagnostic, criterion, and construct validity and fairness.

Control Condition: Due to the nature of this research design, there was no control condition.

Key Measures: The researchers established criterion validity by correlating the MOCCA with school-administered measures of reading including easyCBM Vocabulary, the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), and the state's standardized reading test (for example, Smarter Balanced or Standardized Testing and Reporting (STAR) in California). They established construct validity by comparing the MOCCA with the Bridge-IT Picture Version.

Data Analytic Strategy: During the refinement stage of the project, the researchers used data from the think alouds to inform revisions of the incorrect answer choices and to identify any additional profiles of poor comprehenders. They analyzed data using two item response theory (IRT) analyses. The first analysis was a multidimensional IRT based on a full decision choice model, which produces three scores for each person: a causal coherence score, a paraphrase score, and a lateral connection score. The second analysis was a unidimensional two-parameter logistic (2PL) model. They also used correlations, receiver operating characteristic (ROC) curve analyses, and logistic regression to establish validity.
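For readers unfamiliar with the 2PL model mentioned above, it expresses the probability that a student answers an item correctly as a function of the student's ability and two item parameters (discrimination and difficulty). The sketch below shows only the standard 2PL response function; the parameter values are illustrative assumptions, not estimates from the MOCCA calibration.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT response function:
    probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative parameters (not actual MOCCA item estimates):
# a student of average ability (theta = 0) facing an item of
# average difficulty (b = 0) succeeds with probability 0.5,
# regardless of discrimination.
print(p_correct(0.0, a=1.2, b=0.0))  # 0.5
```

The multidimensional decision-choice analysis extends this idea by modeling which of the answer types (causally coherent, paraphrase, lateral connection) a student selects, yielding a separate score per process rather than a single correct/incorrect probability.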

Project websites:
https://mocca.uoregon.edu/
https://blogs.uoregon.edu/mocca/

Related IES Projects: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College): Measuring Individual Differences in Reading Comprehension Ability of Struggling College Readers by Text Type (R305A180417)
Multiple-choice Online Causal Comprehension Assessment Refinement: Achieving Better Discrimination via Experimental Item Types and Adaptive Testing (R305A190393)

Products and Publications

Book Chapters

Biancarosa, G. (2019). Measurement of reading comprehension processing and growth. In V. Grover, P. Uccelli, M. Rowe, & E. Lieven (Eds.), Learning through Language: Towards an Educationally Informed Theory of Language Learning (pp. 147–158). Cambridge, U.K.: Cambridge University Press.

Carlson, S. E., Seipel, B., Biancarosa, G., Davison, M. L., & Clinton, V. (2019). Demonstration of an innovative reading comprehension diagnostic tool. In M. Scheffel, J. Broisin, V. Pammer-Schindler, A. Ioannou, & J. Schneider (Eds.), Transforming Learning with Meaningful Technologies (pp. 769–772). Springer Nature Switzerland AG. https://doi.org/10.1007/978-3-030-29736-7_85

Seipel, B., Biancarosa, G., Carlson, S.E., and Davison, M.L. (2018). The Need, Use, and Future of Cognitive Diagnostic Assessments in Classroom Practice. In Handbook of Research on Program Development and Assessment Methodologies in K–20 Education (pp. 1–23). IGI Global.

Journal Publications

Biancarosa, G., Kennedy, P.C., Carlson, S.E., Yoon, H., Seipel, B., Liu, B., and Davison, M.L. (2019). Constructing Subscores That Add Validity: A Case Study of Identifying Students at Risk. Educational and Psychological Measurement, 79(1), 65–84. [Full Text]

Davison, M. L., Biancarosa, G., Carlson, S. E., Seipel, B., and Liu, B. (2018). Preliminary Findings on the Computer-Administered Multiple-Choice Online Causal Comprehension Assessment, A Diagnostic Reading Comprehension Test. Assessment for Effective Intervention, 43(3), 169–181. [Full Text]

Liu, B., Kennedy, P.C., Seipel, B., Carlson, S.E., Biancarosa, G., and Davison, M.L. (2019). Can We Learn From Student Mistakes in a Formative, Reading Comprehension Assessment?. Journal of Educational Measurement, 56(4), 815–835.

Su, S., and Davison, M.L. (2019). Improving the Predictive Validity of Reading Comprehension Using Response Times of Correct Item Responses. Applied Measurement in Education, 32(2), 166–182.

Technical Report

Davison, M. L., Biancarosa, G., Seipel, B., Carlson, S. E., Liu, B., & Kennedy, P. C. (2018). Technical Manual 2018: Multiple-Choice Online Comprehension Assessment (MOCCA) [MOCCA Technical Report MTR-2018-1].
