Measuring the English Language Vocabulary Acquisition of Latinx Bilingual Students (Project MELVA-S)
Previous Award Number: R305A200362
Previous Institution: Southern Methodist University
Co-Principal Investigators: Kamata, Akihito; Larson, Eric; Richards-Tutor, Catherine
Purpose: The purpose of this project is to develop an online formative assessment that measures the science vocabulary knowledge of Latinx bilingual students (LBS) with different levels of English and Spanish language proficiency. Results from the assessments can be used to monitor student progress, help teachers differentiate language and vocabulary instruction, and provide additional science vocabulary supports within a Response-to-Intervention approach.
Project Activities: To build the system, the research team will (a) develop assessment content and items; (b) build the interface, the speech recognition system (SRS), and the automated scoring system (AS); (c) develop a psychometric model that accurately estimates vocabulary item parameters and student vocabulary abilities; and (d) carry out three validation studies.
Products: This project will produce an online formative assessment system – the Measuring the English Language Vocabulary Acquisition of Latinx Bilingual Students (MELVA-S). The team will produce 24 equivalent alternate forms of MELVA-S that teachers can use to assess their students' initial status and growth in vocabulary knowledge, along with a preliminary report that teachers can use to differentiate instruction and provide additional vocabulary and language development support around science topics. The researchers will also produce peer-reviewed publications and will disseminate their findings via conference presentations.
Setting: The assessment will be tested in second- and third-grade classrooms in urban and semi-urban settings in Texas.
Sample: Participating students will include approximately 400 second- and third-grade LBS in the first phase and 1,200 second- and third-grade students in the second phase. Twenty science teachers with experience working with LBS will also participate. Approximately 100 students will take the MELVA-S measure in the final phase to test feasibility and usability.
Assessment: In the MELVA-S assessment, the system will ask students to define a target word and use the word in a sentence that matches a picture prompt. Students will provide responses orally and will receive one score for the definition of the word and one score for the use of the word in a sentence.
Research Design and Methods: To build the system, the research team will (a) develop assessment content and items; (b) build the interface, the speech recognition system (SRS), and the automated scoring system (AS); (c) develop a psychometric model that accurately estimates vocabulary item parameters and student vocabulary abilities; and (d) carry out three validation studies. Words included in the assessment will be selected and categorized following specific criteria and taking into account the Common Core State Standards, the Texas Essential Knowledge and Skills standards, and the Next Generation Science Standards. To assemble equivalent forms, the team will conduct an equating study in Year 3. At each of three time points during the equating study, participating students will be randomly assigned to complete 4 forms, for a total of 12 forms across the three time points.
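The abstract specifies 4 forms per time point and 12 forms per student overall, but not the assignment mechanics. As an illustration only, one plausible scheme randomly partitions each student's 12 forms into three blocks of four, one block per time point; the function name and student IDs below are hypothetical. A minimal sketch of that assumed scheme:

```python
import random

NUM_FORMS = 12       # equivalent alternate forms in the equating study
FORMS_PER_TIME = 4   # forms each student completes at one time point
NUM_TIMES = 3        # time points (4 forms x 3 time points = 12 forms)

def assign_forms(student_ids, seed=0):
    """Randomly partition the 12 forms into three blocks of 4 per student,
    one block per time point (a hypothetical assignment scheme)."""
    rng = random.Random(seed)
    schedule = {}
    for sid in student_ids:
        forms = list(range(1, NUM_FORMS + 1))
        rng.shuffle(forms)
        schedule[sid] = [
            sorted(forms[t * FORMS_PER_TIME:(t + 1) * FORMS_PER_TIME])
            for t in range(NUM_TIMES)
        ]
    return schedule

schedule = assign_forms([f"S{i:04d}" for i in range(1, 6)])
for sid, blocks in schedule.items():
    print(sid, blocks)
```

Under this scheme every student contributes data to all 12 forms across the study, which supports linking the forms in the subsequent equating analysis.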
Key Measures: Alternate forms of the MELVA-S assessment, the Peabody Picture Vocabulary Test, the Bilingual Verbal Ability Test, the SAT-10 Science Subtest, and the easyCBM vocabulary subtest. Student demographic information and English language proficiency levels collected by the districts will be used as moderators.
Data Analytic Strategy: Student scores and response times on items will be modeled with a joint factor model of accuracy and speed, which consists of an ordered-category factor model and a log-normal factor model. This model will be fit to estimate item parameters and vocabulary abilities, conditioned on response time, and will allow the researchers to study the relation between response time and vocabulary ability. Equating will be performed by fitting the proposed joint model to the data from 2,400 students across all 12 forms. Program code for ability parameter estimation under the joint model will be written in R. To develop the AS, transcribed student responses will be coded and analyzed using regression techniques, rule-based classification, and supervised classification. To gather validity evidence (namely, construct, criterion, and predictive validity), the researchers will use Pearson product-moment correlations, regressions, growth models, and growth mixture models.
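The abstract names the model class (an ordered-category factor model paired with a log-normal response-time model) but not its parameterization. As a sketch only, one common instantiation combines a graded response model for the ordered score with van der Linden-style log-normal response times, treating accuracy and speed as conditionally independent given the person's ability and speed factors; all parameter values below are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graded_logpmf(x, theta, a, b):
    """Log-probability of ordered score x in {0..K} under a graded
    response model: P(X >= k) = sigmoid(a * (theta - b[k-1]))."""
    b = np.asarray(b, dtype=float)       # K increasing thresholds
    upper = sigmoid(a * (theta - b))     # P(X >= 1), ..., P(X >= K)
    cum = np.concatenate(([1.0], upper, [0.0]))
    probs = cum[:-1] - cum[1:]           # P(X = 0), ..., P(X = K)
    return np.log(probs[x])

def lognormal_logpdf(t, tau, alpha, beta):
    """Log-density of response time t under a log-normal model:
    log t ~ Normal(beta - tau, 1 / alpha**2), where tau is the
    person's speed factor and (alpha, beta) are item parameters."""
    z = alpha * (np.log(t) - (beta - tau))
    return np.log(alpha) - np.log(t) - 0.5 * np.log(2 * np.pi) - 0.5 * z**2

def joint_loglik(x, t, theta, tau, a, b, alpha, beta):
    """Joint log-likelihood of one score and its response time,
    assuming conditional independence given (theta, tau)."""
    return graded_logpmf(x, theta, a, b) + lognormal_logpdf(t, tau, alpha, beta)

# Illustrative: a score of 1 (on a 0-2 item) given in 8.5 seconds
ll = joint_loglik(x=1, t=8.5, theta=0.3, tau=0.1,
                  a=1.2, b=[-0.5, 0.8], alpha=1.5, beta=2.0)
print(ll)
```

Summing such terms over items yields the person-level log-likelihood that an estimation routine (such as the planned R program) would maximize, or embed in a Bayesian sampler, to recover abilities conditioned on response time.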