
IES Grant

Title: Improving the Accuracy of Academic Vocabulary Assessment for English Language Learners
Center: NCER Year: 2017
Principal Investigator: Francis, David Awardee: University of Houston
Program: Policies, Practices, and Programs to Support English Learners
Award Period: 4 years (08/31/2017 – 08/30/2021) Award Amount: $1,400,000
Type: Measurement Award Number: R305A170151
Description:

Co-Principal Investigators: Kulesz, Paulina; Lawrence, Joshua

Purpose: The researchers conducted multiple item-level analyses to improve understanding of factors that affect the assessment of vocabulary knowledge among English learners (ELs) in unintended ways. Based on those findings, the researchers refined an existing academic vocabulary assessment to be psychometrically and theoretically sound for these students. Results from this study can be used to expand what is known about item characteristics that make items differentially difficult for ELs and to provide a set of recommendations for improving the accuracy and reliability of previously developed vocabulary assessments.

Project Activities: In the first phase of the study, the researchers conducted secondary analyses of item-level data collected in a prior IES-funded randomized efficacy trial of Word Generation (R305A090555). The researchers tested whether English-proficient language minority students (many of whom were formerly ELs) and ELs responded significantly differently to academic vocabulary test items. In the second phase, the researchers refined and pilot-tested new items with data collected from a new sample of students. Finally, in the third phase, the researchers administered, scored, and validated two forms of the new Word Generation assessment with two cohorts of ELs and non-ELs in grades 6 through 8. The two forms contain a total of 66 new items and 11 retained items that were used to link the forms to one another and to the original Word Generation assessment; these 11 items had been found to function comparably for EL and non-EL students.

Key Outcomes: The main findings of this project are as follows:

  • The researchers identified five dimensions underlying the lexical features of general-purpose words as well as domain-general and domain-specific academic words: frequency, complexity, proximity, polysemy, and diversity (Knoph et al., 2023).
  • The researchers found a strong relation between vocabulary and reading comprehension in a group of English-speaking middle school students. Strong readers were more likely to know the meanings of words than struggling readers; words with more meanings were easier for all students on average. In addition, word frequency was more strongly related to item difficulty among better readers, while word complexity was less strongly related to difficulty among better readers (Lawrence et al., 2022).

Structured Abstract

Setting: This study took place in Houston, Texas. The researchers also used archival student data that had been collected in California under a previous IES grant (R305A090555) and publicly available data on lexical features of wordlists.

Sample: Approximately 13,780 middle school students who participated in the earlier IES-funded Word Generation efficacy trial contributed to the existing data set. Approximately 22 percent of the earlier participants were English-only students, 12 percent were initially fluent English-proficient students (who started first grade as proficient), 47 percent were ELs who were later redesignated as proficient, and 18 percent were limited English-proficiency students. The researchers conducted item-level analyses with data collected from these students. For the second phase of the study, the researchers recruited 202 students in grades 6 through 8 to participate in the pilot testing of new items. A total of 1,546 students across grades 6 through 8 participated in phase 3 of the research, which focused on validation and norming of the revised test. Students were recruited in two cohorts (2021–2022 and 2022–2023 school years) from a single school district. About half of the students (n = 742) were classified by the district as English learners.

Measure: The researchers used the Word Generation Academic Vocabulary Assessment developed in the previous IES grant. The second phase of the study involved piloting a set of revised items. The third phase involved collecting data on two newly developed forms of the Word Generation Academic Vocabulary test along with reading and vocabulary data on multiple forms of a widely used standardized test (i.e., the Gates-McGinitie Reading Comprehension Test and associated vocabulary subtest).

Research Design and Methods: The researchers used explanatory item response models to examine differential effects of target word characteristics on responses to the Word Generation Academic Vocabulary Test for English-only and EL students. They also conducted differential item functioning (DIF) and differential distractor functioning (DDF) analyses to identify test items that functioned differently for EL and non-EL students. They then revised the test items based on these results. They administered the final measure to two cohorts of students in four schools (total N = 1,546) to examine whether the revised assessment measured EL students' academic vocabulary knowledge more reliably.
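The abstract does not specify the project's exact DIF procedure, but a common screening statistic for flagging items that function differently across groups is the Mantel-Haenszel common odds ratio, computed from 2×2 (group × correct/incorrect) tables within ability-matched strata. The sketch below is illustrative only: the data are simulated, and the function name, stratification scheme, and parameter values are assumptions, not the project's code.

```python
import numpy as np

def mantel_haenszel_dif(correct, group, strata):
    """Mantel-Haenszel common odds ratio for one item.
    correct: 0/1 item responses; group: 0 = reference, 1 = focal;
    strata: matching variable (e.g., a rest score)."""
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        A = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, correct
        B = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, incorrect
        C = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, correct
        D = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, incorrect
        T = A + B + C + D
        if T == 0:
            continue
        num += A * D / T
        den += B * C / T
    alpha = num / den                      # 1.0 = no DIF
    return alpha, -2.35 * np.log(alpha)    # ETS delta scale

# Simulated demo: an item that is one logit harder for the focal group
rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)              # 0 = non-EL, 1 = EL (hypothetical)
theta = rng.normal(0, 1, n)                # equal ability distributions
p = 1 / (1 + np.exp(-(theta - 1.0 * group)))
correct = rng.random(n) < p
strata = np.clip(np.round(theta), -2, 2)   # coarse ability matching
alpha, delta = mantel_haenszel_dif(correct, group, strata)
print(alpha > 1)                           # item favors the reference group
```

An odds ratio near 1 indicates little DIF; values well above 1 suggest the item favors the reference group even after matching on ability. In operational analyses, examinees are matched on observed rest scores rather than the simulated ability used here.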

Control Condition: The focus of the study was how items function for EL students compared to non-EL students. In development analyses, the researchers were able to differentiate among EL, never EL (English-only students; EO), reclassified EL (reclassified fluent English-proficient students; RFEP), and language minority students who started school as English proficient (initial fluent English-proficient students; IFEP).

Key Measures: The researchers coded existing data from the Word Generation Academic Vocabulary Test to understand what factors make an item hard for ELs in unintended ways. They examined the target words' part of speech, frequency, specificity, polysemy (i.e., different meanings for the same word form), dispersion (i.e., rate of occurrence in the set), word length, and semantic similarity to the key. They also examined other item features, such as semantic complexity and the semantic relationships among keys, distractors, and target words. They refined the test based on the results of these analyses and determined whether the refined items were more reliable for ELs. The researchers generated a total of 66 new items and combined them with 11 items from the original test to create two new forms of the Word Generation Vocabulary Test; the 11 retained items served as linking items that anchor scores from the two forms and allow scores to be placed on a common scale with the original test.
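The abstract describes using 11 anchor items to place the two new forms and the original test on a common scale but does not give the linking method. One standard approach with item response theory difficulty estimates is mean/sigma linking: a linear transformation of one form's parameter scale onto the other's, estimated from the anchor items. The difficulty values and variable names below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical Rasch difficulty estimates for 11 anchor items,
# calibrated separately on Form A and Form B (illustrative values).
b_form_a = np.array([-1.2, -0.8, -0.5, -0.2, 0.0, 0.1,
                     0.4, 0.6, 0.9, 1.1, 1.5])
# Form B's calibration run put the same items on a shifted, stretched scale.
b_form_b = b_form_a * 1.1 + 0.3 + np.random.default_rng(1).normal(0, 0.05, 11)

# Mean/sigma linking: choose A, B so A * b_form_b + B matches Form A's scale.
A = b_form_a.std() / b_form_b.std()
B = b_form_a.mean() - A * b_form_b.mean()
b_form_b_linked = A * b_form_b + B
print(A, B)
```

After linking, the anchor items' difficulties have the same mean and spread on both forms, so item and person parameters from either form can be reported on a single common scale.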

Data Analytic Strategy: The research team conducted explanatory item response analyses, DIF analyses, and DDF analyses to examine bias in the Word Generation Academic Vocabulary Test when assessing EL students' vocabulary knowledge. They also used item mixture modeling and replication analyses. Analyses of the revised assessment followed the same analytic strategy as the first phase.

Related IES Projects: Word Generation: An Efficacy Trial (R305A090555)

Products and Publications

ERIC Citations: Find available citations in ERIC for this award here.

Publicly Available Data: Deidentified data are available from the PI (Dr. David Francis, dfrancis@uh.edu) upon request. Data on word characteristics from word lists and test items from the Word Generation Vocabulary Test are available on the project website.

Project Website: https://academicvocab.times.uh.edu/


Select Publications:

Knoph, R.E., Lawrence, J.F., & Francis, D.J. (2023). The Dimensionality of Lexical Features in General, Academic, and Disciplinary Vocabulary. Scientific Studies of Reading. https://doi.org/10.1080/10888438.2023.2241939

Lawrence, J.F., Knoph, R.E., McIlraith, A., Kulesz, P.A., & Francis, D.J. (2022). Reading Comprehension and Academic Vocabulary: Exploring Relations of Item Features and Reading Proficiency. Reading Research Quarterly, 57, 669–690. https://doi.org/10.1002/rrq.434

