IES Grant

Title: Assessing the Comprehension of Language in 2-Year-Olds Using Touch-Screen Technology
Center: NCSER Year: 2016
Principal Investigator: Golinkoff, Roberta Awardee: University of Delaware
Program: Early Intervention and Early Learning      [Program Details]
Award Period: 3 years (8/1/2016-7/31/2019) Award Amount: $1,599,998
Type: Measurement Award Number: R324A160241
Description:

Co-Principal Investigators: Hirsh-Pasek, Kathy; de Villiers, Jill; Iglesias, Aquiles

Purpose: The purpose of this project was to develop a reliable and valid computer-based language assessment for children ages 24–36 months. Past research demonstrated that early language skills are predictive of later language and academic skills, so early identification of children with language delays creates an opportunity to improve their later linguistic and academic outcomes. The project was modeled after a previously developed assessment, the Quick Interactive Language Screener (QUILS), designed for children 3 to 6 years of age. Like its predecessor, the BabyQUILS assesses children's language comprehension and is administered on a touch-screen tablet. The assessment yields individual and group profiles in three areas of language—vocabulary, grammar, and process (strategies children use to learn language)—to allow users to identify children who may be at risk for developing language impairment so that appropriate intervention can begin early.

Project Activities: In the first year, item development began by generating more than twice the number of items needed for the final version. Pilot data were collected through laboratory and field testing to help narrow the item pool. In the second year, the first item tryout field tested the items with the goal of further reducing the assessment to its intended final length of 40 items. In the third year, a second item tryout was conducted, during which the research team generated national norms for the test. The project team also examined the measure's test-retest reliability, convergent validity, and predictive validity.

Key Outcomes: The results of the research project, as reported by the principal investigator, are as follows:

  • The BabyQUILS successfully measured 2-year-olds' language ability. Over 90% of children tested were able to complete the test. Children received an overall score as well as a score on three unique language components: vocabulary (the words children already know), syntax (the grammatical structures they could understand in sentences), and process (how children learn new language items).
  • The BabyQUILS proved to have excellent validity: children's performance on the BabyQUILS was strongly related to their results on other established language assessments.
  • Scores on the BabyQUILS were found to be stable over time in an examination of test-retest reliability.
  • Because the screener requires no special training and is easy to administer, it could be widely implemented in a variety of settings, including homes, childcare centers, and pediatricians' offices. This would allow early identification of language issues that might otherwise go unnoticed until a child enters school, potentially reducing the need for more intensive intervention later.

Structured Abstract

Setting: The research took place in laboratories in the first year and in education settings (childcare centers, homes, Early Head Start) in the remaining years in Delaware, Maryland, Pennsylvania, Nebraska, Texas, Tennessee, New York, California, and Massachusetts.

Sample: At least 880 children ages 24–36 months participated. The sample was balanced for gender, and approximately half of the children came from under-resourced environments. Children of all races and ethnic groups were included if they spoke English.

Measure: The BabyQUILS language screener is modeled after the QUILS, which was developed for preschool children with previous IES funding. This project extended the screener to children ages 24–36 months, when language is just emerging. This computer-based language assessment uses touch-screen technology to yield an overall score that can be compared to national norms as well as individual and group profiles in three areas of language development. Two modules measure language products (what children know) in the areas of vocabulary and grammar, and the third module measures process (strategies children use to learn new language, in both the vocabulary and grammar domains). It is designed for ease of use by teachers, paraprofessionals, and other service providers, and it has low response demands for the children completing it. The final assessment has 40 items and takes under 20 minutes to administer to a child.

Research Design and Methods: In the first year, the research team generated 115 items considered appropriate for 2-year-olds, based on research on language learning; none of the items from the QUILS were appropriate for this age group. These items were pilot tested in a laboratory setting for content validity, with each child receiving only a portion of the initial lengthy assessment. The 96 items that survived pilot testing, drawn from all three modules, were then field tested and analyzed with Rasch and differential item functioning (DIF) models, with the goal of reducing the item pool to 60. The team then implemented the item tryout phase in the field, followed by similar item analyses, to reduce the pool to the best 40 items. For the final assessment, different subsamples of children took additional assessments to determine test-retest reliability and concurrent validity. Finally, the predictive validity study examined whether the BabyQUILS predicted later QUILS scores better than another commonly used measure.
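The abstract does not say how the Rasch models were implemented. As a rough, hypothetical illustration of the item-calibration step described above, the Python sketch below fits a simple Rasch model to simulated right/wrong item responses by joint maximum likelihood; the sample size, item count, and variable names are invented for the example, and the code is a minimal sketch of the general technique rather than the project team's actual analysis.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    # Simulated data standing in for a real item tryout (sizes are illustrative only).
    rng = np.random.default_rng(0)
    n_children, n_items = 150, 12
    true_ability = rng.normal(0.0, 1.0, n_children)
    true_difficulty = np.linspace(-1.5, 1.5, n_items)
    responses = rng.binomial(1, expit(true_ability[:, None] - true_difficulty[None, :]))

    def neg_log_lik(params):
        """Joint negative log-likelihood of the Rasch model."""
        theta = params[:n_children]          # child ability parameters
        b = params[n_children:]
        b = b - b.mean()                     # center difficulties for identifiability
        logits = theta[:, None] - b[None, :]
        # Bernoulli log-likelihood with a logit link: y * logit - log(1 + exp(logit))
        return -(responses * logits - np.logaddexp(0.0, logits)).sum()

    fit = minimize(neg_log_lik, np.zeros(n_children + n_items), method="L-BFGS-B")
    difficulty_hat = fit.x[n_children:] - fit.x[n_children:].mean()

    # Items with extreme estimated difficulty (far too easy or far too hard for the
    # age group) are candidates for removal in the next round of item reduction.
    print(np.round(difficulty_hat, 2))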

Key Measures: The Peabody Picture Vocabulary Test (PPVT) and the MacArthur-Bates Communicative Development Inventory (MCDI) were used to examine the assessment's concurrent validity, and the MCDI was also used as the comparison measure for examining predictive validity. The QUILS for preschool children served as the outcome measure in the predictive validity study.

Data Analytic Strategy: For the item analyses each year, the researchers used Rasch analyses and DIF analyses by gender and socioeconomic status (SES) to assess item bias. Test-retest reliability, concurrent validity, and predictive validity were examined using correlational analyses.
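As with the Rasch sketch above, the following Python fragment is only an illustration of the kinds of analyses named here, not the team's actual procedure: a logistic-regression screen for uniform DIF (a common approach, though the abstract does not specify which DIF method was used) and a Pearson correlation for test-retest reliability. The data, the group indicator, and the score variables are all simulated placeholders.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n_children, n_items = 300, 20

    # Simulated 0/1 item responses, with uniform DIF deliberately built into item 0.
    group = rng.binomial(1, 0.5, n_children)        # e.g., a gender or SES indicator
    probs = np.full((n_children, n_items), 0.6)
    probs[:, 0] = np.where(group == 1, 0.75, 0.45)
    responses = rng.binomial(1, probs)
    total_score = responses.sum(axis=1)

    # Uniform-DIF screen: model each item's correctness from the total score plus the
    # group indicator; a significant group coefficient suggests the item behaves
    # differently across groups at the same overall ability level.
    flagged = []
    for j in range(n_items):
        X = sm.add_constant(np.column_stack([total_score, group]))
        result = sm.Logit(responses[:, j], X).fit(disp=0)
        if result.pvalues[2] < 0.05:                # index 2 = group coefficient
            flagged.append(j)
    print("Items flagged for possible DIF:", flagged)

    # Test-retest reliability: correlate overall scores from two administrations.
    score_time1 = rng.normal(50.0, 10.0, n_children)
    score_time2 = score_time1 + rng.normal(0.0, 5.0, n_children)   # simulated retest
    r, p_value = pearsonr(score_time1, score_time2)
    print(f"Test-retest correlation: r = {r:.2f} (p = {p_value:.3g})")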

Related IES Project: Using Developmental Science to Create a Computerized Preschool Language Assessment (R305A110284)

Products

ERIC Citations: Find available citations in ERIC for this award here.

Select Publications

Journal articles

Golinkoff, R. M., Hoff, E., Rowe, M. L., Tamis-LeMonda, C. S., & Hirsh-Pasek, K. (2019). Language matters: Denying the existence of the 30-million-word gap has serious consequences. Child Development, 90(3), 985–992.

Levine, D., Pace, A., Luo, R., Hirsh-Pasek, K., Golinkoff, R. M., de Villiers, J., Iglesias, A., & Wilson, M. S. (2020). Evaluating socioeconomic gaps in preschoolers' vocabulary, syntax, and language process skills with the Quick Interactive Language Screener (QUILS). Early Childhood Research Quarterly, 50, 114–128.

Luo, R., Pace, A., Levine, D., Iglesias, A., de Villiers, J., Golinkoff, R. M., Wilson, M. S., & Hirsh-Pasek, K. (2021). Home literacy environment and existing knowledge mediate the link between socioeconomic status and language learning skills in dual language learners. Early Childhood Research Quarterly, 55, 1–14.

Masek, L. R., Paterson, S. J., Golinkoff, R. M., Bakeman, R., Adamson, L. B., Owen, M. T., Pace, A., & Hirsh-Pasek, K. (2021). Beyond talk: Contributions of quantity and quality of communication to language success across socioeconomic strata. Infancy, 26, 123–147.

Pace, A., Luo, R., Levine, D., Iglesias, A., de Villiers, J., Golinkoff, R. M., Wilson, M. S., & Hirsh-Pasek, K. (2020). Within and across language predictors of word learning processes in dual language learners. Child Development, 92, 35–53.

Valleau, M. J., Konishi, H., Golinkoff, R. M., Hirsh-Pasek, K., & Arunachalam, S. (2018). An eye-tracking study of receptive verb knowledge. Journal of Speech, Language, and Hearing Research, 1–17.

