Early Learning Programs and Policies

Development of the School Readiness Curriculum Based Measurement System

Year: 2011
Name of Institution:
University of Texas Health Science Center at Houston
Goal: Measurement
Principal Investigator:
Anthony, Jason
Award Amount: $1,701,261
Award Period: 4 years
Award Number: R305A110549


Co-Principal Investigators: Michael Assel, Susan Landry, Emily Solari, Paul Swank, and Jeffery Williams

Purpose: A substantial number of children arrive at elementary school without the skills essential for academic success. Early identification and regular monitoring of children's learning in critical school readiness domains are important features of programs that help close the achievement gap. Unfortunately, many existing language and literacy measures designed for use with preschoolers have limited reliability and validity, as well as floor and ceiling effects. Most existing measures do not cover a range of language and literacy skills and do not allow for comparison of competencies in English and Spanish. Practitioners need access to assessment tools that can be used with English-speaking preschoolers and with the growing population of young dual language learners, the majority of whom live in households where the primary language is Spanish. Thus, there is a clear and pressing need for reliable, valid, and sensitive assessment tools that teachers can use to efficiently identify children's strengths and weaknesses in English and Spanish, monitor students' learning, and inform instruction. This project will address that need by developing a curriculum-based school readiness measure of language and literacy skills that can be used with English- and Spanish-speaking 3- to 5-year-old children. The researchers will develop a brief form for use by practitioners and a long form for use by researchers.

Project Activities: Over the course of 4 years, researchers will develop research-based, reliable, valid, and sensitive progress monitoring tools for monolingual English-speaking children and Spanish-speaking English learners (ELs) aged 3 to 5 years. The research will proceed in four phases: constructing an item pool (Phase 1), piloting and refining assessment methods (Phase 2), scaling and evaluating test items (Phase 3), and conducting a reliability and validity study (Phase 4). In Year 1, the research team will develop test items for each of the four subscales: Letter Identification, Letter Sounds, Vocabulary, and Phonological Awareness. In Years 2 and 3 of the project, the research team will recruit two cohorts of children, administer the test items, and conduct item-level analyses. In Year 4, the research team will conduct a validity study of the fully developed set of tools. The researchers will also produce two progress monitoring kits, produce a final version of the assessment materials, develop an in-service test administration protocol, develop a user's manual for the assessments, and compile item banks for all scales of the School Readiness Curriculum Based Measurement System (SR-CBMS).

Products: Products from this project include a fully developed and validated set of tools, the School Readiness Curriculum Based Measurement System (SR-CBMS), which will assess children's skill in vocabulary, letter identification, letter sounds, and phonological awareness. There will be a total of four brief, easy-to-administer tests (with two parallel Spanish versions) and two expanded versions (one English and one Spanish). Researchers will also produce peer-reviewed publications.

Structured Abstract

Setting: The preliminary development work will take place in Houston, Texas, and the subsequent testing and validation work will take place across the state of Texas.

Population: An ethnically, socioeconomically, and linguistically diverse (English- and Spanish-speaking) sample of approximately 4,000 children ages 3 to 5 will participate.

Intervention: The School Readiness Curriculum Based Measurement System (SR-CBMS) will assess skills in letter identification, letter sounds, phonological awareness, and vocabulary. SR-CBMS will have both an English and a Spanish version, with approximately 500 English items and 450 Spanish items. The two versions will allow for monolingual and bilingual scoring and for comparing children's skills across languages. Letter identification assessments consist of letter discrimination (e.g., "Point to A" when shown K T A) and letter name identification (e.g., "What letter is this?"). Letter sounds assessments consist of letter sound discrimination (e.g., "Point to the letter that makes /s/") and letter sound identification (children will be asked to provide the sounds associated with individual letters or letter combinations, e.g., 'h' and 'sh'). Phonological awareness assessments consist of blending recognition (e.g., "Point to the picture of the ti…ger"), blending (e.g., "Say le…mon together"), and rhyme matching (e.g., "Here is a [point] dog, [point] wall, [point] cup. Point to the picture that sounds like ball"). For each of these skill groups, there will be items intended to assess low, moderate, and high levels of knowledge. To assess vocabulary, children will be shown colored line drawings and asked to describe them (e.g., "What is this?" or "What is he doing?"). The items for the vocabulary assessment can be adjusted so that a single picture-naming task provides good sensitivity across the full range of expressive vocabulary abilities. There will be two sets of items: one in English and one in Spanish.

Research Design and Methods: The researchers will develop test items, scale the items, evaluate the psychometric properties of the brief and expanded versions of the measure, and prepare the test materials for distribution to intended users of the proposed measurement tool.

During Phase 1, the research team will generate items for the four skill areas (letter identification, letter sounds, phonological awareness, and vocabulary) in both English and Spanish. Expert panels will review these items. In Phase 2, researchers will conduct a pilot study to examine the feasibility of initial testing procedures, verbal and nonverbal directions, and physical materials. During Phase 3, data will be collected over two years, with approximately 1,600 children participating per year. In the first year of Phase 3, children will be administered partially overlapping sets of items, so that every item is administered to the same number of children but no item is administered to all children. In the second year of Phase 3, certain common items will be included in every item set, so that those items are administered to all respondents. Together, these two designs give the researchers equal numbers of responses to each item while optimizing the reliability of the item parameters by linking the item sets through items answered by all children. In Phase 4, the SR-CBMS will be validated against commonly used assessments to determine whether it is reliable and can detect growth.
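As an illustration only (this is not the project's actual design or code; the item and form counts are made up), the first-year overlapping-sets scheme can be sketched as a circular chain of forms, where each form shares items with the next so that every item reaches the same number of children while no item appears on every form:

```python
# Hypothetical sketch of a partially overlapping item-set design:
# forms arranged in a circular chain so every item is administered
# equally often and no item is on all forms.
from collections import Counter

def chained_forms(n_items: int, n_forms: int, form_len: int):
    """Build forms where consecutive forms share items (circular overlap)."""
    step = n_items // n_forms  # offset between consecutive forms
    return [[(start + k) % n_items for k in range(form_len)]
            for start in (f * step for f in range(n_forms))]

forms = chained_forms(n_items=12, n_forms=4, form_len=6)
counts = Counter(item for form in forms for item in form)

# Every item appears on the same number of forms ...
assert len(set(counts.values())) == 1
# ... and no item appears on every form.
assert max(counts.values()) < len(forms)
```

With 12 items split across 4 forms of 6, each item lands on exactly two forms and adjacent forms share three items; those shared items are what allow the item sets to be placed on a common scale.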

Control Condition: There is no control condition.

Key Measures: To design and revise the items, researchers will administer sample test items to three different cohorts of children. To validate the assessments, researchers will compare their items to those of other language and literacy assessment tools. In particular, the tests that the research team will use include the Indicators of Individual Growth and Development for Infants and Toddlers, the Circle Phonological Awareness Language and Literacy System, the Test of Preschool Early Literacy and its Spanish version, the Preschool Comprehensive Test of Phonological and Print Processing, and the Woodcock-Muñoz Language Survey.

Data Analytic Strategy: Item response theory will guide analysis of the items. To scale and evaluate the responses, researchers will use traditional item analysis, examining the item means, variances, and corrected item-total score correlations. Researchers will also perform confirmatory factor analyses and will examine administration errors (assessment within examiner) using analysis of variance.