
REL Midwest Ask A REL Response

Literacy

August 2020

Question:

What research is available on the key components of measuring early literacy development in students from prekindergarten through grade 3?

Response:

Following an established Regional Educational Laboratory (REL) Midwest protocol, we conducted a search for research reports, descriptive studies, and policy overviews on the key components of measuring early literacy development in students from prekindergarten through grade 3. In addition, we searched for resources on research-based early literacy assessments for these components. For details on the databases and sources, keywords, and selection criteria used to create this response, please see the Methods section at the end of this memo.

Below, we share a sampling of the publicly accessible resources on this topic. References are listed in alphabetical order, not necessarily in order of relevance. The search conducted is not comprehensive; other relevant references and resources may exist. For each reference, we provide an abstract, excerpt, or summary written by the study’s author or publisher. We have not evaluated the quality of these references, but provide them for your information only.

Research References

Coyne, M. D., & Harn, B. A. (2006). Promoting beginning reading success through meaningful assessment of early literacy skills. Psychology in the Schools, 43(1), 33–43. Retrieved from https://eric.ed.gov/?id=EJ761857

From the ERIC abstract: “Recent scientific advances in early literacy assessment have provided schools with access to critical information about students’ foundational beginning reading skills. In this article, we describe how assessment of early literacy skills can help school psychologists promote beginning reading success for all children. First, we identify key skills in early literacy and describe a comprehensive assessment system, ‘Dynamic Indicators of Basic Early Literacy Skills’ (DIBELS), developed to assess essential beginning reading skills. Next, we present a conceptual framework for thinking about early literacy assessment across four distinct purposes: (a) screening, (b) diagnosis, (c) progress monitoring, and (d) student outcomes. Finally, we provide school-based examples that illustrate how DIBELS can be used to assess students’ early literacy skills across each of these four purposes and facilitate informed and ongoing instructional decision making.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Cummings, K. D., Kaminski, R. A., Good, R. H., III, & O’Neil, M. (2011). Assessing phonemic awareness in preschool and kindergarten: Development and initial validation of first sound fluency. Assessment for Effective Intervention, 36(2), 94–106. Retrieved from https://eric.ed.gov/?id=EJ912783

From the ERIC abstract: “This article presents initial findings from a study examining ‘First Sound Fluency’ (FSF), which is a brief measure of early phonemic awareness (PA) skills. Students in prekindergarten and kindergarten (preK and K) were assessed three times (fall, winter, and spring) over one school year, which resulted in multiple reliability and validity coefficients. In addition, a subset of students in both preK and K was assessed monthly between benchmark periods using alternate forms of the FSF measure to estimate delayed alternate-form reliability. The FSF measure displayed adequate reliability and validity for decision making in early literacy for students in both grades. Implications of these findings are discussed.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.
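
To make the reliability coefficients described above concrete, the following minimal sketch (in Python; not from the study) estimates delayed alternate-form reliability as the Pearson correlation between the same students' scores on two alternate forms of a measure. All scores are hypothetical.

    # Minimal sketch (hypothetical data): delayed alternate-form
    # reliability estimated as the Pearson correlation between scores
    # on two alternate forms of the same measure.
    from scipy.stats import pearsonr

    # Hypothetical first sound fluency scores for the same eight students
    form_a = [12, 18, 7, 22, 15, 9, 20, 14]
    form_b = [10, 17, 9, 21, 13, 8, 19, 15]

    r, _p = pearsonr(form_a, form_b)
    print(f"alternate-form reliability estimate: r = {r:.2f}")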

Gischlar, K. L., & Vesay, J. P. (2018). Literacy curricula and assessment: A survey of early childhood educators in two states. Reading Improvement, 55(3), 106–117. Retrieved from https://eric.ed.gov/?id=EJ1191128

From the ERIC abstract: “Research has consistently demonstrated the importance of early literacy instruction, as these skills are the developmental precursors to conventional reading. In this study, 215 early childhood educators in two states responded to a survey regarding early literacy curricula and assessment. Results indicated that most teachers used either a commercially available general or literacy specific curriculum, despite the fact that most of these programs do not have adequate research support to document effectiveness. Furthermore, the majority of teachers reported use of teacher-made assessments to monitor student progress in these curricula. Generally, teacher-made assessments have been proven psychometrically unsound, which may indicate that they are not accurate indicators of student progress. Given the importance of early literacy skill acquisition, future research should be conducted to explore the efficacy of different commercially available curricula and to identify the most valid means for monitoring student progress.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Hasbrouck, J., & Tindal, G. A. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. Reading Teacher, 59(7), 636–644. Retrieved from https://eric.ed.gov/?id=EJ738041

From the ERIC abstract: “In 1992, the authors collaborated to develop a set of norms for oral reading fluency for grades 2–5. Since then, interest in and awareness of fluency has greatly increased, and Hasbrouck and Tindal have collaborated further to compile an updated and expanded set of norms for grades 1–8. This article discusses the application of these norms to three important assessment activities related to improving students’ reading achievement: (1) Screening students for possible reading problems; (2) Diagnosing deficits in students’ fluency; and (3) Monitoring the progress of students receiving supplementary instruction or intensive intervention in reading. An overview of the history and purpose for developing measures of oral reading fluency is also presented.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.
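
As a sketch of how fluency norms of this kind are used for screening, the Python snippet below compares a student's words-correct-per-minute (WCPM) score with tabled percentile cut points. The cut points shown are placeholders for illustration, not the published Hasbrouck and Tindal norms.

    # Minimal sketch: screening a WCPM score against norm percentiles.
    # The cut points below are PLACEHOLDERS, not the published norms.
    HYPOTHETICAL_GRADE2_FALL_NORMS = {90: 110, 75: 85, 50: 60, 25: 35, 10: 20}

    def percentile_band(wcpm, norms):
        """Return the highest tabled percentile at or below the student's WCPM."""
        for pct in sorted(norms, reverse=True):
            if wcpm >= norms[pct]:
                return pct
        return 0  # below the lowest tabled percentile

    print(percentile_band(48, HYPOTHETICAL_GRADE2_FALL_NORMS))  # -> 25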

Hudson, R. F., Lane, H. B., & Pullen, P. C. (2005). Reading fluency assessment and instruction: What, why, and how? Reading Teacher, 58(8), 702–714. Retrieved from https://eric.ed.gov/?id=EJ684440

From the ERIC abstract: “This article explains the elements of reading fluency and ways to assess and teach them. Fluent reading has three elements: accurate reading of connected text, at a conversational rate with appropriate prosody. Word reading accuracy refers to the ability to recognize or decode words correctly. Reading rate refers to both word‐level automaticity and speed in reading text. Prosodic features are variations in pitch, stress patterns, and duration that contribute to expressive reading of a text. To assess reading fluency, including all its aspects, teachers listen to students read aloud. Students’ accuracy can be measured by listening to oral reading and counting the number of errors per 100 words or a running record. Measuring reading rate includes both word-reading automaticity and speed in reading connected text using tests of sight-word knowledge and timed readings. A student’s reading prosody can be measured using a checklist while listening to the student. To provide instruction in rate and accuracy, variations on the repeated readings technique are useful. To develop prosody, readers can listen to fluent models and engage in activities focused on expression and meaning. Opportunities to develop all areas of reading fluency are important for all readers, but especially for those who struggle.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.
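
The two countable fluency elements this abstract describes, accuracy and rate, reduce to simple arithmetic. The sketch below (illustrative only, with hypothetical numbers) computes percent accuracy and words correct per minute from a timed oral reading.

    # Minimal sketch: computing the two quantitative fluency elements.
    def accuracy_pct(words_read, errors):
        """Percent of words read correctly in connected text."""
        return 100.0 * (words_read - errors) / words_read

    def wcpm(words_read, errors, seconds):
        """Words correct per minute from a timed reading."""
        return (words_read - errors) * 60.0 / seconds

    # Hypothetical one-minute reading: 112 words attempted, 7 errors
    print(f"accuracy = {accuracy_pct(112, 7):.1f}%")  # 93.8%
    print(f"rate = {wcpm(112, 7, 60):.0f} WCPM")      # 105 WCPM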

Kaminski, R. A., Abbott, M., Bravo Aguayo, K., Latimer, R., & Good, R. H., III. (2014). The preschool early literacy indicators: Validity and benchmark goals. Topics in Early Childhood Special Education, 34(2), 71–82. Retrieved from https://eric.ed.gov/?id=EJ1031542

From the ERIC abstract: “Assessment is at the center of a decision-making model within a Response to Intervention (RTI) framework. Assessments that can be used for universal screening and progress monitoring in early childhood RTI models are needed that are both psychometrically sound and appropriate to meet developmental needs of young children. The Preschool Early Literacy Indicators (PELI), an assessment tool developed for screening and for progress monitoring, was designed to incorporate psychometrically sound assessment practices within an authentic assessment format. The current study provides data on concurrent and predictive validity of the PELI as well as analyses leading to the development of preliminary benchmark goals on the PELI. The PELI demonstrates significant differences in performance by age and growth in early literacy and language skills across the preschool years. Correlations between the PELI and criterion measures of similar skills are moderate to strong and predictive probabilities with respect to outcome measures are moderate to strong.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Lonigan, C. J. (2006). Development, assessment, and promotion of preliteracy skills. Early Education and Development, 17(1), 91–114. Retrieved from https://eric.ed.gov/?id=EJ757511

From the ERIC abstract: “A large body of research evidence highlights the required conditions for children to become skilled readers. Within the past decade, research also has uncovered the fact that the origins of skilled reading begin to develop even before children start school. The intent of this article is to provide a brief summary of what is known about the development of skilled reading in early elementary grades, to highlight the key findings concerning the developmental precursors to the successful acquisition of skilled reading, and to review recent advances in tools that can be used by early childhood professionals to identify children who may be at risk for reading difficulties before these children experience the negative consequences of reading failure. Use of these tools can provide the means for teachers and other early childhood professionals to provide the focused experiences and activities that will help children succeed in becoming skilled readers.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Marston, D., Pickart, M., Reschly, A., Heistad, D., Muyskens, P., & Tindal, G. (2007). Early literacy measures for improving student reading achievement: Translating research into practice. Exceptionality, 15(2), 97–117. Retrieved from https://eric.ed.gov/?id=EJ772239

From the ERIC abstract: “The importance of early literacy instruction and its role in later reading proficiency is well established; however, measures and procedures to screen and monitor proficiency in the area of early literacy are less well researched. The purpose of this study was to (a) examine the technical adequacy and validity of early curriculum-based literacy measures, Letter–Sound Correspondence, Onset Phoneme Identification, and Phoneme Segmentation, developed for use within the problem-solving model in the Minneapolis Public Schools and (b) describe the district-wide implementation of these measures. In general, these measures were found to have adequate reliability and validity, have moderate to moderately high correlations with criterion measures (oral reading, report cards), and be sensitive to growth across the school year. A case study of how these measures are used for screening and progress monitoring to improve reading achievement within 1 school is included. Limitations and future directions are also presented.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

McAlenney, A. L., & Coyne, M. D. (2011). Identifying at-risk students for early reading intervention: Challenges and possible solutions. Reading & Writing Quarterly, 27(4), 306–323. Retrieved from https://eric.ed.gov/?id=EJ946788

From the ERIC abstract: “Accurate identification of at-risk kindergarten and 1st-grade students through early reading screening is an essential element of responsiveness to intervention models of reading instruction. The authors consider predictive validity and classification accuracy of early reading screening assessments with attention to sensitivity and specificity. They review screening strategies of previous kindergarten and 1st-grade intervention studies. They present practical, intervention-based solutions to low classification accuracy, including strategies that may reduce false-positive risk classifications.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.
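
For readers unfamiliar with the classification-accuracy indices named in this abstract, the following sketch (hypothetical counts, not study data) computes sensitivity, specificity, and overall accuracy from screening outcomes.

    # Minimal sketch: classification-accuracy indices for a screener.
    def classification_indices(tp, fn, tn, fp):
        """Sensitivity, specificity, and overall accuracy from outcome counts."""
        sensitivity = tp / (tp + fn)  # at-risk students correctly flagged
        specificity = tn / (tn + fp)  # not-at-risk students correctly passed
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    # Hypothetical kindergarten screening outcomes
    sens, spec, acc = classification_indices(tp=42, fn=8, tn=120, fp=30)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")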

McBride, J. R., Ysseldyke, J., Milone, M., & Stickney, E. (2010). Technical adequacy and cost benefit of four measures of early literacy. Canadian Journal of School Psychology, 25(2), 189–204. Retrieved from https://eric.ed.gov/?id=EJ883998

From the ERIC abstract: “Technical adequacy and information/cost return were examined for four early reading measures: the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), STAR Early Literacy (SEL), Group Reading Assessment and Diagnostic Evaluation (GRADE), and the Texas Primary Reading Inventory (TPRI). All four assessments were administered to the same students in each of Grades K through 2 over a 5-week period; the samples included 200 students per grade from 7 states. Both SEL and DIBELS were administered twice to establish their retest reliability in each grade. We focused on the convergent validity of each assessment for measuring five critical components of reading development identified by the U.S. National Research Panel: Phonemic awareness, phonics, vocabulary, comprehension, and fluency. DIBELS and TPRI both are asserted to assess all five of these components; GRADE and STAR Early Literacy explicitly measure all except fluency. For all components, correlations among relevant subtests were high and comparable. The pattern of intercorrelations of nonfluency measures with fluency suggests the tests of fluency, vocabulary, comprehension, and word reading are measuring the same underlying construct. A separate cost-benefit study was conducted and showed that STAR Early Literacy was the most cost-effective measure among those studied. In terms of amount of time per unit of test administration or teachers’ time, CAT (computerized adaptive testing) in general, and STAR Early Literacy in particular, is an attractive option for early reading assessment.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.
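
The convergent-validity analysis this abstract describes rests on intercorrelations among subtest scores. The sketch below (hypothetical scores, not study data) shows one way to compute such an intercorrelation matrix with pandas.

    # Minimal sketch: pairwise Pearson intercorrelations among subtests.
    import pandas as pd

    # Hypothetical subtest scores for the same six students
    scores = pd.DataFrame({
        "phonemic_awareness": [14, 9, 18, 11, 16, 7],
        "phonics": [22, 12, 25, 15, 24, 10],
        "fluency_wcpm": [48, 20, 61, 33, 55, 18],
    })

    print(scores.corr().round(2))  # correlation matrix of the subtests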

Petersen, D. B., Allen, M. M., & Spencer, T. D. (2016). Predicting reading difficulty in first grade using dynamic assessment of decoding in early kindergarten: A large-scale longitudinal study. Journal of Learning Disabilities, 49(2), 200–215. Retrieved from https://eric.ed.gov/?id=EJ1090703

From the ERIC abstract: “The purpose of this study was to examine and compare the classification accuracy of early static prereading measures and early dynamic assessment reading measures administered to 600 kindergarten students. At the beginning of kindergarten, all of the participants were administered two commonly used static prereading measures. The participants were then administered either a dynamic assessment featuring an onset-rime decoding strategy or a dynamic assessment featuring a sound-by-sound strategy. At the end of first grade, those same participants’ reading ability was assessed using multiple reading measures. Results indicated that the dynamic assessments yielded significantly higher classification accuracy over the static measures, but that the classification accuracy of the two dynamic assessments did not differ significantly. Sensitivity for the static measures was less than 80%, and specificity ranged from 33% to 51%. The sensitivity and specificity for the dynamic assessments was greater than 80% for all children, with the exception of specificity for the Hispanic children, which was at or greater than 70%. Results also indicated that the combination of static and dynamic measures did not improve the classification accuracy over the dynamic assessments alone. Dynamic assessment appears to be a promising approach to classifying young children at risk for future reading difficulty.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Piasta, S. B., Farley, K. S., Phillips, B. M., Anthony, J. L., & Bowles, R. P. (2018). Assessment of young children’s letter-sound knowledge: Initial validity evidence for letter-sound short forms. Assessment for Effective Intervention, 43(4), 249–255. Retrieved from https://eric.ed.gov/?id=EJ1188196

From the ERIC abstract: “The Letter-Sound Short Forms (LSSFs) were designed to meet criteria for effective progress monitoring tools by exhibiting strong psychometrics, offering multiple equivalent forms, and being brief and easy to administer and score. The present study expands available psychometric information for the LSSFs by providing an initial examination of their validity in assessing young children’s emerging letter-sound knowledge. In a sample of 998 preschool-aged children, the LSSFs were sensitive to change over time, showed strong concurrent validity with established letter-sound knowledge and related emergent literacy measures, and demonstrated predictive validity with emergent literacy measures. The LSSFs also predicted kindergarten readiness scores available for a subsample of children. These findings have implications for using the LSSFs to monitor children’s alphabet knowledge acquisition and to support differentiated early alphabet instruction.”

Rabinowitz, S., Wong, J., & Filby, N. (2002). Understanding young readers: The role of early literacy assessment (Knowledge Brief). San Francisco, CA: WestEd. Retrieved from https://eric.ed.gov/?id=ED470748

From the ERIC abstract: “With reading, as with all academic content and skills, effective instruction is informed by sound assessment. Teachers’ knowledge about classroom-embedded reading assessment must continue to be developed so that they can use the information it yields to make informed instructional decisions. At the same time, districts and schools must develop systematic, coherent, and reliable assessment programs that ensure consistency within and across grades while complementing and building on informal assessment efforts already underway. This Knowledge Brief explains the importance of early assessment and identifies some of its basic purposes; describes the challenges of assessing young children; explains some basic approaches to literacy assessment and how they align to specific purposes; and identifies some of the issues that need to be addressed if schools are to undertake valid and reliable literacy assessment whose results can help teachers better support all young readers. The brief is intended to help district administrators, principals, and other instructional leaders begin laying the groundwork for more consistent and effective use of reading assessment in the early primary grades. It is also intended to help them better understand the nuances and limitations of various instruments, including what decisions they can support.”

Roehrig, A. D., Petscher, Y., Nettles, S. M., Hudson, R. F., & Torgesen, J. K. (2008). Accuracy of the DIBELS oral reading fluency measure for predicting third grade reading comprehension outcomes. Journal of School Psychology, 46(3), 343–366. Retrieved from https://eric.ed.gov/?id=EJ789802

From the ERIC abstract: “We evaluated the validity of DIBELS (‘Dynamic Indicators of Basic Early Literacy Skills’) ORF (‘Oral Reading Fluency’) for predicting performance on the ‘Florida Comprehensive Assessment Test’ (FCAT-SSS) and ‘Stanford Achievement Test’ (SAT-10) reading comprehension measures. The usefulness of previously established ORF risk-level cutoffs [Good, R. H., Simmons, D. C., & Kame’enui, E. J. (2001). ‘The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes.’ ‘Scientific Studies of Reading,’ 5, 257–288.] for third grade students were evaluated on calibration (n = 16,539) and cross-validation (n = 16,908) samples representative of Florida’s ‘Reading First’ population. The strongest correlations were the third (February/March) administration of ORF with both FCAT-SSS and SAT-10 (r = 0.70–0.71), when the three tests were administered concurrently. Recalibrated ORF risk-level cut scores derived from ROC (receiver-operating characteristic) curve analyses produced more accurate identification of true positives than previously established benchmarks. The recalibrated risk-level cut scores predict performance on the FCAT-SSS equally well for students from different socio-economic, language, and race/ethnicity categories.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.
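
To illustrate the ROC-curve recalibration approach the abstract mentions (not the study’s actual analysis), the sketch below uses scikit-learn to pick the ORF cut score that maximizes Youden’s J (sensitivity + specificity - 1) on hypothetical data.

    # Minimal sketch (hypothetical data): recalibrating a screening cut
    # score from an ROC curve by maximizing Youden's J.
    import numpy as np
    from sklearn.metrics import roc_curve

    orf = np.array([18, 25, 33, 41, 52, 60, 71, 80, 95, 110])  # ORF scores
    at_risk = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])         # 1 = poor outcome

    # Lower ORF should signal risk, so the score is the negated ORF
    fpr, tpr, thresholds = roc_curve(at_risk, -orf)
    best = thresholds[np.argmax(tpr - fpr)]
    print(f"recalibrated cut: flag students with ORF <= {-best:.0f}")  # 41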

Snow, C. E., & Oh, S. S. (2010). Assessment in early literacy research. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (Vol. 3, pp. 375–395). New York, NY: Guilford Press. Retrieved from https://eric.ed.gov/?id=ED528162

From the ERIC abstract: “Building crucial bridges between theory, research, and practice, this volume brings together leading authorities on the literacy development of young children. The ‘Handbook’ examines the full range of factors that shape learning in and out of the classroom, from basic developmental processes to family and sociocultural contexts, pedagogical strategies, curricula, and policy issues. Highlights of Volume 3 include cutting-edge perspectives on English language learning; innovative ways to support print knowledge, phonological awareness, and other code-related skills; and exemplary approaches to early intervention and teacher professional development.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Tortorelli, L. S., Bowles, R. P., & Skibbe, L. E. (2017). Easy as AcHGzrjq: The Quick Letter Name Knowledge Assessment. Reading Teacher, 71(2), 145–156. Retrieved from https://eric.ed.gov/?id=EJ1152823

From the ERIC abstract: “A firm foundation in alphabet knowledge is critical for children learning to read. Under new literacy standards, letter name knowledge in preschool and kindergarten can function as a gatekeeper to the rest of the curriculum. Teachers need data about their students’ alphabet knowledge early and often to plan differentiated instruction that moves all students forward in their literacy development. This article describes the Quick Letter Name Knowledge Assessment (Q-LNK), a rigorous, research-based letter name knowledge assessment designed for screening and benchmark testing that can be administered in less than a minute per student. The authors discuss the need for alphabet screening and benchmark assessments, the research on how students develop knowledge of letter names, and how the Q-LNK assessment was developed and tested. The procedure for using the Q-LNK is illustrated with the description of a teacher administering, scoring, and interpreting results from the assessment in her kindergarten class.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be available through university or public library systems.

Wackerle-Hollman, A. K., Rodriguez, M. I., Bradfield, T. A., Rodriguez, M. C., & McConnell, S. R. (2015). Development of early measures of comprehension: Innovation in Individual Growth and Development Indicators. Assessment for Effective Intervention, 40(2), 81–95. Retrieved from https://eric.ed.gov/?id=ED605882

From the ERIC abstract: “Early comprehension is an important, but not well-understood, contribution to early literacy and language development. Specifically, research regarding the nature of skills representative of early comprehension, including how they contribute to later reading success, is needed to support best practices to adequately prepare students. This article describes the process involved in the creation and refinement of the newly developed comprehension Individual Growth and Development Indicators (IGDIs 2.0). Two theoretical models of early comprehension are discussed to highlight the inherent complexity of this domain. Results of three studies are presented: Study 1 outlines the initial piloting process, Study 2 represents a larger-scale investigation, and Study 3 describes further field testing and reveals the final IGDI 2.0 comprehension candidate: Which One Doesn’t Belong (WODB). Results indicated WODB out-performed the other candidate measures across psychometric and pragmatic criteria. The utility of the WODB task within a Response to Intervention (RTI) framework is also discussed.”

Methods

Keywords and Search Strings

The following keywords and search strings were used to search the reference databases and other sources:

  • “Beginning reading” assessment

  • “Beginning reading” measurement

  • “Early literacy assessment”

  • “Early literacy assessment” “letter knowledge”

  • “Early literacy assessment” “phonemic awareness”

  • “Early literacy assessment” “decoding”

  • “Early literacy assessment” “fluency”

  • “Early literacy assessment” “reading comprehension”

Databases and Search Engines

We searched ERIC for relevant resources. ERIC is a free online library of more than 1.6 million citations of education research sponsored by the Institute of Education Sciences (IES). Additionally, we searched the IES website and Google Scholar.

Reference Search and Selection Criteria

When searching for and reviewing resources, we considered the following criteria:

  • Date of publication: References and resources published in the last 15 years, from 2005 to the present, were included in the search and review.

  • Search priorities of reference sources: Priority was given to study reports, briefs, and other documents published or reviewed by IES and other federal or federally funded organizations.

  • Methodology: We used the following methodological priorities and considerations in reviewing and selecting references: (a) study types (randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, and so forth, generally in this order); (b) target population and samples (e.g., representativeness of the target population, sample size, volunteered or randomly selected), study duration, and so forth; and (c) limitations, generalizability of the findings and conclusions, and so forth.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Midwest Region (Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, Wisconsin), which is served by the Regional Educational Laboratory (REL Midwest) at American Institutes for Research. This memorandum was prepared by REL Midwest under a contract with the U.S. Department of Education’s Institute of Education Sciences (IES), Contract ED-IES-17-C-0007, administered by American Institutes for Research. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.