
Ask A REL Response

January 2019

Question

What research has been conducted on how to determine the grade level of texts and how to use grade level text with students?

Response

Following an established REL Southeast research protocol, we conducted a search for research reports and descriptive study articles on how to determine the grade level of texts and how to use grade-level text with students, focusing on resources that specifically addressed these questions. The sources included ERIC and other federally funded databases and organizations, research institutions, academic research databases, and general Internet search engines. (For details, please see the Methods section at the end of this memo.)

We have not evaluated the quality of the references and resources provided in this response; we offer them only for your information. The references are listed in alphabetical order, not necessarily in order of relevance. We compiled the references from the most commonly used research sources, but the list is not comprehensive and other relevant references and resources may exist. Several of the references examine readability formulas; an illustrative sketch of one such formula follows the reference list.

Research References

  1. Begeny, J. C., & Greene, D. J. (2014). Can readability formulas be used to successfully gauge difficulty of reading materials? Psychology in the Schools, 51(2), 198-215. http://eric.ed.gov/?id=EJ1028357
    From the abstract: "A grade level of reading material is commonly estimated using one or more readability formulas, which purport to measure text difficulty based on specified text characteristics. However, there is limited direction for teachers and publishers regarding which readability formulas (if any) are appropriate indicators of actual text difficulty. Because oral reading fluency (ORF) is considered one primary indicator of an elementary aged student's overall reading ability, the purpose of this study was to assess the link between leveled reading passages and students' actual ORF rates. ORF rates of 360 elementary-aged students were used to determine whether reading passages at varying grade levels are, as would be predicted by readability levels, more or less difficult for students to read. Results showed that a small number of readability formulas were fairly good indicators of text, but this was only true at particular grade levels. Additionally, most of the readability formulas were more accurate for higher ability readers. One implication of the findings suggests that teachers should be cautious when making instructional decisions based on purported "grade-leveled" text, and educational researchers and practitioners should strive to assess difficulty of text materials beyond simply using a readability formula."
  2. Cunningham, J. W., Hiebert, E. H., & Mesmer, H. A. (2018). Investigating the validity of two widely used quantitative text tools. Reading and Writing: An Interdisciplinary Journal, 31(4), 813-833. http://eric.ed.gov/?id=EJ1171526
    From the abstract: "In recent years, readability formulas have gained new prominence as a basis for selecting texts for learning and assessment. Variables that quantitative tools count (e.g., word frequency, sentence length) provide valid measures of text complexity insofar as they accurately predict representative and high-quality criteria. The longstanding consensus of text researchers has been that such criteria will measure readers' comprehension of sample texts. This study used Bormuth's (1969) rigorously developed criterion measure to investigate two of today's most widely used quantitative text tools--the Lexile Framework and the Flesch-Kincaid Grade-Level formula. Correlations between the two tools' complexity scores and Bormuth's measured difficulties of criterion passages were only moderately high in light of the literature and new high stakes uses for such tools. These correlations declined a small amount when passages from the University grade band of use were removed. The ability of these tools to predict measured text difficulties within any single grade band below University was low. Analyses showed that word complexity made a larger contribution relative to sentence complexity when each tool's predictors were regressed on the Bormuth criterion rather than their original criteria. When the criterion was texts' grade band of use instead of mean cloze scores, neither tool classified texts well and errors disproportionally placed texts from higher grade bands into lower ones. Results suggest these two text tools may lack adequate validity for their current uses in educational settings."
  3. Gallagher, T., Fazio, X., & Ciampa, K. A. (2017). Comparison of readability in science-based texts: Implications for elementary teachers. Canadian Journal of Education, 40(1). http://eric.ed.gov/?id=EJ1136164
    From the abstract: "Science curriculum standards were mapped onto various texts (literacy readers, trade books, online articles). Statistical analyses highlighted the inconsistencies among readability formulae for Grades 2-6 levels of the standards. There was a lack of correlation among the readability measures, and also when comparing different text sources. Online texts were the most disparate with respect to text difficulty. These findings suggest implications for elementary teachers to support students who learn through reading online, science-based resources. As 21st-century learning through multi-modal literacies evolves, the readability of online, content-based text should be evaluated to ensure accessibility to all readers."
  4. Gallagher, T. L., Fazio, X., & Gunning, T. G. (2012). Varying readability of science-based text in elementary readers: Challenges for teachers. Reading Improvement, 49(3), 93-112. http://eric.ed.gov/?id=EJ986935
    From the abstract: "This investigation compared readability formulae to publishers' identified reading levels in science-based elementary readers. Nine well-established readability indices were calculated and comparisons were made with the publishers' identified grade designations and between different genres of text. Results revealed considerable variance among the 9 formulae. All formulae tended to inflate readability calculations for nonfiction science-based text, whereas fiction science-based text was more closely aligned to the publishers' grade levels. Implications are discussed for elementary teachers' awareness of readability variances in science-based resources, and the professional learning that is required to support the use of elementary readers, including understanding the limitations of using common readability metrics. (Contains 12 tables and 1 figure.)"
  5. Graesser, A. C., McNamara, D. S., & Kulikowich, J. M. (2011). Coh-Metrix: Providing multilevel analyses of text characteristics. Educational Researcher, 40(5), 223-234. http://eric.ed.gov/?id=EJ930947
    From the abstract: "Computer analyses of text characteristics are often used by reading teachers, researchers, and policy makers when selecting texts for students. The authors of this article identify components of language, discourse, and cognition that underlie traditional automated metrics of text difficulty and their new Coh-Metrix system. Coh- Metrix analyzes texts on multiple measures of language and discourse that are aligned with multilevel theoretical frameworks of comprehension. The authors discuss five major factors that account for most of the variance in texts across grade levels and text categories: word concreteness, syntactic simplicity, referential cohesion, causal cohesion, and narrativity. They consider the importance of both quantitative and qualitative characteristics of texts for assigning the right text to the right student at the right time. (Contains 1 table and 1 figure.)"
  6. Hudson, M. E., Browder, D., & Wakeman, S. (2013). Helping students with moderate and severe intellectual disability access grade-level text. TEACHING Exceptional Children, 45(3), 14-23. http://eric.ed.gov/?id=EJ1008509
    From the abstract: "Teaching students with moderate and severe intellectual disability who are early readers or nonreaders to engage with grade-level text is challenging. For this reason, purposeful thought must be given to promoting text accessibility and teaching text comprehension. Whenever possible, text should be used as it is originally written without adaptations. When adaptations are needed, however, the guiding principle should be to make only the changes necessary to allow the student to work with the text. This article describes research-based strategies for adapting grade-level text and teaching text comprehension for individuals with moderate and severe intellectual disability. A careful review and subsequent application of these strategies by teachers can lead to the development of materials and instruction that clearly promote interaction of students with moderate and severe intellectual disability who are early readers or nonreaders with grade-level text. (Contains 6 figures.)"
  7. Sheehan, K. M., Kostin, I., Napolitano, D., & Flor, M. (2014). The TextEvaluator tool: Helping teachers and test developers select texts for use in instruction and assessment. Elementary School Journal, 115(2), 184-209. http://eric.ed.gov/?id=EJ1047777
    From the abstract: "This article describes TextEvaluator, a comprehensive text-analysis system designed to help teachers, textbook publishers, test developers, and literacy researchers select reading materials that are consistent with the text complexity goals outlined in the Common Core State Standards. Three particular aspects of the TextEvaluator measurement approach are highlighted: (1) attending to relevant reader and task considerations, (2) expanding construct coverage beyond the two dimensions of text variation traditionally assessed by readability metrics, and (3) addressing two potential threats to tool validity: genre bias and blueprint bias. We argue that systems that are attentive to these particular measurement issues may be more effective at helping users achieve a key goal of the new Standards: ensuring that students are challenged to read texts at steadily increasing complexity levels as they progress through school, so that all students acquire the advanced reading skills needed for success in college and careers."

Methods

Keywords and Search Strings
The following keywords and search strings were used to search the reference databases and other sources:

  • Grade level texts, instructional strategies
  • Readability formulas
  • Text-level readability

Databases and Resources
We searched ERIC for relevant resources. ERIC is a free online library of over 1.6 million citations of education research sponsored by the Institute of Education Sciences. Additionally, we searched Google Scholar and PsycINFO.

Reference Search and Selection Criteria

When we were searching and reviewing resources, we considered the following criteria:

  • Date of the publication: References and resources published in the last 15 years, from 2003 to the present, were included in the search and review.
  • Search priorities of reference sources: Priority is given to study reports, briefs, and other documents published and/or reviewed by IES and other federal or federally funded organizations, and to academic databases, including ERIC, EBSCO databases, JSTOR, PsycINFO, PsycARTICLES, and Google Scholar.
  • Methodology: The following methodological priorities/considerations were given in the review and selection of the references: (a) study types - randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, etc., generally in this order; (b) target population and samples (representativeness of the target population, sample size, volunteered or randomly selected, etc.), study duration, etc.; and (c) limitations, generalizability of the findings and conclusions, etc.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Southeast Region (Alabama, Florida, Georgia, Mississippi, North Carolina, and South Carolina), which is served by the Regional Educational Laboratory Southeast at Florida State University. This memorandum was prepared by REL Southeast under a contract with the U.S. Department of Education's Institute of Education Sciences (IES), Contract ED-IES-17-C-0011, administered by Florida State University. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.