

REL Midwest Ask A REL Response

Literacy

October 2017

Questions:

What research is available on the effectiveness of DIBELS at:

1. Assessing student reading skills in kindergarten and first grade?

2. Identifying students who may require additional reading support in kindergarten and first grade?

3. Predicting future reading performance for kindergarten and first-grade students?



Response:

Following an established Regional Educational Laboratory (REL) Midwest protocol, we conducted a search for research reports, descriptive studies, and meta-analyses on the effectiveness of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) for kindergarten and first-grade students. In particular, we focused on identifying resources related to assessing student reading skills, identifying students who may require additional reading support, and predicting future reading performance. For details on the databases and sources, keywords, and selection criteria used to create this response, please see the Methods section at the end of this memo.

Below, we share a sampling of the publicly accessible resources on this topic. This search is not comprehensive; other relevant references and resources may exist. We have not evaluated the quality of the references and resources provided in this response; we offer this list for your information only.

Research References

Burke, M. D., & Hagan-Burke, S. (2007). Concurrent criterion-related validity of early literacy indicators for middle of first grade. Assessment for Effective Intervention, 32(2), 66–77. Retrieved from https://eric.ed.gov/?id=EJ793331

From the ERIC abstract: “The purpose of this study was to examine the concurrent criterion-related (or convergent) validity of first grade measures from the ‘Dynamic Indicators of Basic Early Literacy Skills’ (DIBELS; Good & Kaminski, 2002). The DIBELS subtests of Phoneme Segmentation Fluency, Nonsense Word Fluency, Oral Reading Fluency, Retell Fluency, and Word Use Fluency were administered to 213 first graders in the middle of the school year, along with the ‘Test of Word Reading Efficiency’ (TOWRE; Torgesen, Wagner, & Rashotte, 1997), a norm-referenced test with documented technical adequacy. Results from correlation, regression, and factor analyses indicated that the DIBELS subtests of Oral Reading Fluency and of Nonsense Word Fluency had the strongest associations with the TOWRE subtests.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Burke, M. D., Hagan-Burke, S., Kwok, O., & Parker, R. (2009). Predictive validity of early literacy indicators from the middle of kindergarten to second grade. Journal of Special Education, 42(4), 209–226. Retrieved from https://eric.ed.gov/?id=EJ823403

From the ERIC abstract: “Research has emphasized the importance of phonological awareness, phonemic decoding, and automaticity in reading development. Special and general education teachers need valid, efficient, and effective early literacy indicators for schoolwide screening and monitoring that adequately predict reading outcomes. The purpose of this study was to examine the interrelationships and predictiveness of kindergarten early literacy indicators from the ‘Dynamic Indicators of Basic Early Literacy Skills’ (DIBELS) within the context of a path analysis. The results support the validity of kindergarten DIBELS in predicting ever more complex reading skills in a developmental progression from the middle of kindergarten to second grade.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Coker, D. L., Jr., & Ritchey, K. D. (2014). Universal screening for writing risk in kindergarten. Assessment for Effective Intervention, 39(4), 245–256. Retrieved from https://eric.ed.gov/?id=EJ1033215

From the ERIC abstract: “Early identification of students at risk for writing disabilities is an important step in improving writing performance. Kindergarten students (n = 84) were administered a set of researcher-developed writing tasks (letter writing, sound spelling, word spelling, and sentence writing) and school-administered reading tasks (‘Dynamic Indicators of Early Literacy Skills’ [DIBELS], Phoneme Segmentation Fluency [PSF], Nonsense Word Fluency [NWF], and Letter Name Fluency [LNF] subtests [DIBELS]) in January. The students were identified as at risk based on a norm-referenced writing assessment and teacher ratings collected in the spring. The classification accuracy of the writing and reading tasks was estimated using receiver operating characteristic (ROC) curves. For both risk criteria, individual reading and writing assessments demonstrated comparable accuracy (area under the curve [AUC] statistics range = 0.57-0.87). However, classification accuracy was strengthened when reading and writing measures were combined (AUC range = 0.75-0.92). The results suggest that the most accurate approach to universal screening of writing difficulties may include a battery of reading and writing measures.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Goffreda, C. T., & DiPerna, J. C. (2010). An empirical review of psychometric evidence for the Dynamic Indicators of Basic Early Literacy Skills. School Psychology Review, 39(3), 463–483. Retrieved from https://eric.ed.gov/?id=EJ900923

From the ERIC abstract: “The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) are brief measures of early literacy skills for students in Grades K-6 (University of Oregon, 2009; see Kaminski & Good, 1996). School psychologists and other educational professionals use DIBELS to identify students who are in need of early intervention. The purpose of this review was to synthesize the current psychometric evidence for each DIBELS indicator. Strong reliability and validity evidence was observed for DIBELS Oral Reading Fluency; however, evidence for the remaining DIBELS indicators demonstrated greater variability. Although the majority of evidence focused on individual score reliability and validity for single-point decisions, further studies are needed to determine effective practices for progress monitoring.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Goffreda, C. T., DiPerna, J. C., & Pedersen, J. A. (2009). Preventive screening for early readers: Predictive validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Psychology in the Schools, 46(6), 539–552. Retrieved from https://eric.ed.gov/?id=EJ848979

From the ERIC abstract: “Current empirical evidence indicates poor learning trajectories for students with early literacy skill deficits. As such, reliable and valid detection of at-risk students through regular screening and progress monitoring is imperative. This study investigated the predictive validity of scores on the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Logistic regression analyses were used to test the utility of the DIBELS first grade indicators for predicting reading proficiency on TerraNova California Achievement Test (CAT) Assessment and Pennsylvania System of School Assessment (PSSA) in second and third grade, respectively. Results suggest that students’ first grade Oral Reading Fluency (ORF) DIBELS risk category scores were the only significant predictor of future TerraNova and PSSA reading proficiency. Although the current data present encouraging results for the predictive validity of ORF as a screening tool for early readers, further investigations of the utility of the remaining indicators (Letter Naming Fluency, Nonsense Word Fluency, and Phonemic Segmentation Fluency) are warranted.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Gonzalez, J. E., Vannest, K. J., & Reid, R. (2008). Early classification of reading performance in children identified or at risk for emotional and behavioral disorders: A discriminant analysis using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Journal of At-Risk Issues, 14(1), 33–40. Retrieved from https://eric.ed.gov/?id=EJ942833

From the ERIC abstract: “This study evaluated the ability of the kindergarten and first grade Dynamic Indicators of Basic Early Literacy Skills (DIBELS), measures of early literacy development, to discriminate among low average, average, and above average students considered at risk emotional and behavioral disorders (EBD) on the Total Reading cluster of the Woodcock Reading Mastery Tests-Revised (WRMT-R). The DIBELS consisted of two measures of phonological awareness, one measure of alphabet knowledge, one measure of the alphabetic principle, and one measure of oral reading fluency with connected text. Results indicated that first grade DIBELS differentiated among reading groups and classification accuracy was statistically better than chance. With the exception of alphabet knowledge, DIBELS did not significantly differentiate among the fall kindergarten groups. Oral reading fluency and alphabet knowledge had the greatest discriminating power for first graders. These findings extended the usefulness of the first grade DIBELS to populations other than general education students. Implications for the use and application of DIBELS to non-general education populations are discussed along with caveats for kindergarten discriminant power of the DIBELS.”

Hagan-Burke, S., Burke, M. D., & Crowder, C. (2006). The convergent validity of the Dynamic Indicators of Basic Early Literacy Skills and the Test of Word Reading Efficiency for the beginning of first grade. Assessment for Effective Intervention, 31(4), 1–15. Retrieved from https://eric.ed.gov/?id=EJ793316

From the ERIC abstract: “The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) are a series of fluency-based measures designed to assess early literacy skills. These fluency-based measures function as predictors of future reading performance and target critical component skills required to learn to read. This study was conducted to establish the convergent validity of DIBELS with a standardized measure of phonological decoding ability and sight word reading fluency, the Test of Word Reading Efficiency (TOWRE) (Torgesen, Wagner, & Rashotte, 1997). The TOWRE has been shown to have sufficient reliability and validity as a measure of word reading ability (Torgesen et al., 1997). DIBELS and TOWRE subtests were administered to 202 first grade students. Correlations were examined between scores on the DIBELS subtests of Letter Naming Fluency, Phoneme Segmentation Fluency, Nonsense Word Fluency, and Word Use Fluency with the TOWRE subtests of Phonetic Decoding Efficacy and Sight Words. The DIBELS Nonsense Word Fluency measure was found to have the strongest association with both the TOWRE Phonetic Decoding Efficacy and Sight Word subtests.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Hintze, J. M., Ryan, A. L., & Stoner, G. (2003). Concurrent validity and diagnostic accuracy of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Comprehensive Test of Phonological Processing. School Psychology Review, 32(4), 541–556. Retrieved from https://eric.ed.gov/?id=EJ823574

From the ERIC abstract: “The purpose of this study was to (a) examine the concurrent validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) with the Comprehensive Test of Phonological Processing (CTOPP), and (b) explore the diagnostic accuracy of the DIBELS in predicting CTOPP performance using suggested and alternative cut-scores. Eighty-six students were administered the DIBELS and the CTOPP in the winter of their kindergarten year. Patterns of correlations between the two sets of measures were examined and decision accuracy studies conducted based on suggested cut-scores and cut-scores determined as a result of Receiver Operator Characteristic (ROC) curve analysis. Results showed moderate to strong correlations between the DIBELS and the CTOPP suggesting that both measure a similar construct. Analysis of decision accuracy indicated that using the author suggested cut-scores resulted in extremely high sensitivity; however, this was at the expense of an inordinate number of false positives. Follow-up analyses using adjusted cut-scores improved specificity and positive predictive power, reduced false positives, and increased the number of correct classifications sizably.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.
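The tradeoff described in the Hintze et al. abstract, in which lowering or raising a screening cut score trades sensitivity against false positives, can be illustrated with a small sketch. The scores and risk labels below are hypothetical, not data from the study:

```python
# Illustrative sketch (hypothetical data): how moving a screening
# cut score trades sensitivity against specificity/false positives.

def screen_metrics(scores, truly_at_risk, cut):
    # A student is flagged "at risk" when their score falls below the cut.
    tp = sum(1 for s, r in zip(scores, truly_at_risk) if s < cut and r)
    fp = sum(1 for s, r in zip(scores, truly_at_risk) if s < cut and not r)
    fn = sum(1 for s, r in zip(scores, truly_at_risk) if s >= cut and r)
    tn = sum(1 for s, r in zip(scores, truly_at_risk) if s >= cut and not r)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity, fp

scores        = [5, 12, 18, 22, 30, 35, 41, 47, 52, 60]
truly_at_risk = [True, True, True, False, True, False, False, False, False, False]

for cut in (25, 40):
    sens, spec, fp = screen_metrics(scores, truly_at_risk, cut)
    print(f"cut={cut}: sensitivity={sens:.2f}, specificity={spec:.2f}, false positives={fp}")
```

In this toy example, raising the cut score catches every truly at-risk student (sensitivity 1.00) but flags more students who were not at risk, which mirrors the pattern the abstract reports for the author-suggested versus ROC-adjusted cut scores.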

Johnson, E. S., Jenkins, J. R., Petscher, Y., & Catts, H. W. (2009). How can we improve the accuracy of screening instruments? Learning Disabilities Research & Practice, 24(4), 174–185. Retrieved from https://eric.ed.gov/?id=EJ861190

From the ERIC abstract: “Screening for early reading problems is a critical step in early intervention and prevention of later reading difficulties. Evaluative frameworks for determining the utility of a screening process are presented in the literature but have not been applied to many screening measures currently in use in numerous schools across the nation. In this study, the accuracy of several Dynamic Indicators of Basic Early Literacy Skills (DIBELS) subtests in predicting which students were at risk for reading failure in first grade was examined in a sample of 12,055 students in Florida. Findings indicate that the DIBELS Nonsense Word Fluency, Initial Sound Fluency, and Phoneme Segmentation Fluency measures show poor diagnostic utility in predicting end of Grade 1 reading performance. DIBELS Oral Reading Fluency in fall of Grade 1 had higher classification accuracy than other DIBELS measures, but when compared to the classification accuracy obtained by assuming that no student had a disability, suggests the need to reevaluate the use of classification accuracy as a way to evaluate screening measures without discussion of base rates. Additionally, when cut scores on the screening tools were set to capture 90 percent of all students at risk for reading problems, a high number of false positives were identified. Finally, different cut scores were needed for different subgroups, such as English Language Learners. Implications for research and practice are discussed.”
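The base-rate caveat raised in the Johnson et al. abstract can be shown with a toy calculation (the numbers below are hypothetical, not the study's data): when only a small share of students is truly at risk, the trivial rule "flag no one" already achieves high overall classification accuracy, so accuracy alone is a weak criterion for judging a screener.

```python
# Illustrative sketch (hypothetical numbers): overall classification
# accuracy is inflated by low base rates.

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

n = 1000
at_risk = 100          # 10% base rate

# Trivial rule: flag no one. It misses every at-risk student...
trivial = accuracy(tp=0, fp=0, fn=at_risk, tn=n - at_risk)

# ...yet its accuracy (0.90) can rival a real screener that catches
# 80 of the 100 at-risk students at the cost of 90 false positives.
screener = accuracy(tp=80, fp=90, fn=20, tn=810)

print(trivial, screener)   # 0.9 vs. 0.89
```

This is why the abstract argues that classification accuracy should not be evaluated without discussing base rates: the informative screener here is plainly more useful than the trivial rule even though its overall accuracy is slightly lower.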

Kamii, C., & Manning, M. (2005). Dynamic Indicators of Basic Early Literacy Skills (DIBELS): A tool for evaluating student learning? Journal of Research in Childhood Education, 20(2), 75–90. Retrieved from https://eric.ed.gov/?id=EJ751955

From the ERIC abstract: “To evaluate the usefulness of two DIBELS subtests (Phonemic Segmentation Fluency and Nonsense Word Fluency), 107 kindergartners and 101 first graders who had taken the DIBELS were given a writing-of-words task and the Slosson Oral Reading Test of sight words. In addition, the first graders’ DIBELS included an Oral Reading Fluency subtest that assessed students’ ability to get meaning from a five-paragraph composition. After analyzing the relationships between scores on each DIBELS subtest and other variables, it was concluded that no evidence was found to justify the use of the DIBELS for the evaluation of a literacy instructional program.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Kuo, N.-C. (2016). Informing instruction of students with autism in public school settings. Journal of Educational Issues, 2(2), 31–47. Retrieved from https://eric.ed.gov/?id=EJ1127551

From the ERIC abstract: “The number of applied behavior analysis (ABA) classrooms for students with autism is increasing in K-12 public schools. To inform instruction of students with autism in public school settings, this study examined the relation between performance on mastery learning assessments and standardized achievement tests for students with autism spectrum disorders (ASD) in an applied behavior analysis (ABA) classroom. The measures included ABLLS-R, DIBELS-R, and DIBELS-M. Results of the study indicate that all students acquired new skills across domains and met their IEP goals measured by the mastery learning assessment, but they scored low on reading and math for their grade level according to standardized achievement tests. Suggestions for prompting good autism practice in public school settings are discussed.”

McBride, J. R., Ysseldyke, J., Milone, M., & Stickney, E. (2010). Technical adequacy and cost benefit of four measures of early literacy. Canadian Journal of School Psychology, 25(2), 189–204. Retrieved from https://eric.ed.gov/?id=EJ883998

From the ERIC abstract: “Technical adequacy and information/cost return were examined for four early reading measures: the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), STAR Early Literacy (SEL), Group Reading Assessment and Diagnostic Evaluation (GRADE), and the Texas Primary Reading Inventory (TPRI). All four assessments were administered to the same students in each of Grades K through 2 over a 5-week period; the samples included 200 students per grade from 7 states. Both SEL and DIBELS were administered twice to establish their retest reliability in each grade. We focused on the convergent validity of each assessment for measuring five critical components of reading development identified by the U.S. National Research Panel: Phonemic awareness, phonics, vocabulary, comprehension, and fluency. DIBELS and TPRI both are asserted to assess all five of these components; GRADE and STAR Early Literacy explicitly measure all except fluency. For all components, correlations among relevant subtests were high and comparable. The pattern of intercorrelations of nonfluency measures with fluency suggests the tests of fluency, vocabulary, comprehension, and word reading are measuring the same underlying construct. A separate cost-benefit study was conducted and showed that STAR Early Literacy was the most cost-effective measure among those studied. In terms of amount of time per unit of test administration or teachers’ time, CAT (computerized adaptive testing) in general, and STAR Early Literacy in particular, is an attractive option for early reading assessment.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Morris, D., Trathen, W., Perney, J., Gill, T., Schlagal, R., Ward, D., & Frye, E. M. (2017). Three DIBELS tasks vs. three informal reading/spelling tasks: A comparison of predictive validity. Reading Psychology, 38(3), 289–320. Retrieved from https://eric.ed.gov/?id=EJ1126576

From the ERIC abstract: “Within a developmental framework, this study compared the predictive validity of three DIBELS tasks (phoneme segmentation fluency [PSF], nonsense word fluency [NWF], and oral reading fluency [ORF]) with that of three alternative tasks drawn from the field of reading (phonemic spelling [phSPEL], word recognition-timed [WR-t], and graded passage reading [grPASS], an oral reading fluency measure). Two cohorts of students (n = 319) were assessed with the aforementioned tasks multiple times across a four-year period--middle of kindergarten through end of third grade. The results were clear and closely replicated in the two cohorts: (a) phSPEL (moderate) outperformed DIBELS PSF (weak to moderate) in predicting future orthographic-unit processing; (b) WR-t (very strong) outperformed DIBELS NWF (moderate) in predicting future oral reading fluency; and (c) DIBELS ORF and grPASS were equally good predictors (moderately strong) of future reading comprehension.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Nelson, J. M. (2008). Beyond correlational analysis of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS): A classification validity study. School Psychology Quarterly, 23(4), 542–552. Retrieved from https://eric.ed.gov/?id=EJ823890

From the ERIC abstract: “This study investigated the classification validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) using a sample of kindergarteners (N = 177). Results indicated the cutoff scores for determining ‘at-risk’ status on the DIBELS produced substantial false negative rates. Cutoff scores identifying students as at ‘some risk’ produced substantial false positive rates. At both levels of risk status, the DIBELS showed low positive predictive power, but high negative predictive power, indicating it was far better at identifying students with adequate reading skills than those with inadequate reading skills. Recommendations for appropriate use of the DIBELS for reading screening and suggestions for future research are provided.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Oh, D., Haager, D., & Windmeuller, M. (2007). A longitudinal study predicting reading success for English-language learners from kindergarten to grade 1. Multiple Voices for Ethnically Diverse Exceptional Learners, 10(1–2), 107–124. Retrieved from https://eric.ed.gov/?id=EJ887057

From the ERIC abstract: “This article reports findings from a longitudinal investigation of predictors of reading achievement for English-language learners receiving reading instruction in an English-language curriculum. Using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessments, the study examined the predictive relationships of various measures of reading and vocabulary from the beginning of kindergarten to the end of first grade. Regression and path analysis models demonstrated that rapid letter naming was a salient predictor, whereas phonemic awareness, typically a strong predictor of reading achievement, played a diminished role. By first grade, the ability to decode simple words was a strong predictor, and by mid-first grade, reading fluency was a strong predictor. Oral vocabulary was not predictive of reading outcomes in kindergarten or first grade.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Oslund, E. L., Hagan-Burke, S., Taylor, A. B., Simmons, D. C., Simmons, L., Kwok, O.-M., … Coyne, M. D. (2012). Predicting kindergarteners’ response to early reading intervention: An examination of progress-monitoring measures. Reading Psychology, 33(1–2), 78–103. Retrieved from https://eric.ed.gov/?id=EJ969813

From the ERIC abstract: “This study examined the predictive validity of combinations of progress-monitoring measures: (a) curriculum-embedded phonemic awareness and alphabetic/decoding measures, and (b) Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002) nonsense word fluency and phoneme segmentation fluency on reading outcomes of kindergarten students in a tier 2 intervention. Results of multiple-regression analyses indicated that curriculum-embedded mastery checks and DIBELS measures each explained a significant amount of variance on the outcome measure. However, curriculum-embedded measures explained statistically significantly more variance at each time point supporting their utility in documenting progress of kindergarten students receiving intervention.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Riedel, B. W. (2007). The relation between DIBELS, reading comprehension, and vocabulary in urban first-grade students. Reading Research Quarterly, 42(4), 546–567. Retrieved from https://eric.ed.gov/?id=EJ776733

From the ERIC abstract: “The relation between Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and reading comprehension at the end of first grade and second grade was examined in a sample of 1,518 first grade students from a large urban school district. Receiver Operating Characteristic (ROC) analyses were used to determine optimal DIBELS cut scores for predicting satisfactory reading comprehension. A measure of reading rate and accuracy, a subtest that the DIBELS assessment refers to as Oral Reading Fluency (ORF), was a better predictor of comprehension than the remaining subtests, including a retell fluency task designed to measure comprehension. Also, use of other subtests in combination with ORF did not substantially improve predictive power beyond that provided by ORF alone. Vocabulary was an important factor in the relation between ORF scores and comprehension. Students with satisfactory ORF scores but poor comprehension had lower vocabulary scores than students with satisfactory ORF scores and satisfactory comprehension.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Rouse, H., & Fantuzzo, J. (2006). Validity of the Dynamic Indicators for Basic Early Literacy Skills as an indicator of early literacy for urban kindergarten children. School Psychology Review, 35(3), 341–355. Retrieved from https://eric.ed.gov/?id=EJ788273

From the ERIC abstract: “The validity of three subtests of the Dynamic Indicators for Basic Early Literacy Skills (DIBELS) was investigated for kindergarten children in a large urban school district. A stratified, random sample of 330 participants was drawn from an entire cohort of kindergarten children. Letter Naming Fluency, Phoneme Segmentation Fluency, and Nonsense Word Fluency evidenced significant concurrent and predictive validity when compared to general reading ability measured by teacher report, individual assessments, and group-administered nationally standardized tests. Evidence for convergent and discriminant validity was also found when comparing these subtests to measures of specific literacy, cognitive, and social-behavioral constructs.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Schilling, S. G., Carlisle, J. F., Scott, S. E., & Zeng, J. (2007). Are fluency measures accurate predictors of reading achievement? Elementary School Journal, 107(5), 429–448. Retrieved from https://eric.ed.gov/?id=EJ765986

From the ERIC abstract: “This study focused on the predictive validity of fluency measures that comprise Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Data were gathered from first through third graders attending 44 schools in 9 districts or local educational agencies that made up the first Reading First cohort in Michigan. Students were administered DIBELS subtests in the fall, winter, and spring, and they took the reading subtests of the Iowa Tests of Basic Skills (ITBS) in the spring. Results showed that DIBELS subtests significantly predicted year-end reading achievement on the ITBS, Reading Total subtest. They also showed that DIBELS at-risk benchmarks for oral reading fluency (ORF) were reasonably accurate at identifying second and third graders who were reading below the twenty-fifth percentile at the end of the year (80% and 76% for second and third graders, respectively). However, 32% of second graders and 37% of third graders who were identified as at low risk by the ORF benchmarks turned out not to be reading at grade level on ITBS in April. We discuss 2 possibilities for improving the assessment of students’ progress in reading: (a) supplementing DIBELS with measures of reading comprehension and vocabulary, and (b) using frequent progress-monitoring assessments for students at risk for reading problems to identify students who are not responding to classroom instruction.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Smolkowski, K., & Cummings, K. D. (2016). Evaluation of the DIBELS (Sixth Edition) Diagnostic System for the selection of native and proficient English speakers at risk of reading difficulties. Journal of Psychoeducational Assessment, 34(2), 103–118. Retrieved from https://eric.ed.gov/?id=EJ1093835

From the ERIC abstract: “This comprehensive evaluation of the Dynamic Indicators of Basic Early Literacy Skills Sixth Edition (DIBELS6) set of measures gives a practical illustration of signal detection methods, the methods used to determine the value of screening and diagnostic systems, and offers an updated set of cut scores (decision thresholds). Data were drawn from a sample of 13,507 English-proficient students in kindergarten through Grade 3, with more than 4,500 students per grade level. Results indicate that most DIBELS6 measures accurately predict comprehensive test performance and that previously published decision thresholds for DIBELS6 are generally appropriate with some key exceptions. For example, the performance of phoneme segmentation fluency did not always meet expectations. The revised DIBELS6 decision thresholds can satisfactorily identify students who may require additional supports.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Additional Organizations to Consult

University of Oregon DIBELS Data System – https://dibels.uoregon.edu/

From the website: “The UO DIBELS Data System was developed and is maintained by education experts at the University of Oregon’s Center on Teaching and Learning. We were THE FIRST web-based data system to support the DIBELS measures. We are a not-for-profit domain organized around successful student evaluation. The UO DIBELS Data System supports educators across the U.S. and internationally. Educators have trusted the DIBELS Data System since 2001. Your data are protected at the highest level, by the University of Oregon’s Institutional Review Board. We reinvest 100% of revenues back into research and customer support.”

Methods

Keywords and Search Strings

The following keywords and search strings were used to search the reference databases and other sources:

  • DIBELS

  • DIBELS kindergarten

  • DIBELS first grade

  • DIBELS assessing reading skills

  • DIBELS assessing reading skills kindergarten

  • DIBELS assessing reading skills first grade

  • DIBELS reading assess

  • DIBELS reading support

  • DIBELS predictive validity

  • DIBELS reliability validity

Databases and Search Engines

We searched ERIC for relevant resources. ERIC is a free online library of more than 1.6 million citations of education research sponsored by the Institute of Education Sciences (IES).

Reference Search and Selection Criteria

When we were searching and reviewing resources, we considered the following criteria:

  • Date of publication: References and resources published in the last 15 years (2002 to present) were included in the search and review.

  • Search priorities of reference sources: Search priority is given to study reports, briefs, and other documents that are published or reviewed by IES and other federal or federally funded organizations.

  • Methodology: We used the following methodological priorities/considerations in reviewing and selecting the references: (a) study types (randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, and so forth, generally in this order); (b) target population and samples (e.g., representativeness of the target population, sample size, volunteer or random selection), study duration, and so forth; and (c) limitations, generalizability of the findings and conclusions, and so forth.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Midwest Region (Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, Wisconsin), which is served by the Regional Educational Laboratory Midwest (REL Midwest) at American Institutes for Research. This memorandum was prepared by REL Midwest under a contract with the U.S. Department of Education’s Institute of Education Sciences (IES), Contract ED-IES-17-C-0007, administered by American Institutes for Research. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.