
Ask A REL Response

August 2019

Question

What data is available regarding policy surrounding data-based progress monitoring?

Response

Following an established REL Southeast research protocol, we conducted a search for research reports as well as descriptive study articles on policy surrounding data-based progress monitoring. We focused on identifying resources that specifically addressed policy surrounding data-based progress monitoring. The sources included ERIC and other federally funded databases and organizations, research institutions, academic research databases, and general Internet search engines. (For details, please see the Methods section at the end of this memo.)

We have not evaluated the quality of the references and resources provided in this response; we offer them only for your reference. The references are listed in alphabetical order, not necessarily in order of relevance. Also, although we searched the most commonly used research databases, the search was not comprehensive, and other relevant references and resources may exist.

Research References

  1. Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51(1), 1-18. http://eric.ed.gov/?id=EJ1001681
    From the abstract: "Research and policy have established that data are necessary to guide decisions within education. Many of these decisions are made within problem solving and response to intervention frameworks for service delivery. Curriculum-Based Measurement in Reading (CBM-R) is a widely used data collection procedure within those models of service delivery. Although the evidence for CBM-R as a screening and benchmarking procedure has been summarized multiple times in the literature, there is no comprehensive review of the evidence for its application to monitor and evaluate individual student progress. The purpose of this study was to identify and summarize the psychometric and empirical evidence for CBM-R as it is used to monitor and evaluate student progress. There was an emphasis on the recommended number of data points collected during progress monitoring and interpretive guidelines. The review identified 171 journal articles, chapters, and instructional manuals using online search engines and research databases. Recommendations and evidence from 102 documents that met the study criteria were evaluated and summarized. Results indicate that most decisionmaking practices are based on expert opinion and that there is very limited psychometric or empirical support for such practices. There is a lack of published evidence to support program evaluation and progress monitoring with CBM-R. More research is required to inform data collection procedures and interpretive guidelines. (Contains 3 tables and 2 figures.)"
  2. Clemens, N. H., Shapiro, E. S., Wu, J-Y., Taylor, A. B., & Caskie, G. L. (2014). Monitoring early first-grade reading progress: A comparison of two measures. Journal of Learning Disabilities, 47(3), 254-270. http://eric.ed.gov/?id=EJ1022080
    From the abstract: "This study compared the validity of progress monitoring slope of nonsense word fluency (NWF) and word identification fluency (WIF) with early first-grade readers. Students ("N" = 80) considered to be at risk for reading difficulty were monitored with NWF and WIF on a 1-2 week basis across 11 weeks. Reading skills at the end of first grade were assessed using measures of passage reading fluency, real and pseudoword reading efficiency, and basic comprehension. Latent growth models indicated that although slope on both measures significantly predicted year-end reading skills, models including WIF accounted for more variance in spring reading skills than NWF, and WIF slope was more strongly associated with reading outcomes than NWF slope. Analyses of student growth plots suggested that WIF slope was more positively associated with later reading skills and discriminated more clearly between students according to successful or unsuccessful year-end reading outcomes. Although both measures may be used to monitor reading growth of at-risk students in early first grade, WIF may provide a clearer index of reading growth. Implications for data-based decision-making are discussed."
  3. Cummings, K. D., Park, Y., & Schaper, H. A. B. (2013). Form effects on DIBELS Next oral reading fluency progress-monitoring passages. Assessment for Effective Intervention, 38(2), 91-104. http://eric.ed.gov/?id=EJ995832
    From the abstract: "The purpose of this article is to describe passage effects on "Dynamic Indicators of Basic Early Literacy Skills--Next Edition Oral Reading Fluency" ("DIBELS Next ORF") progress-monitoring measures for Grades 1 through 6. Approximately 572 students per grade (total "N" with at least one data point = 3,092) read all three "DIBELS Next" winter benchmark passages in the prescribed order, and within 2 weeks read four additional progress-monitoring passages in a randomly assigned and counterbalanced order. All 20 progress-monitoring passages were read by students in Grades 1 through 4; 16 passages were read in Grade 5 and 12 passages were read in Grade 6. Results focus on the persistence of form effects in spite of a priori criteria used in passage development. The authors describe the utility of three types of equating methods (i.e., mean, linear, and equipercentile equating) in ameliorating these effects. Their conclusions focus on preferred equating methods with small samples, the impact of form effects on progress-monitoring decision making, and recommendations for future use of ORF passages for progress monitoring. (Contains 5 tables and 2 figures.)"
  4. Van Norman, E. R. (2016). Curriculum-based measurement of oral reading: A preliminary investigation of confidence interval overlap to detect reliable growth. School Psychology Quarterly, 31(3), 405-418. http://eric.ed.gov/?id=EJ1113651
    From the abstract: "Curriculum-based measurement of oral reading (CBM-R) progress monitoring data is used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, decision making frameworks originally developed for CBM-R progress monitoring data were not intended for such high stakes assessments. Numerous documented issues with trend line estimation undermine the validity of using slope estimates to infer progress. One proposed recommendation is to use confidence interval overlap as a means of judging reliable growth. This project explored the degree to which confidence interval overlap was related to true growth magnitude using simulation methodology. True and observed CBM-R scores were generated across 7 durations of data collection (range 6--18 weeks), 3 levels of dataset quality or residual variance (5, 10, and 15 words read correct per minute) and 2 types of data collection schedules. Descriptive and inferential analyses were conducted to explore interactions between overlap status, progress monitoring scenarios, and true growth magnitude. A small but statistically significant interaction was observed between overlap status, duration, and dataset quality, b = -0.004, t(20992) =-7.96, p < 0.001. In general, confidence interval overlap does not appear to meaningfully account for variance in true growth across many progress monitoring conditions. Implications for research and practice are discussed. Limitations and directions for future research are addressed."
  5. Van Norman, E. R., Nelson, P. M., & Parker, D. C. (2018). A comparison of nonsense-word fluency and curriculum-based measurement of reading to measure response to phonics instruction. School Psychology Quarterly, 33(4), 573-581. http://eric.ed.gov/?id=EJ1198567
    From the abstract: "Student response to instruction is a key piece of information that school psychologists use to make instructional decisions. Curriculum-based measures (CBMs) are the most commonly used and researched family of academic progress-monitoring assessments. There are a variety of reading CBMs that differ in the type and specificity of skills they assess. The purpose of this study was to determine the degree to which the CBM of oral reading (CBM-R) progress-monitoring data differed from nonsense-word fluency (NWF) progress-monitoring data in the presence of a common intervention. We used multivariate multilevel modeling to compare growth trajectories from CBM-R and NWF progress-monitoring data from a geographically diverse sample of 3,000 1st-grade students receiving Tier-2 phonics interventions. We also evaluated differences in sensitivity to improvement and reliability of improvement from each measure. Improvement on CBM-R was statistically, but not practically, significantly greater than NWF. Although CBM-R was not as direct a measure of decoding, it still captured student response to phonics instruction similarly to NWF. NWF demonstrated slightly better sensitivity to growth, but CBM-R yielded more reliable growth estimates."

Methods

Keywords and Search Strings
The following keywords and search strings were used to search the reference databases and other sources:

  • Research-based progress monitoring
  • Policies regarding data-based progress monitoring

Databases and Resources
We searched ERIC for relevant resources. ERIC is a free online library of over 1.6 million citations of education research sponsored by the Institute of Education Sciences. Additionally, we searched Google Scholar and PsycINFO.

Reference Search and Selection Criteria

When we were searching and reviewing resources, we considered the following criteria:

  • Date of publication: References and resources published in the last 15 years, from 2003 to present, were included in the search and review.
  • Search priorities of reference sources: Search priority was given to study reports, briefs, and other documents published and/or reviewed by IES and other federal or federally funded organizations, and to academic databases, including ERIC, EBSCO databases, JSTOR, PsycINFO, PsycARTICLES, and Google Scholar.
  • Methodology: The following methodological priorities/considerations were applied in the review and selection of references: (a) study type: randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, etc., generally in this order; (b) target population and sample (representativeness of the target population, sample size, whether participants volunteered or were randomly selected, etc.) and study duration; and (c) limitations and generalizability of the findings and conclusions.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Southeast Region (Alabama, Florida, Georgia, Mississippi, North Carolina, and South Carolina), which is served by the Regional Educational Laboratory Southeast at Florida State University. This memorandum was prepared by REL Southeast under a contract with the U.S. Department of Education's Institute of Education Sciences (IES), Contract ED-IES-17-C-0011, administered by Florida State University. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.