
Ask A REL Response

December 2021

Question

What research has been conducted on tools related to observations of after-school programming and/or infusing literacy into after-school settings?

Response

Following an established REL Southeast research protocol, we conducted a search for research reports as well as descriptive study articles on observation tools for after-school programs and on infusing literacy into after-school settings. We focused on identifying resources that specifically addressed observing and assessing the quality of after-school programming, including literacy-focused programming. The sources included ERIC and other federally funded databases and organizations, research institutions, academic research databases, and general Internet search engines. (For details, please see the Methods section at the end of this memo.)

We have not evaluated the quality of the references and resources provided in this response; we offer them only for your information. The references are listed in alphabetical order, not necessarily in order of relevance. In addition, although we searched the most commonly used research databases, the results are not comprehensive, and other relevant references and resources may exist.

Research References

  1. Grossman, J. B., Goldsmith, J., Sheldon, J., & Arbreton, A. J. A. (2009). Assessing after-school settings. New Directions for Youth Development, 121, 89–108. http://eric.ed.gov/?id=EJ835988
    From the abstract: “According to previous research, three point-of-service features--strong youth engagement, well-conceived and well-delivered content, and a conducive learning environment--lead to positive impacts in after-school settings, the ultimate gauge of quality. To assess quality at a program's point of service, researchers and program administrators should measure indicators of these three quality features. We argue that youth engagement should be the first of these indicators to be measured because it reflects both the content of program activities and the conditions of the learning environment. Next, content should be assessed to ensure that staff deliver a well-designed sequence of active tasks that are linked explicitly to the development of desired skills or competencies. Finally, assessing the learning environment can help explain whether youths' absorption of the content is inhibited by poor interactions, limited youth decision making, or unsafe conditions. In presenting and evaluating multiple measurement approaches, the authors argue that the most reliable measures are those collected from the agent (either youth or staff members) to whom the indicator is most directly tied. Engagement, for example, is an experience of the youth, content is delivered by staff members, and the learning environment, which is maintained by staff members and experienced by the youth, is tied to both agents. Findings from quality assessments should be used to feed an ongoing process of training, support, and content change aimed at quality improvement. (Contains 1 figure, 1 table, and 16 notes.)”
  2. Kim, J. S., Capotosto, L., Hartry, A., & Fitzgerald, R. (2011). Can a mixed-method literacy intervention improve the reading achievement of low-performing elementary school students in an after-school program? Results from a randomized controlled trial of READ 180 Enterprise. Educational Evaluation and Policy Analysis, 33(2), 183–201. http://eric.ed.gov/?id=EJ927617
    From the abstract: “The authors describe an independent evaluation of the READ 180 Enterprise intervention designed by Scholastic, Inc. Despite widespread use of the program with upper elementary through high school students, there is limited empirical evidence to support its effectiveness. In this randomized controlled trial involving 312 students enrolled in an after-school program, the authors generated intention-to-treat and treatment-on-the-treated estimates of the program's impact on several literacy outcomes of fourth, fifth, and sixth graders reading below proficiency on a state assessment at baseline. READ 180 Enterprise students outperformed control group students on vocabulary (d = 0.23) and reading comprehension (d = 0.32) but not on spelling and oral reading fluency. The authors interpret the findings in light of the theory of instruction underpinning the READ 180 Enterprise intervention. (Contains 2 figures, 7 tables, and 4 notes.)”
  3. Leos-Urbel, J. (2015). What works after school? The relationship between after-school program quality, program attendance, and academic outcomes. Youth & Society, 47(5), 684–706. http://eric.ed.gov/?id=EJ1070820
    From the abstract: “This article examines the relationship between after-school program quality, program attendance, and academic outcomes for a sample of low-income after-school program participants. Regression and hierarchical linear modeling analyses use a unique longitudinal data set including 29 after-school programs that served 5,108 students in Grades 4 to 8 over 2 years. Program quality measures, based on activity observations, include supportive environment, opportunities for purposeful engagement, and structured interactions. Findings suggest that middle school students attend programs with greater purposeful engagement less often, while attendance for younger students is less sensitive to program quality. Greater purposeful engagement is associated with lower test scores for elementary and middle school students. In contrast, a more supportive environment and greater opportunities for structured interactions relate to improvements in test scores. Findings are discussed in light of ongoing policy debate regarding the proper focus of after-school programs and concerns about poor program attendance.”
  4. Lindo, E. J., Weiser, B., Cheatham, J. P., & Allor, J. H. (2018). Benefits of structured after-school literacy tutoring by university students for struggling elementary readers. Reading & Writing Quarterly, 34(2), 117–131. http://eric.ed.gov/?id=EJ1171007
    From the abstract: “This study examines the effectiveness of minimally trained tutors providing a highly structured tutoring intervention for struggling readers. We screened students in Grades K-6 for participation in an after-school tutoring program. We randomly assigned those students not meeting the benchmark on a reading screening measure to either a tutoring group or a control group. Students in the tutoring group met twice per week across one school year to receive tutoring from non-education major college students participating in a service-learning course. The goal of this study was to determine whether tutors without prior teaching experience or instruction could improve student reading outcomes with minimal training, a structured reading curriculum, and access to ongoing coaching. Tutored students displayed significantly more growth than control students in letter-word identification, decoding, and passage comprehension, with robust effect sizes of 0.99, 1.02, and 0.78, respectively. We discuss the implications and limitations of these findings.”
  5. Oh, Y., Osgood, D. W., & Smith, E. P. (2015). Measuring afterschool program quality using setting-level observational approaches. Journal of Early Adolescence, 35(5–6), 681–713. http://eric.ed.gov/?id=EJ1069081
    From the abstract: “The importance of afterschool hours for youth development is widely acknowledged, and afterschool settings have recently received increasing attention as an important venue for youth interventions, bringing a growing need for reliable and valid measures of afterschool quality. This study examined the extent to which the two observational tools, that is, Caregiver Interaction Scales (CIS) and Promising Practices Rating Scales (PPRS), could serve as reliable and valid tools for assessing the various dimensions of afterschool setting quality. The observation data of 44 afterschool programs were analyzed using both standard psychometric procedures (i.e., internal consistency, interrater reliability, and factor analysis) and generalizability theory. The results show the potential promise of the instruments, on one hand, and suggest future directions for improvement of measurement design and development of the field, on the other hand. In particular, our findings suggest the importance of addressing the effect of day-to-day fluctuations in observed afterschool quality.”
  6. Tracy, A., Charmaraman, L., Ceder, I., Richer, A., & Surr, W. (2016). Measuring program quality: Evidence of the scientific validity of the Assessment of Program Practices Tool. Afterschool Matters, 24, 3–11. http://eric.ed.gov/?id=EJ1120616
    From the abstract: “Out-of-school time (OST) youth programs are inherently difficult to assess. They are often very dynamic: Many youth interact with one another and with staff members in various physical environments. Despite the challenge, measuring quality is critical to help program directors and policy makers identify where to improve and how to support those improvements. This article describes recent research on the Assessment of Program Practices Tool (APT), establishing its strength as an evaluation and tracking tool for OST programs. The validation was conducted in two phases. The first phase was designed to evaluate the scientific rigor of the tool. Based on the findings from the first phase, the second aimed to inform improvements in the tool and its training. The testing so far shows that online video-based training needs to be more specialized in order to improve rating reliability for high-stakes users, such as third-party evaluators.”
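
Several of the abstracts above (Kim et al., 2011; Lindo et al., 2018) report standardized mean-difference effect sizes (Cohen's d). As an illustrative aside, and not part of any cited study, the Python sketch below shows how such an effect size is commonly computed from group summary statistics; the sample values are hypothetical.

    import math

    def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        """Cohen's d: mean difference divided by the pooled standard deviation."""
        pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                              / (n_t + n_c - 2))
        return (mean_t - mean_c) / pooled_sd

    # Hypothetical treatment and control group reading scores (not study data):
    d = cohens_d(mean_t=52.0, sd_t=10.0, n_t=156,
                 mean_c=49.0, sd_c=10.5, n_c=156)
    print(f"Cohen's d = {d:.2f}")  # about 0.29 with these made-up values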
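
Similarly, the Oh et al. (2015) abstract above refers to interrater reliability for observational ratings. The hypothetical sketch below illustrates one simple interrater agreement statistic, Cohen's kappa; it is offered only as a generic illustration, not as the procedure used in that study.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c]
                       for c in set(rater_a) | set(rater_b)) / n**2
        return (observed - expected) / (1 - expected)

    # Two hypothetical observers rating the same 10 activities (1 = low ... 3 = high):
    ratings_a = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]
    ratings_b = [3, 2, 2, 1, 2, 3, 2, 1, 3, 3]
    print(f"kappa = {cohens_kappa(ratings_a, ratings_b):.2f}")  # 0.69 here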

Methods

Keywords and Search Strings
The following keywords and search strings were used to search the reference databases and other sources:

  • Observation tools for after school programs
  • Out of school time observation instrument
  • Assessment of Afterschool Program Practices Tool
  • After school programs, observation tools, literacy achievement
  • Evaluating after-school programs

Databases and Resources
We searched ERIC for relevant resources. ERIC is a free online library of over 1.6 million citations of education research sponsored by the Institute of Education Sciences. Additionally, we searched Google Scholar and PsycINFO.
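
For readers who wish to reproduce keyword searches such as those listed above programmatically, the sketch below shows one way to query ERIC's public search API from Python. The endpoint, parameter names, and response fields are assumptions based on our understanding of ERIC's API documentation and should be verified against the current documentation before use.

    import requests

    ERIC_API = "https://api.ies.ed.gov/eric/"  # assumed public endpoint

    params = {
        # Boolean keyword query corresponding to one of the search strings above
        "search": '"observation tools" AND "after school"',
        "format": "json",
        "rows": 10,  # number of records to return
        "fields": "id,title,author,publicationdateyear",  # assumed field names
    }

    resp = requests.get(ERIC_API, params=params, timeout=30)
    resp.raise_for_status()
    docs = resp.json().get("response", {}).get("docs", [])

    for doc in docs:
        print(doc.get("publicationdateyear"), "-", doc.get("title"))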

Reference Search and Selection Criteria

When searching for and reviewing resources, we considered the following criteria:

  • Date of publication: References and resources published from 2003 to the present were included in the search and review.
  • Search priorities for reference sources: Priority was given to study reports, briefs, and other documents published and/or reviewed by IES and other federal or federally funded organizations, and to academic databases, including ERIC, the EBSCO databases, JSTOR, PsycINFO, PsycARTICLES, and Google Scholar.
  • Methodology: The following methodological priorities/considerations were applied in reviewing and selecting references: (a) study type, generally in this order: randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, and policy briefs; (b) target population and sample (representativeness of the target population, sample size, whether participants volunteered or were randomly selected) and study duration; and (c) limitations and generalizability of the findings and conclusions.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Southeast Region (Alabama, Florida, Georgia, Mississippi, North Carolina, and South Carolina), which is served by the Regional Educational Laboratory Southeast at Florida State University. This memorandum was prepared by REL Southeast under a contract with the U.S. Department of Education's Institute of Education Sciences (IES), Contract ED-IES-17-C-0011, administered by Florida State University. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.