
REL Southwest Ask A REL Response

Early Childhood:

Administration and Use of Early Childhood Assessments

June 2019

Question:

What does the research say about effective strategies for preparing teachers of young children to (1) administer assessments reliably and (2) use assessment data effectively to inform instruction?

Response:

Print-friendly version (PDF, 474 KB)

Thank you for the question you submitted to our REL Reference Desk. We have prepared the following memo with research references to help answer your question. For each reference, we provide an abstract, excerpt, or summary written by the study’s author or publisher. Following an established Regional Educational Laboratory (REL) Southwest research protocol, we conducted a search for research on effective strategies for preparing teachers (particularly teachers of young children) to administer assessments reliably and to use assessment data effectively to inform instruction.

We have not evaluated the quality of the references and resources provided in this response; we offer them only for your information. Also, we searched for references through the most commonly used research resources, but our search is not comprehensive, and other relevant references and resources may exist. References are listed in alphabetical order, not necessarily in order of relevance. We do not include sources that are not freely available to the requestor.

Research References

What does the research say about effective strategies for preparing teachers to administer assessments reliably?

Brown, G., Scott-Little, C., Amwake, L., & Wynn, L. (2007). A review of methods and instruments used in state and local school readiness evaluations (Issues & Answers, REL 2007-004). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast. https://eric.ed.gov/?id=ED497789

From the ERIC abstract: “The report provides detailed information about the methods and instruments used to evaluate school readiness initiatives, discusses important considerations in selecting instruments, and provides resources and recommendations that may be helpful to those who are designing and implementing school readiness evaluations. Study results indicate that state and local evaluators have used a variety of instruments to collect child outcome data, some that are well known and others that are not. In general, many of the better-known instruments demonstrate adequate psychometric properties (reliability and validity, which ensure that the instruments consistently measure what they were intended to measure), but a number of issues, such as the appropriateness of the measure to the study’s purpose and sample, appear to present substantial challenges in evaluations of state- and locally-funded school readiness programs. Recommendations based on the data collected from this sample are provided to help school readiness programs and evaluators as they select instruments for assessing programs and implementing the evaluations include: (1) Careful selection of outcomes for assessment that match the goals of the program and address the components of children's learning and development that are linked with later success in school; (2) Clear definition of the purpose for which assessment data will be collected, and selection of instruments designed and validated for that purpose; (3) Selection of instruments that have a proven track record; (4) Selection of instruments that are culturally and linguistically appropriate; (5) Consideration of data collection (internal/external); (6) Assessment administration, including training and reliability studies; and (7) Data collection in context. Report findings highlight the challenges that evaluators face in ensuring that data are collected in a manner that yields credible, trustworthy, and meaningful information about child outcomes. The report cites a number of resources that can assist evaluators in making decisions about child assessments: resources to guide decisions about how to assess child outcomes, reviews of measures, and web sites with technical information related to measures used in large federal studies.”

Hallam, R., Grisham-Brown, J., Gao, X., & Brookshire, R. (2007). The effects of outcomes-driven authentic assessment on classroom quality. Early Childhood Research & Practice, 9(2), 1–9. https://eric.ed.gov/?id=EJ1084960

From the ERIC abstract: “Twenty-six Head Start preschool classrooms participated in a yearlong intervention designed to link the Head Start Child Outcomes Framework with authentic assessment practices. Teachers in intervention and pilot classrooms implemented an assessment approach that incorporated the use of a curriculum-based assessment tool, the development of portfolios aligned with the mandated Head Start Child Outcomes, and the integration of this child assessment information into individual and classroom instructional planning. During the intervention period, comparison classrooms continued to use the assessment approach adopted by the local Head Start program, which included the use of a standardized assessment tool and the use of an agency-developed lesson plan form. Intervention and pilot classrooms demonstrated significant improvements on some dimensions of classroom quality as measured by the Early Language and Literacy Classroom Observation (ELLCO) toolkit, whereas comparison classrooms exhibited no change in classroom quality. Implications for practice are discussed.”

Snow, C. E., & Van Hemel, S. B. (Eds.). (2008). Early childhood assessment: Why, what, and how. Washington, DC: National Academies Press. https://eric.ed.gov/?id=ED555247. Full text retrieved from https://www.researchgate.net/publication/268342677.

From the ERIC abstract: “The assessment of young children’s development and learning has recently taken on new importance. Private and government organizations are developing programs to enhance the school readiness of all young children, especially children from economically disadvantaged homes and communities and children with special needs. Well-planned and effective assessment can inform teaching and program improvement, and contribute to better outcomes for children. This book affirms that assessments can make crucial contributions to the improvement of children’s well-being, but only if they are well designed, implemented effectively, developed in the context of systematic planning, and are interpreted and used appropriately. Otherwise, assessment of children and programs can have negative consequences for both. The value of assessments therefore requires fundamental attention to their purpose and the design of the larger systems in which they are used. ‘Early Childhood Assessment’ addresses these issues by identifying the important outcomes for children from birth to age 5 and the quality and purposes of different techniques and instruments for developmental assessments. The following are appended: (1) Glossary of Terms Related to Early Childhood Assessment; (2) Information on Stakeholder Forum; (3) Development of State Standards for Early Childhood Education; (4) Sources of Detailed Information on Test and Assessment Instruments; and (5) Biographical Sketches of Committee Members and Staff.”

What does the research say about effective strategies for preparing teachers to use assessment data effectively to inform instruction?

Gerzon, N., & Guckenburg, S. (2015). Toolkit for a workshop on building a culture of data use (REL 2015-063). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands. https://eric.ed.gov/?id=ED555739

From the ERIC abstract: “The Culture of Data Use Workshop Toolkit helps school and district teams apply research to practice as they establish and support a culture of data use in their educational setting. The field-tested workshop toolkit guides teams through a set of structured activities to develop an understanding of data-use research in schools and to analyze examples from practice. Teams learn to apply these concepts to enhance their own culture of data use and outline effective next steps. The conceptual framework of the toolkit draws on five research-based elements known to support an effective culture of data use, and supporting materials—a facilitator’s guide and agenda, a slide deck, and participant handouts—provide workshop facilitators with all the necessary materials to lead this process in their own setting.”

Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J. A., & Wayman, J. C. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. https://eric.ed.gov/?id=ED506645

From the ERIC abstract: “The purpose of this practice guide is to help K-12 teachers and administrators use student achievement data to make instructional decisions intended to raise student achievement. The panel believes that the responsibility for effective data use lies with district leaders, school administrators, and classroom teachers and has crafted the recommendations accordingly. This guide focuses on how schools can make use of common assessment data to improve teaching and learning. For the purpose of this guide, the panel defined common assessments as those that are administered in a routine, consistent manner by a state, district, or school to measure students’ academic achievement. These include: (1) annual statewide accountability tests such as those required by No Child Left Behind; (2) commercially produced tests—including interim assessments, benchmark assessments, or early-grade reading assessments—administered at multiple points throughout the school year to provide feedback on student learning; (3) end-of-course tests administered across schools or districts; and (4) interim tests developed by districts or schools, such as quarterly writing or mathematics prompts, as long as these are administered consistently and routinely to provide information that can be compared across classrooms or schools. This guide includes five recommendations that the panel believes are a priority to implement: (1) Make data part of an ongoing cycle of instructional improvement; (2) Teach students to examine their own data and set learning goals; (3) Establish a clear vision for schoolwide data use; (4) Provide supports that foster a data-driven culture within the school; and (5) Develop and maintain a districtwide data system.”

Thomas, K., & Huffman, D. (2011). Navigating the challenges of helping teachers use data to inform educational decisions. Administrative Issues Journal: Education, Practice, and Research, 1(2), 94–102. https://eric.ed.gov/?id=EJ1055020

From the ERIC abstract: “In this paper we present a model of collaborative evaluation that has been used to engage teachers in data-based decision making for improving teaching and learning in mathematics and science. We examine three external challenges that threaten the process of continuous school improvement; namely, making sense of data, policy changes, and curriculum changes. In addition, we describe how the collaborative evaluation model facilitated progress beyond these challenges.”

Additional Organizations to Consult

Center on Enhancing Early Learning Outcomes (CEELO) – http://ceelo.org/

From the website: “As one of 22 Comprehensive Centers funded by the U.S. Department of Education’s Office of Elementary and Secondary Education, the Center on Enhancing Early Learning Outcomes (CEELO) is designed to strengthen the capacity of State Education Agencies (SEAs) to lead sustained improvements in early learning opportunities and outcomes. We do this work through strategic and responsive technical assistance, working with SEAs, state and local early childhood leaders, and other federal and national technical assistance (TA) providers to promote innovation and accountability.
An action-oriented partnership among CEELO staff, senior SEA early childhood managers, and other key early education policy leaders will guide what we do. CEELO staff will apply the following principles to achieve results:
  • Ground Policy and Practice in Research—supporting SEAs application of research and promising practices as they set policies and build systems to manage and improve programs for young children.
  • Promote Sustainable Change—encouraging effective organizational structures within SEAs to align state early childhood policies and systems across sectors and improve implementation of state initiatives in diverse local school districts and community-based early childhood programs.
  • Foster Innovation and Results-Focused Approaches—creating opportunities for states to incubate new ideas for achieving measurable improvements in children’s early learning outcomes and support efforts to pilot, evaluate, implement, scale up, and sustain these initiatives.
  • Reflect and Respect Diversity—addressing the cultural, linguistic, and economic diversity of children, as well as being responsive to the varied geography, demography, political context, and past history of public and private sector early childhood programs and initiatives in states.”
REL Southwest note: CEELO offers additional resources on its website.

National Center on Intensive Intervention (NCII) – https://intensiveintervention.org/

From the website: “NCII is housed at the American Institutes for Research and works in conjunction with many of our nation’s most distinguished data-based individualization (DBI) experts. It is funded by the U.S. Department of Education’s Office of Special Education Programs (OSEP) and is part of OSEP’s Technical Assistance and Dissemination Network (TA&D). The Mission of the NCII is to build capacity of state and local education agencies, universities, practitioners, and other stakeholders to support implementation of intensive intervention in reading, mathematics, and behavior for students with severe and persistent learning and/or behavioral needs.”
REL Southwest note: NCII provides an “Academic Progress Monitoring Tools Chart” that may be filtered by grade level and subject, available at https://charts.intensiveintervention.org/chart/progress-monitoring.

Institute of Education Sciences, Regional Educational Laboratory (REL) Program, REL Northwest Blog – https://ies.ed.gov/ncee/edLabs/regions/northwest/blog/kindergartenreadiness.asp

From the website: “More than half of states have a definition for kindergarten readiness, and at least 25 states require kindergarten entry assessments (KEAs) to help educators better understand what students know upon entering kindergarten. Other states are gradually phasing in KEAs and/or making them optional.
However, nationwide, there is no common understanding or definition of "kindergarten readiness" or "school readiness," which KEAs are intended to measure. As these assessments have become more prominent and prevalent over the past five years, many states have developed their own definitions of kindergarten readiness. Prompted by inquiries from stakeholders in Alaska and Montana, who were curious about what other states were doing in terms of defining and measuring kindergarten readiness, REL Northwest decided to examine the national landscape.”
REL Southwest note: REL Northwest offers a 50 State Scan of Kindergarten Readiness Definitions and Assessments, available at https://ies.ed.gov/ncee/edLabs/regions/northwest/blog/kindergarten-readiness.asp.

Methods

Keywords and Search Strings

The following keywords and search strings were used to search the reference databases and other sources:

  • [(teachers OR “professional development” OR PD OR “teacher effectiveness” OR “teacher competencies” OR “faculty development”) AND (“assessment administration” OR “exam administration” OR “test administration”)]
  • [(assess OR assessing OR assessment OR assessments) AND (“young learners” OR “early childhood”)]
  • [“Early learning scale”]
  • [“kindergarten entry assessment” OR “kindergarten readiness”]
  • [(teachers OR “professional development” OR PD OR “teacher effectiveness” OR “teacher competencies” OR “faculty development”) AND (“data use” OR “assessment data” OR “test data” OR “exam data” OR “data-based decisionmaking”)]
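
For illustration only, the brief Python sketch below shows one way Boolean strings like those above could be assembled from keyword groups before being pasted into a database search interface. The keyword groups are copied from the list above; the helper functions (or_group, and_query) are hypothetical and are not part of the REL Southwest search protocol.

  # Illustrative sketch only: assemble Boolean search strings like those listed
  # above from groups of keywords. The keyword groups come from this memo; the
  # helpers are hypothetical and not part of the REL Southwest protocol.
  teacher_terms = [
      'teachers', '"professional development"', 'PD', '"teacher effectiveness"',
      '"teacher competencies"', '"faculty development"',
  ]
  administration_terms = [
      '"assessment administration"', '"exam administration"', '"test administration"',
  ]
  data_use_terms = [
      '"data use"', '"assessment data"', '"test data"', '"exam data"',
      '"data-based decisionmaking"',
  ]

  def or_group(terms):
      """Join keywords into a parenthesized OR group, e.g., (a OR b OR c)."""
      return "(" + " OR ".join(terms) + ")"

  def and_query(*groups):
      """AND together OR groups into a single Boolean search string."""
      return " AND ".join(or_group(g) for g in groups)

  # Reproduces the first and last search strings above (without the outer square brackets).
  print(and_query(teacher_terms, administration_terms))
  print(and_query(teacher_terms, data_use_terms))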

Databases and Resources

We searched ERIC for relevant, peer-reviewed research references. ERIC is a free online library of more than 1.7 million citations of education research sponsored by the Institute of Education Sciences (IES). Additionally, we searched the What Works Clearinghouse.

Reference Search and Selection Criteria

When we were searching and reviewing resources, we considered the following criteria:

  • Date of the publication: References and resources published from 2004 to the present were included in the search and review.
  • Search priorities of reference sources: Search priority was given to study reports, briefs, and other documents published and/or reviewed by IES and other federal or federally funded organizations, followed by academic databases, including ERIC, EBSCO databases, JSTOR, PsycINFO, PsycARTICLES, and Google Scholar.
  • Methodology: The following methodological priorities/considerations were given in the review and selection of the references: (a) study types—randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, and so forth, generally in this order; (b) target population, samples (representativeness of the target population, sample size, volunteered or randomly selected, and so forth), study duration, and so forth; and (c) limitations, generalizability of the findings and conclusions, and so forth.
This memorandum is one in a series of quick-turnaround responses to specific questions posed by stakeholders in the Southwest Region (Arkansas, Louisiana, New Mexico, Oklahoma, and Texas), which is served by the Regional Educational Laboratory (REL) Southwest at AIR. This memorandum was prepared by REL Southwest under a contract with the U.S. Department of Education’s Institute of Education Sciences (IES), Contract ED-IES-91990018C0002, administered by AIR. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.