Project Activities
The research team will develop the measure across four phases by (1) obtaining feedback on items from an expert panel of researchers in early childhood social, emotional, and behavioral interventions; (2) evaluating the psychometric properties of the English and Spanish versions and establishing item and scoring parameters; (3) cross-validating the assessment with a new sample of children across two states; and (4) establishing the usability, predictive validity, and cost of the new screening tool.
Structured Abstract
Setting
The proposed project will be carried out in Head Start classrooms and private providers of early childhood education in rural, suburban, and urban areas of Florida and Wisconsin.
Sample
Approximately 128 early childhood educators will participate in each year of phases 2 and 3. Half of the educators will be monolingual English speakers, and half will be monolingual or bilingual Spanish speakers. Each educator will use the assessment to screen 10 randomly selected 3- to 5-year-old children, yielding a sample of approximately 1,280 children per phase. In the final validation phase, approximately 100 early childhood educators will participate, rating all children in their classrooms universally (an expected 15 to 20 students per classroom), for a total of 1,500 to 2,000 students.
Assessment
This assessment will be an online and application-based universal screening tool for early educator ratings of students, available in both English and Spanish. It will be aligned with research-based social-emotional competencies and is intended to provide an efficient way to determine students' need for intervention services. The tool will be theoretically aligned with the existing SAEBRS K–12 to improve communication between early childhood education and kindergarten programs. Like the SAEBRS K–12, it will measure both child strengths and needs using a dual-factor model across several domains (social, academic, and emotional behaviors).
Research design and methods
In phase 1, the research team will develop and refine the items, determine the extent to which items align with the proposed factor structure, and obtain feedback on the items from an expert panel of researchers in early childhood social, emotional, and behavioral interventions. In phase 2, the team will evaluate the psychometric properties of the English and Spanish versions with preschool children and their educators, including structural validity, criterion-related validity, reliability, usability, and test fairness. In addition, an item scaling study will establish item and scoring parameters using item response theory analyses. In phase 3, the measure will be cross-validated with a new sample of children and providers. In phase 4, researchers will evaluate the usability and feasibility of the tool and its predictive validity. Analyses will examine the associations between scores on the new screening assessment and criterion measures, as well as educator-specific mediators such as perceived construct knowledge and the ability to identify behavioral problems and social-emotional competencies. Throughout this final phase, the research team will also evaluate test-retest reliability, develop resources to support administration of the measure, and determine the costs to implement the new assessment as a method to improve risk identification.
Control condition
Due to the nature of this research, there is no control condition.
Key measures
The criterion assessments used for external validity will include the Devereux Early Childhood Assessment (DECA) Preschool Program, Second Edition, and the Social-Emotional Assessment/Evaluation Measure (SEAM). Measures used to evaluate the predictive validity of the new assessment include extant academic curriculum-based measurements and behavioral measures such as data on office discipline referrals. Educator-specific mediators, including perceived construct knowledge and the ability to identify social-emotional needs, will be identified through educator surveys. The Usage Rating Profile-Assessment will be used to determine factors that may influence the adoption and use of the instrument.
Data analytic strategy
To determine the psychometric properties of the screening tool, the research team will use exploratory and confirmatory factor analyses, item response theory, and correlational analyses. Analyses will use hierarchical linear modeling to account for the multi-level nature of the data (children nested within educators). Predictive validity analyses will use a structural equation modeling framework, allowing for an analysis of educator-based mediators.
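As a minimal sketch of how such analyses might be set up (not the project's actual analysis plan or code), the example below runs an exploratory factor analysis of hypothetical screener items and fits a random-intercept model that accounts for children nested within educators. The data file, column names (item_1 ..., screener_total, deca_total, educator_id), and number of factors are assumed placeholders.

```python
# Minimal sketch: exploratory factor analysis of screener items plus a
# two-level random-intercept model for children nested within educators.
# All file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("screening_data.csv")  # hypothetical ratings dataset

# Exploratory factor analysis: do items group into the expected domains
# (social, academic, and emotional behaviors)?
item_cols = [c for c in df.columns if c.startswith("item_")]
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(df[item_cols])
print(efa.loadings_)

# Random-intercept model: screener totals regressed on a criterion
# measure, with clustering by educator.
hlm = smf.mixedlm(
    "screener_total ~ deca_total",
    data=df,
    groups=df["educator_id"],
).fit()
print(hlm.summary())
```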
Cost analysis strategy
A cost analysis will be conducted with the ingredients method, using data on the four basic cost categories of personnel, facilities, equipment and materials, and other.
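As a simple illustration of the ingredients method described above, the sketch below tallies ingredient costs by the four basic categories; every ingredient and dollar figure is a hypothetical placeholder rather than project data.

```python
# Minimal ingredients-method sketch: price each ingredient, group it into
# one of the four basic cost categories, and sum. All entries and dollar
# values are hypothetical placeholders.
from collections import defaultdict

ingredients = [
    # (description, category, annual cost in dollars)
    ("Educator time for training and screening", "personnel", 18_000),
    ("Research staff support", "personnel", 9_500),
    ("Classroom space for training sessions", "facilities", 1_200),
    ("Tablets and software licenses", "equipment and materials", 4_000),
    ("Printed administration guides", "equipment and materials", 600),
    ("Participant travel reimbursement", "other", 800),
]

totals = defaultdict(float)
for _, category, cost in ingredients:
    totals[category] += cost

for category, cost in totals.items():
    print(f"{category}: ${cost:,.0f}")
print(f"total cost: ${sum(totals.values()):,.0f}")
```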
Products and publications
Products: This project will result in a fully developed and initially validated early childhood screening tool in English and Spanish, along with resources to support administration (including a training protocol for raters and a secure software application with the ability to generate data reports). The project will also result in peer-reviewed publications and presentations, as well as additional dissemination products that reach education stakeholders such as practitioners and policymakers.