Project Activities
Researchers will develop a massive corpus of text (target: 1 million unique words) designed to approximate the semantic language-space of children in grades 3 through 5, the grade range in which creativity assessment in schools is most common. This corpus will be used to score the four types of divergent thinking (DT) tasks. After administering these tasks to a large and diverse sample of elementary school students, researchers will score the tasks using text-mining methods and examine them in terms of the fairness of the individual items and the reliability and validity of the scales. Researchers will also complete a cost analysis.
Structured Abstract
Setting
The MOTES will be developed in collaboration with participating urban and suburban schools across the State of New York. Most participating schools are from the western and central New York area.
Sample
Researchers will collect data from a total of 400 students in grades 3 through 5. The sample will include comparable numbers (~200 each) of students from low socio-economic status (SES) backgrounds and from middle- or high-SES backgrounds.
Assessment
Researchers will develop and validate the MOTES, which consists of four types of divergent thinking tasks (i.e., Uses, Instances, Consequences, and Scenarios), with five tasks of each type (20 tasks in total). Responses will be scored with text-mining models built on the new child-directed corpus developed as part of this project.
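As a rough illustration of how text-mining-based scoring of this kind can work, the sketch below computes an originality score as the semantic distance between a prompt and a response. It assumes word vectors derived from a child-directed corpus; the embedding values, example words, and function names are placeholders for illustration only and are not part of the MOTES scoring pipeline.

```python
# Minimal sketch of semantic-distance originality scoring (hypothetical setup).
# Assumes word vectors trained on a child-directed corpus; the toy vectors
# below are placeholders, not values from the project corpus.
import numpy as np

# Placeholder embeddings standing in for a model trained on the corpus.
embeddings = {
    "brick":       np.array([0.9, 0.1, 0.0]),
    "build":       np.array([0.8, 0.2, 0.1]),
    "doorstop":    np.array([0.2, 0.7, 0.3]),
    "paperweight": np.array([0.1, 0.8, 0.4]),
}

def mean_vector(words):
    """Average the vectors of the words found in the embedding table."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def originality(prompt, response):
    """Score = 1 - cosine similarity between prompt and response vectors."""
    p = mean_vector(prompt.lower().split())
    r = mean_vector(response.lower().split())
    if p is None or r is None:
        return None
    cos = float(np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r)))
    return 1.0 - cos

# A "Uses"-type item asks for unusual uses of an everyday object.
print(originality("brick", "build"))        # semantically close -> lower score
print(originality("brick", "paperweight"))  # more distant -> higher score
```

In practice, student responses span multiple words and the underlying model would be trained on the full corpus described above; the small placeholder table only stands in for that model.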
Research design and methods
To investigate the reliability and validity of MOTES scores, researchers will use a full suite of validation methods, including content validity questionnaires for practicing elementary school teachers; targeted cognitive interviews with elementary students; multidimensional latent measurement models; modern reliability coefficients (e.g., coefficient H and omega); multi-group measurement models for testing differential item functioning (DIF) in the model parameters; and criterion validity correlations with a range of external criteria, including academic achievement measures, teacher reports of original thinking, and existing standardized creative thinking assessments. At every phase of this validation work, the MOTES items and the text-mining corpus will be revised to produce maximally valid and reliable scores for use in applied research and educational practice.
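For readers unfamiliar with the reliability coefficients named above, the sketch below shows how McDonald's omega and coefficient H are typically computed from the standardized loadings of a unidimensional factor model; the loadings are illustrative values, not estimates from MOTES data.

```python
# Sketch of the reliability coefficients named above (omega and coefficient H),
# computed from standardized factor loadings of a unidimensional model.
# The loadings below are illustrative, not estimates from this project.
import numpy as np

loadings = np.array([0.62, 0.70, 0.55, 0.68, 0.74])  # standardized loadings
uniquenesses = 1.0 - loadings**2                      # item error variances

# McDonald's omega: (sum of loadings)^2 / [(sum of loadings)^2 + sum of errors]
omega = loadings.sum()**2 / (loadings.sum()**2 + uniquenesses.sum())

# Coefficient H (maximal reliability): 1 / (1 + 1 / sum(l^2 / (1 - l^2)))
h = 1.0 / (1.0 + 1.0 / np.sum(loadings**2 / (1.0 - loadings**2)))

print(f"omega = {omega:.3f}, H = {h:.3f}")
```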
Key measures
To validate the MOTES, the new battery of divergent thinking tasks described above, three major types of data will be used: (a) students' academic achievement data, including grade point average (GPA), standardized achievement test scores, and subject-specific grades; (b) the Torrance Tests of Creative Thinking–Figural (TTCT-Figural, Form A); and (c) teacher evaluations of students' originality. Academic achievement data will be collected directly from schools and matched with the data collected from students and teachers. The TTCT-Figural Form A consists of three activities: Picture Construction, Picture Completion, and Lines. Students are given 10 minutes for each activity.
Data analytic strategy
Researchers will fit a number of differently specified multidimensional measurement models (i.e., confirmatory factor models) in order to determine the validity of the instrument; the best-fitting dimensional structure will then be used to examine DIF between groups of high- and low-SES students.
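Multi-group DIF testing of this kind typically reduces to a chi-square difference test between a model with item parameters constrained equal across SES groups and one in which they are freely estimated. The sketch below shows that comparison; the fit statistics are placeholder values for illustration and do not come from this project.

```python
# Sketch of the chi-square difference test behind multi-group DIF testing:
# a model with item parameters constrained equal across SES groups is compared
# with one in which they are freely estimated. The fit statistics below are
# illustrative placeholders, not results from MOTES data.
from scipy.stats import chi2

# (chi-square, degrees of freedom) for each model -- placeholder values
constrained = (162.4, 70)   # item parameters held equal across groups
free = (148.9, 58)          # item parameters freely estimated per group

delta_chisq = constrained[0] - free[0]
delta_df = constrained[1] - free[1]
p_value = chi2.sf(delta_chisq, delta_df)

# A significant p-value would indicate that the equality constraints worsen
# fit, i.e., at least one item functions differently across SES groups.
print(f"delta chi-square = {delta_chisq:.1f}, delta df = {delta_df}, p = {p_value:.3f}")
```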
Cost analysis strategy
A cost analysis will be conducted to determine the costs associated with implementing the MOTES. The research team predicts cost savings for schools and districts when the measure is used in universal screening for gifted identification, due to its automated scoring.
People and institutions involved
IES program contact(s)
Project contributors
Products and publications
The MOTES will be available to education practitioners and researchers, who will be able to obtain scores instantly, free of charge or at very low cost. The team will also present findings from the research at conferences and in peer-reviewed publications.
Publications:
ERIC Citations: Publications from this project are available in ERIC.
Select Publications
Acar, S. (2023). Creativity assessment, research, and practice in the age of artificial intelligence. Creativity Research Journal, 1-7.
Acar, S., Berthiaume, K., Grajzel, K., Dumas, D., Flemister, C. T., & Organisciak, P. (2023). Applying automated originality scoring to the verbal form of Torrance tests of creative thinking. Gifted Child Quarterly, 67(1), 3-17.
Acar, S., Dumas, D., Organisciak, P., & Berthiaume, K. (2024). Measuring original thinking in elementary school: Development and validation of a computational psychometric approach. Journal of Educational Psychology.
Organisciak, P., Newman, M., Eby, D., Acar, S., & Dumas, D. (2023). How do the kids speak? Improving educational use of text mining with child-directed language models. Information and Learning Sciences, 124(1/2), 25-47.
Additional project information
Previous award details:
Questions about this project?
To answer additional questions about this project or provide feedback, please contact the program officer.