IES Grant

Title: Identifying Young Children's Computational Thinking Processes In Visual Programming Environments Using Telemetry-Based Evidence Collection Methods
Center: NCER Year: 2019
Principal Investigator: Chung, Gregory Awardee: University of California, Los Angeles
Program: Science, Technology, Engineering, and Mathematics (STEM) Education
Award Period: 3 years (07/01/2019 – 06/30/2022) Award Amount: $1,400,000
Type: Exploration Award Number: R305A190433

Co-Principal Investigators: Hosford, Grant; Shochet, Joe

Purpose: This project team examined associations between computational thinking (CT) processes and external measures of CT, using measures based on young learners' interactions with codeSpark Academy. codeSpark Academy allows learners to program with blocks instead of traditional text-based commands, and its game format creates an engaging experience for young learners. The value of introducing computer programming at an early age lies less in creating young programmers than in helping learners develop new ways of thinking. Programming can develop CT skills by encouraging systematic thinking and problem solving. Elements of CT are powerful cognitive tools, applicable to domains well beyond computer science, that can help individuals solve problems, design systems, and understand human behavior.

Project Activities: The research team used the block-based programming game codeSpark Academy to explore the computational thinking processes of young learners in grades 1 and 3 using telemetry-based measures. Telemetry in this study refers to the capture and logging of player- and game-initiated events as well as game states.
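As an illustration, event capture of this kind can be sketched as an append-only log of typed event records. The event names and fields below are hypothetical, not the actual codeSpark Academy telemetry schema:

```python
from dataclasses import dataclass, field
import time

# Hypothetical event schema; the actual codeSpark Academy telemetry
# format is not described in the abstract.
@dataclass
class TelemetryEvent:
    event_type: str   # e.g., "block_added", "run_pressed", "level_won"
    player_id: str
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class EventLog:
    """Append-only log of player- and game-initiated events."""

    def __init__(self):
        self.events = []

    def record(self, event_type, player_id, **payload):
        self.events.append(TelemetryEvent(event_type, player_id, payload))

log = EventLog()
log.record("block_added", "s01", block="walk")
log.record("run_pressed", "s01")
print(len(log.events))  # 2
```

Logging both the events and intervening game states in this way is what allows coding behavior to be reconstructed and measured after the fact.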

Pre-registration Site: N/A

Structured Abstract

Setting: The project took place in an urban charter elementary school in Los Angeles, California. During the 2018–2019 school year, the total school enrollment was 889, with 97 percent of the students coming from socioeconomically disadvantaged families. Fifty-six percent of the students in the school were English learners, and 96 percent of the students were of Hispanic or Latino ethnicity. Regarding student performance, 50 percent of students met or exceeded the state standard in English language arts, and 39 percent of students met or exceeded the state standard in mathematics.

Sample: Grade 1 and grade 3 students participated in a classroom cognitive lab, a classroom pilot test, and a classroom field test. Across all three studies, a total of 80 grade 1 and 36 grade 3 students participated. In the grade 1 sample, students were on average 6.1 years old; 45 percent were female, 58 percent were classified as English learners, 23 percent as English proficient, and 19 percent as English only. In the grade 3 sample, students were on average 8.1 years old; 53 percent were female, 53 percent were classified as English learners, 31 percent as English proficient, and 17 percent as English only.

Malleable Factor: The project explored young learners' computational thinking processes as they interacted with a block-based programming environment.

Research Design and Methods: The research team used the block-based programming platform codeSpark Academy to examine CT processes. The platform allows young learners to program via blocks instead of traditional text-based commands and takes a game-based approach to teaching sequencing, loops, conditionals, and events. The game promotes transfer of learning to other domains by allowing children to devise multiple ways to solve a puzzle. The use of blocks rather than text makes the platform accessible to pre-readers, non-native speakers, and students with reading disabilities.

The research team gathered evidence of computational thinking concepts, skills, and processes in a series of classroom-based data collections. A classroom-based cognitive lab study examined students' thought processes as they engaged in programming activities. In the classroom pilot study and classroom field study, students played codeSpark Academy once a week for 6 weeks. The researchers administered TechCheck, a validated measure of CT designed for young learners, as a pretest and posttest. The research team also developed telemetry-based measures of the similarity between an optimal solution and the student's code with respect to the number of commands needed to reach the solution, code content, and code structure. Measures of buggy code and of productive and unproductive debugging behavior were also developed. The research team validated the approach by having raters judge visualizations of coding behavior against various CT definitions and by correlating the telemetry-based measures with TechCheck.

Control Condition: Due to the nature of this study, there is no control condition.

Key Measures: The research team developed telemetry-based measures and used a validated external measure (TechCheck) as a criterion against which to compare them. The first telemetry-based measure was edit distance (i.e., the number of command edits needed to reach the solution). The second was a set of similarity-based measures of children's code content and code structure. The content and structure measures were used to form indicators of buggy code and debugging behavior.
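An edit distance over sequences of block commands can be sketched with the standard Levenshtein dynamic program. This is a generic illustration, not the project's actual measure, and the block names are hypothetical:

```python
def edit_distance(student, optimal):
    """Levenshtein distance between two command sequences: the minimum
    number of insertions, deletions, and substitutions needed to turn
    the student's code into the optimal solution."""
    m, n = len(student), len(optimal)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i            # delete all remaining student commands
    for j in range(n + 1):
        dp[0][j] = j            # insert all remaining optimal commands
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if student[i - 1] == optimal[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete
                           dp[i][j - 1] + 1,         # insert
                           dp[i - 1][j - 1] + cost)  # substitute
    return dp[m][n]

# One extraneous "jump" block away from the optimal solution:
print(edit_distance(["walk", "jump", "walk"], ["walk", "walk"]))  # 1
```

A distance of zero means the student's code matches the optimal solution exactly; larger values indicate more command-level edits remaining.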

Data Analytic Strategy: The research team used qualitative and quantitative approaches to define and operationalize CT in terms of young learners' programming behavior. Consensus definitions of different CT constructs were derived from definitions reported in the published literature and were then operationalized in terms of code manipulations. Visualizations were created to depict edit distance, goal attainment, failures, and avatar movement over time; raters then categorized the visualizations as representing various CT processes. Rater agreement was high, and the CT definitions (in terms of code manipulation) and visualizations formed the basis for telemetry-based measures of CT. Algorithms were developed to detect code clean-up and continued code editing (both components of modeling). In addition, the research team used similarity measures based on the assumption that programming the optimal solution involves both using the correct commands and ordering those commands correctly. Indicators of buggy code were derived, and learners' programming behavior was classified as productive or unproductive depending on whether their coding improved after the bug was encountered.
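The productive/unproductive classification described above can be sketched as follows, assuming edit distance to the optimal solution is the improvement metric (the abstract does not specify how "improved" was operationalized):

```python
def classify_debugging(distances_after_bug):
    """Classify a debugging episode as 'productive' if the learner's code
    moves closer to the optimal solution (edit distance decreases) after
    the bug is encountered, else 'unproductive'.

    `distances_after_bug` is the sequence of edit distances to the
    optimal solution, starting at the snapshot where the bug appeared.
    The improvement criterion here is an illustrative assumption.
    """
    if len(distances_after_bug) < 2:
        return "unproductive"  # no further edits after the bug
    return ("productive"
            if distances_after_bug[-1] < distances_after_bug[0]
            else "unproductive")

print(classify_debugging([4, 3, 1]))  # productive
print(classify_debugging([2, 2, 3]))  # unproductive
```

Comparing only the first and last snapshots keeps the rule simple; a finer-grained version could score each intermediate edit instead.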

Publications and Products

ERIC Citations: Available citations for this award can be found in ERIC.

Publicly Available Data: Chung, G. K. W. K. (2023). Gameplay telemetry for a block-based programming game [Data set, codebook, algorithms]. Inter-university Consortium for Political and Social Research (ICPSR).

Select Publications:

Chhin, C. (2019, September 18). Computational Thinking: The New Code for Success. Inside IES Research: Notes from NCER & NCSER.

Chung, G.K.W.K., and Feng, T. (in press). From clicks to constructs: An examination of validity evidence of game-based indicators derived from theory. In M. Sahin & D. Ifenthaler (Eds.), Assessment analytics in education – Designs, methods and solutions. Springer.

Chung, G.K.W.K., Redman, E.J.K.H., and Feng, T. (in press). Using learner-system interactions as evidence of student learning and performance: Validity issues, examples, and challenges. In E. Armour-Thomas, E.L. Baker, H. Everson, E. W. Gordon, S. Sireci, and E. Tucker (Eds.), Handbook for assessment in the service of learning.

Iseli, M., Feng, T., Chung, G., Ruan, Z., Shochet, J., and Strachman, A. (2021, July). Using visualizations of students' coding processes to detect patterns related to computational thinking. Paper presented at the 2021 ASEE Virtual Annual Conference Content Access, Virtual Conference.

Relkin, E., Johnson, S.K., and Bers, M. (2023). A normative analysis of the TechCheck computational thinking assessment. Educational Technology & Society, 26(2), 18–130.